US20070070067A1 - Scene splitting for perspective presentations - Google Patents
- Publication number
- US20070070067A1 (application US11/412,410)
- Authority
- US
- United States
- Prior art keywords
- sub
- scene
- rendering
- images
- steps
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/16—Indexing scheme for image data processing or generation, in general involving adaptation to the client's capabilities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/52—Parallel processing
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/10—Mixing of images, i.e. displayed pixel being the result of an operation, e.g. adding, on the corresponding input pixels
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/12—Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2360/00—Aspects of the architecture of display systems
- G09G2360/06—Use of more than one graphics processor to process data before displaying to one or more screens
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
- G09G5/363—Graphics controllers
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Processing Or Creating Images (AREA)
- Image Generation (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Image Processing (AREA)
Abstract
A controlling device 110 splits a 3D scene 131 into 3D sub-scenes, each including a sub-volume 133 of the 3D scene 131, and distributes the 3D sub-scenes to multiple rendering devices 120. Each rendering device 120 independently determines a 2D sub-image 141 responsive to its 3D sub-scene and a rendering viewpoint 132. The 2D sub-images 141 are composited using a back-to-front partial ordering with respect to the rendering viewpoint 132.
Description
- 1. Field of the Invention
- The invention relates to scene splitting for perspective presentations.
- 2. Related Art
- In some applications of computing devices, it is desirable to present a visualization of a scene to a user. Some of these applications include the following:
-
- CAD (computer aided design);
- computer aided search, such as used in the oil and gas industry;
- computer simulations, such as battlefield simulations and flight simulations; and
- video games, including multiplayer video games.
- One problem in the known art is that computing the scene to be presented requires relatively large resources, including both computing power and memory.
- Known solutions include breaking up computing the scene into parts, and assigning each of those parts to a separate graphics processor. These separate graphics processors each operate under control of a single controlling processor, which determines how to break up computing the scene into parts. The controlling processor sends each separate graphics processor a set of commands telling the receiver what to render. Each graphics processor generates data showing how to render its part of the scene. This data might be sent back to the controlling processor for presentation, or might be sent on to a presenting device, such as a graphics compositor, a monitor, or a set of monitors.
- While this method generally achieves the goal of providing increased resources to render the scene, it still has several drawbacks. One drawback is that it might still take substantial resources to compose a single image for the presentation device from the distinct sub-images generated by multiple graphics processors. For example, if one of the graphics processors is assigned objects to render that are “behind” others, as seen from a selected viewpoint, rendering the 2D (2-dimensional) image for display might involve substantial resources to handle such effects as occlusion and partial occlusion, transparency, and reflection.
- Some known systems distribute the 3D (3-dimensional) scene for rendering in a relatively simple manner, such as slices of the 3D scene to be rendered, and include specialized hardware as a graphics compositor. These systems include the HP “Sepia” product and the Orad “DVG” product. However, specialized hardware can be quite expensive, and is in general not very suitable for flexible configuration of the system.
- Other known systems also distribute the 3D scene for rendering in a relatively simple manner, and include software to perform the function of a graphics compositor (either in the controlling device itself, or in a separate processor). However, software solutions are subject to the drawback that they are much slower when the data they work with does not fit into rapidly accessible memory, such as main memory (as opposed to disk drive storage).
- Moreover, both hardware and software “flat” distribution solutions are subject to the drawback that they use substantial network bandwidth, and might involve limitations due to use of that resource.
- Other known systems also distribute the 3D scene for rendering in a more complex manner, and use a tree or other multi-tiered structure for the rendering processors to deliver their results to a graphics compositor (again, either in the controlling device itself, or in a separate processor). However, multi-tier solutions are subject to the drawback that they involve substantially greater latency between the time the rendering processor generates its portion of the 3D scene and the time the 3D sub-scenes can be combined into a 2D image capable of being presented.
- Accordingly, it would be advantageous to provide methods and systems in which 3D scenes might be rendered, and composed into 2D images, and which are not subject to drawbacks of the known art.
- The invention provides techniques, embodied in methods and systems, including scene splitting for perspective presentations.
- A system embodying the invention includes a controlling device and a set of rendering devices, with the effect that the controlling device can distribute a set of objects to be rendered to the rendering devices. The controlling device splits up the 3D scene to be rendered into a set of 3D sub-scenes, each of which is relatively smaller than the original 3D scene. Each rendering device determines a 2D image in response to the 3D sub-scene assigned to it, and in response to a rendering viewpoint. In one embodiment, elements of a 3D scene are included within an enclosing volume, such as a cube, and a set of 3D sub-scenes are each included within an enclosing sub-volume, such as a smaller cube (i.e., a “cubelet”) proportional to the entire scene's larger enclosing cube. Each rendering device determines a 3D rendering of the elements in its sub-volume, as seen from that rendering viewpoint. Each rendering device also determines a 2D image of the 3D rendering, as seen from that rendering viewpoint.
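The cube-and-cubelet decomposition described above can be sketched as follows. This is an illustrative sketch in Python, not code from the patent; the function name and data layout are assumptions.

```python
# Illustrative sketch: split a 3D scene's axis-aligned enclosing cube into
# n*n*n proportional "cubelets", one per rendering device. The name
# `split_cube` and the (min, max) tuple layout are hypothetical.

def split_cube(mins, maxs, n):
    """Return a list of (sub_min, sub_max) cubelets tiling the enclosing box."""
    sizes = [(hi - lo) / n for lo, hi in zip(mins, maxs)]
    cubelets = []
    for i in range(n):
        for j in range(n):
            for k in range(n):
                idx = (i, j, k)
                lo = tuple(mins[a] + idx[a] * sizes[a] for a in range(3))
                hi = tuple(mins[a] + (idx[a] + 1) * sizes[a] for a in range(3))
                cubelets.append((lo, hi))
    return cubelets

# a 5x5x5 arrangement, matching the 5x5x5 array of rendering devices example
cubelets = split_cube((-1.0, -1.0, -1.0), (1.0, 1.0, 1.0), 5)
```

Each cubelet's contents can then be handed to one rendering device, which renders them from the shared rendering viewpoint.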
- Each rendering device sends the 2D image it determines to a compositor, which combines that 2D image with the 2D images from rendering devices in “front” of it with respect to the rendering viewpoint. In various embodiments, the 2D images might be sent for composition in one of several ways: (a) directly to the controlling device; (b) through a multi-tier hierarchy, such as one determined by the controlling device in response to the rendering viewpoint; or (c) through a switch coupling rendering devices in response to the rendering viewpoint. A result of compositing the 2D images should be suitable for sending to a presentation device.
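The back-to-front composition described above amounts to sorting sub-images by distance from the rendering viewpoint and blending them with the standard Porter-Duff "over" operator. The following Python sketch is a hedged illustration of that reading; the pixel format and function names are assumptions, not the patent's implementation.

```python
# Hedged sketch of back-to-front composition: sub-images are processed
# far-to-near relative to the rendering viewpoint, nearer ones painted
# "over" farther ones. One RGBA pixel stands in for a whole sub-image.

def over(front, back):
    """Porter-Duff "over": composite one RGBA pixel on top of another."""
    fr, fg, fb, fa = front
    br, bg, bb, ba = back
    a = fa + ba * (1 - fa)
    if a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    blend = lambda f, b: (f * fa + b * ba * (1 - fa)) / a
    return (blend(fr, br), blend(fg, bg), blend(fb, bb), a)

def composite_back_to_front(sub_images_with_depth):
    """sub_images_with_depth: list of (distance_to_viewpoint, rgba_pixel)."""
    result = (0.0, 0.0, 0.0, 0.0)  # start from a fully transparent pixel
    # farthest first, so nearer sub-images are painted over farther ones
    for _, pixel in sorted(sub_images_with_depth, key=lambda p: -p[0]):
        result = over(pixel, result)
    return result

# an opaque red pixel near the viewpoint hides an opaque blue one behind it
px = composite_back_to_front([(1.0, (1, 0, 0, 1.0)), (5.0, (0, 0, 1, 1.0))])
```

Because "over" is associative, the same partial ordering also works when composition is performed pairwise in a hierarchy rather than at a single compositor.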
- After reading this application, those skilled in the art would recognize that the invention provides an enabling technology by which substantial advance is made in the art of rendering scenes.
- For example, the invention might be used to provide one or more of, or some combination or extension of, any of the following.
-
- rendering 3D scenes in substantially real-time, such as for example as might be used in battlefield simulations, flight simulations, other testing or training devices, and the like;
- rendering 3D scenes in various detail and from various selected perspectives, such as for example as might be used in computer-aided design, in examination of computer simulations of natural phenomena such as weather simulations or wind-tunnel simulations, and the like; and
- rendering 3D scenes to present information, such as for example as might be used in computer-aided presentation or search of databases, user interfaces for computer-aided control of real-time systems or other systems, and the like.
- After reading this application, these and other and further uses of the invention would be clear to those skilled in the art.
-
FIG. 1 shows a block diagram of a system including a controlling device and a set of rendering devices.
FIG. 2 shows a process flow diagram of a method of using a system including a controlling device and a set of rendering devices.
- This application incorporates by reference and claims priority of at least the following documents.
-
- Application Ser. No. 60/676,240, filed Apr. 29, 2005, in the name of inventor Thomas Ruge, titled “Scene Splitting for Perspective Presentations”, attorney docket number 233.1008.01
- Application Ser. No. 60/676,254, filed Apr. 29, 2005, in the name of inventor Thomas Ruge, titled “Alpha Blending”, attorney docket number 233.1012.01
- Application Ser. No. 60/676,241, filed Apr. 29, 2005, in the name of inventor Thomas Ruge, titled “Compression of Streams of Rendering Commands”, attorney docket number 233.1007.01
- These documents are hereby incorporated by reference as if fully set forth herein, and are sometimes referred to herein as the “incorporated disclosures”. Inventions described herein can be used in combination or conjunction with technology described in the incorporated disclosures.
- In the description herein, a preferred embodiment of the invention is described, including preferred process steps and data structures. Those skilled in the art would realize, after perusal of this application, that embodiments of the invention might be implemented using a variety of other techniques not specifically described, without undue experimentation or further invention, and that such other techniques would be within the scope and spirit of the invention.
- Definitions
- The general meaning of each of these following terms is intended to be illustrative and in no way limiting.
-
- The phrases (1) “controlling device” and (2) “rendering device”, and the like, refer respectively to devices for (1) controlling the allocation of rendering commands, and (2) actually rendering 3D scenes and 2D images of those 3D scenes from a selected perspective, as further described below.
- In one embodiment, there is a single controlling device and as many rendering devices as necessary, such as for example a 5×5×5 array of rendering devices, so that the information for rendering each cubelet fits into the graphics memory of a single rendering device. However, in the context of the invention, there is no particular requirement of having only a single controlling device or of having a specified number of rendering devices.
- For example, in the oil and gas industry, a database of 50 gigabytes might be allocated into sub-portions in a 10×10×10 array of rendering devices, with the effect of presenting each rendering device with only about 50 megabytes of information to process. In one embodiment, each rendering device might have about 128 megabytes of graphics memory, with the effects that the rendering information would fit into graphics memory, and that each rendering device might operate relatively quickly.
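The arithmetic of that example can be checked directly (the figures below are the example's own, not measurements):

```python
# A quick check of the example above: a 50 GB database split across a
# 10x10x10 array of rendering devices leaves roughly 50 MB per device,
# comfortably inside the 128 MB of graphics memory assumed per device.

DATABASE_GB = 50
GRID = 10 * 10 * 10                          # 10x10x10 array of devices
per_device_mb = DATABASE_GB * 1024 / GRID    # about 51.2 MB per device
fits = per_device_mb <= 128                  # against 128 MB graphics memory
```

With the data resident in graphics memory, each rendering device avoids the slow-memory penalty the software-compositor discussion above warns about.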
- The phrases (1) “compositor” and (2) “presentation device”, and the like, refer respectively to devices (1) for composing 2D images in response to a 3D scene, or for composing a single 2D image from multiple 2D images, and (2) for making a presentation to a user in response to one or more 2D images, as further described below.
- In one embodiment, the presentation device might include a device for breaking information about a 2D image into a set of information for presentation on multiple display panels, monitors, or projection devices, such as for example a “power wall” of 5×5 display panels. However, in the context of the invention, there is no particular requirement of having any particular number of presentation devices. In alternative embodiments, the 2D image might be transmitted to another computing device for additional processing before, or instead of, actually being presented to a user.
- The phrases “model”, “3D scene”, “3D sub-scene”, “rendering viewpoint”, “visualization of a scene”, “front”, “2D image”, and the like, all refer to facts and information about objects in the 3D scene and the 2D image or images used to represent those objects, as further described below.
- In one embodiment, a “model” includes information about what objects are to be represented in the 3D scene, as distinguished from a “3D scene”, which includes information about where objects are placed in an encompassing volume, what they look like, and what their effects are on viewing other such objects (i.e., whether they are opaque, transparent, translucent, reflective, and the like).
- In one embodiment, a “3D sub-scene” includes information similar to a 3D scene, but only for a selected portion of that 3D scene, such as for example a set of cubelets within a cube encompassing that 3D scene. However, in the context of the invention, there is no particular requirement of using a set of cubelets for allocating rendering commands. In addition to or instead of cubelets, the system might allocate rendering commands in response to other volumes, whether smoothly space-filling or not, such as for example tetrahedra or spheres.
- In one embodiment, the “rendering viewpoint” might be static, or might be dynamic, such as in response to (1) controls by a user, (2) a set of sensors, such as motion sensors focused on the user, (3) a time-varying parameter, such as in a roller-coaster ride, and the like. The “front” of a 3D scene is that 2D image presented to a viewer at the rendering viewpoint.
- In one embodiment, a “2D image” includes a set of information for 2D presentation, such as for example pixel values for color (e.g., red, green, and blue) or a set of presentable polygons or vectors. In the context of the invention, there is no particular requirement of any one selected representation of a 2D image, nor is there any particular requirement of actually presenting the 2D image to a user.
- The phrases (1) “network bandwidth”, (2) “multi-tier hierarchy”, (3) “switch”, and the like, refer respectively to (1) a rate at which information can be sent back or forth between the controlling device and the rendering devices, or among the rendering devices where appropriate, (2) an arrangement in which the controlling device is coupled to the rendering device using one or more intermediate devices, such as for example partial compositors, and (3) an arrangement in which the controlling device and the rendering devices are coupled, as further described below.
- The phrases (1) “scene splitting”, (2) “encompassing volume”, (3) “encompassing sub-volume”, (4) “cubelet”, and the like, refer to concepts relating to allocation of rendering commands by the controlling device to the rendering devices, as further described below.
- The scope and spirit of the invention is not limited to any of these definitions, or to specific examples mentioned therein, but is intended to include the most general concepts embodied by these and other terms.
- System Elements
-
FIG. 1 shows a block diagram of a system including a controlling device and a set of rendering devices. - A
system 100 includes elements as shown in FIG. 1, plus possibly other elements as described in the incorporated disclosure. These elements include at least a controlling device 110, a set of rendering devices 120, a (conceptual—not shown but understood by those skilled in the art) encompassing volume 130, and a (conceptual—not shown but understood by those skilled in the art) 2D image 140 capable of presentation. - The controlling
device 110 includes elements as shown in FIG. 1, plus possibly other elements as described in the incorporated disclosure. These elements include at least a model or database 111, a communication network 112, and a set of rendering commands 113. - The
rendering devices 120 each include elements as shown in FIG. 1, plus possibly other elements as described in the incorporated disclosure. These elements include, for each rendering device 120, at least an input port 121, a processor and memory 122, and an output port 123. - As described herein, the encompassing volume 130 includes elements as shown in
FIG. 1 , plus possibly other elements as described in the incorporated disclosure. These elements include at least the following: -
- a
3D scene 131 to be rendered (as represented by information available to the controllingdevice 110 or the rendering devices 120); - a rendering viewpoint 132 (as represented by information available to the controlling
device 110 or the rendering devices 120); and - a set of sub-volumes 133 (as determined by the controlling device 110).
- a
- As described herein, the
2D image 140 includes an image responsive to the3D scene 131 and therendering viewpoint 132. - After reading this application, it would be clear to those skilled in the art that the
2D image 140 is responsive to at least the following: -
- a
2D sub-image 141 presented by each of the sub-volumes 133, each responsive to therendering viewpoint 132; - a back-to-front
partial ordering 142 of the sub-volumes 133, also responsive to therendering viewpoint 132; and - a composition of each of those
2D sub-images 141, responsive to the back-to-frontpartial ordering 142.
- a
- As described herein, each
rendering device 120, allocated rendering commands for itssub-volume 133, need only compute the2D sub-image 141 for itsown sub-volume 133, responsive to therendering viewpoint 132. This has the effect of generating a2D sub-image 141 for eachsuch sub-volume 133. - After reading this application, it would be clear to those skilled in the art that each of the
2D sub-images 141 need only encompass those three faces (for a cubelet) of the sub-volume 142 viewable from therendering viewpoint 132. That2D sub-image 141 has a size proportional to O(1/n2), where n is a number ofrendering devices 120 on a side of a cubic arrangement thereof. - The
system 100 also optionally includes acompositing device 150. Thecompositing device 150 includes elements as shown inFIG. 1 , plus possibly other elements as described in the incorporated disclosure. These elements include at least aninput port 151, acompositing element 152, and anoutput port 153. - The
input port 151 is coupled to the2D sub-images 141, and to the back-to-frontpartial ordering 142. Thecompositing element 152 is coupled to theinput port 151, and generates the 2D image 140 (as represented by data in memory, storage, or a signal). Theoutput port 153 is coupled to the2D image 140. - The
system 100 also optionally includes apresentation device 160. Thepresentation device 160 is coupled to the 2D image 140 (as represented by data in memory, storage, or a signal), and is capable of presenting that2D image 140 to auser 170. - Although the
user 170 is shown herein as a person, in the context of the invention, there is no particular requirement that theuser 170 is so limited. Theuser 170 might include a group of people, a computer imaging or motion detection program, an image compression program such as JPEG or MPEG, a system including a broadcast or other distribution system for images, an analysis program for2D image 140, or even an artificial intelligence program capable of reviewing the2D image 140. - Method of Operation
-
FIG. 2 shows a process flow diagram of a method of using a system including a controlling device and a set of rendering devices. - Although described serially, the flow points and method steps of the
method 200 can be performed by separate elements in conjunction or in parallel, whether asynchronously or synchronously, in a pipelined manner, or otherwise. In the context of the invention, there is no particular requirement that the method must be performed in the same order in which this description lists flow points or method steps, except where explicitly so stated. - The
method 200 includes flow points and process steps as shown inFIG. 2 , plus possibly other flow points and process steps as described in the incorporated disclosure. These flow points and process steps include at least the following: -
- At a
flow point 210, the method 200 is ready to determine a 2D image 140 in response to a model, the model including a 3D scene 131 and a rendering viewpoint 132. - At an (optional)
step 211, further described below, the controlling device 110 determines a set of sub-volumes 133, and allocates them to the rendering devices 120. - In the context of the invention, there is no particular requirement that the controlling
device 110 allocates sub-volumes 133 on a one-for-one basis with rendering devices 120. - At a
step 212, the controlling device 110 allocates portions of the 3D scene 131 to the rendering devices 120, and sends them information identifying the rendering commands for those portions. - At a
step 213, the rendering devices 120 each render their allocated portions of the 3D scene 131 independently with respect to the rendering viewpoint 132, with the effect of each independently generating a 2D sub-image 141. - At a
step 214, further described below, the rendering devices 120 each couple their independently generated 2D sub-images 141 to the compositing device 150, responsive to the back-to-front partial ordering 142. - At a
step 215, further described below, the compositing device 150 combines the 2D sub-images 141 responsive to the rendering viewpoint 132, generating the complete 2D image 140. - At an (optional)
step 216, the presentation device 160 presents the complete 2D image 140 to the user 170. In one embodiment, the presentation device 160 might include more than one power wall, such as for example a cube of 6 power walls to give the illusion of being suspended within the 3D scene. In such embodiments, the method 200 would determine the 2D image 140 for each such power wall with respect to a distinct rendering viewpoint 132. - At a
flow point 220, the method 200 has finished determining a 2D image 140 in response to a model, the model including a 3D scene 131 and a rendering viewpoint 132. - In one embodiment, the
method 200 is repeated rapidly enough that the user 170 sees the 2D image 140 as a motion picture, with the effect that the user 170 sees the 3D scene 131 itself as a virtual reality motion picture. In such embodiments, the model might be responsive to user inputs or other inputs, with the effect that the 3D scene 131 and the rendering viewpoint 132 might change rapidly with time, and with the effect that the user 170 would perceive a view very much like actually interacting with a virtual reality as defined by the model.
Software Package Overview
- At a
- The
system 100 uses “Sub-Volumes” to split the 3D scene. These sub-volumes are initially defined by a configuration file (see the sample configuration file below). A sub-volume as implemented by the system 100 is represented by a cube (defined in 3 dimensions by xmin, xmax, ymin, ymax, zmin, zmax). The content of each sub-volume is rendered by an individual rendering device 120. The splitting of the 3D scene happens by assigning each object to at least one sub-volume. The criterion that determines where to assign an individual 3D object is the spatial overlap between the spatial representation of the 3D object (its “bounding box”) and all sub-volumes. The 3D object is copied onto all rendering devices 120 that are assigned to the sub-volumes that overlap or enclose the bounding box of that 3D object. This algorithm makes sure that every rendering device 120 has a copy of at least all the 3D objects it has to render.
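The assignment rule described in this paragraph (copy a 3D object to every rendering device whose sub-volume overlaps the object's bounding box) can be sketched as follows. The data shapes and names are assumptions for illustration, not the patent's code.

```python
# Minimal sketch of bounding-box assignment: an object is copied to every
# rendering device whose sub-volume overlaps or encloses its bounding box.

def boxes_overlap(a_min, a_max, b_min, b_max):
    """Axis-aligned boxes overlap iff their extents overlap on every axis."""
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

def assign_objects(objects, sub_volumes):
    """objects: {name: (bbox_min, bbox_max)}; sub_volumes: {node: (min, max)}.
    Returns {node: [names of objects the node must hold]}."""
    assignment = {node: [] for node in sub_volumes}
    for name, (obj_min, obj_max) in objects.items():
        for node, (sv_min, sv_max) in sub_volumes.items():
            if boxes_overlap(obj_min, obj_max, sv_min, sv_max):
                assignment[node].append(name)
    return assignment

# the two sub-volumes from the sample configuration file (split at x = 0)
subs = {"Render1": ((-1, -1, -1), (0, 1, 1)),
        "Render2": ((0, -1, -1), (1, 1, 1))}
objs = {"left": ((-0.9, 0, 0), (-0.5, 0.2, 0.2)),      # entirely in Render1
        "straddle": ((-0.2, 0, 0), (0.2, 0.1, 0.1))}   # crosses the x=0 plane
assignment = assign_objects(objs, subs)
```

An object straddling a sub-volume boundary is duplicated on both devices, which is exactly how the algorithm guarantees each device holds everything it must render.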
-
- #
- # Copyright (C) ModViz, Inc. 2004
- # All Rights Reserved
- #
- # This sample VGP configuration file uses two rendering nodes in alpha
- # compositing mode. Render1 renders a sub-volume encompassing the WORLD_VOLUME
- # minX to 0 and the entire WORLD_VOLUME in Y and Z. Render2 renders a
- # sub-volume encompassing 0 to WORLD_VOLUME maxX and the entire WORLD_VOLUME in
- # Y and Z. This splits the WORLD_VOLUME in half down the X=0 plane and gives
- # each render node half of the WORLD_VOLUME. These two render nodes send their
- # rendered buffers to the AppNode which composites and displays the results in
- # the original application context.
- File: # version number of config file
- VERSION=0.9
- # Application level configuration
- # CONTEXT_STRATEGY=which OGL context to use (LAST=last one created by the
- # application)
- # WORLD_VOLUME=bounding box of all 3D vertices in the application {minX,maxX,
- # minY,maxY, minZ,maxZ}
- AppNode:
- CONTEXT_STRATEGY=LAST
- # Rendering node configuration
- # NAME=unique name for this node
- # IP_ADDRESS=host ip address and port this node runs on (port is normally the
- # xinetd configured port)
- # SUB_VOLUME=normalized bounding box of the 3D vertices that should be sent to
- # this node {minX,maxX, minY,maxY, minZ,maxZ}
- RenderNode:
- NAME=Render1
- IP_ADDRESS=127.0.0.1:24900
- SUB_VOLUME={−1,0, −1,1, −1,1}
- RenderNode:
- NAME=Render2
- IP_ADDRESS=127.0.0.1:24902
- SUB_VOLUME={0,1, −1,1, −1,1}
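A configuration in the sample's format could be read with a small parser along the following lines. This is a hypothetical sketch, since the format is only known from the sample above; the function name is an assumption.

```python
# Rough sketch of reading RenderNode entries from text in the sample
# configuration file's format (NAME=..., IP_ADDRESS=..., SUB_VOLUME={...}).

def parse_render_nodes(text):
    """Collect NAME / IP_ADDRESS / SUB_VOLUME settings per RenderNode block."""
    nodes, current = [], None
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue                      # skip blanks and comments
        if line == "RenderNode:":
            current = {}
            nodes.append(current)
        elif "=" in line and current is not None:
            key, value = line.split("=", 1)
            if key == "SUB_VOLUME":       # "{minX,maxX, minY,maxY, minZ,maxZ}"
                value = [float(v) for v in
                         value.strip("{}").replace(" ", "").split(",")]
            current[key] = value
    return nodes

sample = """
RenderNode:
NAME=Render1
IP_ADDRESS=127.0.0.1:24900
SUB_VOLUME={-1,0, -1,1, -1,1}
RenderNode:
NAME=Render2
IP_ADDRESS=127.0.0.1:24902
SUB_VOLUME={0,1, -1,1, -1,1}
"""
nodes = parse_render_nodes(sample)
```

The parsed sub-volume bounds are what the controlling device would use when matching object bounding boxes against each node's region.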
- The parameter of a sub-volume can change dynamically if the
controlling device 110 determines a more optimal sub-volume configuration. More optimal is defined by a better load balance across all rendering devices 120; the optimum preferably has all rendering devices 120 needing the same time to render their individual parts of the 3D scene. - The change of the sub-volumes can be expensive (i.e., it takes a long time), because 3D objects have to be transferred from one
rendering device 120 to another. In order to prevent this costly operation, the system 100 implements an optional way of giving a copy of all 3D objects to all rendering devices 120. In order to prevent an overload of the memory 122 of the rendering devices 120, the system 100 can write 3D objects to a cheaper, slower memory with higher capacity on each rendering device 120 (e.g., a hard disk associated with that rendering device 120). - Although preferred embodiments are disclosed herein, many variations are possible which remain within the concept, scope, and spirit of the invention. These variations would become clear to those skilled in the art after perusal of this application.
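One hypothetical way to realize the load-balancing idea above, for the two-node x-split of the sample configuration, is to nudge the split plane toward the slower device so its slab shrinks. The proportional step rule below is an assumption for illustration, not the patent's algorithm.

```python
# Hedged sketch: rebalance the x = split_x plane between two rendering
# devices based on their measured per-frame render times. The gain factor
# and clamping are assumptions.

def rebalance_split(split_x, time_left, time_right, gain=0.25, lo=-1.0, hi=1.0):
    """Nudge the split plane toward the slower device's side."""
    total = time_left + time_right
    if total == 0:
        return split_x
    # positive when the left device is slower: shrink its slab (move plane left)
    imbalance = (time_left - time_right) / total
    new_split = split_x - gain * imbalance * (hi - lo)
    return max(lo, min(hi, new_split))   # keep the plane inside the volume

# left device takes 30 ms/frame, right takes 10 ms/frame: plane moves left
new_x = rebalance_split(0.0, 30.0, 10.0)
```

Repeating this each frame (or every few frames) converges the two render times toward each other, at the cost of transferring the objects whose sub-volume assignment changes, which is exactly the expense the paragraph above notes.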
- After reading this application, those skilled in the art will recognize that these alternative embodiments and variations are illustrative and are intended to be in no way limiting. After reading this application, those skilled in the art would recognize that the techniques described herein provide an enabling technology, with the effect that advantageous features can be provided that heretofore were substantially infeasible.
Claims (25)
1. A method, including steps of
allocating information representing a three-dimensional scene among a set of sub-scenes;
generating a two-dimensional sub-image for each sub-scene, responsive to a rendering viewpoint;
combining the two-dimensional sub-images, responsive to the rendering viewpoint.
2. A method as in claim 1 , including steps of presenting a result of the steps of combining.
3. A method as in claim 1 , including steps of sending information representing each sub-scene to a substantially independent computing device.
4. A method as in claim 1 , wherein the steps of generating include rendering the sub-scenes substantially concurrently and substantially independently.
5. A method as in claim 1 , wherein the sub-images include substantially compact and continuous planar regions.
6. A method as in claim 1 , wherein the sub-scenes include substantially compact and continuous spatial regions.
7. A method as in claim 1 , wherein the sub-scenes smoothly fill substantially the entire scene.
8. A method as in claim 1 , wherein the steps of allocating include steps of
determining a set of rendering commands associated with the scene; and
optimizing the set of sub-scenes with respect to at least one selected parameter.
9. A method as in claim 8 , wherein the parameter includes at least one of: a number of rendering commands, an amount of bandwidth for sending rendering commands, an amount of memory for maintaining rendering commands, an amount of time for performing rendering commands.
10. A method as in claim 8 , wherein the steps of optimizing include positioning planar borders between sub-scenes.
11. A method as in claim 8 , wherein
the sub-scenes include rectilinear sub-objects of a rectilinear object encompassing the scene; and
the steps of optimizing include positioning planar borders between sets of sub-scenes, with the effect of allocating selected spatial regions of the scene to selected sub-scenes.
12. A method as in claim 1 , wherein the steps of combining include
determining a partial ordering of the sub-images responsive to the rendering viewpoint; and
combining any overlapping sub-images in response to the partial ordering.
13. A method as in claim 12 , wherein the partial ordering is responsive to a back-to-front ordering of the sub-images responsive to the rendering viewpoint.
14. A method as in claim 12 , wherein the steps of combining overlapping sub-images include steps of
coupling the sub-images in a hierarchy responsive to the partial ordering; and
combining overlapping sub-images substantially concurrently and substantially independently.
15. A method as in claim 12 , wherein the steps of combining overlapping sub-images include steps of
coupling the sub-images using a switch responsive to the partial ordering; and
combining overlapping sub-images substantially concurrently and substantially independently.
16. A method as in claim 1 , wherein the steps of generating include steps of, for at least one selected sub-scene
allocating information representing that sub-scene among a set of sub-sub-scenes;
generating a sub-sub-image for each sub-sub-scene, responsive to the rendering viewpoint; and
combining the sub-sub-images, responsive to the rendering viewpoint.
17. A method as in claim 16 , wherein the at least one sub-scene is selected responsive to at least one of: a desired fineness of detail, a proximity of the rendering viewpoint to the sub-scene, a rate of change of the sub-scene, a relative range of angles within the sub-scene with respect to the rendering viewpoint.
18. A method as in claim 16 , wherein the at least one selected sub-scene is selected responsive to at least one of: a number of rendering commands, an amount of bandwidth for sending rendering commands, an amount of memory for maintaining rendering commands, an amount of time for performing rendering commands.
19. Apparatus including
a set of computing devices, at least one of which takes on the role of a controlling device, and at least one of which takes on the role of a rendering device;
a communication link between the controlling device and one or more rendering devices;
information, at the controlling device, representing a set of objects in a three-dimensional scene and a rendering viewpoint with respect to that scene; and
information, at one such rendering device, representing a two-dimensional sub-image associated with only a portion of that scene;
wherein at least one of those devices takes on the role of a compositing device.
20. Apparatus as in claim 19 , wherein that portion of the scene includes a substantially compact and continuous spatial sub-region of the scene.
21. Apparatus as in claim 19 , wherein that sub-image includes a substantially compact and continuous planar region.
22. Apparatus as in claim 19 , wherein the compositing device includes the controlling device.
23. Apparatus as in claim 19 , including information, at the compositing device, representing a back-to-front partial ordering of one or more such sub-images, with respect to the rendering viewpoint.
24. Apparatus as in claim 22 , wherein
the compositing device includes more than one computing device taking on the role of a portion of the compositing device;
the portions of the compositing device include a hierarchy responsive to the partial ordering.
25. Apparatus as in claim 22 , wherein
the compositing device includes more than one computing device taking on the role of a portion of the compositing device;
the portions of the compositing device include a switch responsive to the partial ordering.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/412,410 US20070070067A1 (en) | 2005-04-29 | 2006-04-26 | Scene splitting for perspective presentations |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US67625405P | 2005-04-29 | 2005-04-29 | |
US67624005P | 2005-04-29 | 2005-04-29 | |
US67624105P | 2005-04-29 | 2005-04-29 | |
US11/412,410 US20070070067A1 (en) | 2005-04-29 | 2006-04-26 | Scene splitting for perspective presentations |
Publications (1)
Publication Number | Publication Date |
---|---|
US20070070067A1 true US20070070067A1 (en) | 2007-03-29 |
Family
ID=37308546
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/412,410 Abandoned US20070070067A1 (en) | 2005-04-29 | 2006-04-26 | Scene splitting for perspective presentations |
US11/412,406 Active 2027-06-30 US7978204B2 (en) | 2005-04-29 | 2006-04-26 | Transparency-conserving system, method and computer program product to generate and blend images |
US11/412,413 Expired - Fee Related US7450129B2 (en) | 2005-04-29 | 2006-04-26 | Compression of streams of rendering commands |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/412,406 Active 2027-06-30 US7978204B2 (en) | 2005-04-29 | 2006-04-26 | Transparency-conserving system, method and computer program product to generate and blend images |
US11/412,413 Expired - Fee Related US7450129B2 (en) | 2005-04-29 | 2006-04-26 | Compression of streams of rendering commands |
Country Status (2)
Country | Link |
---|---|
US (3) | US20070070067A1 (en) |
WO (3) | WO2006119076A2 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100026783A1 (en) * | 2008-08-01 | 2010-02-04 | Real D | Method and apparatus to encode and decode stereoscopic video data |
US20160094837A1 (en) * | 2014-09-30 | 2016-03-31 | 3DOO, Inc. | Distributed stereoscopic rendering for stereoscopic projecton and display |
CN112362289A (en) * | 2021-01-13 | 2021-02-12 | 中国空气动力研究与发展中心高速空气动力研究所 | Aircraft split test method and device, computer equipment and readable storage medium |
CN114708369A (en) * | 2022-03-15 | 2022-07-05 | 荣耀终端有限公司 | Image rendering method and electronic equipment |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7190364B2 (en) * | 2004-08-09 | 2007-03-13 | Siemens Medical Solution Usa, Inc. | System and method for polygon-smoothing in texture-based volume rendering |
US20070070067A1 (en) | 2005-04-29 | 2007-03-29 | Modviz, Inc. | Scene splitting for perspective presentations |
US8878833B2 (en) * | 2006-08-16 | 2014-11-04 | Barco, Inc. | Systems, methods, and apparatus for recording of graphical display |
US20080143737A1 (en) * | 2006-12-15 | 2008-06-19 | Qualcomm Incorporated | Post-Render Graphics Transparency |
US8681167B2 (en) | 2008-09-23 | 2014-03-25 | Intel Corporation | Processing pixel planes representing visual information |
US8838797B2 (en) * | 2009-07-10 | 2014-09-16 | Empire Technology Development Llc | Dynamic computation allocation |
US20120169748A1 (en) * | 2011-01-03 | 2012-07-05 | Sifteo Inc. | Distributed graphics engine |
CN102647562A (en) * | 2011-02-22 | 2012-08-22 | 新奥特(北京)视频技术有限公司 | Track synthesis method and track synthesis system for storyboards |
CN102572304A (en) * | 2011-12-13 | 2012-07-11 | 广东威创视讯科技股份有限公司 | Image addition processing method and device |
US8860773B2 (en) | 2012-10-17 | 2014-10-14 | The Mitre Corporation | Telepresence for remote collaboration with a gestural interface |
US9583018B1 (en) * | 2013-06-12 | 2017-02-28 | The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration | Reconfigurable image generator |
KR102101834B1 (en) * | 2013-10-08 | 2020-04-17 | 삼성전자 주식회사 | Image processing apparatus and method |
US20150194128A1 (en) * | 2014-01-07 | 2015-07-09 | Nvidia Corporation | Generating a low-latency transparency effect |
US10475227B1 (en) * | 2014-02-28 | 2019-11-12 | Ansys, Inc. | Systems and methods for three dimensional computation and visualization using a parallel processing architecture |
US20150350295A1 (en) * | 2014-05-28 | 2015-12-03 | Joel Solomon Isaacson | System And Method For Loading Assets During Remote Execution |
AU2015202359A1 (en) | 2015-05-05 | 2016-11-24 | Canon Kabushiki Kaisha | Parallelising per-pixel compositing |
CN105374070B (en) * | 2015-12-11 | 2018-07-06 | 中国航空工业集团公司西安航空计算技术研究所 | A kind of 3D image processing algorithms modeling and simulating method |
JP6808419B2 (en) * | 2016-09-26 | 2021-01-06 | キヤノン株式会社 | Image processing system and its control method |
US10452868B1 (en) | 2019-02-04 | 2019-10-22 | S2 Systems Corporation | Web browser remoting using network vector rendering |
US10552639B1 (en) | 2019-02-04 | 2020-02-04 | S2 Systems Corporation | Local isolator application with cohesive application-isolation interface |
US11880422B2 (en) | 2019-02-04 | 2024-01-23 | Cloudflare, Inc. | Theft prevention for sensitive information |
US10558824B1 (en) | 2019-02-04 | 2020-02-11 | S2 Systems Corporation | Application remoting using network vector rendering |
Citations (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5182797A (en) * | 1989-10-23 | 1993-01-26 | International Business Machines Corporation | Multi-processor graphics display system for displaying hierarchical data structures |
US5265203A (en) * | 1990-09-14 | 1993-11-23 | Hughes Aircraft Company | Hardware multiprocess scheduler in a graphics rendering processor |
US5371849A (en) * | 1990-09-14 | 1994-12-06 | Hughes Aircraft Company | Dual hardware channels and hardware context switching in a graphics rendering processor |
US5398315A (en) * | 1992-12-30 | 1995-03-14 | North American Philips Corporation | Multi-processor video display apparatus |
US5548737A (en) * | 1991-07-10 | 1996-08-20 | International Business Machines Corporation | Dynamic load balancing for a multiprocessor pipeline by sorting instructions based on predetermined execution time |
US5655120A (en) * | 1993-09-24 | 1997-08-05 | Siemens Aktiengesellschaft | Method for load balancing in a multi-processor system where arising jobs are processed by a plurality of processors under real-time conditions |
US5714997A (en) * | 1995-01-06 | 1998-02-03 | Anderson; David P. | Virtual reality television system |
US5745125A (en) * | 1996-07-02 | 1998-04-28 | Sun Microsystems, Inc. | Floating point processor for a three-dimensional graphics accelerator which includes floating point, lighting and set-up cores for improved performance |
US5872972A (en) * | 1996-07-05 | 1999-02-16 | Ncr Corporation | Method for load balancing a per processor affinity scheduler wherein processes are strictly affinitized to processors and the migration of a process from an affinitized processor to another available processor is limited |
US5896139A (en) * | 1996-08-01 | 1999-04-20 | Platinum Technology Ip, Inc. | System and method for optimizing a scene graph for optimizing rendering performance |
US6128642A (en) * | 1997-07-22 | 2000-10-03 | At&T Corporation | Load balancing based on queue length, in a network of processor stations |
US6243107B1 (en) * | 1998-08-10 | 2001-06-05 | 3D Labs Inc., Ltd. | Optimization of a graphics processor system when rendering images |
US6292822B1 (en) * | 1998-05-13 | 2001-09-18 | Microsoft Corporation | Dynamic load balancing among processors in a parallel computer |
US6292200B1 (en) * | 1998-10-23 | 2001-09-18 | Silicon Graphics, Inc. | Apparatus and method for utilizing multiple rendering pipes for a single 3-D display |
US20020002587A1 (en) * | 2000-07-17 | 2002-01-03 | Siemens Aktiengesellschaft | Method and Arrangement for Determining Current Projection Data for a Projection of a Spatially Variable Area |
US20020015055A1 (en) * | 2000-07-18 | 2002-02-07 | Silicon Graphics, Inc. | Method and system for presenting three-dimensional computer graphics images using multiple graphics processing units |
US6373977B1 (en) * | 1998-03-22 | 2002-04-16 | Hewlett-Packard Company | Methods and apparatus for constructing a 3D model of a scene and rendering new views of the scene |
US6438576B1 (en) * | 1999-03-29 | 2002-08-20 | International Business Machines Corporation | Method and apparatus of a collaborative proxy system for distributed deployment of object rendering |
US6459429B1 (en) * | 1999-06-14 | 2002-10-01 | Sun Microsystems, Inc. | Segmenting compressed graphics data for parallel decompression and rendering |
US20030117397A1 (en) * | 2001-12-21 | 2003-06-26 | Hubrecht Alain Yves Nestor | Systems and methods for generating virtual reality (VR) file(s) for complex virtual environments |
US20030191860A1 (en) * | 2002-04-05 | 2003-10-09 | Gadepalli Krishna K. | Accelerated collaboration of high frame rate applications |
US20030189574A1 (en) * | 2002-04-05 | 2003-10-09 | Ramsey Paul R. | Acceleration of graphics for remote display using redirection of rendering and compression |
US20040066384A1 (en) * | 2002-09-06 | 2004-04-08 | Sony Computer Entertainment Inc. | Image processing method and apparatus |
US20040125111A1 (en) * | 2002-12-30 | 2004-07-01 | Silicon Graphics, Inc. | System, method, and computer program product for near-real time load balancing across multiple rendering pipelines |
US20040179007A1 (en) * | 2003-03-14 | 2004-09-16 | Bower K. Scott | Method, node, and network for transmitting viewable and non-viewable data in a compositing system |
US20070262989A1 (en) * | 2003-08-18 | 2007-11-15 | Fovia, Inc. | Method and system for adaptive direct volume rendering |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2683341A1 (en) * | 1991-10-30 | 1993-05-07 | Apple Computer | Process and apparatus for reducing data transmission by placing data in an indexed cache memory |
JP3736859B2 (en) * | 1992-05-12 | 2006-01-18 | 日本フイルコン株式会社 | A device that creates operational data for the machine |
EP0718980A1 (en) * | 1994-12-20 | 1996-06-26 | International Business Machines Corporation | Data compression method of individual sequences of strings of a data stream based on a dictionary and device for performing the same |
US5841432A (en) * | 1996-02-09 | 1998-11-24 | Carmel; Sharon | Method and system of building and transmitting a data file for real time play of multimedia, particularly animation, and a data file for real time play of multimedia applications |
EP1365385B1 (en) * | 1998-11-09 | 2012-06-13 | Broadcom Corporation | Graphics display system with processing of graphics layers, alpha blending and composition with video data |
US6335765B1 (en) * | 1999-11-08 | 2002-01-01 | Weather Central, Inc. | Virtual presentation system and method |
JP3262772B2 (en) | 1999-12-17 | 2002-03-04 | 株式会社ナムコ | Image generation system and information storage medium |
US7505046B1 (en) * | 2000-05-02 | 2009-03-17 | Adobe Systems Incorporated | Preserving opaque-like rendering in transparent 2D graphics using knockout groups |
US6924807B2 (en) * | 2000-03-23 | 2005-08-02 | Sony Computer Entertainment Inc. | Image processing apparatus and method |
US6795076B2 (en) * | 2001-06-28 | 2004-09-21 | Sun Microsystems, Inc. | Graphics system with real-time convolved pixel readback |
US6933947B2 (en) * | 2002-12-03 | 2005-08-23 | Microsoft Corporation | Alpha correction to compensate for lack of gamma correction |
US7075541B2 (en) * | 2003-08-18 | 2006-07-11 | Nvidia Corporation | Adaptive load balancing in a multi-processor graphics processing system |
US7483583B2 (en) * | 2003-12-16 | 2009-01-27 | Infocus Corporation | System and method for processing image data |
US20060195464A1 (en) * | 2005-02-28 | 2006-08-31 | Microsoft Corporation | Dynamic data delivery |
US20070070067A1 (en) | 2005-04-29 | 2007-03-29 | Modviz, Inc. | Scene splitting for perspective presentations |
- 2006-04-26 US US11/412,410 patent/US20070070067A1/en not_active Abandoned
- 2006-04-26 US US11/412,406 patent/US7978204B2/en active Active
- 2006-04-26 US US11/412,413 patent/US7450129B2/en not_active Expired - Fee Related
- 2006-04-27 WO PCT/US2006/016362 patent/WO2006119076A2/en active Search and Examination
- 2006-04-27 WO PCT/US2006/016363 patent/WO2006119077A2/en active Application Filing
- 2006-04-27 WO PCT/US2006/016367 patent/WO2006119078A2/en active Application Filing
Non-Patent Citations (2)
Title |
---|
Gary M. Davis, "Down on the Farm" 2003, Discreet Training Specialist, 3ds max, http://www.visualZ.com * |
Rowlan, J.S.; Lent, G.E.; Gokhale, N.; Bradshaw, S.; , "A distributed, parallel, interactive volume rendering package," Visualization, 1994., Visualization '94, Proceedings., IEEE Conference on , vol., no., pp.21-30, CP2, 17-21 Oct 1994 * |
Also Published As
Publication number | Publication date |
---|---|
US20060244758A1 (en) | 2006-11-02 |
WO2006119076A2 (en) | 2006-11-09 |
WO2006119077A2 (en) | 2006-11-09 |
WO2006119078A2 (en) | 2006-11-09 |
US7978204B2 (en) | 2011-07-12 |
WO2006119077A3 (en) | 2007-12-13 |
US7450129B2 (en) | 2008-11-11 |
US20060248571A1 (en) | 2006-11-02 |
WO2006119076A3 (en) | 2008-05-08 |
WO2006119078A3 (en) | 2007-01-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20070070067A1 (en) | Scene splitting for perspective presentations | |
EP3673463B1 (en) | Rendering an image from computer graphics using two rendering computing devices | |
US6525731B1 (en) | Dynamic view-dependent texture mapping | |
US7405734B2 (en) | Method and system for presenting three-dimensional computer graphics images using multiple graphics processing units | |
US5694533A (en) | 3-Dimensional model composed against textured midground image and perspective enhancing hemispherically mapped backdrop image for visual realism | |
JP5891426B2 (en) | An image drawing apparatus, an image drawing method, and an image drawing program for drawing an all-around stereoscopic image | |
US8042094B2 (en) | Architecture for rendering graphics on output devices | |
US20070279411A1 (en) | Method and System for Multiple 3-D Graphic Pipeline Over a Pc Bus | |
US20080129819A1 (en) | Autostereoscopic display system | |
CN102834849A (en) | Image drawing device for drawing stereoscopic image, image drawing method, and image drawing program | |
US20040085310A1 (en) | System and method of extracting 3-D data generated for 2-D display applications for use in 3-D volumetric displays | |
US11417060B2 (en) | Stereoscopic rendering of virtual 3D objects | |
KR102059732B1 (en) | Digital video rendering | |
US6559844B1 (en) | Method and apparatus for generating multiple views using a graphics engine | |
CN106412562A (en) | Method and system for displaying stereoscopic content in three-dimensional scene | |
Nonaka et al. | Hybrid hardware-accelerated image composition for sort-last parallel rendering on graphics clusters with commodity image compositor | |
Pomi et al. | Interactive ray tracing for virtual TV studio applications | |
JP2000030080A (en) | Virtual reality system | |
Harish et al. | Designing perspectively correct multiplanar displays | |
Yin et al. | Multi-screen Tiled Displayed, Parallel Rendering System for a Large Terrain Dataset. | |
CN112911260A (en) | Multimedia exhibition hall sand table projection display system | |
Burnett | Light‐Field Displays and Extreme Multiview Rendering | |
JPH11184453A (en) | Display device and control method therefor, computer readable memory | |
Borgeat et al. | Collaborative visualization and interaction for detailed environment models | |
Schulze-Döbold | Interactive volume rendering in virtual environments |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: NVIDIA CORPORATION, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MODVIZ, INC.;REEL/FRAME:020746/0597 Effective date: 20080225 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION |