WO2008064352A2 - Systems for immersive environments with multiple points of view - Google Patents

Info

Publication number
WO2008064352A2
WO2008064352A2 (PCT/US2007/085459; US2007085459W)
Authority
WO
WIPO (PCT)
Prior art keywords
displays
venue
viewer
video
hardware
Prior art date
Application number
PCT/US2007/085459
Other languages
French (fr)
Other versions
WO2008064352A3 (en)
Inventor
Mark W. Miles
Original Assignee
Miles Mark W
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Miles Mark W filed Critical Miles Mark W
Publication of WO2008064352A2 publication Critical patent/WO2008064352A2/en
Publication of WO2008064352A3 publication Critical patent/WO2008064352A3/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 - Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454 - Digital output to display device; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 - Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 - Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423 - Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F3/1446 - Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display, display composed of modules, e.g. video walls
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2300/00 - Aspects of the constitution of display devices
    • G09G2300/02 - Composition of display devices
    • G09G2300/026 - Video wall, i.e. juxtaposition of a plurality of screens to create a display screen of bigger dimensions

Abstract

In one embodiment, the invention provides a method for creating a viewspace within an existing venue whose primary purpose is other than to provide an immersive experience to a viewer. The viewspace may be defined by a plurality of displays at least some of which may be spatially separated. The displays may be driven by a plurality of video streams, each portraying a different view of the environment, the streams being synchronized with respect to a common point in time.

Description

SYSTEMS FOR IMMERSIVE ENVIRONMENTS WITH MULTIPLE POINTS OF VIEW
FIELD OF THE INVENTION
Embodiments of this invention relate to immersive environments.
BACKGROUND
An immersive environment refers to an environment designed to simulate an object space in such a way that an individual in real space (i.e., physical space) experiences the object space with diminished sensory awareness of the real space and heightened awareness of the immersive environment, creating the illusion that the individual is no longer in the real space but in the immersive environment. One example of an immersive environment is a computer-generated space which can be seen through the use of appropriate display media. An immersive environment may be completely synthesized, having no imagery from the real world. For example, many recent computer-generated movies do not include images captured from the real world but are instead populated entirely by characters, settings, and objects that are generated using computer software.
A viewspace, as described in US Patent Application No. 11/136,111, hereby incorporated by reference, refers to a location in real space which is visually augmented by the presence of multiple windows which can portray imagery or visual content corresponding to an immersive environment. A viewspace may be inside a building or outside, depending on the capabilities of the display devices used for the windows.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 shows an example of a multichannel viewspace environment configured for entertainment, in accordance with one embodiment of the invention;
Figure 2 illustrates a system architecture for generating and portraying a multichannel viewspace, in accordance with one embodiment of the invention;
Figure 3 illustrates a viewspace in a physical environment with different display orientations, positions, and spacings, in accordance with one embodiment of the invention;
Figure 4 shows three vertical display configurations, in accordance with one embodiment of the invention; and
Figures 5A and 5B show full and magnified views of a viewspace template, in accordance with one embodiment of the invention.
DETAILED DESCRIPTION
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the invention. It will be apparent, however, to one skilled in the art that the invention can be practiced without these specific details.
Reference in this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not other embodiments.
Referring now to Figure 1, an architecture that can be used to generate and support a viewspace, in accordance with one embodiment, is shown. The architecture comprises a machine-readable medium 100 which is connected via a bus 104 to a processing unit 102 and to a video signal generation array 110. Video signal generation array 110 is connected via a transmission array 114 to a display array 116 which is visible to a viewer 118.
Machine-readable medium 100, which may be in the form of an array of hard disk drives, optical disks, tapes, or other high-capacity media, is used to store digital content in the form of files of a variety of formats. These formats include but are not limited to mpeg1, mpeg2, mpeg4, and DVD formats.
Video signal generator 108 is one element in the signal generation array 110, which provides video signals for transmission. Processing unit 102 may be in the form of a dedicated piece of electronic hardware which resides in its own enclosure and directs the flow of data from storage medium 100 to video signal generation array 110. Additionally, in one embodiment, the unit 102 provides a control interface to the outside world. For this embodiment, each of the elements 108 of the video signal generation array 110, and the storage medium 100, also reside in their own physical enclosures, and communication and control occur over a network such as Ethernet or some other format. The combination of storage medium 100, processing unit 102, and video signal generator 108 is collectively referred to as a video server.
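By way of illustration only, the relationship between these components can be sketched in a few lines of Python. The class and method names below are assumptions introduced for exposition, not terminology or an implementation from this disclosure; the sketch simply models a processing unit directing one stored stream to each video signal generator in the array, whether the parts share an enclosure or sit on a network.

```python
# Illustrative sketch only: class and method names are assumptions, not the
# patent's terminology.  A "video server" here is a processing unit that
# directs stored content to an array of video signal generators, one stream
# per generator/display channel.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class StorageMedium:
    files: Dict[str, bytes]          # filename -> encoded video payload

@dataclass
class VideoSignalGenerator:
    channel: int
    def emit(self, payload: bytes) -> str:
        # Stand-in for producing an actual video signal on this channel.
        return f"channel {self.channel}: emitting {len(payload)} bytes"

@dataclass
class ProcessingUnit:
    storage: StorageMedium
    generators: List[VideoSignalGenerator]
    def play(self, stream_names: List[str]) -> List[str]:
        # One stream per generator: each portrays a different view of the
        # same environment.
        return [gen.emit(self.storage.files[name])
                for gen, name in zip(self.generators, stream_names)]

server = ProcessingUnit(
    StorageMedium({"view_a.m2v": b"\x00" * 1024, "view_b.m2v": b"\x00" * 2048}),
    [VideoSignalGenerator(0), VideoSignalGenerator(1)],
)
print(server.play(["view_a.m2v", "view_b.m2v"]))
```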
Video signal generator 108 may be capable of reading data directly from media 100 across bus 104, though other configurations are possible. For example, the video signal generator 108 may accept data from the media 100 indirectly through the processing unit 102.
In another embodiment the data on media 100 may not be in the form of video information or compressed video information. Instead it may be in the form of files containing graphics drawing commands (VRML and X3D are examples), a graphics command language which adheres to a certain standard (such as OpenGL (Open Graphics Library) or Direct3D), and/or a sequence of represented geometric shapes (polygons, non-uniform rational b-splines (NURBS)). In any of these cases, the video signal generator 108 contains additional computing power to generate 3D imagery based on these files and is often referred to as a graphics processing unit or GPU. The video generator or GPU thus synthesizes or renders imagery in real time, based on the representational data stored on the media, as well as the video signals which correspond to that imagery. In this configuration, the data supplied to the video generation array 110 also has the potential to be generated in real time rather than read from a storage medium.
Boundary 106 indicates that another configuration is possible whereby all three components, storage 100, control 102, and signal generation array 110, reside in a single physical enclosure. One example of such a configuration would be a PC wherein the storage medium resides at least partially within the same enclosure, the control function of 102 is performed by the CPU, and the video signal generation derives from an array of specially designed boards which reside on the internal communications bus 104. This bus may be any of a number of formats, such as PCI, PCI-X 2.0, or PCI Express, though others are possible.
Signal generation array 110 is connected to an array of displays 116 via transmission array 114. Transmission array 114 may be in the form of an electric or optical cable array, or some form of wireless network based on radio, infrared, or other communications means. Display array 116 comprises a collection of video displays which present individual views or perspectives on the overall environment which is represented in the data. The environment, and therefore the content as represented by the video data, can change over time. Thus, the viewer 118 is presented with an array of dynamic imagery which is designed to enhance his or her experience.
Whether the video server exists as a collection of individual components connected by a network, or as components which are integrated in a single enclosure and reside on a bus, there will be a requirement to provide for data stream synchronization. Each video data stream comprises a sequence of video frames which can be generated at one of several standard video playback rates such as 24 frames per second (fps) or 30 fps. The data for each stream is initially created such that frame 1 of a particular stream portrays a view of the environment at the same point in time that frame 1 of another stream portrays that same environment. In order for continuity to be maintained during playback (or real-time generation), the individual streams must be played back in a synchronized fashion. Thus, frame 1 of stream X corresponds to frame 1 of stream Y, frame 2 of stream X corresponds to frame 2 of stream Y, and so on for the length of the content. Thus, all the streams portray their different respective views of the environment, but they all portray the same point in time in the evolution of the environment.
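The frame correspondence described above can be illustrated with a minimal sketch, given here as an assumption for exposition rather than an implementation of the system: a shared clock advances every stream by one frame per tick, so index i of each stream is presented together at the target playback rate.

```python
# Minimal sketch (assumed, not from the patent) of frame-aligned playback:
# every stream must present frame i at the same instant, so a scheduler
# advances all streams by one frame per tick of a shared clock.
import time

def play_synchronized(streams, fps=24, present=print):
    """streams: dict mapping stream name -> list of frames (any objects).
    The streams are assumed pre-aligned so that index i in every stream
    depicts the same instant in the portrayed environment."""
    frame_period = 1.0 / fps
    length = min(len(frames) for frames in streams.values())
    for i in range(length):
        tick_start = time.monotonic()
        # Present frame i of every stream together.
        for name, frames in streams.items():
            present(f"{name}: frame {i} -> {frames[i]}")
        # Sleep off the remainder of the frame period to hold the playback rate.
        elapsed = time.monotonic() - tick_start
        time.sleep(max(0.0, frame_period - elapsed))

# Example: two 24 fps streams portraying different views of the same moments.
play_synchronized({"stream_X": ["x0", "x1", "x2"],
                   "stream_Y": ["y0", "y1", "y2"]}, fps=24)
```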
Referring now to Figure 2, methods for synchronization in accordance with one embodiment are illustrated. Processing unit 200 is shown connected to bus 204, on which reside the elements of video signal generation array 202.
Synchronization bus 208 is also shown connecting the elements of array 202. Extension bus 206 connects primary bus 204 to supplemental bus 210, on which reside the elements of supplemental video signal generation array 212. PC array 214 comprises a collection of individual PCs 216, which are connected by external bus 218. A PC nominally refers to a personal computer, though it may be any kind of unit which comprises at least a central processing unit, storage, an interface, and an interconnect such as an internal bus for connecting additional processing hardware.
Synchronization may occur in a number of ways, but generally requires some communication between the elements of the signal generation arrays 202 and 212, and between signal generation arrays 202 and 212 and processing unit 200. In a mode known as software synchronization, software running on the processing unit 200 serves to start and stop the playback of content, as well as monitor the synchronization of the video streams. If a lack of synchronization is detected, for example if signal generation element 1 falls behind the rest of the array, then the processing unit can issue instructions for the element to temporarily speed up, or for the array to slow down. Both detection and commands occur via control data which is transmitted over the bus 204. In a mode known as hardware synchronization, processing unit 200 starts and stops playback, but synchronization signals are transmitted via synchronization bus 208. In this case one of the elements in the signal generation array is selected as master and generates the master signal to which all the other elements synchronize themselves. Alternatively, this master signal is generated by an external source and all of the elements synchronize to it. The size of the signal generation array may be increased by extending the size of the bus using bus extension 206, which effectively connects bus 204 to supplemental bus 210, upon which supplemental signal generation array 212 resides. The overall capability of the system can therefore be increased by increasing the length of the bus and the number of elements connected to it. PC array 214 is one example. Each PC 216 contains an array of video signal generation units and a processing unit as well as storage. Synchronization in this case occurs over external bus 218 using either software or hardware synchronization.
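The software-synchronization decision can likewise be sketched. The function name, polling model, and tolerance below are illustrative assumptions, not the protocol of this disclosure: the processing unit compares the frame counters reported by each generator, asks lagging elements to temporarily speed up, and falls back to slowing the whole array down when most elements lag.

```python
# Hedged sketch of the "software synchronization" mode described above.
# Element ids, the polling model, and the tolerance are illustrative
# assumptions, not taken from the patent.

def resynchronize(frame_counters, tolerance=2):
    """frame_counters: dict mapping element id -> last frame index reported.
    Returns a dict of commands ('speed_up' / 'slow_down') per element."""
    newest = max(frame_counters.values())
    commands = {}
    for element, counter in frame_counters.items():
        lag = newest - counter
        if lag > tolerance:
            # Element has fallen behind the rest of the array: ask it to
            # temporarily speed up.
            commands[element] = "speed_up"
    if len(commands) > len(frame_counters) // 2:
        # If most elements lag, it is simpler to slow the whole array down.
        commands = {element: "slow_down" for element in frame_counters}
    return commands

# Example: element 1 has fallen three frames behind elements 2 and 3.
print(resynchronize({"gen_1": 117, "gen_2": 120, "gen_3": 120}))
```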
The content can be generated by any number of techniques, ranging from computer graphics (CG) based software, which includes off-the-shelf products such as MAYA and Lightwave, to other sources. Additionally, content may be generated from animated sources, including but not limited to two-dimensional imagery drawn by hand or with the help of a computer, and live imagery captured using a specially modified camera or camera array. Existing two-dimensional imagery (video footage, or other imagery generated by the aforementioned sources) can be modified, put into 3D form, and formatted in a way such that it can be portrayed on the array. Content may also be generated in real time via output from the aforementioned camera source, or by 3D image generation systems which are capable of the required speeds. Multiple combinations of the above sources are also possible. Content video can be played back or generated as a series of separate environments and data stream sets which differ in appearance and action, or as a single long sequence and associated data stream which portrays one environment.
Referring now to Figure 3, a viewspace 300 is illustrated, and comprises a collection of video displays 302 and a joined array of displays 304. Video displays 302 and 304 may come from one or a combination of different categories including flat panel displays (FPDs), front projection displays, and rear projection displays. Technologies for FPDs include LCDs (liquid crystal displays) and plasma displays. Front projection displays are essentially reflective screens which have images projected onto them from the viewer side via a projection unit which generates the light. Rear projection displays are essentially transmissive screens which have images projected onto them from the side opposite that of the viewer via a projection unit which generates the light.
Viewers 306 reside within the array. The displays are oriented in an encompassing fashion within a physical space in a venue, that is to say, in a way which gives the viewer or viewers 306 a sense of being immersed within the environment, though many other configurations are possible. The displays are positioned in a combination of different positions and orientations that is optimized for the physical space in which they reside, and for the viewer or viewers 306. The majority of the displays are separated spatially, by distances that are significantly larger than the size of a pixel.
Joined display array 304 is an exception in that it takes two or more video displays and combines them so that the distance between them is within a factor of two or three of the size of a pixel within the display. Spatially separating the displays simplifies the installation of such an array into an existing physical space. The displays may be positioned in a more arbitrary fashion depending on what physical obstacles exist within the space. By crafting the content portrayed on the displays in an appropriate fashion, the separation does not have a major impact on how the content is perceived. In cases where the physical interior allows, multiple displays may be joined to make a larger seamless display like joined display 304. There are some advantages in joining displays, but the cost is increased complexity of installation, alignment of the displays, and adaptation to the features of the physical space.
Referring now to Figure 4, three configurations for vertically paired displays are shown. Configuration 402 has screens 402, which are at a 90-degree vertical angle and driven by projectors 404, which are oriented normal to them. Light path 408 is shown being partially obstructed by the viewer 406.
Configuration 410 reveals one option for reducing this obstruction. In this case, lower screen 412 is shown mounted at an angle which is greater than 90 degrees, and projector 414 is mounted at an angle which complements this orientation. The consequence is that viewer 418 no longer obstructs light path 420, even though the viewer is the same distance from screen 412.
Configuration 420 reveals yet a third solution. In this case screen 422 is shown mounted vertically, but is now configured for rear projection from suitably positioned projector 424. In this configuration, the light path is protected from obstruction by the viewer 426, regardless of the viewer's position.
Referring now to Figure 5A, an image is shown which was generated from data describing a physical space corresponding to a venue in the form of a bar, which has a viewspace installation resident in its interior. The bar contains the normal interior accoutrements such as tables, chairs, a stage, and lighting, as well as an actual bar where drinks are served. In addition, the data set portrays patrons 502 and reflective screen array 500. The bar, being a place where people can meet and drink, exemplifies an existing venue that serves a purpose other than to provide an immersive experience to a viewer. In one embodiment, disclosed herein are methods and systems for creating an immersive environment or viewspace in an existing venue whose primary purpose is other than to provide an immersive experience to a viewer. In one embodiment, a software model for a venue in which a viewspace is to be installed is created. The software model may be defined by a data set comprising camera position, orientation, and parameter information relating to the venue. An example of a data set is shown in magnified detail in Figure 5B, where one subset of the screen array 510 is shown facing patron 512. Located near patron 512 is virtual camera array 514. Virtual camera array 514 serves as a tool used by some of the aforementioned CG design tools, such as MAYA and Lightwave, to provide information on how the imagery shown in screen array 510 should be portrayed. In particular, information including but not limited to field of view, depth of focus, orientation, and viewer perspective is associated with each of the cameras. In the figure, there are four cameras which correspond to the four screens of subset 510. In this case, the cameras are positioned near patron 512 so that they can approximate the perspective of a viewer located in that position who is observing the portions of the virtual environment which are displayed on the screen array 510.
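The per-camera information associated with such a template can be pictured as a small data structure. The following sketch is illustrative only; the class names, fields, and numeric values are assumptions rather than a template file format defined by this disclosure.

```python
# Minimal sketch (assumed names and values) of the per-camera data a
# viewspace template associates with each screen in the array.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class VirtualCamera:
    screen_id: str                              # screen in the array this camera feeds
    position: Tuple[float, float, float]        # placed near the viewer's position
    orientation: Tuple[float, float, float]     # yaw, pitch, roll in degrees
    field_of_view: float = 60.0                 # horizontal FOV in degrees
    depth_of_focus: float = 5.0                 # metres

@dataclass
class ViewspaceTemplate:
    venue_name: str
    cameras: List[VirtualCamera] = field(default_factory=list)

# Four cameras for the four screens of subset 510, all approximating the
# perspective of a patron seated at roughly the same spot.
template = ViewspaceTemplate("example_bar", [
    VirtualCamera(f"screen_{i}", (1.5, 1.2, 0.0), (i * 25.0, 0.0, 0.0))
    for i in range(4)
])
print(len(template.cameras))
```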
All of the data represented in this data set, which can include but is not limited to physical features, appearances, lighting and surface textures, physical elements (furniture, patrons, other fixtures) of the interior space, position and orientation of reflective screens, and position and associated data of the virtual cameras, when combined is known as a Viewspace template. Templates may reside in any number of different file formats (.obj, .dxf, Maya files, etc.). The template functions as a design tool for use by a content creator. Such a template can be created using any off-the-shelf architectural design or 3D modeling tool (such as AutoCAD or SketchUp) which can be used to generate the physical features of the space. Positioning and features of the cameras can be subsequently incorporated using a CG design tool, though some aspects may also be created using the 3D modeling tool. The resulting data set, the template, may then be used as a means to generate the Viewspace content. The simplest form of a template, an undressed template, contains camera information and screen position/orientation information only. The more complex version, a dressed template, includes detail on the physical features of the interior of the real space.
In one process, the template is loaded into a CG design tool and embedded inside another 3D environment which represents the content to be created. The cameras serve to record or film their respective perspectives of this external world, and the screens portray what the cameras have recorded. The resulting collection of sequences may then be played back on the computer, in a way which juxtaposes the imagery recorded in the individual screens against the visual representation of the interior space. In this fashion, a content creator may visually observe how their content looks and feels in the context of the interior space. The nature of the content or elements within may be adjusted accordingly to fit or interact with the real physical space in a way which is more pleasing and appealing. Objects or elements which pass from one screen to another may be moved or positioned such that their speed and trajectory fit better given the real geometry of the interior. Additionally, more subtle effects like color, lighting, and surface textures, among others, may be modeled and tested in advance.
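A minimal sketch of this preview step follows. The data shapes and names are assumptions for illustration only: low-resolution sequences recorded by each virtual camera are stepped through in lockstep and shown at the modeled positions of their screens, which is the juxtaposition a content creator would inspect.

```python
# Hedged sketch of previewing content against the template: each screen's
# low-resolution preview frames are shown at that screen's modeled position
# in the venue, one time step at a time.  Names and data are placeholders.

template_screens = {            # screen id -> (x, y, z) position in the venue model
    "screen_0": (0.0, 1.0, 3.0),
    "screen_1": (2.0, 1.0, 3.0),
}
camera_sequences = {            # screen id -> list of low-res preview frames
    "screen_0": ["s0_f0", "s0_f1"],
    "screen_1": ["s1_f0", "s1_f1"],
}

def preview(screens, sequences):
    num_frames = min(len(seq) for seq in sequences.values())
    for t in range(num_frames):
        # At each time step, show every screen's frame at its modeled position.
        for screen_id, position in screens.items():
            print(f"t={t}: {screen_id} at {position} shows {sequences[screen_id][t]}")

preview(template_screens, camera_sequences)
```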
In this way, minimal computing power may be used to assess and examine many different variations on a particular content piece in low resolution. Once the final piece has been settled on, full high-resolution sequences may be rendered for each screen or channel. Rendering is a computational process whereby 3D models with high resolution and voluminous amounts of data concerning subtle details of the model (lighting, shading, texture, transparency, smoothness, among others) are generated from relatively primitive data sets defining the models. This is a process which is well known in the art, and there are numerous software and hardware tools for performing this operation. The result is a single frame (if the data is fixed) or a sequence of frames (if the data and model change over time) for each camera. Since this can require a lot of computing power, and therefore cost, the template is a valuable tool which allows much greater flexibility in the design of the content while reducing the cost of its development.
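The shape of the rendered output, one time-indexed frame sequence per camera or channel, can be sketched as follows; render_view is a placeholder standing in for whatever renderer is actually used and is not part of this disclosure.

```python
# Schematic sketch of the render step: one frame sequence is produced per
# camera/channel.  render_view is a placeholder for an offline renderer;
# only the shape of the result reflects the description above.

def render_view(scene, camera, frame_index):
    # Stand-in for a renderer: return a label instead of pixels.
    return f"{camera}@t={frame_index}"

def render_scene(scene, cameras, num_frames):
    """Return {camera: [frame_0, frame_1, ...]}: one sequence per channel,
    all sequences indexed by the same points in time so playback stays in sync."""
    return {camera: [render_view(scene, camera, t) for t in range(num_frames)]
            for camera in cameras}

sequences = render_scene("bar_scene", ["screen_0", "screen_1"], num_frames=3)
print(sequences["screen_1"])
```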
Additionally, since each interior space will have a different template associated with it, it is possible to create content for one venue and then very easily and inexpensively modify it to be played in a different venue using the data set of the alternative venue's template as the tool to make the modifications. Once the elements of the virtual environment have been suitably modified, then a new data set is created. This data set, a scene, comprises the template and the virtual environment. Certain elements of the scene, in particular the camera position and characteristics, and the virtual environment, are used to direct how and what portions of the virtual environment are rendered into frame sequences.
In one embodiment of the invention a template for a venue is provided to facilitate the generation of immersive content in accordance with the techniques disclosed herein. The template may be recorded on a machine-readable medium. Examples of machine-readable media include but are not limited to recordable-type media such as volatile and non-volatile memory devices, floppy and other removable disks, hard disk drives, optical disks (e.g., Compact Disk Read-Only Memory (CD-ROMs), Digital Versatile Disks (DVDs), etc.), among others, and transmission-type media such as digital and analog communication links.

Claims

CLAIMS
What is claimed is:
1. A method, comprising: mounting a plurality of displays within a physical space, the displays being positioned with respect to a viewer to provide an immersive experience of an environment to said viewer, wherein at least some of the displays are spatially separated; and driving the displays with a plurality of video streams, each portraying a different view of the environment, the streams being synchronized with respect to a common point in time.
2. The method of claim 1, wherein the physical space comprises an existing venue serving a purpose other than to provide an immersive experience to the viewer.
3. The method of claim 1, wherein the plurality of video streams is generated using hardware comprising a central processing unit, at least one storage device, video playback hardware, and a communications network for directing video data from the storage hardware to the video playback hardware for synchronized playback on the displays.
4. The method of claim 3, further comprising storing the video data in the form of graphics drawing commands or a graphics command language, the video playback hardware capable of interpreting said commands and generating graphical constructs based on them.
5. A venue, comprising: a plurality of displays within a physical space defining the venue, the displays being positioned with respect to a viewer to provide an immersive experience of an environment to said viewer, wherein at least some of the displays are spatially separated; and hardware to drive the displays with a plurality of video streams, each portraying a different view of the environment, the streams being synchronized with respect to a common point in time.
6. The venue of claim 5, wherein at least some of the displays are close enough to provide the effect of a single seamless composite display.
7. The venue of claim 5, wherein at least some of the displays are arranged to form a vertical pair.
8. The venue of claim 7, wherein a lower display in a vertical pair is mounted at an angle which is greater than 90°, and a projector for said lower display is mounted at an angle so as to reduce obstruction by the viewer of a video stream being projected onto the lower display by the projector.
9. The venue of claim 5, further comprising a projector mounted for rear projection of a video stream onto a lower display in a vertical pair, thereby to prevent obstruction by the viewer of a video stream being projected by the projector regardless of a position of the viewer within the venue.
10. The venue of claim 5, which serves a purpose other than to provide the immersive experience to the viewer.
11. The venue of claim 5, wherein the hardware comprises a central processing unit, at least one storage device, video playback hardware, and a communications network for directing video data from the storage hardware to the video playback hardware for synchronized playback on the displays.
12. The venue of claim 11, wherein the hardware stores the video data in the form of graphics drawing commands or a graphics command language, the video playback hardware capable of interpreting said commands and generating graphical constructs based on them.
13. A method, comprising: generating a software model comprising camera position, orientation, and parameter information relating to a venue comprising a plurality of displays to provide an immersive experience of an environment to a viewer; loading the software model into a design tool; and generating immersive video content to be displayed on the plurality of displays using the design tool based on the software model.
14. The method of claim 13, wherein the software model comprises physical information about the venue.
15. The method of claim 13, wherein the camera information is used to define perspectives from which portions of an immersive environment are recorded using computer graphics software.
16. A machine-readable medium, comprising: data defining a software model comprising camera position, orientation, and parameter information relating to a venue comprising a plurality of displays to provide an immersive experience of an environment to a viewer.
17. The machine-readable medium of claim 16, wherein the software model comprises physical information about the venue.
PCT/US2007/085459 2006-11-21 2007-11-21 Systems for immersive environments with multiple points of view WO2008064352A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US86678106P 2006-11-21 2006-11-21
US60/866,781 2006-11-21

Publications (2)

Publication Number Publication Date
WO2008064352A2 (en) 2008-05-29
WO2008064352A3 WO2008064352A3 (en) 2008-12-04

Family

ID=39430614

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/085459 WO2008064352A2 (en) 2006-11-21 2007-11-21 Systems for immersive environments with multiple points of view

Country Status (1)

Country Link
WO (1) WO2008064352A2 (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6917362B2 (en) * 2002-01-25 2005-07-12 Hewlett-Packard Development Company, L.P. System and method for managing context data in a single logical screen graphics environment

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8699566B2 (en) 2010-01-27 2014-04-15 International Business Machines Corporation Adaptive and integrated visualization of spatiotemporal data from large-scale simulations

Also Published As

Publication number Publication date
WO2008064352A3 (en) 2008-12-04

Similar Documents

Publication Publication Date Title
US8471844B2 (en) Streaming geometry for use in displaying and editing 3D imagery
JP4548413B2 (en) Display system, animation method and controller
US7868847B2 (en) Immersive environments with multiple points of view
US9396588B1 (en) Virtual reality virtual theater system
US6429867B1 (en) System and method for generating and playback of three-dimensional movies
DeFanti et al. The future of the CAVE
JP2023088956A (en) Methods and systems for generating and displaying 3d videos in virtual, augmented or mixed reality environment
Bimber et al. The virtual showcase as a new platform for augmented reality digital storytelling
US20200225737A1 (en) Method, apparatus and system providing alternative reality environment
US20110084983A1 (en) Systems and Methods for Interaction With a Virtual Environment
US20110227917A1 (en) System and method for using off-screen mask space to provide enhanced viewing
US10859852B2 (en) Real-time video processing for pyramid holographic projections
KR102059732B1 (en) Digital video rendering
CN103309145A (en) 360-degree holographic phantom imaging system
US20160286195A1 (en) Engine, system and method for providing three dimensional content and viewing experience for same
WO2013041152A1 (en) Methods to command a haptic renderer from real motion data
WO2008064352A2 (en) Systems for immersive environments with mulitple points of view
JP2000030080A (en) Virtual reality system
JP2020174329A (en) Information processing device, display method, and computer program
US9185374B2 (en) Method and system for producing full motion media to display on a spherical surface
Balogh et al. HoloVizio-True 3D display system
Boswell et al. The Wedge Virtual Reality theatre
Moreland et al. Pre-rendered stereoscopic movies for commodity display systems
Kuchelmeister Universal capture through stereographic multi-perspective recording and scene reconstruction
Rakkolainen et al. Interactive "immaterial" screen for performing arts

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07854757

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07854757

Country of ref document: EP

Kind code of ref document: A2