US20110197225A1 - Video/audio output apparatus and video/audio output method - Google Patents
- Publication number
- US20110197225A1 (application Ser. No. 13/087,979)
- Authority
- US
- United States
- Prior art keywords
- data
- audio
- screen
- audio source
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/44012—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/234318—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by decomposing into objects, e.g. MPEG-4 objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47205—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
Abstract
A video/audio output apparatus comprises a control unit adapted to perform screen management of output video, and generate positional relationship information for each input video data; an extraction unit adapted to generate partial image data from each input video data; an input unit adapted to input audio source differentiated audio data; and a tile generation unit adapted to configure tile data by compiling the partial image data generated by the extraction unit and the audio source differentiated audio data for each drawing region on a screen, based on the positional relationship information generated by the control unit.
Description
- This application claims the benefit of U.S. application Ser. No. 11/964,299, filed Dec. 26, 2007 and Japanese Patent Application No. 2006-352803, filed Dec. 27, 2006, which are hereby incorporated by reference herein in their entireties.
- 1. Field of the Invention
- The present invention relates to a video/audio output apparatus, a video/audio output method, a computer program and a storage medium, and in particular to a preferred technique used for matching playback audio with playback video.
- 2. Description of the Related Art
- In video/audio output apparatuses capable of simultaneous playback of plural pieces of video and audio data, part of one screen sometimes gets hidden by another screen. In such a case, the audio data for the screens must be composed in some manner before audio is output. Technology concerning apparatuses that perform such processing is disclosed in Japanese Patent Laid-Open No. 05-19729, for example.
- The “image apparatus” disclosed in Japanese Patent Laid-Open No. 05-19729 refers to positional relationships including the size and overlap of images corresponding to input video signals or to the selection information of specific video. The audio signal synchronized with a large-size image, an image positioned in front of other images, or a selected specific image is set as a standard value, and processing is then automatically performed to reduce the amplitude of audio signals synchronized with other images.
- This technology enables sound volume control of the audio data corresponding to each screen to be performed automatically, based on the configuration of the screens, when a plurality of screens are output simultaneously. However, this technology only controls the sound volume of the audio data corresponding to each screen, and does not enable audio management of individual objects on each screen.
- Thus, there are cases in which two objects A and B exist on a CH.1 screen, and a CH.2 screen newly overlaps the object B, as shown in FIG. 2, for example. In such a case, audio management of individual objects is not possible with technology using a conventional method.
- Consequently, there are times when an audio source B corresponding to the object B, which is hidden by CH.2 and not displayed, is disadvantageously output, as shown in FIG. 3. Conventional technology thus does not enable output audio to be matched with the configuration of output video after a plurality of screens have been composed in a video/audio output apparatus that simultaneously outputs a plurality of screens.
- The present invention was made in consideration of the above problem, and has as its object to enable output audio to be matched with the configuration of output video after a plurality of screens have been composed.
- According to one aspect of the present invention, a video/audio output apparatus comprises:
- a control unit adapted to perform screen management of output video, and generate positional relationship information for each input video data;
- an extraction unit adapted to generate partial image data from each input video data;
- an input unit adapted to input audio source differentiated audio data; and
- a tile generation unit adapted to configure tile data by compiling the partial image data generated by the extraction unit and the audio source differentiated audio data for each drawing region on a screen, based on the positional relationship information generated by the control unit.
- According to another aspect of the present invention, a video/audio output method comprises:
- a control step of performing screen management of output video, and generating positional relationship information for each input video data;
- an extraction step of generating partial image data from each input video data;
- an input step of inputting audio source differentiated audio data; and
- a tile generation step of configuring tile data by compiling the partial image data generated in the extraction step and the audio source differentiated audio data for each drawing region on a screen, based on the positional relationship information generated in the control step.
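As a concrete illustration of the four method steps above, the pipeline might be sketched as follows. All names, the block size, and the layout scheme here are assumptions made for this sketch; the patent does not prescribe any particular implementation.

```python
# Illustrative sketch of the four method steps (control, extraction,
# input, tile generation). Data shapes are assumptions for this example:
# a "frame" is a flat list of values, and a "region" is a fixed-size block.

def control_step(videos):
    # Screen management: assign each input video a drawing offset and a
    # z-order (later entries would be drawn in front).
    return {vid: {"offset": (i * 4, 0), "z": i} for i, vid in enumerate(videos)}

def extraction_step(videos, block=4):
    # Divide each frame into partial image data, one piece per region.
    return {vid: [frame[i:i + block] for i in range(0, len(frame), block)]
            for vid, frame in videos.items()}

def input_step(sources):
    # Audio-source-differentiated data: {video: {source_name: region_index}}.
    return sources

def tile_generation_step(layout, partials, sources):
    # Compile one tile per drawing region: partial image, drawing position,
    # and any audio source whose coordinates fall in that region.
    tiles = []
    for vid, parts in partials.items():
        for idx, part in enumerate(parts):
            audio = {n: r for n, r in sources.get(vid, {}).items() if r == idx}
            tiles.append({"video": vid, "region": idx, "image": part,
                          "position": layout[vid]["offset"],
                          "audio": audio or None})
    return tiles

videos = {"CH1": list(range(16))}
layout = control_step(videos)
partials = extraction_step(videos)
sources = input_step({"CH1": {"A": 0, "B": 2}})
tiles = tile_generation_step(layout, partials, sources)
```

Tiles whose region holds no audio source carry only image and position data, mirroring the tile data variants described in the embodiments.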
- Further features of the present invention will become apparent from the following description of exemplary embodiments, with reference to the attached drawings.
- FIG. 1 shows a specific example of a typical effect of preferred embodiments.
- FIG. 2 shows an exemplary operation in a common display.
- FIG. 3 shows the effect when a video/audio output apparatus of preferred embodiments is not applied.
- FIG. 4 shows the relationship between drawing position information, partial image data, and audio source differentiated data in tile data of preferred embodiments.
- FIG. 5 shows the relationship between drawing position information, partial image data, audio source differentiated data, and sound volume information in tile data of preferred embodiments.
- FIG. 6 is a block diagram showing an exemplary configuration of the video/audio output apparatus according to a first embodiment.
- FIG. 7 is a block diagram showing an exemplary configuration of the video/audio output apparatus according to a second embodiment.
- FIG. 8 is a block diagram showing an exemplary configuration of the video/audio output apparatus according to a third embodiment.
- Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
-
FIG. 6 is a block diagram showing a first embodiment of the present invention. As shown in FIG. 6, a video/audio output apparatus 700 composes video data and outputs it to a video output unit 740. The video/audio output apparatus 700 also composes and outputs audio data to an audio output unit 750.
- In this example, the input audio is assumed to consist of normal audio data 731 to be synchronized with video data 730 (first video data) and 732 (second video data), and audio source differentiated audio data 733 in which the audio sources are separated for each object in the video data.
- Firstly, the video data 730 and 732 are input to an image extraction unit 701. The image extraction unit 701 divides each frame of the video data 730 and 732 into partial image data 722.
- The normal audio data 731 is input to an audio source separation unit 702. The audio source separation unit 702, in addition to separating the audio data for each audio source included in the input audio data, specifies the coordinates of the audio sources on the screen and outputs the audio source differentiated audio data, in association with the audio source coordinate information, as audio source differentiated data 723.
- While audio source separation and coordinate specification may be performed using an analysis method that employs object recognition, a simple method can also be employed that involves separating the left and right stereo output into two pieces of audio source differentiated audio data, and setting their coordinates as arbitrary coordinates in the left and right halves of the screen. Note that the audio source differentiated audio data 733, which has already been separated into audio source differentiated data, is not input to the audio source separation unit 702 when input to the video/audio output apparatus 700.
- A screen control unit 703, which manages the screen configuration of video data in the output image, generates screen positional relationship information 721 that includes the output position and vertical positional relationship of each screen (input video), and the type of composition processing, such as opaque composition or translucent composition, and outputs the generated screen positional relationship information 721 to a tile generation unit 705. The screen positional relationship information 721 shows the final configuration of the output screen.
- The tile generation unit 705 receives as input the partial image data 722, the audio source differentiated data 723 and the screen positional relationship information 721, which are output by the above described units, and the audio source differentiated audio data 733, which had already been separated as audio source differentiated data when input to the video/audio output apparatus 700. The tile generation unit 705 generates and outputs this data as tile data 710, which is a data unit, for each drawing region on each screen. That is, the tile generation unit 705 configures tile data by compiling the partial image data 722 and the audio source differentiated audio data for each drawing region on the screen, based on the screen positional relationship information 721.
- The case where two audio sources are included in a single frame of output image data 500, as shown in FIG. 4, will be described as an example. In the case of FIG. 4, the audio sources A and B are included in CH.1, and their audio source coordinates correspond respectively to first partial image data 501 and second partial image data 502.
- In such a case, the first partial image data 501, the CH.1 audio source A, and the drawing position information of the first partial image data 501 form one piece of tile data. Similarly, the second partial image data 502, the CH.1 audio source B, and the drawing position information of the second partial image data 502 form one piece of tile data. Since audio source differentiated data corresponding to other portions does not exist, the tile data for these portions is configured by only partial image data and drawing position information.
- In the case where the tile data includes sound volume information, as shown in the example in FIG. 5, partial image data 601 to 606 form tile data having partial image data, drawing position information, audio source differentiated data, and sound volume information. The tile data for other portions is configured by only partial image data and drawing position information.
-
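The tile data just described — drawing position information, partial image data, and, where an audio source is present, audio source differentiated data with optional sound volume information — could be modeled as in the following sketch. The field names and types are illustrative assumptions, not the patent's data format.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TileData:
    # One tile per drawing region on the output screen.
    drawing_position: tuple                 # (x, y) of the region on screen
    partial_image: bytes                    # pixel data for this region
    audio_source: Optional[bytes] = None    # audio-source-differentiated data
    volume: float = 0.0                     # proportion of overall volume (0.0-1.0)

# Tiles echoing FIG. 4's example: audio source A maps to one region,
# audio source B to another, and a third tile carries video only.
tiles: List[TileData] = [
    TileData((0, 0), b"img-501", audio_source=b"src-A", volume=1.0),
    TileData((1, 2), b"img-502", audio_source=b"src-B", volume=1.0),
    TileData((3, 3), b"img-plain"),   # no audio source for this region
]

# Only tiles carrying an audio source contribute to audio composition.
audible = [t for t in tiles if t.audio_source is not None]
```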
Tile data 710 thus configured is input to an image processing unit 708. The image processing unit 708 performs processing on each piece of input tile data to improve the picture quality and the like of the partial image data 713, updates the partial image data 713, and outputs the tile data.
- Tile data output from the image processing unit 708 is input to a screen composition unit 706. The screen composition unit 706 disposes the partial image data 713 with reference to the drawing position information 712 of the plural pieces of input tile data, and outputs output screen data.
- The output screen data (output video) output from the screen composition unit 706 is input to the video output unit 740. The video output unit 740 outputs the input output screen data on an arbitrary display. As a result, a plurality of input video streams are output as a single video stream by the video output unit 740.
- In relation to audio output, on the other hand, an audio composition unit 707 receives the tile data as input, and composes audio with reference to the audio source differentiated data 714 and the sound volume information 711 in the tile data. Specifically, the audio composition unit 707 composes the audio source differentiated data 714 included in the tile data at the ratio given by the sound volume information 711, and generates one screen's worth of output audio for each channel of the audio output unit 750. That is, the audio composition unit 707 functions as an audio data generation unit that generates audio data which includes, as sound volume information, the proportion of the audio source differentiated data relative to the overall sound volume.
- Since the tile generation unit 705 only adds audio source differentiated data 714 and sound volume information 711 to tile data 710 whose audio is to be output, the output audio data is composed only from audio source differentiated data 714 to be output. The audio source differentiated data 714 to be output here is, for example, audio source differentiated data 714 that corresponds to the partial image data 713 displayed in the output image data 500.
- Further, a screen selection unit 704 provides a user interface that enables the user to select either an arbitrary range on an output screen or a screen, and inputs the specified screen information to the screen control unit 703 as screen control information 720. The screen control information 720 thus input makes it possible for the user to change the screen configuration, by changing the screen configuration managed by the screen control unit 703.
- As described above, compatibility of the output image data 500 in the video output unit 740 and the output audio data in the audio output unit 750 can be achieved in a video/audio output apparatus that receives as input a plurality of video streams and a plurality of audio streams corresponding to the video streams. Output audio data can thus be matched with output image data.
-
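The audio composition performed by a unit such as the audio composition unit 707 — scaling each tile's audio source differentiated data by its sound volume information and summing the results into one output stream — might look like the following sketch, where audio is simplified to plain lists of samples. The data shapes are assumptions made for illustration.

```python
# Sketch of volume-weighted audio composition over tile data.
# Tiles without an audio source contribute nothing, matching the
# description that only tiles whose audio is to be output carry
# audio source differentiated data and sound volume information.

def compose_audio(tiles, n_samples):
    out = [0.0] * n_samples
    for tile in tiles:
        samples = tile.get("audio")
        if samples is None:
            continue  # video-only tile: no audio contribution
        vol = tile.get("volume", 1.0)
        for i in range(n_samples):
            out[i] += vol * samples[i]
    return out

tiles = [
    {"audio": [1.0, 1.0, 1.0], "volume": 1.0},   # e.g. CH.1 source A, fully audible
    {"audio": [1.0, 1.0, 1.0], "volume": 0.6},   # e.g. CH.1 source B, partly covered
    {"audio": None},                              # tile with no audio source
]
mixed = compose_audio(tiles, 3)   # each sample is about 1.0 + 0.6 = 1.6
```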
FIG. 7 is a block diagram showing an exemplary configuration of a second embodiment of the present invention. Similar to the video/audio output apparatus 700 according to the first embodiment, the video/audio output apparatus 800 according to this embodiment comprises an image extraction unit 801 (which inputs first video data 840 and second video data 842, and outputs partial image data 832), an audio source separation unit 802 (which inputs normal audio data 841, and outputs audio source differentiated data 833), a screen control unit 803, a screen selection unit 804, and a tile generation unit 805 (which inputs the partial image data 832, the audio source differentiated data 833, and audio source differentiated audio data 843). This configuration differs from the first embodiment shown in FIG. 6 in that a plurality of video output units, audio output units, and image processing units are provided, and the screen configurations of a first video output unit 850 and a second video output unit 851 are assumed to be independent.
- In the present embodiment, the screen control unit 803 performs screen management for both the first video output unit 850 and the second video output unit 851 based on screen control information from the screen selection unit 804. The screen control unit 803 inputs screen positional relationship information 831 to a first screen composition unit 806, a first audio composition unit 807, a second screen composition unit 809, and a second audio composition unit 810. Thus, in the present embodiment, drawing position information is not included in tile data 820, unlike the first embodiment.
- The first screen composition unit 806 and the second screen composition unit 809 compose, in specified positional relationships, the video streams to be played in the respective video output units, with reference to the screen positional relationship information 831 input from the screen control unit 803 and the tile data 820 (including sound volume information 821, partial image data 823, and/or audio source differentiated data 824) received via the first image processing unit 808 and the second image processing unit 811 respectively, and output the composed video streams.
- Similarly, the first audio composition unit 807 and the second audio composition unit 810 select and compose the audio streams to be played in the respective audio output units, with reference to the screen positional relationship information 831 input from the screen control unit 803, and output the composed audio streams. Therefore, even if there are a plurality of video output units and audio output units with independent screen configurations, it is possible to match the video and audio output of the video output units and audio output units.
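The per-output selection just described — one pool of tile data, composed independently for each output device according to its own screen configuration — can be sketched as follows. The channel-based filtering is an illustrative stand-in for the positional-relationship logic, and all names are assumptions.

```python
# Sketch of the second embodiment's multi-output idea: the same tile
# data is composed separately per output device, each output keeping
# only the audio sources of the screens it actually displays.

def compose_for_output(tiles, shown_channels):
    # Keep only audio from tiles whose channel this output displays
    # and that actually carry an audio source.
    return sorted(t["source"] for t in tiles
                  if t["channel"] in shown_channels and t["source"])

tiles = [
    {"channel": "CH.1", "source": "A"},
    {"channel": "CH.1", "source": "B"},
    {"channel": "CH.2", "source": ""},   # CH.2 tile with no audio source
]

out1 = compose_for_output(tiles, {"CH.1"})   # first output shows CH.1
out2 = compose_for_output(tiles, {"CH.2"})   # second output shows CH.2 only
```

With independent screen configurations, the two composed audio streams differ even though both are built from the same tile pool.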
-
FIG. 1 shows a typical effect of the present embodiment. Two screens, CH.1 100 and CH.2 110, are output on a single video output unit, with an object A 101 and an object B 102 existing on CH.1.
- Thus, FIG. 1 shows that in the case where the object B 102 of the CH.1 100 is hidden by the CH.2 110, only the CH.1 audio source A 103 corresponding to the object A 101 is output, and the CH.1 audio source B 104 corresponding to the object B 102 is erased from the output audio of an audio output unit 120. Note that a case where there is no audio source corresponding to the CH.2 110 is shown in this example for simplification.
-
FIG. 2 shows a general use case of a display. A single screen CH.1 200 is output on a single video output unit, with an object A 201 and an object B 202 existing on the CH.1 200.
- FIG. 2 shows that, in this case, a CH.1 audio source A 203 and a CH.1 audio source B 204, corresponding respectively to the object A 201 and the object B 202, are output in the output audio of an audio output unit 220. In such a case, the output audio is the same for both the prior art and the present invention, since audio data corresponding to the CH.1 200 is output.
-
FIG. 3 shows the effect when the video/audio output apparatus of the present invention is not applied. In this case, two screens, CH.1 300 and CH.2 310, are output on a single video output unit, with an object A 301 and an object B 302 existing on the CH.1 300, and the object B 302 of the CH.1 300 being hidden by the CH.2 310.
- In such a case, conventional technology only enables audio data corresponding to the CH.1 300 to be controlled together, and does not enable audio management to be performed for each object. Thus, not only audio data corresponding to the object A 301 (that is, CH.1 audio source A 303) but also audio data corresponding to the object B 302 (that is, CH.1 audio source B 304) would be output from the output audio of an audio output unit 320, despite the object B 302 being hidden by the CH.2 310.
- Also, audio data corresponding to the object A 301 may sometimes not be output despite the object A 301 appearing on the output screen. In either case, it is possible that the output image and the output audio may not be matched.
-
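The occlusion behavior discussed above — with opaque composition, an audio source whose region is covered by a front screen should be muted — can be sketched as a simple z-order lookup. The rectangles, z values, and names below are assumptions made for this illustration.

```python
# Sketch of using screen positional relationship information to decide
# which screen is visible at a region, and hence which audio sources
# remain audible after opaque composition.

def visible(region, screens):
    # region: (x, y) block coordinates; each screen has a rect
    # (x0, y0, x1, y1) and a z value (higher z is drawn in front).
    covering = [s for s in screens
                if s["rect"][0] <= region[0] < s["rect"][2]
                and s["rect"][1] <= region[1] < s["rect"][3]]
    if not covering:
        return None
    return max(covering, key=lambda s: s["z"])["name"]

screens = [
    {"name": "CH.1", "rect": (0, 0, 4, 4), "z": 0},
    {"name": "CH.2", "rect": (2, 0, 4, 4), "z": 1},  # covers CH.1's right half
]
obj_a = (0, 1)   # object A: left side of CH.1, still visible
obj_b = (3, 1)   # object B: right side of CH.1, now under CH.2

# Object A's region still shows CH.1, so audio source A is output;
# object B's region now shows CH.2, so audio source B is muted.
```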
FIG. 4 shows the relationship between drawing position information, partial image data, and audio source differentiated data in the tile data of the present embodiment. In this example, output image data 500 is divided into 16 blocks, with the CH.1 audio source A corresponding to first partial image data 501 and the CH.1 audio source B similarly corresponding to second partial image data 502.
-
FIG. 5 shows the relationship between sound volume information, drawing position information, partial image data, and audio source differentiated data in the tile data of the present embodiment. In this example, output image data 600 is divided into 16 blocks, with the CH.1 audio source A corresponding to partial image data 601 at a sound volume of 100%.
- Similarly, the CH.1 audio source B corresponds to partial image data 602 at a sound volume of 60%, and to partial image data 603 to 606 at respective sound volumes of 10%. Thus, even in the case where audio sources are positioned over a wide area on the output screen, the distribution of the audio sources can be represented by adding sound volume information.
- A third embodiment of the present invention will be described next with reference to FIG. 8.
- Similar to the video/audio output apparatus 700 according to the first embodiment, the video/audio output apparatus 900 according to this embodiment comprises an image extraction unit 901 (which inputs first video data 930 and second video data 932, and outputs partial image data 922), an audio source separation unit 902 (which inputs normal audio data 931, and outputs audio source differentiated data 923), a screen control unit 903 (which inputs image control information 920), an image selection unit 904, a tile generation unit 905 (which inputs the partial image data 922, the audio source differentiated data 923, and audio source differentiated audio data 933, and outputs tile data including sound volume information 911, partial image data 913, and/or audio source differentiated data 914), a screen composition unit 906, and an audio composition unit 907. In FIG. 8, the screen control unit 903 outputs screen positional relationship information 921 to the screen composition unit 906 and the audio composition unit 907. The selection of the partial image data 913 to be drawn and the audio source differentiated data 914 to be played is performed respectively by the screen composition unit 906 (which outputs a composed screen to a video output unit 940) and the audio composition unit 907 (which outputs composed audio to an audio output unit 950). Since the specific functions and operations are similar to those of the first and second embodiments, a detailed description thereof will be omitted.
- Although embodiments of the present invention have been described in detail above, it is possible for the invention to take on the form of a system, apparatus, computer program or storage medium. More specifically, the present invention may be applied to a system comprising a plurality of devices or to an apparatus comprising a single device.
- It should be noted that there are cases where the object of the invention is attained also by supplying a program, which implements the functions of the foregoing embodiments, directly or remotely to a system or apparatus, reading the supplied program codes with a computer of the system or apparatus, and then executing the program codes.
- Accordingly, since the functions of the present invention are implemented by computer, the program codes per se installed in the computer also fall within the technical scope of the present invention. In other words, the present invention also covers the computer program itself that is for the purpose of implementing the functions of the present invention.
- In this case, so long as the system or apparatus has the functions of the program, the form of the program, e.g., object code, a program executed by an interpreter or script data supplied to an operating system, etc., does not matter.
- Examples of storage media that can be used for supplying the program are a floppy (registered trademark) disk, hard disk, optical disk, magneto-optical disk, CD-ROM, CD-R, CD-RW, magnetic tape, non-volatile type memory card, ROM, DVD (DVD-ROM, DVD-R), etc.
- As for the method of supplying the program, a client computer can be connected to a website on the Internet using a browser possessed by the client computer, and the computer program per se of the present invention or a compressed file that contains an automatic installation function can be downloaded to a recording medium such as a hard disk. Further, the program of the present invention can be supplied by dividing the program code constituting the program into a plurality of files and downloading the files from different websites. In other words, a WWW server that downloads, to multiple users, the program files that implement the functions of the present invention by computer also is covered by the present invention.
- Further, it is also possible to encrypt and store the program of the present invention on a storage medium such as a CD-ROM, distribute the storage medium to users, allow users who meet certain requirements to download decryption key information from a website via the Internet, and allow these users to run the encrypted program by using the key information, whereby the program is installed in the user computer. Further, besides the case where the aforesaid functions according to the embodiment are implemented by executing the read program by computer, an operating system or the like running on the computer may perform all or a part of the actual processing so that the functions of the foregoing embodiment can be implemented by this processing.
- Furthermore, after the program read from the storage medium is written to a memory provided in a function expansion board inserted into the computer or a function expansion unit connected to the computer, a CPU or the like mounted on the function expansion board or function expansion unit performs all or a part of the actual processing so that the functions of the foregoing embodiment can be implemented by this processing.
- As described above, tile data in which the output audio is matched with the audio source object displayed on the output screen can be configured according to the present invention. In particular, output audio can be matched with the configuration of output video after a plurality of screens have been composed in a video/audio output apparatus that simultaneously outputs a plurality of screens.
- While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
Claims (12)
1. A video/audio output apparatus comprising:
a control unit configured to perform screen management of output video, and generate positional relationship information for each input video data;
an image division unit configured to generate partial image data by dividing each input video data;
an input unit configured to input audio data;
an audio separation unit configured to generate audio source differentiated data by separating the audio data input for each audio source included in the audio data;
a tile generation unit configured to generate tile data by compiling the generated partial image data and the generated audio source differentiated data for each drawing region on a screen, based on the generated positional relationship information;
a screen composition unit configured to generate one piece of screen data by composing the generated tile data;
an output unit configured to display the generated screen data on a display device; and
an audio data composition unit configured to generate audio data for one screen by composing the audio source differentiated data in the generated tile data.
2. The apparatus according to claim 1, wherein the audio separation unit further specifies coordinates of each audio source on the screen, and associates the separated audio data with information of the audio source coordinates.
3. The apparatus according to claim 1, wherein the tile data includes a proportion of the audio source differentiated audio data relative to an overall sound volume as sound volume information.
4. A video/audio output method comprising:
performing screen management of output video, and generating positional relationship information for each input video data;
generating partial image data by dividing each input video data;
inputting audio data;
generating audio source differentiated data by separating the audio data for each audio source included in the audio data;
generating tile data by compiling the generated partial image data and the generated audio source differentiated data for each drawing region on a screen, based on the generated positional relationship information;
generating one piece of screen data by composing the generated tile data;
displaying the generated screen data on a display device; and
generating audio data for one screen by composing the audio source differentiated data in the generated tile data.
5. The method according to claim 4, further comprising:
specifying coordinates of each audio source on the screen; and
associating the separated audio data with information of the audio source coordinates.
6. The method according to claim 4, wherein the tile data includes a proportion of the audio source differentiated data relative to an overall sound volume as sound volume information.
7. A computer program, stored on a storage medium, for causing a computer to execute:
performing screen management of output video, and generating positional relationship information for each input video data;
generating partial image data by dividing each input video data;
inputting audio data;
generating audio source differentiated data by separating the input audio data for each audio source included in the audio data;
generating tile data by compiling the generated partial image data and the generated audio source differentiated data for each drawing region on a screen, based on the generated positional relationship information;
generating one piece of screen data by composing the generated tile data;
displaying the generated screen data on a display device; and
generating audio data for one screen by composing the audio source differentiated data in the generated tile data.
8. The computer program according to claim 7, further comprising:
specifying coordinates of each audio source on the screen; and
associating the separated audio data with information of the audio source coordinates.
9. The computer program according to claim 7, wherein the tile data includes a proportion of the audio source differentiated data relative to an overall sound volume as sound volume information.
10. A computer-readable storage medium storing the computer program as claimed in claim 7.
11. The computer-readable storage medium according to claim 10, wherein the computer program further comprises:
specifying coordinates of each audio source on the screen; and associating the separated audio data with information of the audio source coordinates.
12. The computer-readable storage medium according to claim 10, wherein the tile data includes a proportion of the audio source differentiated data relative to an overall sound volume as sound volume information.
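The pipeline recited in claims 1 and 4 — dividing input video into per-region partial images, attaching audio-source data whose screen coordinates (claim 2) fall inside each drawing region together with a sound-volume proportion (claim 3), then composing one screen's audio — can be illustrated with a minimal Python sketch. All names here (`AudioSourceData`, `Tile`, `generate_tiles`, `compose_screen_audio`) are hypothetical illustrations of the claimed steps, not the patented implementation.

```python
from dataclasses import dataclass, field

@dataclass
class AudioSourceData:
    """Audio separated per source, with on-screen coordinates (claim 2)
    and its proportion of the overall sound volume (claim 3)."""
    samples: list       # per-source audio samples (toy representation)
    coords: tuple       # (x, y) position of the audio source on the screen
    volume_ratio: float # proportion relative to the overall sound volume

@dataclass
class Tile:
    """Tile data for one drawing region: partial image data plus the
    audio sources whose coordinates fall inside the region."""
    region: tuple                              # (x, y, w, h) drawing region
    image: list                                # partial image data
    audio: list = field(default_factory=list)  # AudioSourceData entries

def generate_tiles(partial_images, sources, regions):
    """Compile partial image data and audio source differentiated data
    per drawing region, using source coordinates as the positional
    relationship information."""
    tiles = []
    for region, image in zip(regions, partial_images):
        x, y, w, h = region
        inside = [s for s in sources
                  if x <= s.coords[0] < x + w and y <= s.coords[1] < y + h]
        tiles.append(Tile(region=region, image=image, audio=inside))
    return tiles

def compose_screen_audio(tiles):
    """Generate audio data for one screen by composing the per-tile
    audio source data, weighting each source by its volume proportion."""
    mixed = {}
    for tile in tiles:
        for src in tile.audio:
            for i, sample in enumerate(src.samples):
                mixed[i] = mixed.get(i, 0.0) + sample * src.volume_ratio
    return [mixed[i] for i in sorted(mixed)]
```

Under these assumptions, composing a two-tile screen with one source per tile simply sums the volume-weighted samples, mirroring the "audio data for one screen" step of the claims.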
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/087,979 US20110197225A1 (en) | 2006-12-27 | 2011-04-15 | Video/audio output apparatus and video/audio output method |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006352803A JP5230096B2 (en) | 2006-12-27 | 2006-12-27 | VIDEO / AUDIO OUTPUT DEVICE AND VIDEO / AUDIO OUTPUT METHOD |
JP2006-352803 | 2006-12-27 | ||
US11/964,299 US8037507B2 (en) | 2006-12-27 | 2007-12-26 | Video/audio output apparatus and video/audio output method |
US13/087,979 US20110197225A1 (en) | 2006-12-27 | 2011-04-15 | Video/audio output apparatus and video/audio output method |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/964,299 Continuation US8037507B2 (en) | 2006-12-27 | 2007-12-26 | Video/audio output apparatus and video/audio output method |
Publications (1)
Publication Number | Publication Date |
---|---|
US20110197225A1 (en) | 2011-08-11 |
Family
ID=39585995
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/964,299 Expired - Fee Related US8037507B2 (en) | 2006-12-27 | 2007-12-26 | Video/audio output apparatus and video/audio output method |
US13/087,979 Abandoned US20110197225A1 (en) | 2006-12-27 | 2011-04-15 | Video/audio output apparatus and video/audio output method |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/964,299 Expired - Fee Related US8037507B2 (en) | 2006-12-27 | 2007-12-26 | Video/audio output apparatus and video/audio output method |
Country Status (3)
Country | Link |
---|---|
US (2) | US8037507B2 (en) |
JP (1) | JP5230096B2 (en) |
CN (1) | CN101212577B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9071867B1 (en) * | 2013-07-17 | 2015-06-30 | Google Inc. | Delaying automatic playing of a video based on visibility of the video |
US9703841B1 (en) | 2016-10-28 | 2017-07-11 | International Business Machines Corporation | Context-based notifications in multi-application based systems |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101019569B1 (en) | 2005-08-29 | 2011-03-08 | 에브릭스 테크놀로지스, 인코포레이티드 | Interactivity via mobile image recognition |
JP5231926B2 (en) * | 2008-10-06 | 2013-07-10 | キヤノン株式会社 | Information processing apparatus, control method therefor, and computer program |
JP5618043B2 (en) * | 2009-09-25 | 2014-11-05 | 日本電気株式会社 | Audiovisual processing system, audiovisual processing method, and program |
JP5978574B2 (en) * | 2011-09-12 | 2016-08-24 | ソニー株式会社 | Transmission device, transmission method, reception device, reception method, and transmission / reception system |
WO2015008538A1 (en) * | 2013-07-19 | 2015-01-22 | ソニー株式会社 | Information processing device and information processing method |
WO2019187437A1 (en) * | 2018-03-29 | 2019-10-03 | ソニー株式会社 | Information processing device, information processing method, and program |
CN109788308B (en) * | 2019-02-01 | 2022-07-15 | 腾讯音乐娱乐科技(深圳)有限公司 | Audio and video processing method and device, electronic equipment and storage medium |
WO2022064905A1 (en) * | 2020-09-25 | 2022-03-31 | ソニーグループ株式会社 | Information processing device, information processing method, and program |
CN112561585A (en) * | 2020-12-16 | 2021-03-26 | 中国人寿保险股份有限公司 | Information service system and method based on graph |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6959322B2 (en) * | 1993-10-01 | 2005-10-25 | Collaboration Properties, Inc. | UTP based video conferencing |
US20060230427A1 (en) * | 2005-03-30 | 2006-10-12 | Gerard Kunkel | Method and system of providing user interface |
US20080066103A1 (en) * | 2006-08-24 | 2008-03-13 | Guideworks, Llc | Systems and methods for providing blackout support in video mosaic environments |
US20080209472A1 (en) * | 2006-12-11 | 2008-08-28 | David Eric Shanks | Emphasized mosaic video channel with interactive user control |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0519729A (en) | 1991-07-12 | 1993-01-29 | Hitachi Ltd | Image device and its sound volume control method |
US5583980A (en) * | 1993-12-22 | 1996-12-10 | Knowledge Media Inc. | Time-synchronized annotation method |
AU756265B2 (en) * | 1998-09-24 | 2003-01-09 | Fourie, Inc. | Apparatus and method for presenting sound and image |
EP1142276B1 (en) | 1999-01-04 | 2005-05-04 | Thomson Licensing S.A. | Television remote control system with a picture-outside-picture display |
JP3910537B2 (en) | 2001-03-26 | 2007-04-25 | 富士通株式会社 | Multi-channel information processing device |
JP4335087B2 (en) * | 2004-07-29 | 2009-09-30 | 大日本印刷株式会社 | Sound playback device |
- 2006-12-27 JP JP2006352803A patent/JP5230096B2/en not_active Expired - Fee Related
- 2007-12-26 US US11/964,299 patent/US8037507B2/en not_active Expired - Fee Related
- 2007-12-27 CN CN200710306002.3A patent/CN101212577B/en not_active Expired - Fee Related
- 2011-04-15 US US13/087,979 patent/US20110197225A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
CN101212577A (en) | 2008-07-02 |
US8037507B2 (en) | 2011-10-11 |
US20080163329A1 (en) | 2008-07-03 |
JP5230096B2 (en) | 2013-07-10 |
JP2008167032A (en) | 2008-07-17 |
CN101212577B (en) | 2015-03-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8037507B2 (en) | Video/audio output apparatus and video/audio output method | |
KR100868475B1 (en) | Method for creating, editing, and reproducing multi-object audio contents files for object-based audio service, and method for creating audio presets | |
Armstrong et al. | Object-based broadcasting-curation, responsiveness and user experience | |
CN1327436C (en) | Method and apparatus for mixing audio stream, and information storage medium | |
KR101963753B1 (en) | Method and apparatus for playing videos for music segment | |
JP2006004292A (en) | Content reproducing apparatus and menu screen display method | |
KR20160135301A (en) | Audiovisual content item data streams | |
US20080229200A1 (en) | Graphical Digital Audio Data Processing System | |
US20110161923A1 (en) | Preparing navigation structure for an audiovisual product | |
JP2006050469A (en) | Content generating apparatus, content generating method, program and recording medium | |
KR102403149B1 (en) | Electric device and method for controlling thereof | |
KR20120060085A (en) | A system and a method for providing a composition and a record medium recorded program for realizing the same | |
KR102078479B1 (en) | Method for editing video and videos editing device | |
JP2014171053A (en) | Electronic document container data file, electronic document container data file generating apparatus, electronic document container data file generating program, server apparatus, and electronic document container data file generating method | |
US20090222758A1 (en) | Content reproduction apparatus and method | |
Sexton | Immersive Audio: Optimizing Creative Impact without Increasing Production Costs | |
JP2006048465A (en) | Content generation system, program, and recording medium | |
KR101468411B1 (en) | Apparatus for playing and editing MIDI music and Method for the same with user orientation | |
US8014883B2 (en) | Templates and style sheets for audio broadcasts | |
JP2003263521A (en) | Device for creating and reproducing secondary production | |
KR100714409B1 (en) | Apparutus for making video lecture coupled with lecture scenario and teaching materials and Method thereof | |
US20080279526A1 (en) | Record/playback apparatus and control method therefor | |
KR101125358B1 (en) | A apparatus of operating multimedia presentation with personal computer and arranging method of divided control screen thereof | |
KR20030034410A (en) | method and the system for producting BIFS(BInary Format for Scenes language) for MPEG-4 contents | |
KR101125345B1 (en) | A apparatus of operating multimedia presentation with excel and powerpoint for personal computer and method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE |