US20020122112A1 - Group-wise video conferencing uses 3d-graphics model of broadcast event - Google Patents

Group-wise video conferencing uses 3d-graphics model of broadcast event

Info

Publication number
US20020122112A1
Authority
US
United States
Prior art keywords
server
end users
client
users
mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US09/053,448
Inventor
Raoul Mallart
Atul Sinha
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
US Philips Corp
Original Assignee
US Philips Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Assigned to U.S. PHILIPS CORPORATION. Assignment of assignors interest (see document for details). Assignors: MALLART, RAOUL; SINHA, ATUL
Application filed by US Philips Corp filed Critical US Philips Corp
Priority to US09/053,448 (published as US20020122112A1)
Priority to EP99909145A (EP0988753A2)
Priority to KR1019997011598A (KR100722704B1)
Priority to JP55140299A (JP4350806B2)
Priority to CNB998009032A (CN1213566C)
Priority to PCT/IB1999/000574 (WO1999053691A2)
Publication of US20020122112A1
Legal status: Abandoned (current)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/14: Systems for two-way working
    • H04N7/15: Conference systems
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/20: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding


Abstract

A TV broadcast service to multiple geographically distributed end users is integrated with a conferencing mode. Upon a certain event in the broadcast, specific groups of end users are switched to a conference mode under software control so that each group is enabled to discuss the event. The conference mode is enhanced by a 3D graphics model of the video representation of the event that is downloaded to the groups. The end users are capable of interacting with the model to discuss alternatives to the event.

Description

    FIELD OF THE INVENTION
  • The invention relates to a method and a system for enhancing broadcasting with a service that enables interaction among multiple end users that are geographically distributed. [0001]
  • BACKGROUND ART
  • Examples of communication involving multiple users are a broadcast and a conference. The broadcast is typically a one-to-many exchange of pre-recorded or real-time information without interaction of the receiving party with the broadcasting process. A conference is a form of communication wherein dialogues are typically real-time and dynamic, in the sense that receiver and sender interact, frequently change their roles, and determine the information exchanged. [0002]
  • Examples of multimedia methods and systems in a multiple-user, interactive virtual environment that enables conferencing are discussed in, for example, U.S. patent applications of Philips Electronics, Ser. Nos. 08/373,737 (PHN 14,719); 08/597,439 (PHN 15,187) and 08/828,468 (PHN 15,769), herewith incorporated by reference. In an implementation of the known systems, a cable-TV network connects multiple users to an application server. The server provides the graphics for a virtual environment via teletext pages, each respective one supplying the graphics data for a respective user. The telephone network is used to communicate to the server the commands that the user enters via his/her telephone keys to control his/her graphics avatar in the virtual environment. The teletext graphics pages are updated under control of the commands entered by the user. The telephone network is also used to enable communication between users under control of a chat box application run on the server. [0003]
  • OBJECT OF THE INVENTION
  • Currently, the broadcast and conference modes of communication systems are implemented independently, with separate applications, e.g., a television broadcast of a sports program and a telephone or video conference between sports experts who are consulted via an audio or video link during the live broadcast of the sports event, while the conference is being broadcast. [0004]
  • It is an object of the invention to provide a new interactive environment, and to broaden the scope of TV broadcast services. It is a further object to integrate broadcasting and conferencing. [0005]
  • SUMMARY OF THE INVENTION
  • To this end, the invention provides a method of controlling communication to multiple end users, different users residing at geographically different locations. The method comprises, in a broadcasting mode, broadcasting content information for receipt by the end users, and, in a conferencing mode, enabling interconnecting at least one subset of the end users through a network and enabling interaction between the end users of the subset. The method enables switching between the broadcasting mode and the conference mode. [0006]
  • The method of the invention thus integrates broadcasting, e.g., TV broadcasting, with conferencing, and controls the switching between these modes. The invention enables users to discuss certain events that occur in the broadcasting. Preferably, certain events in the broadcast mode trigger the switching to the conference mode. Preferably, the conference mode is enhanced with 3D-graphics models of the triggering events in order to serve as a basis for discussion in groups that are smaller than the population of the audience attending the broadcast. Software for real-time conversion of video into 3D graphics is commercially available. [0007]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The invention is explained by way of example and with reference to the accompanying drawings, wherein: [0008]
  • FIG. 1 is a diagram of a known broadcasting system; [0009]
  • FIG. 2 is a diagram of a system in the invention; and [0010]
  • FIGS. 3-5 are diagrams illustrating the method of the invention. [0011]
  • Throughout the figures, same reference numerals indicate similar or corresponding features. [0012]
  • PREFERRED EMBODIMENTS
  • Known broadcast system [0013]
  • [0014] FIG. 1 is a block diagram with the main components of a conventional broadcast system 100 for downloading information to the end users. System 100 has a camera 101, a server 102 and multiple clients, of which only client 104 is shown in order not to obscure the drawing. Server 102 is typically part of professional studio equipment. Client 104 makes accessible to the end user the information broadcasted by server 102. Typically, client 104 comprises consumer electronics equipment.
  • [0015] Server 102 comprises a real-time encoder 108, a storage 110, a mixer 112, a transport encoder 114, and a transmitter 116. Mixer 112 mixes the data supplied by encoder 108 and storage 110. Storage 110 stores pre-recorded video or graphics data. Real-time encoder 108 encodes the video captured by camera 101 into a format suitable for the mixing with the data supplied by storage 110. Encoder 114 encodes the stream into the MPEG-2 TS format. Preferably, the mixing is carried out under control of studio personnel, e.g., the local editor.
  • [0016] Client 104 comprises a set-top box 118 and a television apparatus 120. Set-top box 118 comprises a receiver 122 and a decoder 124. Transmitter 116 in server 102 communicates with receiver 122 of client 104 using an MPEG-2 Transport Stream (TS) format.
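  • As an illustration only, the FIG. 1 signal chain can be sketched as a simple data-flow pipeline. The class and function names below are hypothetical stand-ins for the numbered components (camera 101, encoder 108, storage 110, mixer 112, transport encoder 114, transmitter 116, receiver 122, decoder 124) and are not part of the patent disclosure.
```python
# Illustrative sketch of the FIG. 1 broadcast chain; all names are hypothetical
# stand-ins for the numbered components, not an actual broadcast API.
from dataclasses import dataclass

@dataclass
class Frame:
    payload: bytes          # raw video from camera 101
    timestamp: float

def realtime_encode(frame: Frame) -> bytes:
    """Encoder 108: encode live video into a format suitable for mixing."""
    return b"ENC:" + frame.payload

def fetch_prerecorded(storage: list[bytes]) -> bytes:
    """Storage 110: supply pre-recorded video or graphics data."""
    return storage.pop(0) if storage else b""

def mix(live: bytes, stored: bytes) -> bytes:
    """Mixer 112: combine live and stored material (under editorial control)."""
    return live + stored

def transport_encode(stream: bytes) -> bytes:
    """Transport encoder 114: wrap the mixed stream, e.g. into MPEG-2 TS packets."""
    return b"TS:" + stream

class SetTopBox:
    """Client 104: receiver 122 plus decoder 124 feeding display 120."""
    def receive(self, ts_packet: bytes) -> None:
        elementary = ts_packet.removeprefix(b"TS:")   # receiver 122
        video = elementary.removeprefix(b"ENC:")      # decoder 124
        print("display 120 shows:", video)

def broadcast(ts_packet: bytes, clients: list[SetTopBox]) -> None:
    """Transmitter 116: one-to-many distribution to all receiving clients."""
    for client in clients:
        client.receive(ts_packet)

# Usage: one camera frame mixed with one stored clip and broadcast to two clients.
clients = [SetTopBox(), SetTopBox()]
stored_clips = [b"|stored-graphics"]
frame = Frame(payload=b"live-goal-footage", timestamp=0.0)
broadcast(transport_encode(mix(realtime_encode(frame), fetch_prerecorded(stored_clips))), clients)
```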
  • Broadcast and conferencing system [0017]
  • [0018] FIG. 2 is a block diagram with the main components of a system 200 of the invention. System 200 integrates broadcasting with conferencing. The system architecture is discussed with reference to FIG. 2; its operation is explained further with reference to FIGS. 3-5.
  • [0019] System 200 presents an integrated approach to broadcast and conferencing modes under software application control. This approach allows switching the users between the broadcast and conference modes. The switching can be controlled by the server, by the end user, or by both. The conference is triggered by the context set by the broadcast mode. In the conference mode, the clients receiving the broadcast are split into smaller groups for multi-user communication, e.g., discussions about a controversial action during a broadcast sports event. At the end of the conferencing, the users in a group rejoin the broadcast program in a suitable manner. In order to set the context for the conferencing, audio, video and 3D graphics models are generated based on the content of the broadcast programs, and are transported to the users. For the group-wise conferencing, the users' clients employ speech, audio, video, and graphics data, and use streaming protocols and distributed shared object support.
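  • A minimal sketch of this broadcast/conference mode lifecycle is given below. The states, the group size and the join/rejoin policy are illustrative assumptions, not the patent's specification.
```python
# Minimal sketch of the broadcast/conference mode switching described above.
from enum import Enum, auto

class Mode(Enum):
    BROADCAST = auto()
    CONFERENCE = auto()

class Session:
    def __init__(self, user_ids: list[str]):
        self.mode = Mode.BROADCAST
        self.users = list(user_ids)
        self.groups: list[list[str]] = []

    def on_event(self, group_size: int = 4) -> None:
        """An event in the broadcast triggers the switch to conference mode:
        the audience is split into smaller discussion groups."""
        self.groups = [self.users[i:i + group_size]
                       for i in range(0, len(self.users), group_size)]
        self.mode = Mode.CONFERENCE

    def end_conference(self) -> None:
        """At the end of the conferencing the users rejoin the broadcast."""
        self.groups = []
        self.mode = Mode.BROADCAST

# Usage: a controversial action splits ten viewers into groups of at most four.
session = Session([f"user{i}" for i in range(10)])
session.on_event()
print(session.mode, session.groups)
session.end_conference()
```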
  • A service provider can introduce the above functionality in an evolutionary manner. This evolution can proceed in a variety of ways. For example, one could introduce this functionality of switching between broadcast and conferencing modes to all users in a stepwise manner, or to a small set of users, e.g., for a professional application. The small set of users is then a set of experts who need to establish a multi-user collaboration/communication, e.g., a set of soccer experts who are located at geographically different sites, and who are called in during a broadcast to give their expert opinion on a particular event that occurred in the broadcast soccer match. This collaboration/communication is then broadcast to all other users. Note that this approach goes beyond the current practice of consulting a remotely located expert via an audio or video link. [0020]
  • [0021] System 200 comprises a server 202 and multiple clients, of which only a single one, client 204, is shown in order to not obscure the drawing. The clients reside at different locations. In addition to components 101, 108-116, 122 and 124, mentioned above, server 202 comprises other components. Similarly, in addition to receiver 122 and decoder 124, client 204 comprises other components. The additional components manage the broadcast mode and conference mode as explained below.
  • [0022] Server 202 comprises a model generator 206, an event-triggered controller 208, and a unit 210 that manages the Session Description Protocol (SDP) and the Session Announcement Protocol (SAP). These protocols are known Internet protocols that support multicasting. For more information, see, for example, the paper “How IP Multicast Works, An IP Multicast Initiative White Paper” by Vicki Johnson and Marjory Johnson, Stardust Technologies, Inc., available on the web at: http://www.ipmulticast.com/community/whitepapers/howipmcworks.html, and its literature references. SDP describes multimedia sessions for the purpose of session initiation, such as invitations and announcements. SAP is also meant to ensure authentication and privacy.
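  • Purely as an illustration of how unit 210 might use these protocols, the sketch below builds a minimal SDP description for one conference group and announces it on the conventional SAP multicast group. The field values, the media multicast address, and the simplified SAP framing (modelled on RFC 2974 as assumed here) are assumptions for the sketch, not details taken from the patent.
```python
# Illustrative only: minimal SDP body plus simplified SAP announcement.
import socket
import struct
import time

def make_sdp(group_name: str, media_addr: str, media_port: int) -> bytes:
    """A minimal SDP body describing one audio session for a discussion group."""
    return (
        "v=0\r\n"
        f"o=server202 1 {int(time.time())} IN IP4 192.0.2.1\r\n"
        f"s={group_name}\r\n"
        f"c=IN IP4 {media_addr}/127\r\n"
        "t=0 0\r\n"
        f"m=audio {media_port} RTP/AVP 0\r\n"
    ).encode()

def make_sap_packet(sdp: bytes, origin_ip: str = "192.0.2.1") -> bytes:
    """Very simplified SAP framing (per RFC 2974 as assumed here):
    flags, auth length, message-id hash, originating source, payload type, payload."""
    header = struct.pack("!BBH4s", 0x20, 0, hash(sdp) & 0xFFFF,
                         socket.inet_aton(origin_ip))
    return header + b"application/sdp\x00" + sdp

def announce(sdp: bytes) -> None:
    """Send the announcement to the well-known SAP group 224.2.127.254, port 9875."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 16)
    sock.sendto(make_sap_packet(sdp), ("224.2.127.254", 9875))
    sock.close()

announce(make_sdp("Soccer experts panel", "239.1.2.3", 49170))
```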
  • [0023] Server 202 describes the groups thus formed using the description protocol SDP and informs the clients of the groups being formed by using the announcement protocol SAP. The clients respond by joining a particular group or by waiving to do so. Joining a group automatically activates the conference software application required for enabling the user to participate in the group activities as discussed below. System 200 further comprises a database 212 with identifications of the clients, such as of client 204, and information regarding the preferences, authorization, etc. of the clients, in order to form the groups for the conferencing mode. This information is based, e.g., upon a query among the users carried out in advance. Model generator 206 is coupled to camera 101 via a server input 203, to storage 110 and to event-triggered controller 208. Generator 206 generates 3D graphics models, e.g., in a VRML format, of the video data supplied by camera 101, or modifies the 3D graphics models stored in storage 110. Software for real-time conversion of video into 3D graphics is known, for example, as a product from Orad Hi-Tech Systems, Ltd. Generator 206 is controlled by controller 208. Controller 208 triggers generator 206 to create a 3D graphics model in response to the occurrence of a certain event. The event corresponds to a pre-programmed condition or is a manual input by, e.g., a sports commentator or studio personnel, during the broadcasting. Controller 208 also triggers the formation of groups of clients, which could be for entering a conference mode, or for watching a conference between the users of other clients. To this end, controller 208 is connected to SDP&SAP unit 210.
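  • The sketch below illustrates one way the groups for the conferencing mode might be formed from the client database 212. The record fields (topic preference, modification rights) and the grouping policy are hypothetical; the patent only states that identities, preferences and authorization information, e.g. gathered by a query in advance, are used.
```python
# Hypothetical grouping of clients from database 212 by declared preference.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class ClientRecord:
    client_id: str
    topic_preference: str     # e.g. "referee decisions", "tactics"
    may_modify_scene: bool    # authorization to manipulate the 3D model

def form_groups(db: list[ClientRecord], max_size: int = 4) -> list[list[ClientRecord]]:
    """Group clients by declared preference, then split into small groups."""
    by_topic: dict[str, list[ClientRecord]] = defaultdict(list)
    for rec in db:
        by_topic[rec.topic_preference].append(rec)
    groups: list[list[ClientRecord]] = []
    for members in by_topic.values():
        groups.extend(members[i:i + max_size] for i in range(0, len(members), max_size))
    return groups

# Usage: two clients interested in referee decisions end up in one group.
db = [ClientRecord("client204", "referee decisions", True),
      ClientRecord("client205", "referee decisions", False),
      ClientRecord("client206", "tactics", False)]
for group in form_groups(db):
    print([rec.client_id for rec in group])
```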
  • [0024] Client 204 has a set-top box 214 that comprises a software application 216 for control of a conferencing mode of this particular client 204. Conferencing modes are further explained below and with reference to FIGS. 3-5. Application 216 determines, among other things, the type of interaction and communication between client 204 and the other clients in the group to which it is assigned. To this end, application 216 communicates with database 212. Client 204 receives via a server output 207 and a client input 217 the 3D graphics data from model generator 206 in server 202, e.g., via the Internet with an Internet Protocol (IP), or via the broadcast channel with IP over MPEG-2 TS. Application 216 determines, based on the authorization and/or preference information in database 212, whether the user is only permitted to watch the 3D scene from different points of view, or also to modify the scene, e.g., to show alternatives to the broadcast event by changing the scene's configuration that has been modeled. Within this context, generator 206 is preferably capable of generating different models for different groups. Application 216 controls a 3D renderer 218 that comprises, for example, a VRML browser. Decoder 124 and renderer 218 are connected to a compositor 226 that processes the input to prepare it for display and play-out to the user at display 120. Compositor 226 is also connected to an output of A/V/Speech coders 228. A/V streaming protocols 224 enable efficient audio/video data transport between the clients via real-time communication channels 225, here through the Internet, in the conferencing mode. A/V/Speech coders 228 take care of the encoding of the A/V/Speech input of client 204 via a microphone 230 and of the decoding of the stream received from the other clients.
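  • The following is a hypothetical sketch of the permission check performed by application 216: depending on the authorization held in database 212, a user may only change the viewpoint on the 3D scene, or may also modify the modeled configuration. Class and method names are invented for illustration.
```python
# Hypothetical permission gating in the client conferencing application 216.
class Scene:
    def __init__(self) -> None:
        self.viewpoint = (0.0, 0.0, 10.0)
        self.objects = {"ball": (1.0, 0.0, 0.0)}   # simplified VRML-like content

class ConferenceApp:
    def __init__(self, authorized_to_modify: bool):
        self.authorized_to_modify = authorized_to_modify
        self.scene = Scene()

    def set_viewpoint(self, xyz: tuple[float, float, float]) -> None:
        # Watching the scene from different points of view is always allowed.
        self.scene.viewpoint = xyz

    def move_object(self, name: str, xyz: tuple[float, float, float]) -> bool:
        # Modifying the scene (e.g. showing an alternative to the broadcast
        # event) is only allowed for users authorized in database 212.
        if not self.authorized_to_modify:
            return False
        self.scene.objects[name] = xyz
        return True

app = ConferenceApp(authorized_to_modify=False)
app.set_viewpoint((5.0, 2.0, 8.0))               # permitted for every group member
print(app.move_object("ball", (0.0, 0.0, 0.0)))  # False: view-only user
```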
  • [0025] Client 204 and the other clients in the same group as client 204 interact via the Internet/Multicast Routers 220. This interaction is supported locally, at client 204, by a world model and distributed shared object support protocols (S.O.S.) 222, in order to maintain overall consistency of the 3D model when it is being manipulated by authorized users. To this end, a user input device 232, e.g., a joystick, is provided at authorized client 204 for modifying or otherwise manipulating the 3D model via application 216. A Group Management unit 234 handles group management, authentication, access control and subscription issues such as payment. Unit 234 is, for example, part of application 216, or is a separate application, or is implemented with a smart card reader. Unit 234 receives the relevant control information from server 202 via an input 233.
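  • One simple scheme for keeping the shared 3D world consistent across replicas is sketched below. The patent only states that distributed shared object support maintains consistency; the versioned, newest-update-wins approach shown here is an assumption chosen for illustration.
```python
# Assumed consistency scheme: each update carries a monotonically increasing
# version, and a replica applies an update only if it is newer than its state.
from dataclasses import dataclass, field

@dataclass
class Update:
    object_name: str
    position: tuple[float, float, float]
    version: int

@dataclass
class SharedWorldReplica:
    objects: dict[str, tuple[float, float, float]] = field(default_factory=dict)
    versions: dict[str, int] = field(default_factory=dict)

    def apply(self, upd: Update) -> bool:
        """Apply an update only if it is newer than the local state."""
        if upd.version <= self.versions.get(upd.object_name, -1):
            return False            # stale update from another client, ignore
        self.objects[upd.object_name] = upd.position
        self.versions[upd.object_name] = upd.version
        return True

# Usage: two replicas converge even if updates arrive in a different order.
a, b = SharedWorldReplica(), SharedWorldReplica()
u1 = Update("ball", (1.0, 0.0, 0.0), version=1)
u2 = Update("ball", (2.0, 0.0, 0.0), version=2)
for replica, order in ((a, (u1, u2)), (b, (u2, u1))):
    for upd in order:
        replica.apply(upd)
print(a.objects == b.objects)   # True
```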
  • [0026] Note that components 124, 216, 218, 222, 224, 226, 228 and 234 may all be implemented in software.
  • [0027] Operation is as follows. The transition from a broadcast mode to a conference communication mode is triggered by an event. This triggering can be automatic or manual, determined by a sports commentator for a live broadcast or by studio personnel for a pre-recorded program. Upon a trigger from controller 208, model generator 206 creates the 3D graphics models, possibly different ones for different groups of users.
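  • As a sketch of this trigger logic, controller 208 can be pictured as accepting either a pre-programmed condition evaluated on the live feed or a manual input from a commentator or studio personnel. The condition shown (a goal flag in the feed metadata) and all names are purely illustrative.
```python
# Illustrative trigger logic for the event-triggered controller 208.
from typing import Callable

class EventTriggeredController:
    def __init__(self, on_trigger: Callable[[str], None]):
        # on_trigger stands in for starting model generator 206 and group formation.
        self.on_trigger = on_trigger

    def check_feed(self, feed_metadata: dict) -> None:
        """Automatic trigger: a pre-programmed condition on the broadcast content."""
        if feed_metadata.get("goal_scored"):
            self.on_trigger("goal")

    def manual_trigger(self, label: str) -> None:
        """Manual trigger by a sports commentator or studio personnel."""
        self.on_trigger(label)

def start_conferencing(event_label: str) -> None:
    print(f"generate 3D model for '{event_label}' and announce conference groups")

controller = EventTriggeredController(start_conferencing)
controller.check_feed({"goal_scored": True})        # automatic, live broadcast
controller.manual_trigger("controversial offside")  # manual, e.g. pre-recorded program
```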
  • [0028] FIG. 3 illustrates the transitions between a large group 302 watching the broadcast and smaller groups 304, 306, . . . , 308 formed out of the larger group.
  • [0029] FIG. 4 illustrates a more detailed scenario, wherein a large group 402 comprises a group 404, a group 406 and a group 408. The users in group 404 switch between the broadcast mode reception and the conference mode and remain passive in the sense that they merely receive information and do not interact actively. The users in group 408 are divided among a plurality of smaller groups 410, 412, . . . , 414, each not necessarily of the same users during the session, for participating in the conferencing. The users in group 406 form a panel whose conference is merged with the broadcasting to all users who want to receive this.
  • [0030] FIG. 5 illustrates a refinement of the scenario of FIG. 4. It is possible that not all users can or want to enter the conference mode, either for attending a conference in a small group or for viewing the conference of another group, e.g., of the soccer experts group. For example, not all users capable of receiving the broadcasted information have the equipment supporting the switching between the broadcast mode and the conferencing mode. Under this scenario, a group 502 stays outside the switching scenario and is not hampered by it.

Claims (10)

We claim:
1. A method of controlling communication to multiple end users at geographically different locations, the method comprising:
in a broadcasting mode broadcasting content information for receipt by the end users;
in a conferencing mode:
enabling interconnecting at least one subset of the end users through a network;
enabling interaction between the end users of the subset via the network; and
enabling switching between the broadcasting mode and the conference mode.
2. The method of claim 1, comprising, while in the conference mode, broadcasting the interaction to another subset of the end users.
3. The method of claim 1, wherein the switching is enabled by a specific event in the content information broadcasted.
4. The method of claim 1, wherein the content information comprises video information, and wherein the method comprises:
creating a graphics representation of the video information;
in the conference mode supplying the graphics representation to the subset of end users.
5. The method of claim 4, wherein:
one or more specific ones of the end users in the subset is enabled to interactively modify the graphics representation.
6. The method of claim 4, wherein:
while in the conferencing mode, the interaction is broadcasted to another subset of end users; and
one or more specific ones of the end users in the subset is enabled to interactively modify the graphics representation.
7. A system for controlling communication between multiple end users at geographically different locations, the system comprising:
a server;
a respective one of multiple clients for a respective one of the end users, the clients being coupled to the server; wherein:
the server comprises:
a transmission unit for broadcasting content information to the users;
a trigger unit for triggering formation of at least one group of end users upon an event relating to the broadcasting;
a unit for controlling the formation of the group coupled to the trigger unit; and
each respective client being enabled to switch between making accessible to the respective end user the broadcasted content information and enabling entering a conference between the end users of the group via the client.
8. The system of claim 7, wherein:
the server comprises:
a server input for receiving video data; and
a model generator connected to the server input for generating a graphics model based on the video data;
a server output connected to the model generator for supply of the model;
a respective client comprises:
a client input connected to the server output for receipt of the model.
9. A client apparatus for use with a video server, the client apparatus comprising:
a receiver for receiving a TV broadcast;
a coder for coding information received via the Internet from another client; and
an input for receipt of a control signal from the server; wherein:
the apparatus is operative to selectively control switching the apparatus between making accessible to an end user the broadcast or making accessible to the end user a real-time communication channel with another client in response to receipt of the control signal.
10. The apparatus of claim 9, being operative to render a 3D graphics model received from the server and to make the rendered model accessible to the end user while the end user has access to the communication channel.
US09/053,448 1998-04-10 1998-04-10 Group-wise video conferencing uses 3d-graphics model of broadcast event Abandoned US20020122112A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
US09/053,448 US20020122112A1 (en) 1998-04-10 1998-04-10 Group-wise video conferencing uses 3d-graphics model of broadcast event
EP99909145A EP0988753A2 (en) 1998-04-10 1999-04-01 Group-wise video conferencing uses 3d-graphics model of broadcast event
KR1019997011598A KR100722704B1 (en) 1998-04-10 1999-04-01 Group-wise video conferencing uses 3D-graphics model of broadcast event
JP55140299A JP4350806B2 (en) 1998-04-10 1999-04-01 Use 3D graphics model for broadcast events in group-style video conferencing
CNB998009032A CN1213566C (en) 1998-04-10 1999-04-01 Group-wise video conferencing uses 3D-graphics model of broadcast event
PCT/IB1999/000574 WO1999053691A2 (en) 1998-04-10 1999-04-01 Group-wise video conferencing uses 3d-graphics model of broadcast event

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US09/053,448 US20020122112A1 (en) 1998-04-10 1998-04-10 Group-wise video conferencing uses 3d-graphics model of broadcast event

Publications (1)

Publication Number Publication Date
US20020122112A1 true US20020122112A1 (en) 2002-09-05

Family

ID=21984308

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/053,448 Abandoned US20020122112A1 (en) 1998-04-10 1998-04-10 Group-wise video conferencing uses 3d-graphics model of broadcast event

Country Status (6)

Country Link
US (1) US20020122112A1 (en)
EP (1) EP0988753A2 (en)
JP (1) JP4350806B2 (en)
KR (1) KR100722704B1 (en)
CN (1) CN1213566C (en)
WO (1) WO1999053691A2 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020184377A1 (en) * 2001-06-01 2002-12-05 Flavin James D. One to many mapping of application service provision
US6812956B2 (en) * 2001-12-21 2004-11-02 Applied Minds, Inc. Method and apparatus for selection of signals in a teleconference
US20050062843A1 (en) * 2003-09-22 2005-03-24 Bowers Richard D. Client-side audio mixing for conferencing
US20070198740A1 (en) * 2003-12-19 2007-08-23 Koninklijke Philips Electronic, N.V. Broadcast driven virtual community of p2p network
US20080178206A1 (en) * 2006-09-08 2008-07-24 Hitachi Maxell, Ltd. Disk magazine and disk changer system
US7908178B2 (en) 2004-05-04 2011-03-15 Paul Nykamp Methods for interactive and synchronous displaying session
US20120176466A1 (en) * 2010-07-08 2012-07-12 Lisa Marie Bennett Wrench Method of collecting and employing information about parties to a televideo conference
US9924205B2 (en) 2014-06-10 2018-03-20 Tencent Technology (Shenzhen) Company Limited Video remote-commentary synchronization method and system, and terminal device
CN108306862A (en) * 2018-01-02 2018-07-20 北京星光影视设备科技股份有限公司 The long-range real-time interaction methods of 3D and conference system

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003518840A (en) * 1999-10-29 2003-06-10 ユナイテッド ビデオ プロパティーズ, インコーポレイテッド TV video conferencing system
WO2002080556A1 (en) * 2001-03-30 2002-10-10 David Becker F Remote collaboration technology design and methodology
KR100914636B1 (en) * 2001-05-29 2009-08-28 코닌클리케 필립스 일렉트로닉스 엔.브이. A method of transmitting a visual communication signal, a transmitter for transmitting a visual communication signal and a receiver for receiving a visual communication signal
US20030223562A1 (en) * 2002-05-29 2003-12-04 Chenglin Cui Facilitating conference calls by dynamically determining information streams to be received by a mixing unit
KR20040020101A (en) * 2002-08-29 2004-03-09 최형주 Total image data service method using video conference communication
CN1331359C (en) * 2005-06-28 2007-08-08 清华大学 Transmission method for video flow in interactive multi-viewpoint video system
US7768543B2 (en) * 2006-03-09 2010-08-03 Citrix Online, Llc System and method for dynamically altering videoconference bit rates and layout based on participant activity
KR20130015766A (en) * 2011-08-05 2013-02-14 (주)유니파인테크 Broadcasting system using vedioconferencing and method thereof
KR101295976B1 (en) * 2012-06-04 2013-08-13 충북대학교 산학협력단 3d video conference system
CN109688365B (en) * 2018-12-27 2021-02-19 北京真视通科技股份有限公司 Video conference processing method and computer-readable storage medium
US11211050B2 (en) * 2019-08-13 2021-12-28 International Business Machines Corporation Structured conversation enhancement

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5315633A (en) * 1991-12-20 1994-05-24 Unisys Corporation Digital video switch for video teleconferencing
US5440624A (en) * 1992-11-10 1995-08-08 Netmedia, Inc. Method and apparatus for providing adaptive administration and control of an electronic conference
US5867653A (en) * 1996-04-18 1999-02-02 International Business Machines Corporation Method and apparatus for multi-cast based video conferencing

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020184377A1 (en) * 2001-06-01 2002-12-05 Flavin James D. One to many mapping of application service provision
US7398195B2 (en) * 2001-06-01 2008-07-08 Progress Software Corporation One to many mapping of application service provision
US6812956B2 (en) * 2001-12-21 2004-11-02 Applied Minds, Inc. Method and apparatus for selection of signals in a teleconference
US7230639B2 (en) 2001-12-21 2007-06-12 Applied Minds, Inc. Method and apparatus for selection of signals in a teleconference
US20050062843A1 (en) * 2003-09-22 2005-03-24 Bowers Richard D. Client-side audio mixing for conferencing
US20070198740A1 (en) * 2003-12-19 2007-08-23 Koninklijke Philips Electronic, N.V. Broadcast driven virtual community of p2p network
US8566475B2 (en) 2003-12-19 2013-10-22 Koninklijke Philips N.V. Broadcast driven virtual community of P2P network
US8069087B2 (en) 2004-05-04 2011-11-29 Paul Nykamp Methods for interactive and synchronous display session
US7908178B2 (en) 2004-05-04 2011-03-15 Paul Nykamp Methods for interactive and synchronous displaying session
US8311894B2 (en) 2004-05-04 2012-11-13 Reliable Tack Acquisitions Llc Method and apparatus for interactive and synchronous display session
US20080178206A1 (en) * 2006-09-08 2008-07-24 Hitachi Maxell, Ltd. Disk magazine and disk changer system
US20120176466A1 (en) * 2010-07-08 2012-07-12 Lisa Marie Bennett Wrench Method of collecting and employing information about parties to a televideo conference
US8817966B2 (en) * 2010-07-08 2014-08-26 Lisa Marie Bennett Wrench Method of collecting and employing information about parties to a televideo conference
US9485462B2 (en) 2010-07-08 2016-11-01 Lisa Marie Bennett Wrench Method of collecting and employing information about parties to a televideo conference
US9490993B1 (en) 2010-07-08 2016-11-08 Lisa Marie Bennett Wrench Method of collecting and employing information about parties to a televideo conference
US9924205B2 (en) 2014-06-10 2018-03-20 Tencent Technology (Shenzhen) Company Limited Video remote-commentary synchronization method and system, and terminal device
CN108306862A (en) * 2018-01-02 2018-07-20 北京星光影视设备科技股份有限公司 The long-range real-time interaction methods of 3D and conference system

Also Published As

Publication number Publication date
KR100722704B1 (en) 2007-06-04
EP0988753A2 (en) 2000-03-29
JP2002510457A (en) 2002-04-02
WO1999053691A2 (en) 1999-10-21
KR20010013590A (en) 2001-02-26
CN1213566C (en) 2005-08-03
CN1273002A (en) 2000-11-08
WO1999053691A3 (en) 1999-12-29
JP4350806B2 (en) 2009-10-21

Similar Documents

Publication Publication Date Title
US20020122112A1 (en) Group-wise video conferencing uses 3d-graphics model of broadcast event
KR100573209B1 (en) A unified distributed architecture for a multi-point video conference and interactive broadcast systems
EP1472871B1 (en) Remote server switching of video streams
DE602004006352T2 (en) Audio / Video conference with presence notification using content-based data transfer
DE60222890T2 (en) Methods and apparatus for implementing highly interactive entertainment services using media streaming technology that enables the provision of virtual reality services
US8555309B2 (en) Converged communication server with transaction management
US20070067818A1 (en) Means and method for mobile television
JP2013515445A (en) System and method for bidirectional synchronized video viewing
JP2007028586A (en) Interactive multimedia content production system
JP2007082182A (en) Creating method of interactive multimedia content
US20110035767A1 (en) Iptv remote broadcasting system for audience participation and service providing method thereof
JP2005198313A (en) Digital real-time interactive program system
EP1190572A1 (en) Methods and apparatus for information broadcasting and reception
WO2008000114A1 (en) Method for interfusing conference television system with iptv system and apparatus thereof
KR100384757B1 (en) Distributed internet broadcasting method and system using camera and screen capture
JP2005191968A (en) Two-way broadcasting system enabling viewing audience to produce and transmit program
CN108833175A (en) A kind of live network broadcast method and system based on video conference
JP2004015087A (en) Viewer participating type two-way communication service system
KR100548233B1 (en) Interactive broadcasting system using network
KR20040089729A (en) Interactive television system
Wong et al. Software-only video production switcher for the Internet MBone
JP2005039598A (en) Interactive distribution system
Rowe The Future of Interactive Television
Deicke et al. A client/server application as an example for MPEG-4 systems

Legal Events

Date Code Title Description
AS Assignment

Owner name: U.S. PHILIPS CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MALLART, RAOUL;SINHA, ATUL;REEL/FRAME:009125/0232

Effective date: 19980331

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION