US20100266051A1 - Method for video enabled electronic commerce - Google Patents

Method for video enabled electronic commerce

Info

Publication number
US20100266051A1
US20100266051A1 (application US 12/828,500)
Authority
US
United States
Prior art keywords
vision
content
user
image
enabled content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/828,500
Inventor
Subutai Ahmad
G. Scott France
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Electric Planet Interactive
Original Assignee
Elet Systems LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Elet Systems LLC filed Critical Elet Systems LLC
Priority to US12/828,500
Assigned to ELECTRIC PLANET, INC.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AHMED, SUBUTAI; FRANCE, G. SCOTT
Assigned to ELECTRIC PLANET INTERACTIVE: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: FRANCE, G. SCOTT; AHMAD, SUBUTAI
Assigned to ELET SYSTEMS L.L.C.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ELECTRIC PLANET INTERACTIVE
Publication of US20100266051A1
Assigned to IV GESTURE ASSETS 12, LLC: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ELET SYSTEMS L.L.C.
Assigned to MICROSOFT CORPORATION: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: IV GESTURE ASSETS 12, LLC
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MICROSOFT CORPORATION

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0641 Shopping interfaces
    • G06Q30/0643 Graphical representation of items or shoppers

Definitions

  • FIG. 7 illustrates an exemplary process flow of the entertainment/educational model shown in FIG. 6 .
  • a user activates an entertainment or educational session by connecting to the host 602 .
  • the host may provide a listing of the vision-enabled content available to the user from which the user may choose.
  • the host looks for the plug in on the user's station.
  • the plug in may connect to the technology provider's website to check for an upgrade in operation 704 . This may occur in the background. If the plug in is current, the process continues. If the plug in is not current, the user is given the option of downloading the update in operation 706 . If the user chooses to get the update, it is downloaded onto the user's station in operation 708 . If the user chooses not to get the update, the process continues.
  • the user is given the option to get the plug in in operation 710. If the user chooses to get the plug in, it is downloaded onto the user's station, as in operation 708. If the user chooses not to get the plug in, the process is aborted in operation 712.
  • a user recognition process is performed to identify the user and/or a group to which the user belongs. This allows the host to target options towards the user or group. In the entertainment embodiment, for example, past user performance may be used to group the user with game players of similar skill. See the previous discussion with reference to FIG. 5A for a description of the recognition process.
  • a determination of whether the entertainment or educational activity will be interacted with by the user individually or with a group is made in operation 716 . If the activity is to be performed by the user individually, options are presented in operation 718 . In operation 720 , the user is allowed to select the desired activity, i.e., entertainment activity or educational application, from the options presented in operation 718 and the process continues.
  • options are presented in operation 722 and the user selects an activity from the options in operation 724 .
  • IP addresses for the group members are sent to the user in operation 726. This allows the members of the group to interact directly with each other without the host once the applet is received from the host, though the group activity may be performed through the host as well.
  • the applet corresponding to the selected activity is sent to the user.
  • the user and/or group is allowed to interact with the activity in operation 732 .
  • the user plays the game, attends a virtual lecture, etc.
  • Operation 732 is repeated until it is determined in operation 734 that the user or group has completed interacting with the activity.
  • statistics are sent to the host, which may be used to create and supplement user and/or group profiles.
  • a group of users each having the proper plug in connect to an entertainment host to play a group game.
  • Each user receives the applet associated with the game to be played from the host over a wide area network.
  • each player is represented by an animated character.
  • a visual image of each of the users is obtained, a person image is recognized and body part recognition is performed on each of the images of the user to separate out a head, arms, and torso of the user, for example.
  • the background is also separated from the person image.
  • the person image of the head of each player is composited to the animated character corresponding to that player and either the person image or data representing the animated character is distributed to each of the users.
  • the game is played either through the host or among the players across the network.
  • each animated character bears the likeness of the associated player.
  • movement of a player during play is recognized and the corresponding animated character performs similar movements.
  • interactions between the animated characters and objects appearing on the display may be required.
  • contact and collisions of the objects with the animated characters, as well as the animated characters with each other may form part of the game, as in a game of virtual basketball.
  • the contact and/or collision is detected and the objects and/or animated characters are made to react accordingly.
  • More information concerning detecting interactions between the animated characters and objects may be found in a patent application entitled “SYSTEM, METHOD AND ARTICLE OF MANUFACTURE FOR DETECTING COLLISIONS BETWEEN VIDEO IMAGES GENERATED BY A CAMERA AND AN OBJECT DEPICTED ON A DISPLAY” filed Jul. 30, 1999 under application Ser. No. 09/364,629, issued as U.S. Pat. No. 6,738,066, and herein incorporated by reference in its entirety.
  • a user is watching television.
  • An applet that allows remote control of the television is enabled.
  • the user's movements are recognized, and different movements of the user implement different commands, such as changing the volume and switching channels (a sketch of such a gesture-to-command mapping follows this list).
  • the user may be recognized upon turning the television on, and the user's favorite channel would be tuned to.
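  • The bullets above describe using recognized movements as a television remote control. The following is a minimal, hypothetical sketch of that idea: a mapping from recognized gestures to TV commands, plus the recognized-user favorite-channel behavior. The gesture names, command set, and favorites table are invented for illustration and are not taken from the patent.

```python
# Sketch of the gesture "remote control" described above: recognized movements
# map to television commands.  Gesture names and commands are assumptions.

GESTURE_COMMANDS = {
    "raise_right_hand": "volume up",
    "lower_right_hand": "volume down",
    "swipe_left":       "previous channel",
    "swipe_right":      "next channel",
}

def tv_command(gesture, user_id=None, favorites=None):
    # When the user is recognized as the set turns on, jump to a favorite channel.
    if gesture == "power_on" and favorites and user_id in favorites:
        return f"tune to channel {favorites[user_id]}"
    return GESTURE_COMMANDS.get(gesture, "no action")

print(tv_command("swipe_right"))                                   # next channel
print(tv_command("power_on", user_id="alice", favorites={"alice": 7}))
```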

Abstract

A method is provided for conducting commerce over a network via vision-enabled content. First, content is encoded to convert it into vision-enabled content. Payment is received for vision-enabling the content. Also, a program to decode the vision-enabled content is provided. Finally, the vision-enabled content is sent to a user over a network. The program decodes the vision-enabled content and receives an image of the user. The vision-enabled content may include advertising content, entertainment content, and educational or instructional content. In one embodiment, the program combines the image of the user with the vision-enabled content. In another embodiment, the program utilizes the image of the user to control the vision-enabled content.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This patent application is a divisional of U.S. patent application Ser. No. 11/507,794, filed Aug. 21, 2006, which is a continuation of U.S. patent application Ser. No. 09/371,462, filed Aug. 1, 1999, now issued as U.S. Pat. No. 7,113,918. The above-identified patent applications are hereby incorporated herein by reference in their entirety.
  • BACKGROUND OF THE INVENTION
  • 1. The Field of the Invention
  • The present invention relates to electronic commerce, and more particularly to conducting electronic commerce by enabling creation of vision-enabled content.
  • 2. The Relevant Art
  • Activities such as advertising, entertainment and education are commonly conducted over a network such as the Internet. The creator of an activity conducts that activity by publishing content which then becomes available to users who are connected to the network and have the necessary program to receive and display that content, such as a web browser. For example, advertisements in the form of linked banners appear on a multitude of web sites. Streaming audio and video as well as audio and video clips have become commonplace. Further, virtual classrooms and interactive learning materials are being used for long-distance learning.
  • Such activities, however, are constrained by the limitations of the technology being used to send, receive and navigate them. Users receiving content over a network currently interact with the content with input devices such as a mouse and keyboard. As a result, true interaction with the content must be left to the imagination. A user viewing an advertisement for shirts, for example, may be able to select different styles and colors of shirts, but would not be able to see himself or herself wearing the shirt. Since the user does not know how he or she looks in the shirt, the user is less likely to purchase the shirt from the advertiser and will more likely go to a store where the user may try the shirt on before purchase. Thus, the advertiser will probably lose the sale.
  • The problem is similar in entertainment. There are currently several products on the market which allow replay of downloaded audio and video. For example, Windows Media Player by Microsoft® Corporation and RealPlayer by RealNetworks, Inc. allow an entertainment producer to transmit audio and video clips as well as streaming audio and video. Both of these products allow the user to interact with the content in the limited sense that the user is able to select the clips and streams and start and stop playback at will.
  • Unfortunately, these products are directed towards playback of content alone. Most users prefer to watch motion video on a television rather than over the Internet, typically because of the location and smaller size of the computer display. If the content is the same, there is little reason to watch it on the computer. There needs to be something that makes a user want to watch the content on the computer, such as a vision-based interaction between the user and the content.
  • Network gaming is a popular pastime for many people. While gaming technology has come far, gaming is still very impersonal in that the animated characters that represent each player bear only the likeness given them by the programmer and no resemblance to the actual players. Game play would be much more enjoyable if the animated characters of a game bore the likenesses of the associated players.
  • Adding to the impersonality of gameplay are game controllers. The realism of the game can often depend on how the player's commands are input into the computer. Movement of a user to make the animated character perform a similar movement is much more desirable than pushing a button to make a movement. Take, for example, a boxing game. A player would be much more likely to enjoy the game if the player could physically move his or her arm in a punching motion and see the animated character make a similar move in the game.
  • BRIEF SUMMARY OF THE INVENTION
  • A method is provided for conducting commerce over a network via vision-enabled content. First, content is encoded to convert it into vision-enabled content. Payment is received for vision-enablement of the content. Also, a program to decode the vision-enabled content is provided. Finally, the vision-enabled content is sent to a user over a network. The program decodes the vision-enabled content and receives an image of the user. The vision-enabled content may include advertising content, entertainment content, and education content.
  • In one embodiment of the present invention, the program combines the image of the user with the vision-enabled content. The encoding allows a content publisher to distribute virtual content which can be received and interacted with by a user. For example, this would allow display of the user image interacting with a product or as part of entertainment content, such as an image of the user wearing a piece of clothing or alongside a music star in a music video. This also allows a plurality of users to interact with each other, such as playing a game in which characters in the game bear a resemblance to the users.
  • In another embodiment of the present invention, the program utilizes the image of the user to control the vision-enabled content. Controlling the content includes not only selecting certain images based on the user image, but also controlling the way the content appears, such as using the person image to control the way a character moves through a game, with the game flow changing as a result of the character's actions. In this way, a user is able to use movements to control the content being perceived by the user.
  • The encoding of the content may be performed via tools with payment being received in exchange for use of the tools. This allows a content provider to create its own vision-enabled content.
  • In one aspect of the present invention, payment may be received based on a number of users receiving the vision-enabled content. Alternatively, payment could be received based on a quantity, i.e., an amount, of vision-enabled content sent. Payment may also be received from a content provider for storing the vision-enabled content.
  • In another aspect of the present invention, payment is received from the user. For instance, payment could be received from the user in exchange for the program. Optionally, an upgrade for the program can be offered. Payment could be received in exchange for the upgrade.
  • To personalize the content, an identity of a user may be recognized, such as from the person image, and the vision-enabled content can be selected based on the identity of the user. The user may also be associated with a group and the vision-enabled content selected based on the association of the user with the group.
  • As an option, body part recognition may be performed on the person image. This allows the user to assist in the selection of the content such as by performing a particular gesture. As an option, the content may be selected based on an interpretation of movement of the body part of the user.
  • The outputted content may include an interaction between the person image and the content, such as a portion of the person image appearing to interact with video images. As mentioned previously, body part recognition may be performed on the person image. In such case, the content may include an image of the body part of the user. The content may be output in real time via a data stream or sent in encapsulated form.
  • A background may be removed from the person image to assist in the recognition of a user in the person image. The background may also be removed to allow a portion of the person image to appear to interact with the content.
  • As an option, statistical data may be collected and used to create user profiles and informational databases. Optionally, payment may be received in exchange for access to the statistics.
  • These and other aspects and advantages of the present invention will become more apparent when the Description below is read in conjunction with the accompanying Drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present invention will be readily understood by the following detailed description in conjunction with the accompanying drawings.
  • FIG. 1 is a diagram illustrating an interconnection between users, a content developer, a content publisher, and a technology provider in accordance with a business model of the present invention;
  • FIG. 2 is a diagram illustrating components of a business model of the present invention;
  • FIG. 3 illustrates an advertising model of the present invention corresponding to block A of FIG. 2;
  • FIG. 4 illustrates a process flow of the advertising model shown in FIG. 3 in accordance with one embodiment of the present invention;
  • FIG. 5A illustrates a process of the present invention associated with operation 412 of FIG. 4 for personalizing content;
  • FIG. 5B illustrates processes associated with operation 414 of FIG. 4 in accordance with one embodiment of the present invention;
  • FIG. 6 illustrates an entertainment/educational model of the present invention corresponding to blocks B and C of FIG. 2; and
  • FIG. 7 illustrates a process flow of the entertainment/educational model shown in FIG. 6 in accordance with one embodiment of the present invention.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • The present invention is adapted for controlling content based on an image or series of images of a user. With reference to FIG. 1, a user 100 connects to a content publisher's website 102 over a wide area network 104, e.g., the Internet, via a station, i.e., a computer, or other processing device such as a television. Vision-enabled content is sent to the user's computer in either encapsulated or streaming form over the wide area network 104 from the content publisher's website 102 where it is presented to the user via display or audio. An image or plurality of images of a user 100 are received and content based on the image of the user is selected from the vision-enabled content and displayed in such a way that the content appears to interact with the user 100, e.g., a portion of the image of the user 100 appears with the content, and/or movements of the user are recognized and used to control the content. More detail is provided below.
  • The content offered by the content publisher 102 may be created by a content developer 106 and sent to the content publisher. The tools, i.e., programs and hardware, necessary to encode the content into the vision-enabled format may be received from a technology provider 108. These programs and tools may be sent to either the content publisher or the content developer, or both.
  • FIG. 2 is a diagram that illustrates various components of a business model of the present invention. First, the content is encoded in a manner that converts it into vision-enabled content. The content may include streaming video, animated objects, web pages, games, advertising, educational applications, audio data, or anything else. As mentioned above, some or all of the tools 200 necessary to perform such encoding may be provided to a content provider, such as a content publisher or developer. This allows the content provider to create its own vision-enabled content. Payment would be received in exchange for use of these tools 200.
  • Alternatively, the technology provider may receive the content and perform the encoding of the content. Encoding fees 202 would be charged for performing the encoding. Once encoded, the content is sent to a publisher's website for dissemination.
  • The encoded content is sent to a user over a network. Preferably, the vision-enabled content is sent to the user's station via a data stream or in the form of an applet, where it is decoded by a program, e.g., a plug in 204. The data stream may be compressed. The plug in receives an image of the user from a camera 206 connected to the user's station. The applet may control the content based on the person image. Alternatively, the content may be controlled from the content provider's location. In either case, the user is allowed to interact with the content, as discussed below in more detail.
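  • As an illustration of the architecture just described, the following minimal sketch shows how a client-side plug in might decode an encapsulated applet and pair it with a frame from the user's camera. The JSON payload format, the VisionPlugin class, and the capture stub are assumptions made for illustration; the patent does not specify an encoding.

```python
import json

# Hypothetical client-side "plug in": decodes vision-enabled content and pairs
# it with frames from the camera attached to the user's station.

class VisionPlugin:
    def decode(self, payload: bytes) -> dict:
        # Decode an encapsulated applet into content the station can render.
        return json.loads(payload.decode("utf-8"))

    def capture_frame(self) -> list:
        # Stand-in for a frame grabbed from the camera 206; real code would
        # read from a capture device.
        return [[0] * 4 for _ in range(3)]

    def present(self, content: dict, frame: list) -> str:
        # Either composite the user image into the content or use it to
        # control which content is shown.
        mode = content.get("mode", "composited")
        return (f"rendering '{content['title']}' in {mode} mode "
                f"over a {len(frame)}x{len(frame[0])} camera frame")

plugin = VisionPlugin()
applet = json.dumps({"title": "sunglasses ad", "mode": "composited"}).encode()
print(plugin.present(plugin.decode(applet), plugin.capture_frame()))
```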
  • The plug in 204 and/or applet could be downloaded from the technology provider or content publisher. Preferably, a basic version of the plug in is downloadable for free. Alternatively, the user may be charged for the plug in 204. The user may be able to download an upgrade for the plug in, for which upgrade fees 208 may be charged to the user. It should be kept in mind that the plug in and/or applet could also be installed from a computer readable medium such as a floppy disk or compact disc.
  • Fees may be charged to the content provider based on number of downloads of content, amount of content transmitted over the network, etc. Fees may also be charged per data stream or per group of data streams up to or over a predetermined number. These would be kept track of via statistics returned to either the content provider or the technology provider. Alternatively, fees may be charged based on the size of the audience that the content provider wishes to address. Payment may also be received from a content provider for storing and hosting the vision-enabled content.
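  • The charging schemes above amount to simple tallies. The sketch below shows one hypothetical way such fees might be computed from the returned statistics; every rate, block size, and field name is an invented example rather than a figure from the patent.

```python
# Illustrative fee tally for a content provider.  Rates, block sizes, and the
# audience charge are invented examples only.

def content_provider_fees(downloads, streams, audience_size,
                          per_download=0.02, stream_block=1000,
                          block_rate=8.00, audience_rate=0.001,
                          hosting_flat=50.00):
    fees = {
        "downloads": downloads * per_download,
        # charged per group of data streams up to a predetermined number
        "stream_blocks": -(-streams // stream_block) * block_rate,
        "audience": audience_size * audience_rate,
        "hosting": hosting_flat,
    }
    fees["total"] = round(sum(fees.values()), 2)
    return fees

print(content_provider_fees(downloads=12_000, streams=3_500, audience_size=250_000))
```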
  • In one embodiment of the present invention, interactive advertising 210 is sent to a user. A fee may be charged to an advertiser for each time a vision-enabled advertisement is selected, such as when a user clicks on a banner advertisement 212. In another embodiment, interactive entertainment 214 is sent to the user. In yet another embodiment, interactive education 216 is sent to the user.
  • As an option, the plug in and encoding may be provided for free in order to collect statistics. These statistics may be made available for a fee.
  • FIG. 3 illustrates an advertising model of the present invention corresponding to block A of FIG. 2. In this model, a content publisher provides a website 300 which offers interactive advertising. Interactive advertising includes such things as encapsulated banners that, when clicked on, begin an automatic download of the applet containing the vision-enabled content, web pages with products displayed, and the like. Users 302 connect to the content publisher's website via a wide area network and are allowed to browse web pages of the website 300. The publisher may receive statistics on the browsing habits of the users, such as how long the user 302 was connected to the website 300 and whether the user 302 interacted with an advertisement on a web page. Further, group statistics may be collected. Also, eye tracking may be used to determine whether the user 302 looked at an advertisement. It should be noted that the advertising model is given by way of example and in no way is it intended to limit the present invention merely to advertising.
  • FIG. 4 illustrates an exemplary process flow of the advertising model shown in FIG. 3. In operation 400, a user activates an advertisement such as by clicking on a banner with a mouse. In operation 402, it is determined whether the user has the plug in and whether it is enabled. This is preferably done by the content publisher's website. If it is determined that the user does not have the plug in, it is determined whether the plug in will operate on the user's system in operation 404. This determination may be based on hardware and/or software considerations, such as whether the plug in is compatible with the user's web browser. If the plug in is not compatible with the user's computer, a standard advertisement is sent to the user's computer in operation 406 via HTML.
  • If the plug in is compatible with the user's computer, in operation 408 it is determined whether the user wants the plug in. If the user indicates that the user does not want the plug in, the standard advertisement is sent to the user's computer, as in operation 406. If the user indicates that the user wants the plug in, the plug in is sent to the user's station in operation 410 from either the technology provider's website or the content publisher's website. The user may then install the plug in.
  • If the plug in is enabled, or the user has installed the plug in, content in the form of an applet is streamed to the user in operation 412. (See the discussion of FIG. 5A below for more detail on operation 412.) In operation 414, the user interacts with the applet. (See the discussion of FIG. 5B below for more detail on operation 414.) When the user is finished interacting with the applet, as determined in operation 416 such as by determining when the user leaves a web page or closes the applet, statistics are provided to the content producer in operation 418. In operation 420, the statistics are analyzed. These statistics can include things that are unique to this particular user, or can be combined with statistics of many users. It should be noted that the applet may be received as a single file or may be streamed to the user.
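  • The decision flow of FIG. 4 can be summarized as a short branch, sketched below. The helper function and the user record fields are hypothetical; only the branching follows the operations described above.

```python
# Sketch of the FIG. 4 branching.  Field names are assumptions; the operation
# numbers in the strings refer to the description above.

def serve_advertisement(user):
    if user.get("has_plugin") and user.get("plugin_enabled"):   # operation 402
        return "stream vision-enabled applet (operation 412)"
    if not user.get("plugin_compatible"):                       # operation 404
        return "send standard HTML advertisement (operation 406)"
    if not user.get("wants_plugin"):                            # operation 408
        return "send standard HTML advertisement (operation 406)"
    return "send plug in, then stream applet (operations 410 and 412)"

print(serve_advertisement({"has_plugin": False, "plugin_compatible": True,
                           "wants_plugin": True}))
```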
  • FIG. 5A depicts an optional process associated with operation 412 of FIG. 4 for personalizing content. First, in operation 500, a recognition of the user takes place. For example, the user may be recognized based on a cookie, an email address of the user, or user indicia. The cookie could be stored on the user's station or the advertiser's server. Alternatively, the user may be recognized based on image comparison by comparing the person image to images stored in a database. Optionally, user-entered identification indicia may be received. Such user-entered indicia could be used to allow access to an exclusive section of a website, such as one reserved for registered users of the plug in, applet, or site only.
  • If the user is recognized in operation 500, user information is retrieved from a database in operation 502. Such user information could include information previously input by a user, past purchases, and statistical information collected from previous browsing by the user. One example would be determining interests and/or buying habits of the user based on advertisements selected by the user in previous sessions or products previously purchased. An individualized advertisement applet is selected in operation 504 based on the user information and sent to the user in operation 506.
  • If the user is not recognized in operation 500, an attempt to associate the user with a group is performed in operation 508. The association may be made based on information such as the user's email address or user-input interests. Further, an association may be imputed by country as well as from the type of site being visited: commercial, government, technical. If the user can be associated with a group, an advertisement applet is chosen in operation 510 that is targeted at the group with which the user is associated. If the user cannot be associated with a group, a standard or random advertising applet is selected in operation 512 and sent to the user as in operation 506.
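  • The personalization logic of FIG. 5A reduces to a three-way fallback: a recognized user gets an individualized applet, an unrecognized user who can be grouped gets a group-targeted applet, and anyone else gets a standard or random one. The sketch below illustrates this; the profile table, group rules, and email heuristic are invented for the example.

```python
# Sketch of the FIG. 5A selection fallback.  Profile and group tables are
# hypothetical stand-ins for the databases described above.

USER_PROFILES = {"alice@example.com": {"interests": ["sunglasses"]}}
GROUP_APPLETS = {".edu": "education-targeted ad applet",
                 ".gov": "government-targeted ad applet"}

def select_applet(identity=None, email=None):
    if identity in USER_PROFILES:                              # operations 500-502
        interest = USER_PROFILES[identity]["interests"][0]
        return f"individualized applet featuring {interest}"   # operation 504
    if email:                                                  # operation 508
        for suffix, applet in GROUP_APPLETS.items():
            if email.endswith(suffix):
                return applet                                  # operation 510
    return "standard or random advertising applet"             # operation 512

print(select_applet(email="student@state.edu"))
```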
  • FIG. 5B shows exemplary processes associated with operation 414 of FIG. 4. More particularly, FIG. 5B illustrates what occurs at the user's station after receipt of the applet.
  • In a composited model of the process, an image of the user appears in the content. First, a visual image of the user taken by a camera is received in operation 520. Preferably, it is determined whether to remove a background from the image of the user in operation 522 in order to extract a person image. Removal of the background from the person image assists in the recognition of a user in the person image as well as reduces error caused by animate objects located in the background, such as a television picture. The background may also be removed to allow a portion of the person image to appear to interact with the content.
  • If the background of the image is to be removed, it is removed in operation 524. More information about extracting an image from its background may be found in a patent application entitled “METHOD AND APPARATUS FOR PERFORMING A CLEAN BACKGROUND SUBTRACTION” filed Oct. 15, 1998 under application Ser. No. 09/174,491, issued as U.S. Pat. No. 6,411,744, and which is herein incorporated by reference for all purposes.
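  • A minimal background-subtraction sketch is shown below: pixels that differ sufficiently from a stored empty-scene frame are kept as the person image. The threshold and the list-of-lists frame representation are assumptions for illustration; the patent application referenced above describes a more robust subtraction method.

```python
# Toy background subtraction: keep pixels that differ from the stored
# background by more than a threshold.  Frames are small grayscale grids.

def extract_person(frame, background, threshold=30):
    mask = [[abs(p - b) > threshold for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]
    person = [[p if keep else 0 for p, keep in zip(frow, mrow)]
              for frow, mrow in zip(frame, mask)]
    return person, mask

background = [[10, 10, 10], [10, 10, 10]]
frame      = [[10, 200, 12], [11, 180, 10]]
person, mask = extract_person(frame, background)
print(person)   # only the user's pixels survive; the background is zeroed
```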
  • In operation 526, body part recognition is performed on the image or extracted person image of the user to identify a head, eyes, arms, torso, etc. of the user. Further details regarding detecting body parts may be found in a patent application entitled “SYSTEM, METHOD AND ARTICLE OF MANUFACTURE FOR TRACKING A HEAD OF A CAMERA-GENERATED IMAGE OF A PERSON” filed Jul. 30, 1999 under application Ser. No. 09/364,859, issued as U.S. Pat. No. 6,545,706, and which is incorporated herein by reference in its entirety.
  • In operation 528, an object such as a product for sale is composited to the image of the user by utilizing the body part recognition. For example, the user's head may be shown wearing a hat. Further details regarding compositing objects to an image of a user may be found in a patent application entitled “METHOD AND APPARATUS FOR MODEL-BASED COMPOSITING” filed Oct. 15, 1997 under application Ser. No. 08/951,089, issued as U.S. Pat. No. 6,532,022, and which is herein incorporated by reference in its entirety.
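  • Operation 528 amounts to pasting a product image onto the user's image at coordinates supplied by the body part recognition. A hypothetical sketch follows; the coordinate convention, the tiny grids, and the paste routine are assumptions, not the compositing method of the referenced application.

```python
# Sketch of operation 528: overlay a product (e.g., a hat or sunglasses) onto
# the user's image at an anchor returned by body-part recognition.

def composite(user_image, product, anchor_row, anchor_col):
    out = [row[:] for row in user_image]          # copy the person image
    for r, prow in enumerate(product):
        for c, pixel in enumerate(prow):
            rr, cc = anchor_row + r, anchor_col + c
            if 0 <= rr < len(out) and 0 <= cc < len(out[0]) and pixel:
                out[rr][cc] = pixel               # non-zero product pixels win
    return out

user_head  = [[1] * 6 for _ in range(6)]          # stand-in head image
sunglasses = [[9, 9, 0, 9, 9]]                    # one-row "sunglasses"
eye_anchor = (2, 0)                               # e.g., from eye recognition
print(composite(user_head, sunglasses, *eye_anchor))
```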
  • In operation 530, the user is given the opportunity to purchase the object with which his or her image was interacting. The purchase may be completed in operation 532. Statistics are collected in operation 534 in a manner similar to that presented above. The user is given the choice to continue or quit in operation 536. If the user wishes to continue, such as to view other objects composited to his or her image, some or all of operations 520 through 536 are repeated until the user wishes to quit. A record of some of the occurrences is offered to the user in operation 538 and created in operation 540 if the user desires one. The record could include a visual copy of the interactive session just completed, financial information if an object was purchased, and statistical information.
  • In an exemplary scenario, a user with the necessary plug in connects to a website with an advertisement, e.g., banner, for sunglasses. The user wishes to purchase a pair of sunglasses, but wishes to see how he or she will look wearing the sunglasses. The user clicks on the advertisement, which begins a download of vision-enabled content to the user's station. The plug in detects the camera connected to the user's station and receives an image of the user. The user's head is identified in the image of the user and may be separated from the rest of the user's body to form a person image. The user's eyes are also identified in the image of the user to determine proper placement of the sunglasses. Meanwhile, the user browses the advertisements for a pair of sunglasses to “try on.” Upon selection of a pair of sunglasses, such as by pointing and clicking on a desired pair of sunglasses, the person image of the user's head is processed to composite the selected pair of sunglasses to the person image. Then, the image of the user's head is displayed wearing the pair of sunglasses over the eyes. The user could then select different pairs of sunglasses to “try on,” each of which would appear on the present person image of the user's head or on a new person image of the user's head.
  • Preferably, multiple images of the user turning his or her head would be captured to allow the user to manipulate the image of the head to permit viewing of a face as well as a profile for example. Two images would produce only the face and profile views. However, multiple images taken as the user turns his or her head could be used to produce the appearance of a rotating head interacting with the content. It should be kept in mind that this scenario could apply to any body part recognized in operation 526, not just the head.
  • Feedback may be sent to the advertiser to indicate which pair of sunglasses the user is currently looking at. Alternatively or in combination with the feedback, statistics may be sent to the advertiser upon termination of the session. Such statistics could include the amount of time the user spent looking at sunglasses, a listing of pairs of sunglasses selected, activities requested by the user, such as head rotation, etc. The statistics may then be used to create a user profile. The statistics may also be used to assist the advertiser in improving its content.
  • In a non-composited model of the process, the user is utilized as an input device to control the content. In other words, images of the user are used to control movement of objects in the content as well as the flow of the content. It should be kept in mind that an image of the user may still be displayed interacting with the content. First, a visual image of the user taken by a camera is received in operation 550. In operation 552, body part recognition is performed on the image of the user to identify the user's head, arms, torso, etc. Preferably, multiple images of the user are received in real time via a data stream so that consecutive images may be compared to allow detection of movement.
  • A visual interpretation of user movement is performed in operation 554 and used to select content for display. In this way, movement of the user controls the content. In one embodiment, gesture recognition may be performed. For example, pointing up and down may be used to control scrolling of a web page, as may tilting the head up and down. More information on gesture recognition is found in a patent application entitled "METHOD AND APPARATUS FOR REAL-TIME GESTURE RECOGNITION" filed Oct. 15, 1997 under application Ser. No. 08/951,070, issued as U.S. Pat. No. 6,072,494, and which is herein incorporated by reference in its entirety.
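The following is a minimal sketch, not the referenced gesture-recognition method, of how vertical movement of a tracked body part across consecutive frames could be interpreted as a scroll command; the centroid positions and thresholds are assumptions for illustration.

```python
# Minimal sketch: interpreting vertical movement of a tracked body part across
# consecutive frames as a scroll command. `positions` are hypothetical (x, y)
# centroids from body part recognition, one per frame.

def scroll_offset(positions, pixels_per_step=40, threshold=5):
    offset = 0
    for (_, prev_y), (_, cur_y) in zip(positions, positions[1:]):
        if cur_y - prev_y > threshold:        # part moved down -> scroll down
            offset += pixels_per_step
        elif prev_y - cur_y > threshold:      # part moved up -> scroll up
            offset -= pixels_per_step
    return offset

print(scroll_offset([(100, 200), (100, 210), (100, 230), (100, 228)]))  # -> 80
```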
  • In another embodiment, virtual buttons may be enabled. For example, moving a hand may control movement of a cursor on the screen. Pushing the hand forward may indicate pressing a button positioned under the cursor on the screen.
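One way the virtual-button idea could be realized, offered only as a sketch under assumed inputs, is to move a cursor with the tracked hand position and treat a sudden increase in apparent hand size (the hand moving toward the camera) as a press; the names, areas, and threshold below are invented for this example.

```python
# Illustrative sketch of a "virtual button": the tracked hand position moves a
# cursor, and a jump in apparent hand area is treated as pushing the hand
# forward to press the button under the cursor. All names are assumptions.

def virtual_button(hand_x, hand_y, hand_area, prev_area, buttons, push_ratio=1.3):
    cursor = (hand_x, hand_y)
    pressed = None
    if prev_area and hand_area / prev_area >= push_ratio:   # hand pushed forward
        for name, (x0, y0, x1, y1) in buttons.items():
            if x0 <= hand_x <= x1 and y0 <= hand_y <= y1:
                pressed = name
    return cursor, pressed

buttons = {"buy": (300, 400, 380, 440)}
print(virtual_button(340, 420, 2600, 1900, buttons))   # -> ((340, 420), 'buy')
```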
  • In operation 556, the user is given the opportunity to purchase the object with which his or her image is interacting. The purchase may be completed in operation 558. Statistics are collected in operation 560 in a manner similar to that presented above. The user is given a choice to continue or quit in operation 562. If the user wishes to continue, operations 550 through 562 are repeated until the user wishes to quit. A record of the occurrences is offered to the user in operation 564 and created in operation 566.
  • FIG. 6 illustrates an entertainment/educational model of the present invention corresponding to blocks B and C of FIG. 2. In an entertainment model, users 600 connect to a host 602 and are allowed to request entertainment content such as audio, video, and game data. In an educational model, the users 600 are students that connect to the host 602 and request educational content such as audio and video. Content in the form of HTML and applets is sent to the users 600. Further, audio and/or images, and optionally, game data may be transmitted between the users, such as during a group game or when attending a virtual classroom. Optionally, a moderator 604 such as a referee of a game or an instructor may communicate with the host and/or the users. The moderator 604 may receive different applets than the users 600 to enable the moderator 604 to moderate a gaming or educational session.
  • The host 602 may receive statistics on the browsing habits of each of the users 600, such as how long a user 600 was connected to the host 602 and how long the user 600 used interactive content. Further, group statistics may be collected. The statistics may also be used during subsequent game plays to provide information about games that players particularly like playing as well as to modify a skill level of a game for a particular player. It should be noted that the entertainment/educational model is given by way of example and in no way is it intended to limit the present invention.
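As a hedged sketch of using stored play statistics to adjust a skill level for a returning player, the function below maps invented profile fields to a level; the thresholds and field names are assumptions, not part of the specification.

```python
# Sketch only: deriving a skill level from accumulated game statistics so a
# subsequent session can be matched to the player. Fields and thresholds are
# illustrative assumptions.

def skill_level(profile):
    wins, losses = profile.get("wins", 0), profile.get("losses", 0)
    games = wins + losses
    if games < 5:
        return "beginner"
    ratio = wins / games
    return "advanced" if ratio > 0.6 else "intermediate" if ratio > 0.3 else "beginner"

print(skill_level({"wins": 9, "losses": 4}))   # -> 'advanced'
```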
  • FIG. 7 illustrates an exemplary process flow of the entertainment/educational model shown in FIG. 6. In operation 700, a user activates an entertainment or educational session by connecting to the host 602. The host may provide a listing of the vision-enabled content available to the user from which the user may choose. In operation 702, the host looks for the plug in on the user's station. Optionally, the plug in may connect to the technology provider's website to check for an upgrade in operation 704. This may occur in the background. If the plug in is current, the process continues. If the plug in is not current, the user is given the option of downloading the update in operation 706. If the user chooses to get the update, it is downloaded onto the user's station in operation 708. If the user chooses not to get the update, the process continues.
  • If the plug in is not found, the user is given the option to get the plug in, in operation 710. If the user chooses to get the plug in, it is downloaded onto the user's station, as in operation 708. If the user chooses not to get the plug in, the process is aborted in operation 712.
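A compact sketch of the decision flow of operations 702 through 712 follows, assuming the plug in reports a version tuple; the version numbers, prompts, and return values are illustrative assumptions rather than details from the specification.

```python
# Hedged sketch of the plug-in detection / upgrade decision described in
# operations 702-712. `user_accepts` stands in for prompting the user.

CURRENT_VERSION = (2, 1)

def check_plugin(installed_version, user_accepts):
    if installed_version is None:                      # plug in not found (operation 710)
        return "download" if user_accepts("install plug in?") else "abort"
    if installed_version < CURRENT_VERSION:            # upgrade available (operation 706)
        return "download" if user_accepts("get update?") else "continue"
    return "continue"                                  # plug in is current

print(check_plugin((1, 9), lambda prompt: True))       # -> 'download'
print(check_plugin(None, lambda prompt: False))        # -> 'abort'
```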
  • In operation 714, a user recognition process is performed to identify the user and/or a group to which the user belongs. This allows the host to target options towards the user or group. In the entertainment embodiment, for example, past user performance may be used to group the user with game players of similar skill. See the previous discussion with reference to FIG. 5A for a description of the recognition process.
  • With continuing reference to FIG. 7, a determination of whether the entertainment or educational activity will be interacted with by the user individually or with a group is made in operation 716. If the activity is to be performed by the user individually, options are presented in operation 718. In operation 720, the user is allowed to select the desired activity, i.e., entertainment activity or educational application, from the options presented in operation 718 and the process continues.
  • If the activity is to be performed by a group, options are presented in operation 722 and the user selects an activity from the options in operation 724. Based on the activity selected by the user in operation 724, an IP address for the group members is sent to the user in operation 726. This allows the members of the group to interact directly with each other without the host once the applet is received from the host, though the group activity may be performed through the host as well.
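Purely as a sketch of the direct, host-free interaction enabled by distributing the group members' addresses, the following function opens a connection to each peer using standard sockets; the addresses and port are placeholders, and nothing here reflects a protocol defined by the specification.

```python
# Sketch only: once the host has sent the group members' addresses, each
# station can open direct connections to its peers so game traffic need not
# route through the host. Addresses and the port are placeholder assumptions.
import socket

def connect_to_peers(peer_addresses, port=5000, timeout=2.0):
    peers = {}
    for addr in peer_addresses:
        s = socket.create_connection((addr, port), timeout=timeout)
        peers[addr] = s       # keep the open socket for subsequent game traffic
    return peers

# Example usage (requires reachable peers):
# peers = connect_to_peers(["192.0.2.10", "192.0.2.11"])
```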
  • In operation 730, the applet corresponding to the selected activity is sent to the user. The user and/or group is allowed to interact with the activity in operation 732. In other words, the user plays the game, attends a virtual lecture, etc. Operation 732 is repeated until it is determined in operation 734 that the user or group has completed interacting with the activity. In operation 736, statistics are sent to the host, which may be used to create and supplement user and/or group profiles.
  • In an exemplary entertainment scenario, a group of users each having the proper plug in connect to an entertainment host to play a group game. Each user receives the applet associated with the game to be played from the host over a wide area network. In the game, each player is represented by an animated character. After a visual image of each of the users is obtained, a person image is recognized and body part recognition is performed on each of the images of the user to separate out a head, arms, and torso of the user, for example. The background is also separated from the person image. Then, the person image of the head of each player is composited to the animated character corresponding to that player and either the person image or data representing the animated character is distributed to each of the users. The game is played either through the host or among the players across the network. During play, each animated character bears the likeness of the associated player. Optionally, movement of a player during play is recognized and the corresponding animated character performs similar movements.
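A hedged sketch of the per-player update implied by this scenario is shown below: each station pairs the separated head image with its player's animated character and shares that pairing with the other players. The message format and field names are assumptions made for illustration.

```python
# Sketch of a per-player update pairing the separated head image with the
# player's animated character for distribution to the other players. All
# field names are illustrative assumptions.
import json

def character_update(player_id, head_image_ref, character_id, pose):
    return json.dumps({
        "player": player_id,
        "head_image": head_image_ref,    # e.g., a URL or frame reference
        "character": character_id,
        "pose": pose,                    # mirrors the player's recognized movement
    })

msg = character_update("alice", "frames/alice_head_017.png", "wizard", "arms_raised")
print(msg)
```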
  • Depending on the game, interactions between the animated characters and objects appearing on the display may be required. For example, contact and collisions of the objects with the animated characters, as well as the animated characters with each other, may form part of the game, as in a game of virtual basketball. In such case, the contact and/or collision is detected and the objects and/or animated characters are made to react accordingly. More information concerning detecting interactions between the animated characters and objects may be found in a patent application entitled “SYSTEM, METHOD AND ARTICLE OF MANUFACTURE FOR DETECTING COLLISIONS BETWEEN VIDEO IMAGES GENERATED BY A CAMERA AND AN OBJECT DEPICTED ON A DISPLAY” filed Jul. 30, 1999 under application Ser. No. 09/364,629, issued as U.S. Pat. No. 6,738,066, and herein incorporated by reference in its entirety.
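The sketch below shows a simple axis-aligned bounding-box overlap test of the kind such a collision check could start from; it is a stand-in for, not a reproduction of, the more detailed method in the cited application.

```python
# Minimal sketch of a collision test between an animated character and an
# object (e.g., a virtual basketball) using axis-aligned bounding boxes
# (x0, y0, x1, y1). This is a simplified stand-in for the cited method.

def boxes_collide(a, b):
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

character = (100, 100, 180, 260)
ball = (170, 150, 210, 190)
print(boxes_collide(character, ball))   # -> True, so the ball should react
```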
  • In another exemplary entertainment scenario, a user is watching television. An applet that allows remote control of the television is enabled. The user's movements are recognized. Different movements of the user implement different commands, such as changing the volume and switching channels. Optionally, the user may be recognized upon turning the television on, and the user's favorite channel would be tuned to.
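As a final illustrative sketch, recognized movements could be mapped to television commands as follows; the gesture names, commands, and favorite-channel lookup are assumptions invented for this example.

```python
# Sketch of mapping recognized user movements to television commands, as in
# the remote-control scenario above. Gesture names and commands are invented.

GESTURE_COMMANDS = {
    "raise_right_hand": "volume_up",
    "lower_right_hand": "volume_down",
    "swipe_left": "channel_down",
    "swipe_right": "channel_up",
}

def handle_gesture(gesture, favorites, recognized_user=None):
    if recognized_user and gesture == "power_on":
        return ("tune", favorites.get(recognized_user))   # jump to the user's favorite channel
    return ("command", GESTURE_COMMANDS.get(gesture, "ignore"))

print(handle_gesture("swipe_right", {"pat": 7}))           # -> ('command', 'channel_up')
print(handle_gesture("power_on", {"pat": 7}, "pat"))       # -> ('tune', 7)
```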
  • While this invention has been described in terms of several preferred embodiments, it is contemplated that alternatives, modifications, permutations, and equivalents thereof will become apparent to those skilled in the art upon a reading of the specification and study of the drawings. It is therefore intended that the true spirit and scope of the present invention include all such alternatives, modifications, permutations, and equivalents.

Claims (20)

1. A method of providing vision-enabled content over a network, comprising:
encoding content, at a server, into vision-enabled content, the vision-enabled content including content configured to interact with an image or plurality of images of a subject; and
sending a program to decode the vision-enabled content from the server to a computing device, the program including computer instructions that, upon execution, cause a computer to:
decode vision-enabled content;
receive an image of a subject; and
combine the received image of the subject with the vision-enabled content.
2. The method of claim 1, further comprising sending an upgrade for the program to the computing device, payment being received in exchange for the upgrade.
3. The method of claim 1, further comprising sending vision-enabled content to a plurality of computing devices and receiving payment:
based on an amount of computing devices receiving the vision-enabled content;
based on a quantity of vision-enabled content sent to computing devices over the network;
for encoding the content; or
any combination of the foregoing.
4. The method of claim 1, further comprising:
receiving data identifying a user operating the computing device;
recognizing an identity of the user based on the received data;
selecting vision-enabled content based on the identity of the user; and
sending the selected vision-enabled content to the computing device.
5. The method of claim 1, further comprising:
receiving data regarding a user operating the computing device;
associating the user with a group based on the received data;
selecting vision-enabled content based on the association of the user with the group; and
sending the selected vision-enabled content to the computing device.
6. A system for providing vision-enabled content, comprising:
a processor that implements an encoder configured to encode content into vision-enabled content, the vision-enabled content including content configured to interact with an image or plurality of images of a subject; and
a communication interface configured to communicate a decoder to a computing device over a network, the decoder including computer instructions that, upon execution, cause a computer to:
decode vision-enabled content;
receive an image of a subject; and
combine the received image of the subject with the vision-enabled content.
7. The system of claim 6, further comprising an applet that includes the vision-enabled content encoded by the encoder, the applet configured to be sent over the network to the computing device.
8. A computer program product for providing vision-enabled content over a network, the computer program product being embodied in a computer storage medium and comprising computer instructions that, upon execution, cause a computer to:
encode content into vision-enabled content, the vision-enabled content including content configured to interact with an image or plurality of images of a subject; and
send a program to decode the vision-enabled content to a computing device, the program being configured to:
decode vision-enabled content;
receive an image of a subject; and
combine the received image of the subject with the vision-enabled content.
9. A system for providing vision-enabled content, the system comprising:
means for encoding content into vision-enabled content, the vision-enabled content including content configured to interact with an image or plurality of images of a subject; and
means for sending a program to decode the vision-enabled content to a computing device, the program being configured to:
decode vision-enabled content;
receive an image of a subject; and
combine the received image of the subject with the vision-enabled content.
10. A method for receiving vision-enabled content over a network, the method comprising:
receiving vision-enabled content at a computing device, the vision-enabled content including content configured to interact with an image or plurality of images of a subject;
decoding the vision-enabled content at the computing device;
receiving an image of a subject at the computing device; and
combining the received image of the subject with the vision-enabled content.
11. The method of claim 10, wherein the vision-enabled content includes advertising content relating to an item of apparel, and wherein combining the received image of the subject with the vision-enabled content includes compositing an image of the item of apparel with the received image of the subject, the method further comprising displaying the resulting composite image on a display of the computing device.
12. The method of claim 10, wherein the vision-enabled content includes multi-player game content and wherein combining the received image of the subject with the multi-player game content includes compositing the received image of the subject with an animated character representing the subject in a multi-player game.
13. The method of claim 12, further comprising:
sending data representing the composited image of the subject and animated character to computing devices associated with players of the multi-player game;
receiving data representing respective composited images of the players with corresponding animated characters; and
displaying composited images of the players and corresponding animated characters on a display of the computing device.
14. The method of claim 12, further comprising:
performing body part recognition on a plurality of images of the subject;
interpreting movement of a body part of the subject within the plurality of images as a gesture; and
controlling the animated character within the multi-player game based on the gesture.
15. A system for receiving vision-enabled content over a network, the system comprising:
a communication interface configured to receive a decoder and vision-enabled content, the vision-enabled content including content configured to interact with an image or plurality of images of a subject;
a processor that implements the decoder, the decoder including computer instructions that, upon execution, cause a computer to:
decode the vision-enabled content;
receive an image of a subject; and
combine the received image of the subject with the vision-enabled content.
16. The system of claim 15, further comprising a camera configured to capture the image of the subject and provide the image of the subject to the decoder.
17. A computer program product for receiving vision-enabled content over a network, the computer program product being embodied in a computer storage medium and comprising computer instructions that, upon execution, cause a computer to:
receive vision-enabled content at a computing device, the vision-enabled content including content configured to interact with an image or plurality of images of a subject;
decode the vision-enabled content at the computing device;
receive an image of a user at the computing device; and
combine the received image of the user with the vision-enabled content.
18. The computer program product of claim 17, wherein the computer instructions that, upon execution, cause a computer to combine the received image of the user with the vision-enabled content comprise computer instructions that, upon execution, cause a computer to:
perform body part recognition on the received image of the user; and
composite the vision-enabled content to the received image of the user.
19. The computer program product of claim 17, wherein the computer instructions that, upon execution, cause a computer to combine the received image of the user with the vision-enabled content comprise computer instructions that, upon execution, cause a computer to:
remove a background from the received image of the user to obtain an extracted image of the user;
perform body part recognition on the extracted image of the user; and
composite the vision-enabled content to the extracted image of the user.
20. A system, comprising:
means for receiving vision-enabled content at a computing device, the vision-enabled content including content configured to interact with an image or plurality of images of a subject;
means for decoding the vision-enabled content at the computing device;
means for receiving an image of a subject; and
means for combining the received image of the subject with the vision-enabled content.
US12/828,500 1999-08-01 2010-07-01 Method for video enabled electronic commerce Abandoned US20100266051A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/828,500 US20100266051A1 (en) 1999-08-01 2010-07-01 Method for video enabled electronic commerce

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US09/371,462 US7113918B1 (en) 1999-08-01 1999-08-01 Method for video enabled electronic commerce
US11/507,794 US7760182B2 (en) 1999-08-01 2006-08-21 Method for video enabled electronic commerce
US12/828,500 US20100266051A1 (en) 1999-08-01 2010-07-01 Method for video enabled electronic commerce

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/507,794 Division US7760182B2 (en) 1999-08-01 2006-08-21 Method for video enabled electronic commerce

Publications (1)

Publication Number Publication Date
US20100266051A1 true US20100266051A1 (en) 2010-10-21

Family

ID=37018993

Family Applications (3)

Application Number Title Priority Date Filing Date
US09/371,462 Expired - Fee Related US7113918B1 (en) 1999-08-01 1999-08-01 Method for video enabled electronic commerce
US11/507,794 Expired - Fee Related US7760182B2 (en) 1999-08-01 2006-08-21 Method for video enabled electronic commerce
US12/828,500 Abandoned US20100266051A1 (en) 1999-08-01 2010-07-01 Method for video enabled electronic commerce

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US09/371,462 Expired - Fee Related US7113918B1 (en) 1999-08-01 1999-08-01 Method for video enabled electronic commerce
US11/507,794 Expired - Fee Related US7760182B2 (en) 1999-08-01 2006-08-21 Method for video enabled electronic commerce

Country Status (1)

Country Link
US (3) US7113918B1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9265458B2 (en) 2012-12-04 2016-02-23 Sync-Think, Inc. Application of smooth pursuit cognitive testing paradigms to clinical drug development
US9380976B2 (en) 2013-03-11 2016-07-05 Sync-Think, Inc. Optical neuroinformatics

Families Citing this family (251)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8352400B2 (en) 1991-12-23 2013-01-08 Hoffberg Steven M Adaptive pattern recognition based controller apparatus and method and human-factored interface therefore
US7904187B2 (en) 1999-02-01 2011-03-08 Hoffberg Steven M Internet appliance system and method
US7418407B2 (en) * 1999-10-14 2008-08-26 Jarbridge, Inc. Method for electronic gifting using merging images
US7565671B1 (en) 2000-02-01 2009-07-21 Swisscom Mobile Ag System and method for diffusing image objects
JP4356226B2 (en) * 2000-09-12 2009-11-04 ソニー株式会社 Server apparatus, distribution system, distribution method, and terminal apparatus
US6990639B2 (en) 2002-02-07 2006-01-24 Microsoft Corporation System and process for controlling electronic components in a ubiquitous computing environment using multimodal integration
US8745541B2 (en) 2003-03-25 2014-06-03 Microsoft Corporation Architecture for controlling a computer using hand gestures
US7665041B2 (en) 2003-03-25 2010-02-16 Microsoft Corporation Architecture for controlling a computer using hand gestures
US8261336B2 (en) * 2004-06-15 2012-09-04 Emc Corporation System and method for making accessible a set of services to users
US7697827B2 (en) 2005-10-17 2010-04-13 Konicek Jeffrey C User-friendlier interfaces for a camera
US8001474B2 (en) * 2006-09-25 2011-08-16 Embarq Holdings Company, Llc System and method for creating and distributing asynchronous bi-directional channel based multimedia content
US8005238B2 (en) 2007-03-22 2011-08-23 Microsoft Corporation Robust adaptive beamforming with enhanced noise suppression
US8005237B2 (en) 2007-05-17 2011-08-23 Microsoft Corp. Sensor array beamformer post-processor
US8629976B2 (en) 2007-10-02 2014-01-14 Microsoft Corporation Methods and systems for hierarchical de-aliasing time-of-flight (TOF) systems
US20120011454A1 (en) * 2008-04-30 2012-01-12 Microsoft Corporation Method and system for intelligently mining data during communication streams to present context-sensitive advertisements using background substitution
US10203861B2 (en) * 2008-05-21 2019-02-12 Please Don't Go, LLC. Messaging window overlay for a browser
US8385557B2 (en) 2008-06-19 2013-02-26 Microsoft Corporation Multichannel acoustic echo reduction
US8325909B2 (en) 2008-06-25 2012-12-04 Microsoft Corporation Acoustic echo suppression
US8203699B2 (en) 2008-06-30 2012-06-19 Microsoft Corporation System architecture design for time-of-flight system having reduced differential pixel size, and time-of-flight systems so designed
US8681321B2 (en) 2009-01-04 2014-03-25 Microsoft International Holdings B.V. Gated 3D camera
US20100199231A1 (en) 2009-01-30 2010-08-05 Microsoft Corporation Predictive determination
US8448094B2 (en) 2009-01-30 2013-05-21 Microsoft Corporation Mapping a natural input device to a legacy system
US8565476B2 (en) 2009-01-30 2013-10-22 Microsoft Corporation Visual target tracking
US8577085B2 (en) 2009-01-30 2013-11-05 Microsoft Corporation Visual target tracking
US8588465B2 (en) 2009-01-30 2013-11-19 Microsoft Corporation Visual target tracking
US8267781B2 (en) 2009-01-30 2012-09-18 Microsoft Corporation Visual target tracking
US8682028B2 (en) 2009-01-30 2014-03-25 Microsoft Corporation Visual target tracking
US8577084B2 (en) 2009-01-30 2013-11-05 Microsoft Corporation Visual target tracking
US8487938B2 (en) 2009-01-30 2013-07-16 Microsoft Corporation Standard Gestures
US7996793B2 (en) 2009-01-30 2011-08-09 Microsoft Corporation Gesture recognizer system architecture
US8565477B2 (en) 2009-01-30 2013-10-22 Microsoft Corporation Visual target tracking
US8295546B2 (en) 2009-01-30 2012-10-23 Microsoft Corporation Pose tracking pipeline
US8294767B2 (en) 2009-01-30 2012-10-23 Microsoft Corporation Body scan
US8773355B2 (en) 2009-03-16 2014-07-08 Microsoft Corporation Adaptive cursor sizing
US9256282B2 (en) 2009-03-20 2016-02-09 Microsoft Technology Licensing, Llc Virtual object manipulation
US8988437B2 (en) 2009-03-20 2015-03-24 Microsoft Technology Licensing, Llc Chaining animations
US9313376B1 (en) 2009-04-01 2016-04-12 Microsoft Technology Licensing, Llc Dynamic depth power equalization
US8181123B2 (en) 2009-05-01 2012-05-15 Microsoft Corporation Managing virtual port associations to users in a gesture-based computing environment
US8942428B2 (en) 2009-05-01 2015-01-27 Microsoft Corporation Isolate extraneous motions
US9015638B2 (en) 2009-05-01 2015-04-21 Microsoft Technology Licensing, Llc Binding users to a gesture based system and providing feedback to the users
US8649554B2 (en) 2009-05-01 2014-02-11 Microsoft Corporation Method to control perspective for a camera-controlled computer
US9898675B2 (en) 2009-05-01 2018-02-20 Microsoft Technology Licensing, Llc User movement tracking feedback to improve tracking
US8503720B2 (en) 2009-05-01 2013-08-06 Microsoft Corporation Human body pose estimation
US9377857B2 (en) 2009-05-01 2016-06-28 Microsoft Technology Licensing, Llc Show body position
US8340432B2 (en) 2009-05-01 2012-12-25 Microsoft Corporation Systems and methods for detecting a tilt angle from a depth image
US8660303B2 (en) 2009-05-01 2014-02-25 Microsoft Corporation Detection of body and props
US8253746B2 (en) 2009-05-01 2012-08-28 Microsoft Corporation Determine intended motions
US9498718B2 (en) 2009-05-01 2016-11-22 Microsoft Technology Licensing, Llc Altering a view perspective within a display environment
US8638985B2 (en) * 2009-05-01 2014-01-28 Microsoft Corporation Human body pose estimation
US9383823B2 (en) 2009-05-29 2016-07-05 Microsoft Technology Licensing, Llc Combining gestures beyond skeletal
US8509479B2 (en) 2009-05-29 2013-08-13 Microsoft Corporation Virtual object
US8320619B2 (en) 2009-05-29 2012-11-27 Microsoft Corporation Systems and methods for tracking a model
US8625837B2 (en) 2009-05-29 2014-01-07 Microsoft Corporation Protocol and format for communicating an image from a camera to a computing environment
US8418085B2 (en) 2009-05-29 2013-04-09 Microsoft Corporation Gesture coach
US8379101B2 (en) 2009-05-29 2013-02-19 Microsoft Corporation Environment and/or target segmentation
US8744121B2 (en) 2009-05-29 2014-06-03 Microsoft Corporation Device for identifying and tracking multiple humans over time
US8693724B2 (en) 2009-05-29 2014-04-08 Microsoft Corporation Method and system implementing user-centric gesture control
US9182814B2 (en) 2009-05-29 2015-11-10 Microsoft Technology Licensing, Llc Systems and methods for estimating a non-visible or occluded body part
US8856691B2 (en) 2009-05-29 2014-10-07 Microsoft Corporation Gesture tool
US9400559B2 (en) 2009-05-29 2016-07-26 Microsoft Technology Licensing, Llc Gesture shortcuts
US8542252B2 (en) 2009-05-29 2013-09-24 Microsoft Corporation Target digitization, extraction, and tracking
US8487871B2 (en) 2009-06-01 2013-07-16 Microsoft Corporation Virtual desktop coordinate transformation
US8390680B2 (en) 2009-07-09 2013-03-05 Microsoft Corporation Visual representation expression based on player expression
US9159151B2 (en) 2009-07-13 2015-10-13 Microsoft Technology Licensing, Llc Bringing a visual representation to life via learned input from the user
US8275590B2 (en) 2009-08-12 2012-09-25 Zugara, Inc. Providing a simulation of wearing items such as garments and/or accessories
US8264536B2 (en) 2009-08-25 2012-09-11 Microsoft Corporation Depth-sensitive imaging via polarization-state mapping
US9141193B2 (en) 2009-08-31 2015-09-22 Microsoft Technology Licensing, Llc Techniques for using human gestures to control gesture unaware programs
US8508919B2 (en) 2009-09-14 2013-08-13 Microsoft Corporation Separation of electrical and optical components
US8330134B2 (en) 2009-09-14 2012-12-11 Microsoft Corporation Optical fault monitoring
US8760571B2 (en) 2009-09-21 2014-06-24 Microsoft Corporation Alignment of lens and image sensor
US8976986B2 (en) 2009-09-21 2015-03-10 Microsoft Technology Licensing, Llc Volume adjustment based on listener position
US8428340B2 (en) 2009-09-21 2013-04-23 Microsoft Corporation Screen space plane identification
US9014546B2 (en) 2009-09-23 2015-04-21 Rovi Guides, Inc. Systems and methods for automatically detecting users within detection regions of media devices
US8452087B2 (en) 2009-09-30 2013-05-28 Microsoft Corporation Image selection techniques
US8723118B2 (en) 2009-10-01 2014-05-13 Microsoft Corporation Imager for constructing color and depth images
US8867820B2 (en) 2009-10-07 2014-10-21 Microsoft Corporation Systems and methods for removing a background of an image
US7961910B2 (en) 2009-10-07 2011-06-14 Microsoft Corporation Systems and methods for tracking a model
US8564534B2 (en) 2009-10-07 2013-10-22 Microsoft Corporation Human tracking system
US8963829B2 (en) 2009-10-07 2015-02-24 Microsoft Corporation Methods and systems for determining and tracking extremities of a target
US9400548B2 (en) 2009-10-19 2016-07-26 Microsoft Technology Licensing, Llc Gesture personalization and profile roaming
US8988432B2 (en) 2009-11-05 2015-03-24 Microsoft Technology Licensing, Llc Systems and methods for processing an image for target tracking
US8843857B2 (en) 2009-11-19 2014-09-23 Microsoft Corporation Distance scalable no touch computing
US9244533B2 (en) 2009-12-17 2016-01-26 Microsoft Technology Licensing, Llc Camera navigation for presentations
US20110150271A1 (en) 2009-12-18 2011-06-23 Microsoft Corporation Motion detection using depth images
US8320621B2 (en) 2009-12-21 2012-11-27 Microsoft Corporation Depth projector system with integrated VCSEL array
US9268404B2 (en) 2010-01-08 2016-02-23 Microsoft Technology Licensing, Llc Application gesture interpretation
US8631355B2 (en) 2010-01-08 2014-01-14 Microsoft Corporation Assigning gesture dictionaries
US9019201B2 (en) 2010-01-08 2015-04-28 Microsoft Technology Licensing, Llc Evolving universal gesture sets
US8334842B2 (en) 2010-01-15 2012-12-18 Microsoft Corporation Recognizing user intent in motion capture system
US8933884B2 (en) 2010-01-15 2015-01-13 Microsoft Corporation Tracking groups of users in motion capture system
US8676581B2 (en) 2010-01-22 2014-03-18 Microsoft Corporation Speech recognition analysis via identification information
US8265341B2 (en) 2010-01-25 2012-09-11 Microsoft Corporation Voice-body identity correlation
US8864581B2 (en) 2010-01-29 2014-10-21 Microsoft Corporation Visual based identitiy tracking
US8891067B2 (en) 2010-02-01 2014-11-18 Microsoft Corporation Multiple synchronized optical sources for time-of-flight range finding systems
US8619122B2 (en) 2010-02-02 2013-12-31 Microsoft Corporation Depth camera compatibility
US8687044B2 (en) 2010-02-02 2014-04-01 Microsoft Corporation Depth camera compatibility
US8717469B2 (en) 2010-02-03 2014-05-06 Microsoft Corporation Fast gating photosurface
US8659658B2 (en) 2010-02-09 2014-02-25 Microsoft Corporation Physical interaction zone for gesture-based user interfaces
US8499257B2 (en) 2010-02-09 2013-07-30 Microsoft Corporation Handles interactions for human—computer interface
US8633890B2 (en) 2010-02-16 2014-01-21 Microsoft Corporation Gesture detection based on joint skipping
US8928579B2 (en) 2010-02-22 2015-01-06 Andrew David Wilson Interacting with an omni-directionally projected display
US8411948B2 (en) 2010-03-05 2013-04-02 Microsoft Corporation Up-sampling binary images for segmentation
US8422769B2 (en) 2010-03-05 2013-04-16 Microsoft Corporation Image segmentation using reduced foreground training data
US8655069B2 (en) 2010-03-05 2014-02-18 Microsoft Corporation Updating image segmentation following user input
US20110223995A1 (en) 2010-03-12 2011-09-15 Kevin Geisner Interacting with a computer based application
US8279418B2 (en) 2010-03-17 2012-10-02 Microsoft Corporation Raster scanning for depth detection
US8213680B2 (en) 2010-03-19 2012-07-03 Microsoft Corporation Proxy training data for human body tracking
US8514269B2 (en) 2010-03-26 2013-08-20 Microsoft Corporation De-aliasing depth images
US8523667B2 (en) 2010-03-29 2013-09-03 Microsoft Corporation Parental control settings based on body dimensions
US8605763B2 (en) 2010-03-31 2013-12-10 Microsoft Corporation Temperature measurement and control for laser and light-emitting diodes
US9646340B2 (en) 2010-04-01 2017-05-09 Microsoft Technology Licensing, Llc Avatar-based virtual dressing room
US9098873B2 (en) 2010-04-01 2015-08-04 Microsoft Technology Licensing, Llc Motion-based interactive shopping environment
US8351651B2 (en) 2010-04-26 2013-01-08 Microsoft Corporation Hand-location post-process refinement in a tracking system
US8379919B2 (en) 2010-04-29 2013-02-19 Microsoft Corporation Multiple centroid condensation of probability distribution clouds
US8284847B2 (en) 2010-05-03 2012-10-09 Microsoft Corporation Detecting motion for a multifunction sensor device
US8885890B2 (en) 2010-05-07 2014-11-11 Microsoft Corporation Depth map confidence filtering
US8498481B2 (en) 2010-05-07 2013-07-30 Microsoft Corporation Image segmentation using star-convexity constraints
US8457353B2 (en) 2010-05-18 2013-06-04 Microsoft Corporation Gestures and gesture modifiers for manipulating a user-interface
US9274594B2 (en) * 2010-05-28 2016-03-01 Microsoft Technology Licensing, Llc Cloud-based personal trait profile data
US8803888B2 (en) 2010-06-02 2014-08-12 Microsoft Corporation Recognition system for sharing information
US8751215B2 (en) 2010-06-04 2014-06-10 Microsoft Corporation Machine based sign language interpreter
US9008355B2 (en) 2010-06-04 2015-04-14 Microsoft Technology Licensing, Llc Automatic depth camera aiming
US9557574B2 (en) 2010-06-08 2017-01-31 Microsoft Technology Licensing, Llc Depth illumination and detection optics
US8330822B2 (en) 2010-06-09 2012-12-11 Microsoft Corporation Thermally-tuned depth camera light source
US8675981B2 (en) 2010-06-11 2014-03-18 Microsoft Corporation Multi-modal gender recognition including depth data
US8749557B2 (en) 2010-06-11 2014-06-10 Microsoft Corporation Interacting with user interface via avatar
US9384329B2 (en) 2010-06-11 2016-07-05 Microsoft Technology Licensing, Llc Caloric burn determination from body movement
US8982151B2 (en) 2010-06-14 2015-03-17 Microsoft Technology Licensing, Llc Independently processing planes of display data
US8670029B2 (en) 2010-06-16 2014-03-11 Microsoft Corporation Depth camera illuminator with superluminescent light-emitting diode
US8558873B2 (en) 2010-06-16 2013-10-15 Microsoft Corporation Use of wavefront coding to create a depth image
US8296151B2 (en) 2010-06-18 2012-10-23 Microsoft Corporation Compound gesture-speech commands
US8381108B2 (en) 2010-06-21 2013-02-19 Microsoft Corporation Natural user input for driving interactive stories
US8416187B2 (en) 2010-06-22 2013-04-09 Microsoft Corporation Item navigation using motion-capture data
US9075434B2 (en) 2010-08-20 2015-07-07 Microsoft Technology Licensing, Llc Translating user motion into multiple object responses
US8613666B2 (en) 2010-08-31 2013-12-24 Microsoft Corporation User selection and navigation based on looped motions
US20120058824A1 (en) 2010-09-07 2012-03-08 Microsoft Corporation Scalable real-time motion recognition
US8437506B2 (en) 2010-09-07 2013-05-07 Microsoft Corporation System for fast, probabilistic skeletal tracking
US8988508B2 (en) 2010-09-24 2015-03-24 Microsoft Technology Licensing, Llc. Wide angle field of view active illumination imaging system
US8681255B2 (en) 2010-09-28 2014-03-25 Microsoft Corporation Integrated low power depth camera and projection device
US8548270B2 (en) 2010-10-04 2013-10-01 Microsoft Corporation Time-of-flight depth imaging
US9484065B2 (en) 2010-10-15 2016-11-01 Microsoft Technology Licensing, Llc Intelligent determination of replays based on event identification
US8592739B2 (en) 2010-11-02 2013-11-26 Microsoft Corporation Detection of configuration changes of an optical element in an illumination system
US8866889B2 (en) 2010-11-03 2014-10-21 Microsoft Corporation In-home depth camera calibration
US8667519B2 (en) 2010-11-12 2014-03-04 Microsoft Corporation Automatic passive and anonymous feedback system
US10726861B2 (en) 2010-11-15 2020-07-28 Microsoft Technology Licensing, Llc Semi-private communication in open environments
US9349040B2 (en) 2010-11-19 2016-05-24 Microsoft Technology Licensing, Llc Bi-modal depth-image analysis
US10234545B2 (en) 2010-12-01 2019-03-19 Microsoft Technology Licensing, Llc Light source module
US8553934B2 (en) 2010-12-08 2013-10-08 Microsoft Corporation Orienting the position of a sensor
US8618405B2 (en) 2010-12-09 2013-12-31 Microsoft Corp. Free-space gesture musical instrument digital interface (MIDI) controller
US8408706B2 (en) 2010-12-13 2013-04-02 Microsoft Corporation 3D gaze tracker
US9171264B2 (en) 2010-12-15 2015-10-27 Microsoft Technology Licensing, Llc Parallel processing machine learning decision tree training
US8884968B2 (en) 2010-12-15 2014-11-11 Microsoft Corporation Modeling an object from image data
US8920241B2 (en) 2010-12-15 2014-12-30 Microsoft Corporation Gesture controlled persistent handles for interface guides
US8448056B2 (en) 2010-12-17 2013-05-21 Microsoft Corporation Validation analysis of human target
US8803952B2 (en) 2010-12-20 2014-08-12 Microsoft Corporation Plural detector time-of-flight depth mapping
US8385596B2 (en) 2010-12-21 2013-02-26 Microsoft Corporation First person shooter control with virtual skeleton
US9848106B2 (en) 2010-12-21 2017-12-19 Microsoft Technology Licensing, Llc Intelligent gameplay photo capture
US9821224B2 (en) 2010-12-21 2017-11-21 Microsoft Technology Licensing, Llc Driving simulator control with virtual skeleton
US8994718B2 (en) 2010-12-21 2015-03-31 Microsoft Technology Licensing, Llc Skeletal control of three-dimensional virtual world
US9823339B2 (en) 2010-12-21 2017-11-21 Microsoft Technology Licensing, Llc Plural anode time-of-flight sensor
US9123316B2 (en) 2010-12-27 2015-09-01 Microsoft Technology Licensing, Llc Interactive content creation
US8488888B2 (en) 2010-12-28 2013-07-16 Microsoft Corporation Classification of posture states
US9037600B1 (en) 2011-01-28 2015-05-19 Yahoo! Inc. Any-image labeling engine
US9218364B1 (en) * 2011-01-28 2015-12-22 Yahoo! Inc. Monitoring an any-image labeling engine
US8587583B2 (en) 2011-01-31 2013-11-19 Microsoft Corporation Three-dimensional environment reconstruction
US9247238B2 (en) 2011-01-31 2016-01-26 Microsoft Technology Licensing, Llc Reducing interference between multiple infra-red depth cameras
US8401225B2 (en) 2011-01-31 2013-03-19 Microsoft Corporation Moving object segmentation using depth images
US8401242B2 (en) 2011-01-31 2013-03-19 Microsoft Corporation Real-time camera tracking using depth maps
US8724887B2 (en) 2011-02-03 2014-05-13 Microsoft Corporation Environmental modifications to mitigate environmental factors
US8942917B2 (en) 2011-02-14 2015-01-27 Microsoft Corporation Change invariant scene recognition by an agent
US8497838B2 (en) 2011-02-16 2013-07-30 Microsoft Corporation Push actuation of interface controls
US9551914B2 (en) 2011-03-07 2017-01-24 Microsoft Technology Licensing, Llc Illuminator with refractive optical element
US9067136B2 (en) 2011-03-10 2015-06-30 Microsoft Technology Licensing, Llc Push personalization of interface controls
US8571263B2 (en) 2011-03-17 2013-10-29 Microsoft Corporation Predicting joint positions
US9470778B2 (en) 2011-03-29 2016-10-18 Microsoft Technology Licensing, Llc Learning from high quality depth measurements
US10642934B2 (en) 2011-03-31 2020-05-05 Microsoft Technology Licensing, Llc Augmented conversational understanding architecture
US9760566B2 (en) 2011-03-31 2017-09-12 Microsoft Technology Licensing, Llc Augmented conversational understanding agent to identify conversation context between two humans and taking an agent action thereof
US9842168B2 (en) 2011-03-31 2017-12-12 Microsoft Technology Licensing, Llc Task driven user intents
US9298287B2 (en) 2011-03-31 2016-03-29 Microsoft Technology Licensing, Llc Combined activation for natural user interface systems
US8503494B2 (en) 2011-04-05 2013-08-06 Microsoft Corporation Thermal management system
US8824749B2 (en) 2011-04-05 2014-09-02 Microsoft Corporation Biometric recognition
US8620113B2 (en) 2011-04-25 2013-12-31 Microsoft Corporation Laser diode modes
US8702507B2 (en) 2011-04-28 2014-04-22 Microsoft Corporation Manual and camera-based avatar control
US9259643B2 (en) 2011-04-28 2016-02-16 Microsoft Technology Licensing, Llc Control of separate computer game elements
US10671841B2 (en) 2011-05-02 2020-06-02 Microsoft Technology Licensing, Llc Attribute state classification
US8888331B2 (en) 2011-05-09 2014-11-18 Microsoft Corporation Low inductance light source module
US9064006B2 (en) 2012-08-23 2015-06-23 Microsoft Technology Licensing, Llc Translating natural language utterances to keyword search queries
US9137463B2 (en) 2011-05-12 2015-09-15 Microsoft Technology Licensing, Llc Adaptive high dynamic range camera
US8788973B2 (en) 2011-05-23 2014-07-22 Microsoft Corporation Three-dimensional gesture controlled avatar configuration interface
US8760395B2 (en) 2011-05-31 2014-06-24 Microsoft Corporation Gesture recognition techniques
US9594430B2 (en) 2011-06-01 2017-03-14 Microsoft Technology Licensing, Llc Three-dimensional foreground selection for vision system
US8526734B2 (en) 2011-06-01 2013-09-03 Microsoft Corporation Three-dimensional background removal for vision system
US8897491B2 (en) 2011-06-06 2014-11-25 Microsoft Corporation System for finger recognition and tracking
US8929612B2 (en) 2011-06-06 2015-01-06 Microsoft Corporation System for recognizing an open or closed hand
US10796494B2 (en) 2011-06-06 2020-10-06 Microsoft Technology Licensing, Llc Adding attributes to virtual representations of real-world objects
US9013489B2 (en) 2011-06-06 2015-04-21 Microsoft Technology Licensing, Llc Generation of avatar reflecting player appearance
US9208571B2 (en) 2011-06-06 2015-12-08 Microsoft Technology Licensing, Llc Object digitization
US8597142B2 (en) 2011-06-06 2013-12-03 Microsoft Corporation Dynamic camera based practice mode
US9724600B2 (en) 2011-06-06 2017-08-08 Microsoft Technology Licensing, Llc Controlling objects in a virtual environment
US9098110B2 (en) 2011-06-06 2015-08-04 Microsoft Technology Licensing, Llc Head rotation tracking from depth-based center of mass
US9597587B2 (en) 2011-06-08 2017-03-21 Microsoft Technology Licensing, Llc Locational node device
US8786730B2 (en) 2011-08-18 2014-07-22 Microsoft Corporation Image exposure using exclusion regions
US9557836B2 (en) 2011-11-01 2017-01-31 Microsoft Technology Licensing, Llc Depth image compression
US9117281B2 (en) 2011-11-02 2015-08-25 Microsoft Corporation Surface segmentation from RGB and depth images
US8854426B2 (en) 2011-11-07 2014-10-07 Microsoft Corporation Time-of-flight camera with guided light
US8724906B2 (en) 2011-11-18 2014-05-13 Microsoft Corporation Computing pose and/or shape of modifiable entities
US8509545B2 (en) 2011-11-29 2013-08-13 Microsoft Corporation Foreground subject detection
US8803800B2 (en) 2011-12-02 2014-08-12 Microsoft Corporation User interface control based on head orientation
US8635637B2 (en) 2011-12-02 2014-01-21 Microsoft Corporation User interface presenting an animated avatar performing a media reaction
US9100685B2 (en) 2011-12-09 2015-08-04 Microsoft Technology Licensing, Llc Determining audience state or interest using passive sensor data
US8630457B2 (en) 2011-12-15 2014-01-14 Microsoft Corporation Problem states for pose tracking pipeline
US8971612B2 (en) 2011-12-15 2015-03-03 Microsoft Corporation Learning image processing tasks from scene reconstructions
US8879831B2 (en) 2011-12-15 2014-11-04 Microsoft Corporation Using high-level attributes to guide image processing
US8811938B2 (en) 2011-12-16 2014-08-19 Microsoft Corporation Providing a user interface experience based on inferred vehicle state
US9342139B2 (en) 2011-12-19 2016-05-17 Microsoft Technology Licensing, Llc Pairing a computing device to a user
US9720089B2 (en) 2012-01-23 2017-08-01 Microsoft Technology Licensing, Llc 3D zoom imager
US8898687B2 (en) 2012-04-04 2014-11-25 Microsoft Corporation Controlling a media program based on a media reaction
US9210401B2 (en) 2012-05-03 2015-12-08 Microsoft Technology Licensing, Llc Projected visual cues for guiding physical movement
CA2775700C (en) 2012-05-04 2013-07-23 Microsoft Corporation Determining a future portion of a currently presented media program
US9652654B2 (en) 2012-06-04 2017-05-16 Ebay Inc. System and method for providing an interactive shopping experience via webcam
KR101911133B1 (en) 2012-06-21 2018-10-23 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 Avatar construction using depth camera
US9836590B2 (en) 2012-06-22 2017-12-05 Microsoft Technology Licensing, Llc Enhanced accuracy of user presence status determination
US9696427B2 (en) 2012-08-14 2017-07-04 Microsoft Technology Licensing, Llc Wide angle depth detection
US9314692B2 (en) * 2012-09-21 2016-04-19 Luxand, Inc. Method of creating avatar from user submitted image
US8882310B2 (en) 2012-12-10 2014-11-11 Microsoft Corporation Laser die light source module with low inductance
US9857470B2 (en) 2012-12-28 2018-01-02 Microsoft Technology Licensing, Llc Using photometric stereo for 3D environment modeling
US9251590B2 (en) 2013-01-24 2016-02-02 Microsoft Technology Licensing, Llc Camera pose estimation for 3D reconstruction
US9052746B2 (en) 2013-02-15 2015-06-09 Microsoft Technology Licensing, Llc User center-of-mass and mass distribution extraction using depth images
US9633272B2 (en) 2013-02-15 2017-04-25 Yahoo! Inc. Real time object scanning using a mobile phone and cloud-based visual search engine
US9940553B2 (en) 2013-02-22 2018-04-10 Microsoft Technology Licensing, Llc Camera/object pose from predicted coordinates
US9135516B2 (en) 2013-03-08 2015-09-15 Microsoft Technology Licensing, Llc User body angle, curvature and average extremity positions extraction using depth images
US9092657B2 (en) 2013-03-13 2015-07-28 Microsoft Technology Licensing, Llc Depth image processing
US9274606B2 (en) 2013-03-14 2016-03-01 Microsoft Technology Licensing, Llc NUI video conference controls
US9953213B2 (en) 2013-03-27 2018-04-24 Microsoft Technology Licensing, Llc Self discovery of autonomous NUI devices
US9892447B2 (en) 2013-05-08 2018-02-13 Ebay Inc. Performing image searches in a network-based publication system
US9442186B2 (en) 2013-05-13 2016-09-13 Microsoft Technology Licensing, Llc Interference reduction for TOF systems
US8875175B1 (en) * 2013-08-30 2014-10-28 Sony Corporation Smart live streaming event ads playback and resume method
US9462253B2 (en) 2013-09-23 2016-10-04 Microsoft Technology Licensing, Llc Optical modules that reduce speckle contrast and diffraction artifacts
US9443310B2 (en) 2013-10-09 2016-09-13 Microsoft Technology Licensing, Llc Illumination modules that emit structured light
US9674563B2 (en) 2013-11-04 2017-06-06 Rovi Guides, Inc. Systems and methods for recommending content
US9769459B2 (en) 2013-11-12 2017-09-19 Microsoft Technology Licensing, Llc Power efficient laser diode driver circuit and method
US9508385B2 (en) 2013-11-21 2016-11-29 Microsoft Technology Licensing, Llc Audio-visual project generator
US9971491B2 (en) 2014-01-09 2018-05-15 Microsoft Technology Licensing, Llc Gesture library for natural user input
US10203762B2 (en) * 2014-03-11 2019-02-12 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US10852838B2 (en) 2014-06-14 2020-12-01 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
JP6434532B2 (en) * 2014-07-02 2018-12-05 コヴィディエン リミテッド パートナーシップ System for detecting trachea
US10412280B2 (en) 2016-02-10 2019-09-10 Microsoft Technology Licensing, Llc Camera with light valve over sensor array
US10257932B2 (en) 2016-02-16 2019-04-09 Microsoft Technology Licensing, Llc. Laser diode chip on printed circuit board
US10462452B2 (en) 2016-03-16 2019-10-29 Microsoft Technology Licensing, Llc Synchronizing active illumination cameras
US10796484B2 (en) * 2017-06-14 2020-10-06 Anand Babu Chitavadigi System and method for interactive multimedia and multi-lingual guided tour/panorama tour
TWI715903B (en) * 2018-12-24 2021-01-11 財團法人工業技術研究院 Motion tracking system and method thereof

Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4149246A (en) * 1978-06-12 1979-04-10 Goldman Robert N System for specifying custom garments
US5423554A (en) * 1993-09-24 1995-06-13 Metamedia Ventures, Inc. Virtual reality game method and apparatus
US5454043A (en) * 1993-07-30 1995-09-26 Mitsubishi Electric Research Laboratories, Inc. Dynamic and static hand gesture recognition through low-level image analysis
US5515288A (en) * 1993-06-18 1996-05-07 Aberson; Michael Method and control apparatus for generating analog recurrent signal security data feedback
US5563988A (en) * 1994-08-01 1996-10-08 Massachusetts Institute Of Technology Method and system for facilitating wireless, full-body, real-time user interaction with a digitally represented visual environment
US5759044A (en) * 1990-02-22 1998-06-02 Redmond Productions Methods and apparatus for generating and processing synthetic and absolute real time environments
US5870723A (en) * 1994-11-28 1999-02-09 Pare, Jr.; David Ferrin Tokenless biometric transaction authorization method and system
US5974454A (en) * 1997-11-14 1999-10-26 Microsoft Corporation Method and system for installing and updating program module components
US6005548A (en) * 1996-08-14 1999-12-21 Latypov; Nurakhmed Nurislamovich Method for tracking and displaying user's spatial position and orientation, a method for representing virtual reality for a user, and systems of embodiment of such methods
US6072494A (en) * 1997-10-15 2000-06-06 Electric Planet, Inc. Method and apparatus for real-time gesture recognition
US6191773B1 (en) * 1995-04-28 2001-02-20 Matsushita Electric Industrial Co., Ltd. Interface apparatus
US6227974B1 (en) * 1997-06-27 2001-05-08 Nds Limited Interactive game system
US6253193B1 (en) * 1995-02-13 2001-06-26 Intertrust Technologies Corporation Systems and methods for the secure transaction management and electronic rights protection
US6275988B1 (en) * 1995-06-30 2001-08-14 Canon Kabushiki Kaisha Image transmission apparatus, image transmission system, and communication apparatus
US20020004763A1 (en) * 2000-01-20 2002-01-10 Lam Peter Ar-Fu Body profile coding method and apparatus useful for assisting users to select wearing apparel
US6363160B1 (en) * 1999-01-22 2002-03-26 Intel Corporation Interface using pattern recognition and tracking
US6425825B1 (en) * 1992-05-22 2002-07-30 David H. Sitrick User image integration and tracking for an audiovisual presentation system and methodology
US6434255B1 (en) * 1997-10-29 2002-08-13 Takenaka Corporation Hand pointing apparatus
US6587127B1 (en) * 1997-11-25 2003-07-01 Motorola, Inc. Content player method and server with user profile
US6625581B1 (en) * 1994-04-22 2003-09-23 Ipf, Inc. Method of and system for enabling the access of consumer product related information and the purchase of consumer products at points of consumer presence on the world wide web (www) at which consumer product information request (cpir) enabling servlet tags are embedded within html-encoded documents
US6636219B2 (en) * 1998-02-26 2003-10-21 Learn.Com, Inc. System and method for automatic animation generation
US20030227439A1 (en) * 2002-06-07 2003-12-11 Koninklijke Philips Electronics N.V. System and method for adapting the ambience of a local environment according to the location and personal preferences of people in the local environment
US6681031B2 (en) * 1998-08-10 2004-01-20 Cybernet Systems Corporation Gesture-controlled interfaces for self-service machines and other applications
US7042440B2 (en) * 1997-08-22 2006-05-09 Pryor Timothy R Man machine interfaces and applications

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0696100A (en) * 1992-09-09 1994-04-08 Mitsubishi Electric Corp Remote transaction system
WO1998047084A1 (en) * 1997-04-17 1998-10-22 Sharp Kabushiki Kaisha A method and system for object-based video description and linking

Also Published As

Publication number Publication date
US7113918B1 (en) 2006-09-26
US20060282387A1 (en) 2006-12-14
US7760182B2 (en) 2010-07-20

Similar Documents

Publication Publication Date Title
US7113918B1 (en) Method for video enabled electronic commerce
US10991165B2 (en) Interactive virtual thematic environment
US11538213B2 (en) Creating and distributing interactive addressable virtual content
KR102139241B1 (en) Spectating system and game systems integrated
JP4237050B2 (en) Advanced custom content television
US20180126279A1 (en) Apparatus and methods for multimedia games
US7054928B2 (en) System for viewing content over a network and method therefor
JP6336704B2 (en) System and method for participating in real-time media demonstration and game system
US20150332515A1 (en) Augmented reality system
CN102576247A (en) Hyperlinked 3d video inserts for interactive television
CN107407958A (en) Personalized integrated video user experience
CN103108248A (en) Interactive video implement method and system using the same
CN103414940A (en) System and method for playing interactive internet video advertisements
JP2001168818A (en) Data transmission method and system, information processing method and system, data transmitter, signal processor, contents data processing method and data providing method
KR101197630B1 (en) System and method of providing augmented contents related to currently-provided common contents to personal terminals
WO2010047632A1 (en) Advertising control system and method for motion media content
CN113411625A (en) Processing method and processing device for live broadcast message and electronic equipment
Dholakia et al. The changing information business: Toward content-based and service-based competition
CN114266626A (en) AR (augmented reality) -based time-based optical memory system and using method
WO2001022308A2 (en) Computer-implemented method and system for selecting one or more required items from a virtual store
Bonner The Marketing of Pre-Recorded Video-Cassettes
Penton A video game enabled collaborative virtual shopping environment

Legal Events

Date Code Title Description
AS Assignment

Owner name: ELET SYSTEMS L.L.C., DELAWARE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ELECTRIC PLANET INTERACTIVE;REEL/FRAME:024859/0001

Effective date: 20071211

Owner name: ELECTRIC PLANET INTERACTIVE, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:AHMAD, SUBUTAI;FRANCE, G. SCOTT;SIGNING DATES FROM 20080310 TO 20080314;REEL/FRAME:024858/0937

Owner name: ELECTRIC PLANET, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:AHMED, SUBUTAI;FRANCE, G. SCOTT;REEL/FRAME:024858/0797

Effective date: 19990810

AS Assignment

Owner name: IV GESTURE ASSETS 12, LLC, DELAWARE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ELET SYSTEMS L.L.C.;REEL/FRAME:027710/0132

Effective date: 20111222

AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:IV GESTURE ASSETS 12, LLC;REEL/FRAME:028012/0370

Effective date: 20120216

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034544/0001

Effective date: 20141014

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION