US20160261669A1 - Generating a Website to Share Aggregated Content - Google Patents


Info

Publication number
US20160261669A1
US20160261669A1
Authority
US
United States
Prior art keywords
media content
digital media
computer
social event
content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/156,146
Inventor
Max Elliott
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Interactive Entertainment LLC
Original Assignee
Sony Interactive Entertainment America LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Interactive Entertainment America LLC filed Critical Sony Interactive Entertainment America LLC
Priority to US15/156,146
Publication of US20160261669A1
Assigned to SONY COMPUTER ENTERTAINMENT AMERICA LLC reassignment SONY COMPUTER ENTERTAINMENT AMERICA LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ELLIOTT, MAX
Assigned to SONY INTERACTIVE ENTERTAINMENT AMERICA LLC reassignment SONY INTERACTIVE ENTERTAINMENT AMERICA LLC CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: SONY COMPUTER ENTERTAINMENT AMERICA LLC
Assigned to Sony Interactive Entertainment LLC reassignment Sony Interactive Entertainment LLC MERGER (SEE DOCUMENT FOR DETAILS). Assignors: SONY INTERACTIVE ENTERTAINMENT AMERICA LLC

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218 Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F 21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G06F 21/6263 Protecting personal data, e.g. for financial or medical purposes during internet communication, e.g. revealing personal data from cookies
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/02 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40 Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/43 Querying
    • G06F 16/435 Filtering based on additional data, e.g. user or group profiles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40 Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/48 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/10 Office automation; Time management
    • G06Q 10/101 Collaborative creation, e.g. joint development of products or services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/01 Social networking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/07 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
    • H04L 51/10 Multimedia information
    • H04L 51/32
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/52 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/40 Support for services or applications
    • H04L 65/403 Arrangements for multi-party communication, e.g. for conferences

Definitions

  • This disclosure relates generally to transmission of digital information and, more particularly, to methods and systems for aggregating and sharing digital content associated with social events.
  • File sharing services are widely used by users to upload images and videos captured with mobile devices. Other users, who have access to such content online, can view the uploaded images and videos.
  • a user may define privacy rules and indicate whether the content is publicly available to all visitors or shared only with a specific group of friends or connected members.
  • Conventional file sharing services can be used to share images of sport events, parties, conferences, meetings, and the like between associated participants.
  • an image or a video is captured with a mobile phone, a digital camera, a laptop, or the like and uploaded to a website at a later time. Other users may then review and download the uploaded content.
  • Such file sharing services become especially useful when participants live in different cities, states, or countries, since they allow any participant to view or download related content.
  • web sharing services may be a part of social media sites such as social networking sites, blogging sites, file sharing sites, and so forth.
  • some social events such as business meetings or conferences, may be photographed by multiple participants. Each of them may take photos and store these photos at different file sharing sites. As a result, it may be difficult for other participants to view or download all photos taken at an event as they will need to access multiple sites and may not know where to look.
  • Participants of a social event may also wish to establish privacy rules for sharing media content depicting the event.
  • Participants are limited to establishing privacy rules with respect to the content they upload themselves, but not with respect to the content uploaded by others.
  • a computer-implemented method for sharing digital media content by a server within a communication network comprising a set of user devices may include receiving digital media content from one or more user devices associated with one or more users, determining one or more parts of the digital media content associated with a social event, aggregating the one or more parts associated with the social event to produce aggregated digital media content, and facilitating access to the aggregated digital media content by the one or more users.
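  • The four claimed steps (receive, determine, aggregate, facilitate access) can be sketched in a few lines. This is a minimal illustration, not the patented implementation; the `MediaItem` structure and the dictionary returned are assumptions introduced for clarity.

```python
from dataclasses import dataclass

@dataclass
class MediaItem:
    """One piece of uploaded digital media content (hypothetical structure)."""
    owner: str
    kind: str            # e.g. "photo", "video", "audio", "text"
    event: str = ""      # social-event tag assigned by the user or the server

def share_media_content(items, event):
    """Sketch of the claimed method: determine which parts of the received
    content relate to the social event, aggregate them, and return the
    aggregated collection that users are given access to."""
    related = [item for item in items if item.event == event]   # determine
    return {"event": event, "items": related}                   # aggregate + expose
```

A server would persist the aggregated collection and serve it through a generated website rather than return it directly, as the later sections describe.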
  • the digital media content may include a digital photo, an image, a text, audio, and a video.
  • the user device may include a digital camera, a video camera, a computer, a cellular phone, or a personal digital assistant (PDA).
  • the social event may include a conference, a game play, or a leisure event.
  • the method may further include obtaining contextual information related to the digital media content.
  • the contextual information may include a tag, a time, a date, a geographical location, a comment, social event data, and information related to one or more individuals or objects presented in the content. Determining parts of the digital media content associated with the social event may be based on the contextual information. Determining parts of the digital media content associated with the social event may include receiving a user request to associate the digital content with the social event.
  • the method may further include receiving privacy instructions from users recognized in the photo or video, with the privacy instructions including a restriction on sharing the content or a modification of the photo or video.
  • the method may further include generating a website to share the media content associated with the social event.
  • the method may further include implementing an image recognition process for the received digital media content and recognizing one or more individuals captured on a photo or a video, wherein the photo and the video relate to the digital media content.
  • the image recognition process may be based on the contextual information.
  • the method may further include filtering parts of the aggregated digital media content based on contextual information, a user selection, a privacy instruction, or a user's personal information.
  • the method may further include determining users associated with the social event, with the determination based on the received digital media content, contextual information, or image recognition results.
  • the method may further include prompting a user associated with the social event to provide one or more of digital content, a comment, or feedback.
  • modules, subsystems, or devices can be adapted to perform the recited steps.
  • Other features and exemplary embodiments are described below.
  • FIG. 1 shows a block diagram illustrating a system environment suitable for aggregating and sharing digital content.
  • FIG. 2 is a diagram of a sharing system.
  • FIG. 3 is a diagram of a user device.
  • FIG. 4 is a process flow diagram of a method for sharing digital content by a server within a communication network comprising a set of user devices.
  • FIG. 5 is a process flow diagram showing a method for sharing digital content within a communication network comprising a set of user terminals.
  • FIG. 6 shows a graphical user interface aggregating digital media content from one or more sources.
  • FIG. 7 shows a graphical user interface including a photo subjected to an image recognition process.
  • FIG. 8 shows a graphical user interface of a user device suggesting a possible social event occurrence.
  • FIG. 9 shows a graphical user interface of a user device suggesting a possible social event occurrence, according to another example embodiment.
  • FIG. 10 is a diagrammatic representation of an example machine in the form of a computer system within which a set of instructions is executed.
  • the embodiments described herein relate to methods for aggregating and sharing digital content associated with social events.
  • the methods may be implemented within a network such as the Internet.
  • participants of a social event may generate digital content such as photos, videos, audio, text, and so forth.
  • the digital content may be generated by personal mobile devices including one or more of cellular phones, smart phones, laptops, computers, digital cameras, and the like.
  • the captured digital content may be associated with the social event.
  • a social event may refer to any type of event involving a group of people. Typical social events may include conferences, meetings, parties, shows, sporting events, business meetings, and so forth.
  • a user may manually input social event information at a user device and indicate or set what content captured or to be captured relates to the social event.
  • the captured content is tagged to denote its relation to the social event. For example, a user attending a party may set a digital camera so that all photos taken with the digital camera are automatically tagged to indicate their relation to the party. Thereafter, the user may sort the taken photos by filtering for those that relate to the party.
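  • The automatic tagging mode described above can be sketched as follows; the `event_mode` and `event_name` settings are hypothetical names standing in for whatever state the device keeps.

```python
def capture_with_event_mode(camera_state, photo):
    """While the user has enabled the event mode on the device, every
    captured photo is automatically tagged with the active event name."""
    if camera_state.get("event_mode"):
        photo.setdefault("tags", []).append(camera_state["event_name"])
    return photo
```

Sorting afterwards is then a plain filter, e.g. `[p for p in photos if "party" in p.get("tags", [])]`.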
  • a social event can be automatically detected. Captured content such as photos, videos, and audio may be analyzed for related parts. For example, captured photos may be subjected to an image recognition process. As a result, one or more individuals can be recognized and invited to share and aggregate digital content via a network.
  • a user may manually indicate which parts of the captured photos relate to specific individuals or objects. If a user camera (or mobile phone, laptop, etc.) indicates that two or more photos are taken within the same environment (same place, same party, etc.) and/or one or more of these individuals are captured on two or more photos, the user camera may assume that the captured photos relate to the same social event. Accordingly, the camera may suggest that the user turn on a corresponding operating mode to associate the captured photos with the social event.
  • the user device may determine that there is a similar active device within a certain predetermined distance. Then, both devices within the certain predetermined distance may invite their respective users to tag pictures as taken at the social event. Alternatively, the user device may indicate that another device is within a certain predetermined distance (e.g., 20 feet) and invite its user to generate digital content related to the social event.
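  • The "within a certain predetermined distance" check can be illustrated with the haversine formula on two GPS fixes. This is a generic sketch, assuming the devices exchange (latitude, longitude) coordinates; the 20-foot threshold comes from the example above.

```python
import math

def within_distance(coord_a, coord_b, max_feet=20.0):
    """Return True when two devices' GPS fixes are within the
    predetermined distance, using the haversine great-circle formula."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*coord_a, *coord_b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    meters = 2 * 6371000 * math.asin(math.sqrt(a))  # Earth radius ~6371 km
    return meters <= max_feet * 0.3048              # feet to meters
```

When the check succeeds, each device could prompt its user to tag subsequent captures as belonging to the shared event.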
  • the captured photos or videos may be tagged and associated with one or more social events on the user device or a remote server.
  • the digital media content may also contain contextual information such as titles, a time, a date, conditions, a location (e.g., GPS coordinates), information related to recognized objects or individuals, and so forth.
  • the user may then upload the captured digital content to a remote server.
  • the content can be hosted on a website and accessed by any person or by a specific group of people, depending on the privacy rules set.
  • the users may sort uploaded photos and aggregate only those which relate to the same social event, or which contain a specific individual, or the like. For instance, among thousands of uploaded photos, the users may easily sort only those in which they appeared.
  • the remote server may aggregate digital content from a number of users participating in the same social event. Different users may upload to the remote server their photos and videos from the event they attended. Accordingly, the remote server may selectively aggregate content from different sources within a single place (site). Alternatively, upon request of the user, the remote server may perform such aggregation.
  • the users therefore, are provided with a useful tool to sort photos or videos from different sources and select only those in which they are interested. For example, the users may see and download photos in which they appear that were previously uploaded by other participants.
  • the remote server may also perform image recognition of the digital content and automatically determine that specific photos/videos relate to the same social event. The owners of the content may then be notified or invited to tag such content and associate it with the social event.
  • the image recognition process may be used to determine specific individuals. Users may manually tag photos having recognized individuals and assign names, nicknames, and the like.
  • personal information of the users or recognized individuals can be assigned to the uploaded content and used to sort photos. In some embodiments, personal information can be retrieved from affiliated sites, such as social networking sites. The information assigned to the uploaded content may be used to aggregate and filter content stored on the site; thus, users can filter for photos or videos in which searched-for individuals appear.
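  • Filtering by a recognized individual reduces to a lookup over the recognition results stored as contextual information. A minimal sketch, assuming each item carries a hypothetical `recognized` list of names:

```python
def photos_showing(content, person):
    """Filter aggregated content down to the items whose image-recognition
    results (stored as contextual information) include the given person."""
    return [item for item in content if person in item.get("recognized", ())]
```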
  • the users may set privacy rules within the remote server and/or site hosting the content.
  • one user may establish privacy rules for all photos in which he/she appears. In other words, even if content is uploaded by other users (other participants of the same social event), and it is recognized that a certain user is shown in specific photos, these photos can be modified (e.g., blurred in the part where that user is shown, deleted, blocked, and so forth) according to the privacy rules preset by that user.
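  • Because several recognized individuals may each have set rules, the server must reconcile them before sharing a photo. One plausible policy, sketched below under the assumption that rules are one of "share", "blur", or "block", is to apply the most restrictive action found:

```python
def apply_privacy_rules(item, rules):
    """Return the most restrictive action requested by any individual
    recognized in the item, regardless of who uploaded it.
    `rules` maps a person's name to "share", "blur", or "block"."""
    severity = {"share": 0, "blur": 1, "block": 2}
    action = "share"
    for person in item.get("recognized", ()):
        wanted = rules.get(person, "share")
        if severity[wanted] > severity[action]:
            action = wanted
    return action
```

The actual image modification (blurring a region, deleting the file) would then be dispatched based on the returned action.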
  • Various privacy rules can be set by individuals and groups, as can be readily understood by those skilled in the art.
  • embodiments disclosed herein relate to a useful tool that enables people to easily aggregate and share digital content associated with social events via a network.
  • the aggregation can be performed from different sources in association with the same social event.
  • the content such as photos and videos, can be subjected to an image recognition process to define one or more individuals.
  • Shared content may be filtered to sort only those photos or videos in which social event participants appear.
  • users may set privacy rules to hide those parts of photos or video in which they appear.
  • the embodiments described herein can be implemented by various means, depending on the application.
  • the embodiments may be implemented in hardware, firmware, software, or in a combination thereof.
  • the embodiments may be implemented with processors, controllers, micro-controllers, microprocessors, electronic devices, other electronic units designed to perform the functions described herein, or a combination thereof.
  • Memory can be implemented within a processor or external to the processor.
  • the term “memory” refers to any type of long term, short term, volatile, nonvolatile, or other storage device and is not to be limited to any particular type of memory or number of memories, or type of media upon which memory is stored.
  • the embodiments can be implemented with modules such as procedures, functions, and so on, that perform the functions described herein. Any machine-readable medium tangibly embodying instructions can be used in implementing the embodiments described herein.
  • FIG. 1 shows a block diagram illustrating a system environment 100 suitable for aggregating and sharing digital content.
  • the system environment 100 comprises one or more user devices 102 , a sharing system 104 , one or more affiliated sites 106 , one or more e-mail servers 108 , and a network 110 .
  • the network 110 may couple the aforementioned modules.
  • the network 110 is a network of data processing nodes interconnected for the purpose of data communication, which may be utilized to communicatively couple various components of the environment 100 .
  • the network 110 may include the Internet or any other network capable of communicating data between devices. Suitable networks may include or interface with any one or more of, for instance, a local intranet, a PAN (Personal Area Network), a LAN (Local Area Network), a WAN (Wide Area Network), a MAN (Metropolitan Area Network), a virtual private network (VPN), a storage area network (SAN), a frame relay connection, an Advanced Intelligent Network (AIN) connection, a synchronous optical network (SONET) connection, a digital T1, T3, E1 or E3 line, Digital Data Service (DDS) connection, DSL (Digital Subscriber Line) connection, an Ethernet connection, an ISDN (Integrated Services Digital Network) line, a dial-up port, such as a V.90, V.34 or V.34bis analog modem connection, a cable modem, or an ATM (Asynchronous Transfer Mode) connection.
  • communications may also include links to any of a variety of wireless networks, including WAP (Wireless Application Protocol), GPRS (General Packet Radio Service), GSM (Global System for Mobile Communication), CDMA (Code Division Multiple Access) or TDMA (Time Division Multiple Access), cellular phone networks, GPS, CDPD (cellular digital packet data), RIM (Research in Motion, Limited) duplex paging network, Bluetooth radio, or an IEEE 802.11-based radio frequency network.
  • the network 110 can further include or interface with any one or more of an RS-232 serial connection, an IEEE-1394 (Firewire) connection, a Fiber Channel connection, an IrDA (infrared) port, an SCSI (Small Computer Systems Interface) connection, a USB (Universal Serial Bus) connection or other wired or wireless, digital or analog interface or connection, mesh or Digi® networking.
  • the term “user device” refers to a computer, a laptop, a tablet computer, a portable computing device, a PDA, a digital camera, a handheld cellular phone, a mobile phone, a smart phone, a cordless telephone, a handheld device having wireless connection capability, or any other electronic device suitable for capturing photos, videos, or audio.
  • the user devices 102 may also receive or transmit data such as captured photos or videos via a cord or cordless network.
  • the user devices 102 may be configured to browse websites or access remote servers via a network.
  • a user device 102 can also be configured to determine its geographical location based on Global Positioning System (GPS) signals, Internet Protocol (IP) addresses, base station information, and so forth.
  • the user devices 102 can be used to generate digital media content such as photos, images, videos, audio, text, and so forth, and also to transmit the content via the network 110 .
  • the user devices 102 can also be used to access the content previously stored at a remote database (e.g., at the sharing system 104 ).
  • the user devices 102 may comprise a browser 112 providing the ability to browse and interact with sites on the Internet.
  • the user devices 102 may comprise software 114 to communicate with the sharing system 104 .
  • the software 114 is a mobile application embedded in the user device 102 .
  • the sharing system 104 may be configured to receive digital content from the user devices 102 , store the digital content, and share it with the same or other user devices 102 in an enhanced manner.
  • Digital content can be accompanied by additional contextual information such as file names, titles, brief descriptions, times, dates, tags, and so forth.
  • contextual information may also comprise results of image recognition (i.e., information on recognized individuals captured in a photo or video).
  • the sharing system 104 can be implemented as a server having multiple modules and databases.
  • the sharing system 104 may host a site providing access for its visitors to the system.
  • the site enables visitors to upload or download digital content to the sharing system 104 .
  • the sharing system 104 is described in detail below with reference to FIG. 2 .
  • the one or more affiliated sites 106 may include any website on the Internet that may provide an access to the sharing system 104 .
  • the affiliated sites 106 have a gateway to the sharing system 104 to enable visitors of these sites to upload or download digital content to the sharing system 104 .
  • the e-mail server 108 may transfer e-mail messages from one computer to another computer, using client-server application architecture.
  • the e-mail server 108 may be used by one user device 102 to send a message to another user device 102 , or may be used by the sharing system 104 to send messages to the user devices 102 .
  • FIG. 2 is a diagram of the sharing system 104 .
  • the sharing system 104 may include a communication module 202 , a website generator 204 , an aggregating module 206 , an image recognition module 208 , a processing module 210 , a content database 212 , and a member database 214 .
  • the sharing system 104 may include additional, fewer, or different modules for various applications.
  • all modules can be integrated within a single system, or, alternatively, can be remotely located and optionally be accessed via a third party.
  • the sharing system 104 may be implemented as hardware having software installed thereon that implements the steps necessary to operate the sharing system according to example embodiments disclosed herein.
  • the sharing system 104 may also host a content sharing site 116 directed, among other things, to store, aggregate, and share digital content in an enhanced manner as described further.
  • the users may first register with the sharing site 116 and create a member profile.
  • the membership details may be stored in the member database 214 .
  • the membership profile stored in the database 214 may store personal information, such as a name, a nickname, user credentials, a representative picture/photo/logo, an address, a phone number, a fax number, an e-mail address, a web address, credentials of associated member profiles of social media sites, or any other form of contact and personal information.
  • the users may manage their profiles (personal webpages within the sharing site 116 ) in the member database 214 .
  • the sharing site 116 may manually or automatically access the one or more affiliated sites 106 .
  • the sharing site 116 may also enable communication between users.
  • the sharing site 116 may allow the users to share information via the one or more affiliated sites 106 .
  • the users may also upload digital content to the sharing system 104 , which in turn can be stored in the content database 212 .
  • the users may also generate web pages with the help of the sharing system 104 , which are associated with the uploaded content.
  • the web pages may provide access to the content stored in the content database 212 to one or more different users. Accordingly, each generated web page may relate to the content of one social event.
  • web pages can be managed by one or more users.
  • the web pages may aggregate content from one or more users, and this content can be sorted or filtered according to example embodiments disclosed herein.
  • the communication module 202 may be configured to connect the sharing system 104 to the one or more user devices 102 , the one or more affiliated sites 106 , and the one or more e-mail servers 108 via the network 110 .
  • the connection and data transfer may be provided via an Application Programming Interface (API).
  • the communication module 202 is also configured to provide communication functionality between all modules of the sharing system 104 .
  • the communication module 202 may receive the user requests to store digital content to the member database 214 , manage member profiles in the database 212 , access and manage content stored in the content database 212 , and the like.
  • the communication module 202 can process all such user requests.
  • the website generator 204 may be configured to generate websites comprising the uploaded content.
  • the websites may contain content originating from one source (i.e. one user), or contain content aggregated from different sources (i.e. obtained from different users). Aggregation of the content can be performed based on assigned contextual information. In other words, aggregation can be performed such that all content related to the same social event is collected on a single website. Alternatively, aggregation can be performed relative to those photos/videos, which comprise certain individuals or objects. It should be apparent that different aggregation methods may be applied.
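  • The website generator's aggregation step (collecting all content related to the same social event onto a single page) amounts to grouping items by their event tag. A minimal sketch, with the dictionary-of-lists page model as an assumption:

```python
from collections import defaultdict

def build_event_pages(items):
    """Group uploaded content from different sources into one page per
    social event, mirroring the website generator's aggregation step."""
    pages = defaultdict(list)
    for item in items:
        pages[item.get("event", "untagged")].append(item)
    return dict(pages)
```

Aggregation by recognized individual or object would follow the same pattern with a different grouping key.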
  • the websites may also comprise functionality to manage the digital content. For example, different users, who have access to such websites, may download content, add commentary, re-post to any affiliated website (e.g., social networking website), and so forth.
  • the aggregating module 206 is configured to aggregate content stored in the content database 212 . Specifically, upon request of a user, the aggregating module 206 may sort and deliver to the user (or to the generating website) specific parts of the content. The sorting process can be based on contextual information assigned to every piece of the content. In particular, the aggregating module 206 may sort photos or videos (or other parts of the digital content) that relate to the same social event, show the same individuals or objects, were captured in one day and/or in one location, or the like. The aggregation process can be performed using different methods, which are discussed below.
  • the image recognition module 208 is configured to selectively recognize individuals, objects, characters and the like on photos and videos stored in the database 212 .
  • the recognition process can be any one of those known in the art, and specifically it may recognize individuals by their faces, appearance, and other factors.
  • the users may be prompted to facilitate the recognition process. For example, users may define an individual in a photo or add a description of the recognized objects.
  • the results of recognition can be added to the contextual information assigned to the content. Further, this information can be used to aggregate or sort content.
  • the processing module 210 is configured to analyze the uploaded content and determine parts of the content related to the same social events.
  • the processing module 210 may analyze the contextual information of the content to determine which parts of the content (i.e., photos) relate to the same social event.
  • when the processing module 210 determines that photos were captured in one place, on one day, by one user, have the same tags, and so forth, it may be assumed that these photos relate to a social event, and the user is prompted to confirm this.
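The heuristic described above can be sketched as a simple signal count between two files' contextual information. This is a hypothetical illustration only; the field names, signals, and the two-signal threshold are assumptions, not part of the patent.

```python
def likely_same_event(ctx_a, ctx_b, min_signals=2):
    """Count shared contextual signals between two files; two or more
    matching signals trigger a same-event assumption (threshold assumed)."""
    signals = 0
    for field in ("location", "date", "user"):
        a, b = ctx_a.get(field), ctx_b.get(field)
        signals += a is not None and a == b
    # Any overlap in tags counts as one additional signal.
    signals += bool(set(ctx_a.get("tags", [])) & set(ctx_b.get("tags", [])))
    return signals >= min_signals

a = {"location": "Park", "date": "2011-07-04", "user": "max", "tags": ["bbq"]}
b = {"location": "Park", "date": "2011-07-04", "user": "ann", "tags": []}
c = {"location": "Office", "date": "2011-09-01", "user": "max", "tags": []}
same = likely_same_event(a, b)
diff = likely_same_event(a, c)
```

As the patent notes, such an assumption would typically be confirmed by the user rather than applied automatically.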
  • the processing module 210 may analyze not only the content uploaded by one user, but also the content uploaded by others.
  • the content database 212 may store content uploaded by users. This database may also comprise contextual information assigned to the content.
  • the member database 214 may store membership-related information, such as user profiles, personal information, and so forth. The users may access the databases 212 and 214 via the communication module 202 to review, modify, or delete information stored therein.
  • FIG. 3 is a diagram of the user device 102 .
  • the user device 102 may include a communication module 302 , a content generator 304 , an image recognition module 306 , a processing module 308 , and a database 310 .
  • the user device 102 may include additional, fewer, or different modules for various applications.
  • all modules can be integrated within a single system, or, alternatively, can be remotely located and optionally be accessed via a third party.
  • the user device 102 may be implemented as hardware having software installed thereon that implements the steps necessary to operate the sharing system according to example embodiments disclosed herein.
  • the communication module 302 may be configured to couple the user device 102 to the sharing system 104 , the one or more other user devices 102 , the one or more affiliated sites 106 , and the one or more e-mail servers 108 via the network 110 .
  • the coupling and data transfer may be provided via an API.
  • the communication module 302 is also configured to provide communication functionality between all modules of the device 102 .
  • the communication module 302 may generate user requests to store digital content to the content database 212 located at the sharing system 104 , manage member profiles in the member database 214 , access and manage content stored in the content database 212 , and the like.
  • the content generator 304 may be configured to generate digital media content such as photos, videos, audio, and text.
  • the content generator 304 is a digital camera embedded in the user device 102 .
  • the content generator 304 may be an input device to generate text data.
  • the content generated by the content generator 304 may be annotated with contextual information, such as title, file name, date, time, name of the device generating such content, and so forth.
  • the image recognition module 306 may be configured to selectively recognize individuals, objects, characters, and the like on photos and videos stored in the database 310 .
  • the recognition process can be any one known in the art, and specifically it may recognize individuals by their faces, appearance, and other factors.
  • the users may be prompted to facilitate the recognition process. For example, users may define an individual on a photo or add a description of the recognized objects.
  • the results of recognition can be added to the contextual information assigned to the content. Further, this information can be used to aggregate or sort content.
  • the processing module 308 is configured to analyze the uploaded content and determine parts of the content related to the same social events.
  • the processing module 308 may analyze contextual information of the content to determine which parts of the content (i.e., photos) relate to the same social event.
  • the processing module 308 may determine that similar devices exist within a proximity distance from the user device 102 . This determination can be done by ascertaining a wireless environment, location (e.g., GPS coordinates), IP address, and so forth.
  • the content database 310 may store content generated by the user device 102 . This database 310 may also comprise contextual information, which is assigned to the content.
  • FIG. 4 is a process flow diagram showing a method 400 for sharing digital content by a server in a communication network comprising a set of user devices.
  • the method 400 may be performed by processing logic that may comprise hardware (e.g., dedicated logic, programmable logic, and microcode), software (such as software run on a general-purpose computer system or a dedicated machine), or a combination of both.
  • the processing logic resides at the sharing system 104 .
  • the method 400 can be performed by the various modules discussed above with reference to FIG. 2 .
  • Each of these modules can comprise processing logic. It will be appreciated by one of ordinary skill that examples of the foregoing modules may be virtual, and instructions said to be executed by a module may, in fact, be retrieved and executed by a processor.
  • the foregoing modules may also include memory cards, servers, and/or computer discs. Although various modules may be configured to perform some or all of the various steps described herein, fewer or more modules may be provided and still fall within the scope of example embodiments.
  • the method 400 may commence at step 402 with the sharing system 104 receiving digital media content from a user device 102 . Receiving can be performed in any suitable manner. The content at this step may also be stored in the content database 212 .
  • the sharing system 104 obtains contextual information related to the received content.
  • such contextual information is part of the content (e.g., embedded therein).
  • the sharing system 104 may generate or update such contextual information based on a member profile (e.g., content can be assigned with information related to the member profile of the user who uploaded the content).
  • the sharing system 104 may utilize a combination of these techniques.
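The combination of techniques above (contextual information embedded in the file, supplemented from the member profile) can be sketched as a simple merge. This is a hypothetical illustration; the merge policy and all field names are assumptions.

```python
def build_context(embedded_meta, member_profile):
    """Start from profile-derived fields, then let metadata embedded in
    the file override them; non-None embedded values win (assumed policy)."""
    ctx = {
        "uploader": member_profile.get("name"),
        "home_location": member_profile.get("location"),
    }
    # Embedded metadata (e.g., capture date, location) takes precedence.
    ctx.update({k: v for k, v in embedded_meta.items() if v is not None})
    return ctx

ctx = build_context(
    {"date": "2011-07-04", "location": "Park", "camera": None},
    {"name": "Max", "location": "San Diego"},
)
```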
  • the sharing system 104 may determine parts of the content associated with a social event. It may be all uploaded content files related to a single social event, or only some of them. The determination process can be implemented differently.
  • the sharing system 104 may analyze the contextual information assigned to the content and reveal that captured photos or videos relate to the same social event.
  • the sharing system 104 may receive a user selection of specific parts of the content as being related to the same social event. For example, the user may be prompted to select those files that relate to the same social event.
  • the sharing system 104 may perform an image recognition process to reveal individuals, objects, and the like. Based on the image recognition process, it may be concluded that certain images relate to the same social event.
  • the sharing system 104 may determine that one individual appears in two or more different photos or videos. Alternatively, it may be revealed that all photos/videos were captured within the same environment (e.g., in one room). Those skilled in the art would readily understand that different approaches are applicable to making the assumption that a number of files relate to the same social event.
  • the sharing system 104 may generate a website having one or more web pages related to the received content.
  • the website may comprise a part of or the entire content received at step 402 .
  • the website may comprise only those content files that relate to the same social event.
  • the generation of the website may be performed in line with a user's predetermined criteria or selections in any suitable manner. An example of such a website is shown in FIG. 6 .
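Website generation as in step 408 can be sketched as rendering the files related to one social event into a single static page. This is a minimal, hypothetical illustration; a real system would use templates, and the markup layout here is purely illustrative.

```python
def render_event_page(event_name, files):
    """Render a list of image files as one simple HTML page for the event."""
    items = "\n".join(
        '  <li><img src="{0}" alt="{0}"></li>'.format(f) for f in files
    )
    return (
        "<html><body>\n<h1>{0}</h1>\n<ul>\n{1}\n</ul>\n</body></html>"
        .format(event_name, items)
    )

page = render_event_page("BBQ", ["img1.jpg", "img2.jpg"])
```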
  • the website may become public and can be accessed by any users or a preset group of users to share the content among them.
  • the sharing system 104 may determine one or more other users associated with the social event.
  • such users may be defined through analysis of the content, contextual information assigned to the content, results of the image recognition process, and so forth. It can also be determined whether the determined users are registered with the sharing website 116. If they are not registered, they can be invited to join the community and register with the website 116.
  • the users determined as being related to the same social event may optionally be prompted to review the content of the generated website, share their content, register with the sharing website, provide their personal information, leave comments or feedback, set their privacy instructions for publishing such content on the Internet, and so forth.
  • Any individual recognized on the stored photos or videos can assign the privacy instructions.
  • the privacy instructions may comprise one or more of restrictions on sharing content in which they appear and visual modification of the photos or video (e.g., blurring parts of the photos in which they appear).
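The privacy instructions described above can be sketched as a per-person rule applied when publishing a photo. This is a hedged, hypothetical illustration: the rule vocabulary ("allow", "blur", "block") and the one-veto policy are assumptions, not part of the patent.

```python
def apply_privacy(photo, rules):
    """Return (publish?, people_to_blur) for one photo, given per-person rules."""
    blur_people = []
    for person in photo["recognized"]:
        rule = rules.get(person, "allow")
        if rule == "block":
            return False, []          # one veto withholds the whole photo
        if rule == "blur":
            blur_people.append(person)  # blur this person's region before sharing
    return True, blur_people

ok, blur = apply_privacy(
    {"file": "img1.jpg", "recognized": ["ann", "bob"]},
    {"ann": "blur", "bob": "allow"},
)
blocked, _ = apply_privacy(
    {"file": "img2.jpg", "recognized": ["carol"]},
    {"carol": "block"},
)
```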
  • the sharing system 104 may intelligently aggregate the digital media content associated with the same social event from one or more sources. More specifically, in one example, the sharing system 104 may aggregate the received content parts from the one user such that these parts relate to the single social event. Alternatively, if it was previously determined that there were several attendants at the social event, and they have also uploaded their content related to the same social event, the sharing system 104 may intelligently aggregate this content from all such users. According to example embodiments disclosed herein, the website generated at step 408 may be updated to include the content from all these sources. In some other embodiments, the content can be aggregated from different databases, including the content database 212 , the user device database 310 , the databases of any affiliated website 106 , and the like.
  • the sharing system 104 may be configured to aggregate the relevant content from these sites (e.g., it may access member profiles and download photos or video previously stored within these affiliated sites). In general, any suitable approach for aggregating the content from different sources can be applied.
  • users are provided access to the generated website having the aggregated content. Different users may be provided with different levels of access to the content. Some may only review content, while others may be allowed to add, update, or delete content, depending on the application.
  • a single website comprising aggregated content related to the same social event can be generated. This becomes useful when attendees of this event live in different cities, states, or countries, so that they do not need to travel from site to site to review all photos or videos captured during the social event.
  • the users may apply filtering to the aggregated digital media content included in the generated website. The filtering process can be based on the contextual information, user selections, privacy instructions, membership parameters, and so forth.
  • the user may filter the aggregated content such that it comprises only those parts (or files) that relate to a specific individual or object. For instance, the user may wish to sort all photos/videos on the website to see only those in which he/she appears. If this user is recognized on photos/videos (which can be determined by processing contextual information), then only those files can be provided for reviewing. Accordingly, the generated website may dynamically update depending on the users' wishes.
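The dynamic filtering described above can be sketched as selecting only the files whose contextual information lists a given individual. This is a hypothetical illustration; the field names are assumptions.

```python
def filter_by_person(items, person):
    """Keep only files whose contextual info lists the given individual."""
    return [
        it["file"]
        for it in items
        if person in it["context"].get("recognized", [])
    ]

items = [
    {"file": "a.jpg", "context": {"recognized": ["max", "ann"]}},
    {"file": "b.jpg", "context": {"recognized": ["ann"]}},
    {"file": "c.jpg", "context": {}},
]
mine = filter_by_person(items, "max")
```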
  • users indicated or determined as being present in the uploaded photos/videos may be prompted to set privacy instructions in regard to sharing information about them. For instance, these users may allow anyone to review their images. Alternatively, the users may block access to the corresponding photos/videos, or those parts of the photos/videos may be modified to be blurred, colored, excluded, or the like.
  • privacy instructions related to specific users appearing on the photos/videos may be attached to the contextual information and stored along with the content in the content database 212 , or attached to the member profile stored in the member database 214 . Thus, users may easily set privacy instructions that can be applied to both content uploaded by this user and the content uploaded by others.
  • FIG. 5 is a process flow diagram showing a method 500 for sharing digital content in a communication network comprising a set of user terminals.
  • the method 500 may be performed by processing logic that may comprise hardware (e.g., dedicated logic, programmable logic, and microcode), software (such as software run on a general-purpose computer system or a dedicated machine), or a combination of both.
  • the processing logic resides at the sharing system 104 .
  • the method 500 can be performed by the various modules discussed above with reference to FIG. 3 .
  • Each of these modules can comprise processing logic. It will be appreciated by one of ordinary skill that examples of the foregoing modules may be virtual, and instructions said to be executed by a module may, in fact, be retrieved and executed by a processor.
  • the foregoing modules may also include memory cards, servers, and/or computer discs. Although various modules may be configured to perform some or all of various steps described herein, fewer or more modules may be provided and still fall within the scope of exemplary embodiments.
  • the method 500 may commence at step 502 with the user device 102 capturing digital media content.
  • the digital media content may comprise digital photos, images, text, audio, videos, and so forth.
  • the user device 102 may be a digital camera, a video camera, a computer, a cellular phone, a PDA, or any other suitable device with the ability to capture photos, video, and audio.
  • the generated content may be temporarily stored in the database 310 .
  • the user device 102 may determine that the captured digital content is associated with the same social event. Such a determination can be performed in various ways.
  • the user may merely indicate (select) that the captured content relates to a certain social event.
  • the user may define the event's name and assign other relevant information, such as date, time, location, participant names and their appearance in captured photos/video, commentary, and other information, which is defined as contextual information in terms of this document.
  • the user may also set a mode of the user device 102 such that further generated content is automatically assigned the necessary tags denoting its relation to the social event.
  • the user device 102 may automatically determine that the captured content relates to the same social event.
  • the user device 102 may perform an image recognition process to selectively recognize individuals and objects on photos and videos.
  • the recognition process can be any one known in the art suitable for recognizing individuals by their faces, appearance, and other factors. Accordingly, when the user device 102 determines that the same individual is captured in two or more photos/videos, or that all photos/videos are captured within the same environment (in one room, in one place), the user device 102 may assume that a social event took place. The user may then be prompted to confirm the assumption and assign corresponding contextual information to such content.
  • the user device 102 may determine its geographical location based on data received from a GPS receiver embedded within the user device, or received from databases of a cellular network, or the like. When more than a certain number of pictures/videos are taken from one place, the user device 102 may determine that the captured content relates to the same social event.
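The GPS heuristic above can be sketched with the haversine great-circle distance: if more than a threshold number of pictures were taken within a small radius of one spot, the device flags a probable social event. The radius and photo-count thresholds below are illustrative assumptions.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def probable_event(points, radius_m=100.0, min_photos=3):
    """Flag a probable social event when enough photos cluster around the first one."""
    anchor = points[0]
    nearby = sum(1 for p in points if haversine_m(*anchor, *p) <= radius_m)
    return nearby >= min_photos

shots = [(37.7749, -122.4194), (37.7750, -122.4193), (37.7751, -122.4195)]
flag = probable_event(shots)        # three shots within ~25 m of each other
few = probable_event(shots[:2])     # too few photos to assume an event
```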
  • the user device 102 may determine the presence of similar devices within a proximate distance.
  • the determination process can be of any suitable technique including Bluetooth, Wireless Ethernet, Wi-Fi, WiMax, or other techniques.
  • when the user device 102 determines that similar devices are located within the proximity distance (e.g., twenty feet or less) for a certain time period (e.g., fifteen minutes or more), it may be assumed that a social event is taking place. Again, the user of the user device may be prompted to define any content generated, or to be generated, as being related to the social event.
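The sustained-proximity condition above (within roughly twenty feet for at least fifteen minutes) can be sketched over a series of device sightings. This is a hypothetical illustration; the sighting format, as (minutes_since_start, distance_feet) pairs, is an assumption.

```python
def social_event_detected(sightings, max_feet=20.0, min_minutes=15.0):
    """Return True once another device has stayed within max_feet
    continuously for at least min_minutes."""
    start = None
    for minute, feet in sorted(sightings):
        if feet <= max_feet:
            if start is None:
                start = minute
            if minute - start >= min_minutes:
                return True
        else:
            start = None  # contact broken; restart the clock
    return False

steady = social_event_detected([(0, 5.0), (10, 12.0), (16, 8.0)])
brief = social_event_detected([(0, 5.0), (10, 40.0), (16, 8.0)])
```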
  • the captured content is associated with the social event.
  • the content may be assigned with tags or contextual information to indicate a relation of the content to the social event.
  • the contextual information may comprise, among other things, information that a particular file relates to a certain social event, time, date, location, as well as containing titles, a name of the social event, data about recognized individuals, and so forth.
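The contextual information fields listed above can be represented by a small structure such as the following dataclass. This is illustrative only; the field names and types are assumptions about one possible encoding.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ContextualInfo:
    event: Optional[str] = None        # name of the related social event
    date: Optional[str] = None
    time: Optional[str] = None
    location: Optional[str] = None
    title: Optional[str] = None
    recognized: List[str] = field(default_factory=list)  # recognized individuals

ctx = ContextualInfo(event="BBQ", date="2011-07-04", recognized=["max"])
```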
  • the user device 102 may determine one or more attendees of the social event. Primarily, such a determination can be based on the same algorithms as discussed with reference to step 504. More specifically, the user can be prompted to indicate one or more attendees (in other words, participants, as previously mentioned) of the social event. The user may indicate parts of photos or videos in which such attendees appear. The user may also be prompted to indicate their names and personal information, or provide a link to their member profiles within the sharing website 116 or any affiliated website 106 (such as any social networking site).
  • the attendees may be recognized on photos/video automatically in conjunction with the image recognition process.
  • the user may only need to assign names of the recognized individuals.
  • the attendees may be determined automatically as those who have a similar user device 102 and are located within a predetermined proximity distance from the user. When such devices are within the proximity range, and they are also used to capture photos, video or audio, such devices can be considered as assigned to the attendees. Different approaches can be applied to determine the attendees.
  • the step 506 of assigning the captured content to the social event may be performed when at least one attendee is determined.
  • the one or more attendees are defined. Information about attendees may be stored as part of the contextual information assigned to the content.
  • the user device 102 may transmit the captured content along with assigned contextual information to a remote server.
  • the remote server is the sharing system 104 as described with reference to FIG. 2 .
  • the remote server then allows different users to access the uploaded content. As mentioned, different users may be provided with different levels of access to the content depending on predetermined settings and privacy instructions of the attendees appearing in the photos or videos.
  • the attendees determined as being related to the same social event may optionally be prompted to share the content captured by their user devices with the same remote server.
  • the attendees may be requested to register with the sharing website, provide their personal information, leave comments or feedback, set their privacy instructions for publishing content on the Internet, and so forth.
  • FIG. 6 is a simplified illustration of a graphical user interface 600 , which aggregates digital media content from one or more sources.
  • the graphical user interface 600 may be represented as a window (e.g., a browser window) to show the aggregated content.
  • the aggregation of the content can be performed according to the technique described with reference to FIG. 4 .
  • the graphical user interface 600 may be presented on a screen of the user device 102 via the browser 112 or as ad hoc software 114 .
  • the user interface 600 may comprise one or more of content sections 602 .
  • Each section 602 may comprise an image 604 , a video 606 (i.e., an embedded media player to play the video), audio 608 (i.e., an embedded media player to play the audio), or text 610 .
  • the section 602 may comprise one or more images, video, text, and audio.
  • the text 610 may optionally comprise comments, a part of contextual information (e.g., date and time of capture, location, social event data, recognized attendees, names, titles, personal information, links to affiliated websites, links to personal profiles within the sharing website 116 or any social networking site, ranks, feedbacks, and so forth).
  • the user interface 600 may aggregate multiple sections 602 , with each section 602 originating from different users/attendees.
  • when the graphical user interface 600 is accessed for the first time, it may comprise all content captured during a certain social event. However, users are provided with the ability to sort the presented content. The user may select a preferable sorting method from a drop-down menu 612 and press an actionable button 614, "Sort." Accordingly, the user may sort content by title, date, time, location, origination, recognized individuals, type of content, and so forth.
  • the user interface 600 may also comprise an actionable button 616 to submit new content related to the same social event. When this button is activated, the user may be driven to a submission site to upload corresponding content.
  • the graphical user interface 600 may include additional, fewer, or different modules for various applications.
  • FIG. 7 is a simplified illustration of a graphical user interface 700 showing a photo 702 (e.g., section 604 ), which was subjected to an image recognition process.
  • the graphical user interface 700 may be presented on a screen of the user device 102 via the browser 112 or as ad hoc software 114 .
  • the user interface 700 may comprise a photo of a group of people.
  • a name is attributed to each individual in the photo.
  • the names can appear on top of the photo, and they can be presented as clickable targets, where "clickable" is another way of saying the targets are "selectable."
  • when the user clicks on (or selects) such targets, the user can be driven to a web page comprising the content from the same social event, but sorted to show all photos or videos of the selected individual.
  • the individuals on the photo can be unrecognized or not assigned with personal information.
  • users may be provided with an option to recognize such individuals.
  • the graphical user interface 700 may comprise an actionable button 704 to indicate an individual on the photo.
  • when the button 704 is pressed, the user is prompted to select a part of the photo to indicate such an individual and provide his/her personal information, such as name, title, links to personal member profiles, and the like.
  • the graphical user interface 700 may include additional, fewer, or different modules for various applications.
  • FIG. 8 is a simplified illustration of a graphical user interface 800 of the user device 102 when it has determined that a social event is possibly taking place.
  • when the user device 102 (e.g., a digital camera) determines that a social event is possibly taking place, the graphical user interface 800 prompts the user to confirm whether this is a social event.
  • the user may select a “Yes” button 802 to indicate his/her desire to assign to the captured photos contextual information describing the social event and its attendees, and/or to indicate that all following photos are also related to the same social event.
  • the graphical user interface 800 may include additional, fewer, or different modules for various applications.
  • FIG. 9 is a simplified illustration of a graphical user interface 900 of the user device 102 when it has determined that a social event is possibly taking place, according to another example embodiment.
  • on the user device 102 (e.g., a digital camera or a cellular phone), the graphical user interface 900 may prompt the user to create a social event.
  • the user may select a “Yes” button 902 to indicate his/her desire to create a “Social Event.” If pressed, any photos captured with this device will be assigned with corresponding contextual information.
  • the users of the nearby detected devices will also be invited to participate in the social event and to share the content they may also capture with their devices.
  • by pressing the "No" button 904, the user may continue without turning the "social mode" on.
  • the graphical user interface 900 may include additional, fewer, or different modules for various applications.
  • FIG. 10 shows a diagrammatic representation of a computing device for a machine in the example electronic form of a computer system 1000 , within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein can be executed.
  • the machine operates as a standalone device or can be connected (e.g., networked) to other machines.
  • the machine can operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
  • the machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a PDA, a cellular telephone, a portable music player (e.g., a portable hard drive audio device, such as a Moving Picture Experts Group Audio Layer 3 (MP3) player), a web appliance, a network router, a switch, a bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
  • the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
  • the example computer system 1000 includes a processor or multiple processors 1002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), and a main memory 1004 and a static memory 1006 , which communicate with each other via a bus 1008 .
  • the computer system 1000 can further include a video display unit 1010 (e.g., a liquid crystal display (LCD) or cathode ray tube (CRT)).
  • the computer system 1000 also includes at least one input device 1012 , such as an alphanumeric input device (e.g., a keyboard), a cursor control device (e.g., a mouse), a microphone, a digital camera, a video camera, and so forth.
  • the computer system 1000 also includes a disk drive unit 1014 , a signal generation device 1016 (e.g., a speaker), and a network interface device 1018 .
  • the disk drive unit 1014 includes a computer-readable medium 1020 which stores one or more sets of instructions and data structures (e.g., instructions 1022 ) embodying or utilized by any one or more of the methodologies or functions described herein.
  • the instructions 1022 can also reside, completely or at least partially, within the main memory 1004 , the static memory 1006 , and/or within the processors 1002 during execution thereof by the computer system 1000 .
  • the main memory 1004 and the processors 1002 also constitute machine-readable media.
  • the instructions 1022 can further be transmitted or received over the network 110 via the network interface device 1018 utilizing any one of a number of well-known transfer protocols (e.g., Hyper Text Transfer Protocol (HTTP), CAN, Serial, and Modbus).
  • While the computer-readable medium 1020 is shown in an example embodiment to be a single medium, the term “computer-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
  • the term “computer-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present application, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions.
  • the term “computer-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media. Such media can also include, without limitation, hard disks, floppy disks, flash memory cards, digital video disks, random access memory (RAM), read only memory (ROM), and the like.
  • the example embodiments described herein can be implemented in an operating environment comprising computer-executable instructions (e.g., software) installed on a computer, in hardware, or in a combination of software and hardware.
  • the computer-executable instructions can be written in a computer programming language or can be embodied in firmware logic. If written in a programming language conforming to a recognized standard, such instructions can be executed on a variety of hardware platforms and for interfaces to a variety of operating systems.
  • such computer languages or platforms include, for example, Hypertext Markup Language (HTML), Extensible Markup Language (XML), Extensible Stylesheet Language (XSL), Document Style Semantics and Specification Language (DSSSL), Cascading Style Sheets (CSS), Synchronized Multimedia Integration Language (SMIL), Wireless Markup Language (WML), Java™, Jini™, C, C++, Perl, UNIX Shell, Visual Basic or Visual Basic Script, Virtual Reality Markup Language (VRML), ColdFusion™, or other compilers, assemblers, interpreters, or other computer languages or platforms.
  • the disclosed technique provides a useful tool to enable people to easily aggregate and share digital content such as photos, videos, and the like associated with social events via a network.
  • the aggregation can be performed from different sources in association with the same social event.
  • the content can also be subjected to an image recognition process to define one or more individuals appearing on the photos/videos.
  • Shared content may also be filtered to sort only those photos or videos in which certain participants appear.
  • users may set privacy rules to hide those parts of photos or video in which they appear.

Abstract

Methods and systems for generating a website to share aggregated digital content are provided. In one example embodiment, a system for aggregating and sharing digital content associated with social events via a network facilitates the aggregation and sharing of digital content, such as photos and videos, on a website. The aggregation may be performed with respect to the digital content received from different sources associated with the same social event. The digital content may also be subjected to an image recognition process to identify one or more individuals appearing in the photos or videos. The shared content may also be filtered to display only those photos or videos with specific individuals. In addition, users may be allowed to set privacy rules with respect to the photos and videos within which they appear.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 13/220,536, filed Aug. 29, 2011 and issued as U.S. Pat. No. 9,342,817 on May 17, 2016, and entitled “Auto-Creating Groups for Sharing Photos,” which in turn claims the benefit of U.S. Provisional Patent Application No. 61/505,505, filed Jul. 7, 2011, which are incorporated by reference in their entirety herein.
  • FIELD OF THE INVENTION
  • This disclosure relates generally to transmission of digital information and, more particularly, to methods and systems for aggregating and sharing digital content associated with social events.
  • DESCRIPTION OF RELATED ART
  • File sharing services are widely used to upload images and videos captured with mobile devices. Other users, who have access to such content online, can view the uploaded images and videos. A user may define privacy rules and indicate whether the content is publicly available to all visitors or whether it can be shared with a specific group of friends or connected members.
  • Conventional file sharing services can be used to share images of sport events, parties, conferences, meetings, and the like between associated participants. Typically, an image or a video is captured with a mobile phone, a digital camera, a laptop, or the like and uploaded to a website at a later time. Other users may then review and download the uploaded content.
  • Such file sharing services become especially useful when participants live in different cities, states, or countries, as they allow any participant to view or download related content. In many instances, such web sharing services may be a part of social media sites, such as social networking sites, blogging sites, file sharing sites, and so forth.
  • Even though a user can upload files to a folder named after a specific event, the files are typically uploaded without being otherwise associated with the depicted events. This way of storing and sharing files makes managing images and videos difficult, especially when the folders comprise hundreds or even thousands of files, such that the user may have to sift through a great amount of irrelevant information to find relevant files. For example, birthday party attendees who wish to find themselves in photos uploaded to a file sharing website may have to sort through hundreds of images.
  • Furthermore, some social events, such as business meetings or conferences, may be photographed by multiple participants. Each of them may take photos and store these photos at different file sharing sites. As a result, it may be difficult for other participants to view or download all photos taken at an event as they will need to access multiple sites and may not know where to look.
  • Additionally, some users may wish to find one or more specific people who attended the same social event. Conventionally, it is a time consuming and complex process for a user to sort through all photos or videos in order to find specific people.
  • Participants of a social event may also wish to establish privacy rules for sharing media content depicting the event. Currently, participants are limited to establishing privacy rules with respect to the content they upload themselves, but not with respect to the content uploaded by others.
  • SUMMARY OF THE CLAIMED INVENTION
  • This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
  • A computer-implemented method for sharing digital media content by a server within a communication network comprising a set of user devices may include receiving digital media content from one or more user devices associated with one or more users, determining one or more parts of the digital media content associated with a social event, aggregating the one or more parts associated with the social event to produce aggregated digital media content, and facilitating access to the aggregated digital media content by the one or more users.
  • The digital media content may include a digital photo, an image, a text, audio, and a video. The user device may include a digital camera, a video camera, a computer, a cellular phone, or a personal digital assistant (PDA). The social event may include a conference, a game play, or a leisure event.
  • The method may further include obtaining contextual information related to the digital media content. The contextual information may include a tag, a time, a date, a geographical location, a comment, social event data, and information related to one or more individuals or objects presented in the content. Determining parts of the digital media content associated with the social event may be based on the contextual information. Determining parts of the digital media content associated with the social event may include receiving a user request to associate the digital content with the social event.
  • The method may further include receiving privacy instructions from users recognized in the photo or video, with the privacy instructions including a restriction to share content and a modification of the photo or video.
  • The method may further include generating a website to share the media content associated with the social event. The method may further include implementing an image recognition process for the received digital media content and recognizing one or more individuals captured on a photo or a video, wherein the photo and the video relate to the digital media content. The image recognition process may be based on the contextual information.
  • The method may further include filtering parts of the aggregated digital media content based on contextual information, a user selection, a privacy instruction, or a user's personal information.
  • The method may further include determining users associated with the social event, with the determination based on the received digital media content, contextual information, or image recognition results.
  • The method may further include prompting a user associated with the social event to provide one or more of digital content, a comment, or feedback.
  • In further exemplary embodiments, modules, subsystems, or devices can be adapted to perform the recited steps. Other features and exemplary embodiments are described below.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
  • FIG. 1 shows a block diagram illustrating a system environment suitable for aggregating and sharing digital content.
  • FIG. 2 is a diagram of a sharing system.
  • FIG. 3 is a diagram of a user device.
  • FIG. 4 is a process flow diagram of a method for sharing digital content by a server within a communication network comprising a set of user devices.
  • FIG. 5 is a process flow diagram showing a method for sharing digital content within a communication network comprising a set of user terminals.
  • FIG. 6 shows a graphical user interface aggregating digital media content from one or more sources.
  • FIG. 7 shows a graphical user interface including a photo subjected to an image recognition process.
  • FIG. 8 shows a graphical user interface of a user device suggesting a possible social event occurrence.
  • FIG. 9 shows a graphical user interface of a user device suggesting a possible social event occurrence, according to another example embodiment.
  • FIG. 10 is a diagrammatic representation of an example machine in the form of a computer system within which a set of instructions is executed.
  • DETAILED DESCRIPTION
  • The following detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show illustrations in accordance with example embodiments. These example embodiments, which are also referred to herein as “examples,” are described in enough detail to enable those skilled in the art to practice the present subject matter. The embodiments can be combined, other embodiments can be utilized, or structural, logical and electrical changes can be made without departing from the scope of what is claimed. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope is defined by the appended claims and their equivalents.
  • In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one. In this document, the term “or” is used to refer to a nonexclusive “or,” such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. Furthermore, all publications, patents, and patent documents referred to in this document are incorporated by reference herein in their entirety, as though individually incorporated by reference. In the event of inconsistent usages between this document and those documents so incorporated by reference, the usage in the incorporated reference(s) should be considered supplementary to that of this document; for irreconcilable inconsistencies, the usage in this document controls.
  • The embodiments described herein relate to methods for aggregating and sharing digital content associated with social events. The methods may be implemented within a network such as the Internet.
  • According to the example embodiments discussed below, participants of a social event may generate digital content such as photos, videos, audio, text, and so forth. The digital content may be generated by personal mobile devices including one or more of cellular phones, smart phones, laptops, computers, digital cameras, and the like. The captured digital content may be associated with the social event. As used herein, the term “social event” may refer to any type of event involving a group of people. Typical social events may include conferences, meetings, parties, shows, sporting events, business meetings, and so forth.
  • There may be more than one way of associating captured content with a social event. In one example embodiment, a user may manually input social event information at a user device and indicate which content, captured or to be captured, relates to the social event. In such an embodiment, the captured content is tagged to denote its relation to the social event. For example, a user attending a party may set a digital camera so that all photos taken with the camera are automatically tagged to indicate their relation to the party. Thereafter, the user may sort the taken photos by filtering for those that relate to the party.
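The tagging mode described above can be sketched as follows. This is an illustration only, not the patented implementation: the class names, the event-tag string, and the filtering step are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Photo:
    filename: str
    tags: set = field(default_factory=set)

class Camera:
    """A capture device that stamps new photos with the active event tag."""

    def __init__(self):
        self.active_event = None  # no social event mode enabled yet

    def set_event(self, event_tag):
        # User manually enables the social-event operating mode.
        self.active_event = event_tag

    def capture(self, filename):
        photo = Photo(filename)
        if self.active_event:
            # Every photo taken while the mode is on gets the event tag.
            photo.tags.add(self.active_event)
        return photo

camera = Camera()
camera.set_event("annas-birthday")
p = camera.capture("IMG_0001.jpg")

# Later, the user filters for photos that relate to the party.
party_photos = [ph for ph in [p] if "annas-birthday" in ph.tags]
```

A real device would persist the tag in the file's metadata (e.g., alongside EXIF fields) rather than in an in-memory object.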
  • In another example embodiment, a social event can be automatically detected. Captured content such as photos, videos, and audio may be analyzed for related parts. For example, captured photos may be subjected to an image recognition process. As a result, one or more individuals can be recognized and invited to share and aggregate digital content via a network.
  • Alternatively, a user may manually indicate which parts of the captured photos relate to specific individuals or objects. If a user camera (or mobile phone, laptop, etc.) determines that two or more photos were taken within the same environment (same place, same party, etc.) and/or that one or more individuals appear in two or more photos, the camera may assume that the captured photos relate to the same social event. Accordingly, the camera may suggest that the user turn on a corresponding operating mode to associate the captured photos with the social event.
  • There may be other ways to automatically define a social event. In some exemplary embodiments, the user device may determine that there is a similar active device within a certain predetermined distance. Then, both devices within the certain predetermined distance may invite their respective users to tag pictures as taken at the social event. Alternatively, the user device may indicate that another device is within a certain predetermined distance (e.g., 20 feet) and invite its user to generate digital content related to the social event.
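The proximity check described above can be sketched with a standard great-circle distance. The haversine formula is well known; the device tuples and the 20-foot (≈6.1 m) threshold wiring are illustrative assumptions, not part of the disclosure.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearby(device_a, device_b, threshold_m=6.1):
    """True when two devices' GPS fixes are within ~20 feet."""
    return haversine_m(*device_a, *device_b) <= threshold_m

# Two phones a few meters apart at the same venue:
a = (37.42750, -122.16970)
b = (37.42752, -122.16972)
```

When `nearby` fires, each device could prompt its user to tag new captures as belonging to the shared social event.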
  • The captured photos or videos (in other words, digital media content) may be tagged and associated with one or more social events on the user device or a remote server. The digital media content may also contain contextual information such as titles, a time, a date, conditions, a location (e.g., GPS coordinates), information related to recognized objects or individuals, and so forth.
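One possible shape for the contextual information listed above is sketched below. The field names are assumptions chosen for the example; the patent does not prescribe a schema.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional, Tuple

@dataclass
class ContextualInfo:
    title: str
    timestamp: datetime
    location: Optional[Tuple[float, float]] = None  # (lat, lon) GPS fix
    tags: set = field(default_factory=set)          # e.g. social event tags
    people: set = field(default_factory=set)        # recognized individuals

info = ContextualInfo(
    title="Keynote",
    timestamp=datetime(2011, 7, 7, 9, 30),
    location=(37.7749, -122.4194),
    tags={"devcon-2011"},
    people={"Alice"},
)
```

A record like this could be attached on the user device or assembled on the remote server, and later drives the aggregation and filtering steps.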
  • The user may then upload the captured digital content to a remote server. The content can be hosted on a website and accessed by any person or by a specific group of people, depending on the privacy rules set. The users may sort uploaded photos and aggregate only those which relate to the same social event, which contain a specific individual, or the like. For instance, among thousands of uploaded photos, the users may easily sort only those in which they appear.
  • Furthermore, according to exemplary embodiments disclosed herein, the remote server may aggregate digital content from a number of users participating in the same social event. Different users may upload to the remote server their photos and videos from the event they attended. Accordingly, the remote server may selectively aggregate content from different sources within a single place (site). Alternatively, upon request of the user, the remote server may perform such aggregation. The users, therefore, are provided with a useful tool to sort photos or videos from different sources and select only those in which they are interested. For example, the users may see and download photos in which they appear that were previously uploaded by other participants.
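The cross-source sorting described above can be sketched as a filter over aggregated records. The dict shape is a hypothetical stand-in for the server-side content records; field names are illustrative.

```python
# Content aggregated from several uploaders at the same social event.
uploads = [
    {"file": "a.jpg", "uploader": "bob",   "event": "party", "people": {"alice", "bob"}},
    {"file": "b.jpg", "uploader": "carol", "event": "party", "people": {"carol"}},
    {"file": "c.jpg", "uploader": "carol", "event": "party", "people": {"alice"}},
]

def photos_with(person, items, event=None):
    """Return items showing `person`, optionally limited to one event."""
    return [
        it for it in items
        if person in it["people"] and (event is None or it["event"] == event)
    ]

# Alice finds every party photo she appears in, including photos
# uploaded by other participants.
alice_photos = photos_with("alice", uploads, event="party")
```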
  • In yet other embodiments, the remote server may also perform image recognition of the digital content and automatically determine that specific photos/videos relate to the same social event. The owners of the content may then be notified or invited to tag such content and associate it with the social event. Furthermore, the image recognition process may be used to identify specific individuals. Users may manually tag photos having recognized individuals and assign names, nicknames, and the like. In addition, personal information of the users or recognized individuals can be assigned to the uploaded content and can be used to sort photos. In some embodiments, personal information can be retrieved from other affiliated sites, such as social networking sites. The information assigned to the uploaded content may be used to aggregate and filter content stored on the site. Thus, users can use sorting to filter photos or videos in which searched-for individuals appear.
  • According to additional aspects, the users may set privacy rules within the remote server and/or the site hosting the content. In one example, a user may establish privacy rules for all photos in which he/she appears. In other words, even if content is uploaded by other users (other participants of the same social event), and it is recognized that a certain user is shown in specific photos, these photos can be modified (e.g., blurred in the part where that user is shown, deleted, blocked, and so forth) according to the privacy rules preset by that user. Various privacy rules can be set by individuals and groups, as can be readily understood by those skilled in the art.
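The privacy behavior above can be sketched as a check applied before a photo is served: the strictest rule of any recognized individual wins. This is an assumption-laden illustration; "blur" is represented here as a marker on the served record, whereas a real system would edit pixels in the relevant region.

```python
# Hypothetical per-user privacy presets, applied even to photos
# uploaded by other participants.
PRIVACY = {"alice": "blur", "dave": "block"}

def apply_privacy(photo):
    """Return the photo as it may be shared, or None if sharing is blocked."""
    rules = {PRIVACY.get(p) for p in photo["people"]}
    if "block" in rules:
        return None  # at least one individual forbids sharing entirely
    served = dict(photo)
    # Record which recognized individuals must be blurred before display.
    served["blurred"] = sorted(p for p in photo["people"]
                               if PRIVACY.get(p) == "blur")
    return served

ok = apply_privacy({"file": "a.jpg", "people": {"alice", "bob"}})
blocked = apply_privacy({"file": "b.jpg", "people": {"dave"}})
```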
  • Therefore, embodiments disclosed herein relate to a useful tool that enables people to easily aggregate and share digital content associated with social events via a network. The aggregation can be performed from different sources in association with the same social event. The content, such as photos and videos, can be subjected to an image recognition process to define one or more individuals. Shared content may be filtered to sort only those photos or videos in which social event participants appear. In addition, users may set privacy rules to hide those parts of photos or video in which they appear.
  • The embodiments described herein can be implemented by various means, depending on the application. For example, the embodiments may be implemented in hardware, firmware, software, or in a combination thereof. For hardware implementation, the embodiments may be implemented with processors, controllers, micro-controllers, microprocessors, electronic devices, other electronic units designed to perform the functions described herein, or a combination thereof. Memory can be implemented within a processor or external to the processor. As used herein, the term “memory” refers to any type of long term, short term, volatile, nonvolatile, or other storage device and is not to be limited to any particular type of memory or number of memories, or type of media upon which memory is stored. For firmware and/or software implementation, the embodiments can be implemented with modules such as procedures, functions, and so on, that perform the functions described herein. Any machine-readable medium tangibly embodying instructions can be used in implementing the embodiments described herein.
  • Referring now to the drawings, FIG. 1 shows a block diagram illustrating a system environment 100 suitable for aggregating and sharing digital content. The system environment 100 comprises one or more user devices 102, a sharing system 104, one or more affiliated sites 106, one or more e-mail servers 108, and a network 110. The network 110 may couple the aforementioned modules.
  • The network 110 is a network of data processing nodes interconnected for the purpose of data communication, which may be utilized to communicatively couple various components of the environment 100. The network 110 may include the Internet or any other network capable of communicating data between devices. Suitable networks may include or interface with any one or more of, for instance, a local intranet, a PAN (Personal Area Network), a LAN (Local Area Network), a WAN (Wide Area Network), a MAN (Metropolitan Area Network), a virtual private network (VPN), a storage area network (SAN), a frame relay connection, an Advanced Intelligent Network (AIN) connection, a synchronous optical network (SONET) connection, a digital T1, T3, E1 or E3 line, Digital Data Service (DDS) connection, DSL (Digital Subscriber Line) connection, an Ethernet connection, an ISDN (Integrated Services Digital Network) line, a dial-up port, such as a V.90, V.34 or V.34bis analog modem connection, a cable modem, an ATM (Asynchronous Transfer Mode) connection, or an FDDI (Fiber Distributed Data Interface) or CDDI (Copper Distributed Data Interface) connection. Furthermore, communications may also include links to any of a variety of wireless networks, including WAP (Wireless Application Protocol), GPRS (General Packet Radio Service), GSM (Global System for Mobile Communication), CDMA (Code Division Multiple Access) or TDMA (Time Division Multiple Access), cellular phone networks, GPS, CDPD (cellular digital packet data), RIM (Research in Motion, Limited) duplex paging network, Bluetooth radio, or an IEEE 802.11-based radio frequency network. 
The network 110 can further include or interface with any one or more of an RS-232 serial connection, an IEEE-1394 (Firewire) connection, a Fiber Channel connection, an IrDA (infrared) port, an SCSI (Small Computer Systems Interface) connection, a USB (Universal Serial Bus) connection or other wired or wireless, digital or analog interface or connection, mesh or Digi® networking.
  • As used herein, the term “user device” refers to a computer, a laptop, a tablet computer, a portable computing device, a PDA, a digital camera, a handheld cellular phone, a mobile phone, a smart phone, a cordless telephone, a handheld device having wireless connection capability, or any other electronic device suitable for capturing photos, videos, or audio. The user devices 102 may also receive or transmit data such as captured photos or videos via a cord or cordless network. In one example, the user devices 102 may be configured to browse websites or access remote servers via a network. A user device 102 can also be configured to determine its geographical location based on Global Positioning System (GPS) signals, Internet Protocol (IP) addresses, base station information, and so forth.
  • The user devices 102 can be used to generate digital media content such as photos, images, videos, audio, text, and so forth, and also to transmit the content via the network 110. The user devices 102 can also be used to access the content previously stored at a remote database (e.g., at the sharing system 104).
  • In some embodiments, the user devices 102 may comprise a browser 112 providing the ability to browse and interact with sites on the Internet. In some other embodiments, the user devices 102 may comprise software 114 to communicate with the sharing system 104. In one example, the software 114 is a mobile application embedded in the user device 102.
  • The sharing system 104, according to example embodiments disclosed herein, may be configured to receive digital content from the user devices 102, store the digital content, and share it with the same or other user devices 102 in an enhanced manner. Digital content can be accompanied by additional contextual information such as file names, titles, brief descriptions, times, dates, tags, and so forth. Furthermore, contextual information may also comprise results of image recognition (i.e., information on recognized individuals captured in a photo or video).
  • The sharing system 104 can be implemented as a server having multiple modules and databases. In one example, the sharing system 104 may host a site providing access for its visitors to the system. In other words, the site enables visitors to upload or download digital content to the sharing system 104. The sharing system 104 is described in detail below with reference to FIG. 2.
  • According to example embodiments disclosed herein, the one or more affiliated sites 106 may include any website on the Internet that may provide access to the sharing system 104. In one example, the affiliated sites 106 have a gateway to the sharing system 104 to enable visitors of these sites to upload or download digital content to the sharing system 104.
  • The e-mail server 108 may transfer e-mail messages from one computer to another computer, using client-server application architecture. The e-mail server 108 may be used by one user device 102 to send a message to another user device 102, or may be used by the sharing system 104 to send messages to the user devices 102.
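A notification the sharing system might hand to the e-mail server 108 (e.g., inviting a recognized participant to tag content) can be sketched with the standard library's message class. Addresses and wording are hypothetical; actual delivery would pass the message to an SMTP server via `smtplib`.

```python
from email.message import EmailMessage

def build_invite(to_addr, event_name):
    """Compose an invitation e-mail for a recognized event participant."""
    msg = EmailMessage()
    msg["From"] = "noreply@sharing.example"  # hypothetical sender address
    msg["To"] = to_addr
    msg["Subject"] = f"Photos from {event_name} may include you"
    msg.set_content(
        f"Photos from {event_name} appear to include you. "
        "Visit the event page to tag them or set privacy rules."
    )
    return msg

invite = build_invite("alice@example.com", "DevCon 2011")
```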
  • FIG. 2 is a diagram of the sharing system 104. In this embodiment, the sharing system 104 may include a communication module 202, a website generator 204, an aggregating module 206, an image recognition module 208, a processing module 210, a content database 212, and a member database 214. In other embodiments, the sharing system 104 may include additional, fewer, or different modules for various applications. Furthermore, all modules can be integrated within a single system, or, alternatively, can be remotely located and optionally be accessed via a third party.
  • The sharing system 104 may be implemented as hardware having software installed thereon that implements the steps necessary to operate the sharing system according to example embodiments disclosed herein. The sharing system 104 may also host a content sharing site 116 directed, among other things, to store, aggregate, and share digital content in an enhanced manner as described further.
  • According to example embodiments, the users (social event participants) may first register with the sharing site 116 and create a member profile. The membership details may be stored in the member database 214. The membership profile stored in the database 214 may store personal information, such as a name, a nickname, user credentials, a representative picture/photo/logo, an address, a phone number, a fax number, an e-mail address, a web address, credentials of associated member profiles of social media sites, or any other form of contact and personal information.
  • The users may manage their profiles (personal webpages within the sharing site 116) in the member database 214. In addition, the sharing site 116 may manually or automatically access the one or more affiliated sites 106. The sharing site 116 may also enable communication between users. According to another example, the sharing site 116 may allow the users to share information via the one or more affiliated sites 106.
  • The users may also upload digital content to the sharing system 104, which in turn can be stored in the content database 212. The users may also generate web pages with the help of the sharing system 104, which are associated with the uploaded content. The web pages may provide access to the content stored in the content database 212 to one or more different users. Accordingly, each generated web page may relate to the content of one social event. In addition, web pages can be managed by one or more users. The web pages may aggregate content from one or more users, and this content can be sorted or filtered according to example embodiments disclosed herein.
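The per-event web page generation described above can be sketched as follows: one page aggregating links to all content items tied to a social event. The markup and record shapes are illustrative assumptions, not the website generator 204 itself.

```python
from html import escape

def render_event_page(event_name, items):
    """Render a minimal HTML page listing an event's aggregated content."""
    rows = "\n".join(
        f'  <li><a href="{escape(it["url"])}">{escape(it["title"])}</a>'
        f' (by {escape(it["uploader"])})</li>'
        for it in items
    )
    return (
        f"<html><body>\n<h1>{escape(event_name)}</h1>\n<ul>\n{rows}\n</ul>\n"
        "</body></html>"
    )

page = render_event_page("DevCon 2011", [
    {"url": "/content/1.jpg", "title": "Keynote", "uploader": "bob"},
])
```

A production page would also carry the management controls mentioned below (download, commentary, re-posting to affiliated sites).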
  • The communication module 202 may be configured to connect the sharing system 104 to the one or more user devices 102, the one or more affiliated sites 106, and the one or more e-mail servers 108 via the network 110. The connection and data transfer may be provided via an Application Programming Interface (API). The communication module 202 is also configured to provide communication functionality between all modules of the sharing system 104.
  • In particular, the communication module 202 may receive user requests to store digital content in the content database 212, manage member profiles in the member database 214, access and manage content stored in the content database 212, and the like. The communication module 202 can process all such user requests.
  • Pursuant to the example embodiment, the website generator 204 may be configured to generate websites comprising the uploaded content. The websites may contain content originating from one source (i.e. one user), or contain content aggregated from different sources (i.e. obtained from different users). Aggregation of the content can be performed based on assigned contextual information. In other words, aggregation can be performed such that all content related to the same social event is collected on a single website. Alternatively, aggregation can be performed relative to those photos/videos, which comprise certain individuals or objects. It should be apparent that different aggregation methods may be applied.
  • The websites may also comprise functionality to manage the digital content. For example, different users, who have access to such websites, may download content, add commentary, re-post to any affiliated website (e.g., social networking website), and so forth.
  • The aggregating module 206 is configured to aggregate content stored in the content database 212. Specifically, upon request of a user, the aggregating module 206 may sort and deliver to the user (or to the generating website) specific parts of the content. The sorting process can be based on contextual information assigned to every piece of the content. In particular, the aggregating module 206 may sort photos or videos (or other parts of the digital content) that relate to the same social event, show the same individuals or objects, were captured in one day and/or in one location, or the like. The aggregation process can be performed using different methods, which are discussed below.
  • The image recognition module 208 is configured to selectively recognize individuals, objects, characters, and the like in photos and videos stored in the database 212. The recognition process can be any one of those known in the art, and specifically it may recognize individuals by their faces, appearance, and other factors. The users may be prompted to facilitate the recognition process. For example, users may define an individual in a photo or add a description of the recognized objects. The results of recognition can be added to the contextual information assigned to the content. Further, this information can be used to aggregate or sort content.
  • The processing module 210 is configured to analyze the uploaded content and determine parts of the content related to the same social events. In one example, when a user uploads content (e.g., photos) to the sharing system 104, the processing module 210 may analyze the contextual information of the content to determine which parts of the content (i.e., photos) relate to the same social event. In other words, if the processing module 210 determines that photos were captured in one place, in one day, by one user, have the same tags, and so forth, it may be assumed that these photos relate to a social event, and the user is prompted to confirm it. The processing module 210 may analyze not only the content uploaded by one user, but also the content uploaded by others.
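The same-event heuristic described above can be sketched as a pairwise test: two items are provisionally grouped when they share the day and either roughly the same place or a tag, after which the user would be prompted to confirm. The thresholds, the crude distance approximation, and the record shapes are all illustrative assumptions.

```python
from datetime import datetime

def same_event(a, b, max_km=1.0):
    """Heuristically decide whether two items belong to one social event."""
    same_day = a["time"].date() == b["time"].date()
    shared_tag = bool(a["tags"] & b["tags"])
    # Crude flat-earth distance in km, adequate at city scale (~38° lat).
    dlat = (a["loc"][0] - b["loc"][0]) * 111.0
    dlon = (a["loc"][1] - b["loc"][1]) * 111.0 * 0.79
    close = (dlat ** 2 + dlon ** 2) ** 0.5 <= max_km
    return same_day and (close or shared_tag)

x = {"time": datetime(2011, 7, 7, 20), "loc": (37.427, -122.169), "tags": {"party"}}
y = {"time": datetime(2011, 7, 7, 22), "loc": (37.428, -122.170), "tags": set()}
z = {"time": datetime(2011, 7, 8, 9), "loc": (37.427, -122.169), "tags": {"party"}}
```

Here `x` and `y` group together (same evening, same venue), while `z`, taken the next day, does not, even though it shares a tag with `x`.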
  • The content database 212 may store content uploaded by users. This database may also comprise contextual information assigned to the content. The member database 214 may store membership-related information, such as user profiles, personal information, and so forth. The users may access the databases 212 and 214 via the communication module 202 to review, modify, or delete information stored therein.
  • FIG. 3 is a diagram of the user device 102. In this embodiment, the user device 102 may include a communication module 302, a content generator 304, an image recognition module 306, a processing module 308, and a database 310. In other embodiments, the user device 102 may include additional, fewer, or different modules for various applications. Furthermore, all modules can be integrated within a single system, or, alternatively, can be remotely located and optionally be accessed via a third party.
  • The user device 102 may be implemented as hardware having software installed thereon that implements the steps necessary to operate the user device according to example embodiments disclosed herein.
  • The communication module 302 may be configured to couple the user device 102 to the sharing system 104, the one or more other user devices 102, the one or more affiliated sites 106, and the one or more e-mail servers 108 via the network 110. The coupling and data transfer may be provided via an API. The communication module 302 is also configured to provide communication functionality between all modules of the device 102.
  • In particular, the communication module 302 may generate user requests to store digital content to the content database 212 located at the sharing system 104, manage member profiles in the member database 214, access and manage content stored in the content database 212, and the like.
  • The content generator 304 may be configured to generate digital media content such as photos, videos, audio, and text. In one example, the content generator 304 is a digital camera embedded in the user device 102. Alternatively, the content generator 304 may be an input device to generate text data. The content generated by the content generator 304 may be augmented with contextual information such as a title, a file name, a date, a time, the name of the device generating the content, and so forth.
  • The image recognition module 306 may be configured to selectively recognize individuals, objects, characters, and the like in photos and videos stored in the database 310. The recognition process can be any one known in the art, and specifically it may recognize individuals by their faces, appearance, and other factors. The users may be prompted to facilitate the recognition process. For example, users may define an individual in a photo or add a description of the recognized objects. The results of recognition can be added to the contextual information assigned to the content. Further, this information can be used to aggregate or sort content.
  • The processing module 308 is configured to analyze the uploaded content and determine which parts of the content relate to the same social event. In one example, when a user uploads content (e.g., photos) to the sharing system 104, the processing module 308 may analyze contextual information of the content to determine which parts of the content (i.e., which photos) relate to the same social event. In other words, if the processing module 308 determines that photos were captured in one place on one day, have the same tags, and the like, it is assumed that these photos relate to a social event, and the user is prompted to confirm this. Furthermore, the processing module 308 may determine that similar devices exist within a proximate distance of the user device 102. This determination can be made by ascertaining a wireless environment, a location (e.g., GPS coordinates), an IP address, and so forth.
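As a rough sketch of the clustering heuristic described above (same place, same day, shared tags), the following groups photo metadata into candidate social events. The `Photo` record, its field names, and the distance threshold are illustrative assumptions, not part of the disclosure:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical metadata record standing in for the contextual information
# assigned to each content file.
@dataclass
class Photo:
    name: str
    taken: datetime
    lat: float
    lon: float
    tags: frozenset = field(default_factory=frozenset)

def group_into_events(photos, max_km=1.0):
    """Group photos captured on the same day within ~max_km of each other,
    or sharing at least one tag, into candidate social events."""
    events = []
    for p in sorted(photos, key=lambda ph: ph.taken):
        for ev in events:
            anchor = ev[0]
            same_day = anchor.taken.date() == p.taken.date()
            # Crude flat-earth distance (1 degree ~ 111 km); adequate for
            # a coarse ~1 km threshold.
            close = (abs(anchor.lat - p.lat) + abs(anchor.lon - p.lon)) * 111 < max_km
            if same_day and (close or (anchor.tags & p.tags)):
                ev.append(p)
                break
        else:
            events.append([p])
    return events
```

In a real system each resulting group would then be presented to the user for confirmation, as the paragraph above describes.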
  • The content database 310 may store content generated by the user device 102. This database 310 may also comprise contextual information, which is assigned to the content.
  • FIG. 4 is a process flow diagram showing a method 400 for sharing digital content by a server in a communication network comprising a set of user devices. The method 400 may be performed by processing logic that may comprise hardware (e.g., dedicated logic, programmable logic, and microcode), software (such as software run on a general-purpose computer system or a dedicated machine), or a combination of both. In one example embodiment, the processing logic resides at the sharing system 104.
  • The method 400 can be performed by the various modules discussed above with reference to FIG. 2. Each of these modules can comprise processing logic. It will be appreciated by one of ordinary skill that examples of the foregoing modules may be virtual, and instructions said to be executed by a module may, in fact, be retrieved and executed by a processor. The foregoing modules may also include memory cards, servers, and/or computer discs. Although various modules may be configured to perform some or all of the various steps described herein, fewer or more modules may be provided and still fall within the scope of example embodiments.
  • As shown in FIG. 4, the method 400 may commence at step 402 with the sharing system 104 receiving digital media content from a user device 102. Receiving can be performed in any suitable manner. The content at this step may also be stored in the content database 212.
  • At step 404, the sharing system 104 obtains contextual information related to the received content. In one example, such contextual information is part of the content (e.g., embedded therein). In another example, the sharing system 104 may generate or update such contextual information based on a member profile (e.g., content can be assigned with information related to the member profile of the user who uploaded the content). Alternatively, the sharing system 104 may utilize a combination of these techniques.
  • At step 406, the sharing system 104 may determine parts of the content associated with a social event. All of the uploaded content files may relate to a single social event, or only some of them. The determination process can be implemented in different ways. In one example, the sharing system 104 may analyze the contextual information assigned to the content and reveal that captured photos or videos relate to the same social event. Alternatively, or in addition, the sharing system 104 may receive a user selection of specific parts of the content as being related to the same social event. For example, the user may be prompted to select those files that relate to the same social event. In yet another example, the sharing system 104 may perform an image recognition process to reveal individuals, objects, and the like. Based on the image recognition process, it may be concluded that certain images relate to the same social event. For example, as a result of the image recognition, the sharing system 104 may determine that one individual appears two or more times in different photos or videos. Alternatively, it may be revealed that all photos/videos were captured within the same environment (e.g., in one room). Those skilled in the art will readily understand that different approaches are applicable to determining that a number of files relate to the same social event.
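One way to realize the image-recognition-based grouping above (files linked when the same individual appears in them) is a connected-components pass over recognition results, sketched below. The input format — a mapping from file name to a set of recognized person identifiers — is an assumption for illustration:

```python
def link_by_individuals(recognized):
    """recognized: dict mapping file name -> set of person ids found in it.
    Files that share at least one recognized individual are linked into
    the same candidate social event (union-find / connected components)."""
    parent = {f: f for f in recognized}

    def find(x):
        # Path-halving find.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    by_person = {}
    for f, people in recognized.items():
        for p in people:
            if p in by_person:
                union(f, by_person[p])  # person seen before: link the files
            else:
                by_person[p] = f

    groups = {}
    for f in recognized:
        groups.setdefault(find(f), set()).add(f)
    return list(groups.values())
```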
  • At step 408, the sharing system 104 may generate a website having one or more web pages related to the received content. In particular, the website may comprise a part of or the entire content received at step 402. In one example, the website may comprise only those content files that relate to the same social event. The generation of the website may be performed in line with a user's predetermined criteria or selections in any suitable manner. An example of such a website is shown in FIG. 6. The website may be made public and can be accessed by any user or by a preset group of users to share the content among them.
  • At step 410, the sharing system 104 may determine one or more other users associated with the social event. In particular, such users may be identified through analysis of the content, the contextual information assigned to the content, the results of the image recognition process, and so forth. It can also be determined whether these users are registered with the sharing website 116. If they are not registered, they can be invited to join the community and register with the website 116.
  • At step 412, the users determined as being related to the same social event may optionally be prompted to review the content of the generated website, to share their content, to register with the sharing website, to provide their personal information, to leave comments or feedback, to set their privacy instructions for publishing such content on the Internet, and so forth. Any individual recognized in the stored photos or videos can assign privacy instructions. The privacy instructions may comprise one or more of restrictions on sharing content in which they appear and visual modification of the photos or videos (e.g., blurring parts of the photos in which they appear).
  • At step 414, the sharing system 104 may intelligently aggregate the digital media content associated with the same social event from one or more sources. More specifically, in one example, the sharing system 104 may aggregate the received content parts from one user such that these parts relate to a single social event. Alternatively, if it was previously determined that there were several attendees at the social event, and they have also uploaded their content related to the same social event, the sharing system 104 may intelligently aggregate this content from all such users. According to example embodiments disclosed herein, the website generated at step 408 may be updated to include the content from all these sources. In some other embodiments, the content can be aggregated from different databases, including the content database 212, the user device database 310, the databases of any affiliated website 106, and the like. For instance, if some participants of the social event are not registered with the sharing website 116, but have accounts at affiliated social media websites 106 (such as social networking websites), the sharing system 104 may be configured to aggregate the relevant content from these sites (e.g., it may access member profiles and download photos or videos previously stored within these affiliated sites). In general, any suitable approach for aggregating the content from different sources can be applied.
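A minimal sketch of the multi-source aggregation at step 414 might merge content listings from several sources (the local database, other attendees' uploads, affiliated sites) while de-duplicating repeats. The item shape and the `checksum` key are illustrative assumptions:

```python
def aggregate_sources(*sources):
    """Merge content listings from several sources, dropping duplicates.
    Each item is assumed to be a dict with a stable 'checksum' key so that
    the same file uploaded by two attendees appears only once."""
    seen = set()
    merged = []
    for source in sources:
        for item in source:
            if item["checksum"] not in seen:
                seen.add(item["checksum"])
                merged.append(item)
    return merged
```

The first source wins on duplicates, so listing the event owner's database first keeps their copies (and any contextual information attached to them) authoritative.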
  • At step 416, users are provided access to the generated website having the aggregated content. Different users may be provided with different levels of access to the content. Some may only review content, while others may be allowed to add, update, or delete content, depending on the application.
  • Thus, a single website comprising aggregated content related to the same social event can be generated. This becomes useful when attendees of the event live in different cities, states, or countries, so that they do not need to travel from site to site to review all photos or videos captured during the social event. In addition, the users may apply filtering to the aggregated digital media content included in the generated website. The filtering process can be based on the contextual information, user selections, privacy instructions, membership parameters, and so forth.
  • In one example, the user may filter the aggregated content such that it comprises only those parts (or files) that relate to a specific individual or object. For instance, the user may wish to sort all photos/videos on the website to see only those in which he/she appears. If this user is recognized in photos/videos (which can be determined by processing contextual information), then only those files can be provided for review. Accordingly, the generated website may dynamically update depending on the users' wishes.
  • In yet another example, users indicated or determined to be present in the uploaded photos/videos may be prompted to set privacy instructions regarding the sharing of information about them. For instance, these users may allow anyone to review their images. Alternatively, the users may block access to the corresponding photos/videos, or those parts of the photos/videos may be blurred, recolored, excluded, or the like. Such privacy instructions related to specific users appearing in the photos/videos may be attached to the contextual information and stored along with the content in the content database 212, or attached to the member profile stored in the member database 214. Thus, users may easily set privacy instructions that can be applied both to content uploaded by the user and to content uploaded by others.
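The privacy instructions described above could be applied at serving time along these lines. The `"allow"`/`"blur"`/`"block"` vocabulary and the item shape are assumptions for illustration; actual blurring of image regions is left out of scope here, with the helper only reporting which items need it:

```python
def apply_privacy(items, instructions):
    """Filter/annotate aggregated items per the privacy instructions of the
    people recognized in them. instructions maps person id -> 'allow',
    'blur', or 'block'. Returns (visible_items, parts_to_blur)."""
    visible, to_blur = [], []
    for item in items:
        people = item.get("people", set())
        if any(instructions.get(p) == "block" for p in people):
            continue  # a recognized person blocked sharing this item entirely
        blurred = {p for p in people if instructions.get(p) == "blur"}
        if blurred:
            # Item stays visible, but these individuals must be obscured.
            to_blur.append((item["name"], blurred))
        visible.append(item)
    return visible, to_blur
```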
  • FIG. 5 is a process flow diagram showing a method 500 for sharing digital content in a communication network comprising a set of user terminals. The method 500 may be performed by processing logic that may comprise hardware (e.g., dedicated logic, programmable logic, and microcode), software (such as software run on a general-purpose computer system or a dedicated machine), or a combination of both. In one example embodiment, the processing logic resides at the sharing system 104.
  • The method 500 can be performed by the various modules discussed above with reference to FIG. 3. Each of these modules can comprise processing logic. It will be appreciated by one of ordinary skill that examples of the foregoing modules may be virtual, and instructions said to be executed by a module may, in fact, be retrieved and executed by a processor. The foregoing modules may also include memory cards, servers, and/or computer discs. Although various modules may be configured to perform some or all of the various steps described herein, fewer or more modules may be provided and still fall within the scope of example embodiments.
  • As shown in FIG. 5, the method 500 may commence at step 502 with the user device 102 capturing digital media content. The digital media content may comprise digital photos, images, text, audio, videos, and so forth. The user device 102 may be a digital camera, a video camera, a computer, a cellular phone, a PDA, or any other suitable device with the ability to capture photos, video, and audio. At this step, the generated content may be temporarily stored in the database 310.
  • At step 504, the user device 102 may determine that the captured digital content is associated with the same social event. Such a determination can be made in various ways.
  • In one example, the user may simply indicate (select) that the captured content relates to a certain social event. The user may define its name and assign other relevant information such as the date, time, location, participant names and their appearance in captured photos/videos, commentary, and other information, which is defined as contextual information in terms of this document. The user may also set a mode of the user device 102 such that further generated content is automatically assigned the necessary tags denoting its relation to the social event.
  • In another example, the user device 102 may automatically determine that the captured content relates to the same social event. For this process, the user device 102 may perform an image recognition process to selectively recognize individuals and objects in photos and videos. The recognition process can be any one known in the art suitable for recognizing individuals by their faces, appearance, and other factors. Accordingly, when the user device 102 determines that the same individual is captured in two or more photos/videos, or that all photos/videos are captured within the same environment (in one room, in one place), the user device 102 may assume that a social event took place. The user may then be prompted to confirm the assumption and assign corresponding contextual information to such content.
  • In still another example, the user device 102 may determine its geographical location based on data received from a GPS receiver embedded in the user device, from databases of a cellular network, or the like. When more than a certain number of pictures/videos are taken in one place, the user device 102 may determine that the captured content relates to the same social event.
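The location-based heuristic (many photos captured in one place) could be sketched as follows. The 100-metre radius and five-photo threshold are assumed values for illustration, not figures from the disclosure:

```python
import math

def photos_suggest_event(fixes, max_m=100.0, min_count=5):
    """Return True if at least min_count photo GPS fixes (lat, lon pairs in
    degrees) fall within max_m metres of the first fix."""
    if not fixes:
        return False
    lat0, lon0 = fixes[0]
    r = 6371000.0  # mean Earth radius in metres

    def dist(lat, lon):
        # Haversine great-circle distance from the first fix.
        p1, p2 = math.radians(lat0), math.radians(lat)
        dp = math.radians(lat - lat0)
        dl = math.radians(lon - lon0)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    return sum(1 for lat, lon in fixes if dist(lat, lon) <= max_m) >= min_count
```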
  • In another example, the user device 102 may determine the presence of similar devices within a proximate distance. The determination can use any suitable technique, including Bluetooth, Wireless Ethernet, Wi-Fi, WiMAX, and others. When the user device 102 determines that similar devices are located within the proximity distance (e.g., twenty feet or less) for a certain time period (e.g., fifteen minutes or more), it may be assumed that a social event is taking place. Again, the user of the user device may be prompted to define any content generated or to be generated as being related to the social event.
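The dwell-time check on nearby devices might look like the following sketch, where `sightings` stands for periodic scan results (e.g., from Bluetooth discovery) and the fifteen-minute threshold matches the example above; the data shape is an assumption:

```python
def nearby_long_enough(sightings, min_seconds=15 * 60):
    """sightings: list of (device_id, timestamp_seconds) events from
    periodic proximity scans. Returns the set of device ids observed over
    a span of at least min_seconds, suggesting a shared social event."""
    first, last = {}, {}
    for dev, t in sightings:
        first.setdefault(dev, t)  # keep the earliest sighting
        last[dev] = t             # keep the latest sighting
    return {d for d in first if last[d] - first[d] >= min_seconds}
```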
  • Those skilled in the art will understand that different approaches for determining the presence of a social event can be applied, and they are not limited to those discussed herein.
  • At step 506, the captured content is associated with the social event. In other words, the content may be assigned with tags or contextual information to indicate a relation of the content to the social event. As mentioned, the contextual information may comprise, among other things, information that a particular file relates to a certain social event, time, date, location, as well as containing titles, a name of the social event, data about recognized individuals, and so forth.
  • At step 508, the user device 102 may determine one or more attendees of the social event. Primarily, such a determination can be based on the same algorithms as discussed with reference to step 504. More specifically, the user can be prompted to indicate one or more attendees (in other words, participants, as previously mentioned) of the social event. The user may indicate parts of photos or videos in which such attendees appear. The user may also be prompted to indicate their names and personal information, or to provide a link to their member profile within the sharing website 116 or any affiliated website 106 (such as any social networking site).
  • In some other embodiments, the attendees may be recognized in photos/videos automatically in conjunction with the image recognition process. The user may only need to assign names to the recognized individuals. In yet another example, the attendees may be determined automatically as those who have a similar user device 102 and are located within a predetermined proximity distance from the user. When such devices are within the proximity range, and they are also used to capture photos, video, or audio, such devices can be considered as assigned to the attendees. Different approaches can be applied to determine the attendees.
  • Furthermore, according to some embodiments, the step 506 of assigning the captured content to the social event may be performed when at least one attendee is determined. Alternatively, when it is determined that the social event is taking place, the one or more attendees are defined. Information about attendees may be stored as part of the contextual information assigned to the content.
  • At step 510, the user device 102 may transmit the captured content along with the assigned contextual information to a remote server. In one example, the remote server is the sharing system 104 as described with reference to FIG. 2. The remote server then allows different users to access the uploaded content. As mentioned, different users may be provided with different levels of access to the content depending on predetermined settings and the privacy instructions of the attendees appearing in the photos or videos.
  • At step 512, the attendees determined as being related to the same social event may optionally be prompted to share the content captured by their user devices with the same remote server. In addition, the attendees may be requested to register with the sharing website, provide their personal information, leave comments or feedback, set their privacy instructions for publishing content on the Internet, and so forth.
  • FIG. 6 is a simplified illustration of a graphical user interface 600, which aggregates digital media content from one or more sources. The graphical user interface 600 may be represented as a window (e.g., a browser window) to show the aggregated content. The aggregation of the content can be performed according to the technique described with reference to FIG. 4. The graphical user interface 600 may be presented on a screen of the user device 102 via the browser 112 or as ad hoc software 114.
  • The user interface 600 may comprise one or more content sections 602. Each section 602 may comprise an image 604, a video 606 (i.e., an embedded media player to play the video), audio 608 (i.e., an embedded media player to play the audio), or text 610. A section 602 may combine one or more images, videos, text items, and audio clips.
  • The text 610 may optionally comprise comments and a part of the contextual information (e.g., date and time of capture, location, social event data, recognized attendees, names, titles, personal information, links to affiliated websites, links to personal profiles within the sharing website 116 or any social networking site, ranks, feedback, and so forth).
  • The user interface 600 may aggregate multiple sections 602, with each section 602 originating from different users/attendees. When the graphical user interface 600 is accessed for the first time, it may comprise all content captured during a certain social event. However, users are provided with the ability to sort the presented content. The user may select a preferred sorting criterion from a drop-down menu 612 and press an actionable "Sort" button 614. Accordingly, the user may sort content by title, date, time, location, origination, recognized individuals, type of content, and so forth.
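The sort control described above could map onto a simple server-side helper like this sketch; the allowed keys mirror the criteria listed in the paragraph, and the section shape (a dict carrying contextual information) is an assumption:

```python
def sort_sections(sections, key):
    """Sort content sections by a user-selected criterion drawn from the
    drop-down menu. Unknown criteria are rejected rather than ignored."""
    allowed = {"title", "date", "time", "location", "origination", "type"}
    if key not in allowed:
        raise ValueError(f"unsupported sort key: {key}")
    # Missing values sort first via the empty-string default.
    return sorted(sections, key=lambda s: s.get(key, ""))
```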
  • The user interface 600 may also comprise an actionable button 616 to submit new content related to the same social event. When this button is activated, the user may be driven to a submission site to upload corresponding content. In other embodiments, the graphical user interface 600 may include additional, fewer, or different modules for various applications.
  • FIG. 7 is a simplified illustration of a graphical user interface 700 showing a photo 702 (e.g., section 604), which was subjected to an image recognition process. The graphical user interface 700 may be presented on a screen of the user device 102 via the browser 112 or as ad hoc software 114.
  • As shown, the user interface 700 may comprise a photo of a group of people. When the people in the photo are recognized, a name is attributed to each individual in the photo. The names can appear on top of the photo, and they can be presented as clickable (i.e., selectable) targets. In one example, when the user clicks on (or selects) such a target, the user is driven to a web page comprising the content from the same social event, sorted to show all photos or videos of the selected individual.
  • In some embodiments, individuals in the photo may be unrecognized or not assigned personal information. Thus, users may be provided with an option to identify such individuals. Accordingly, the graphical user interface 700 may comprise an actionable button 704 to indicate an individual in the photo. In particular, when the button 704 is pressed, the user is prompted to select a part of the photo to indicate the individual and provide his/her personal information such as a name, a title, links to personal member profiles, and the like. In other embodiments, the graphical user interface 700 may include additional, fewer, or different modules for various applications.
  • FIG. 8 is a simplified illustration of a graphical user interface 800 of the user device 102 when it has determined that a social event is possibly taking place.
  • As shown in the figure, the user device 102 (e.g., a digital camera) has determined through an image recognition process that the user has taken more than five photos in the same place. The graphical user interface 800 prompts the user to indicate whether this is a social event. The user may select a "Yes" button 802 to indicate his/her desire to assign to the captured photos contextual information describing the social event and its attendees, and/or to indicate that all following photos are also related to the same social event. Alternatively, by pressing the "No" button 804, the user may continue without turning the "social mode" on. In other embodiments, the graphical user interface 800 may include additional, fewer, or different modules for various applications.
  • FIG. 9 is a simplified illustration of a graphical user interface 900 of the user device 102 when it has determined that a social event is possibly taking place, according to another example embodiment.
  • As shown, the user device 102 (e.g., a digital camera or a cellular phone) has determined that there are two similar devices within the predetermined proximity distance, which may also be taking photos of the same social event. Hence, the graphical user interface 900 may prompt the user to create a social event. The user may select a “Yes” button 902 to indicate his/her desire to create a “Social Event.” If pressed, any photos captured with this device will be assigned with corresponding contextual information. In addition, the users of the nearby detected devices will also be invited to participate in the social event and to share the content they may also capture with their devices. Alternatively, by pressing the “No” button 904, the user may continue without turning the “social mode” on. In other embodiments, the graphical user interface 900 may include additional, fewer, or different modules for various applications.
  • FIG. 10 shows a diagrammatic representation of a computing device for a machine in the example electronic form of a computer system 1000, within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein can be executed. In example embodiments, the machine operates as a standalone device or can be connected (e.g., networked) to other machines. In a networked deployment, the machine can operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a PDA, a cellular telephone, a portable music player (e.g., a portable hard drive audio device, such as a Moving Picture Experts Group Audio Layer 3 (MP3) player), a web appliance, a network router, a switch, a bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
  • The example computer system 1000 includes a processor or multiple processors 1002 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), and a main memory 1004 and a static memory 1006, which communicate with each other via a bus 1008. The computer system 1000 can further include a video display unit 1010 (e.g., a liquid crystal display (LCD) or cathode ray tube (CRT)). The computer system 1000 also includes at least one input device 1012, such as an alphanumeric input device (e.g., a keyboard), a cursor control device (e.g., a mouse), a microphone, a digital camera, a video camera, and so forth. The computer system 1000 also includes a disk drive unit 1014, a signal generation device 1016 (e.g., a speaker), and a network interface device 1018.
  • The disk drive unit 1014 includes a computer-readable medium 1020 which stores one or more sets of instructions and data structures (e.g., instructions 1022) embodying or utilized by any one or more of the methodologies or functions described herein. The instructions 1022 can also reside, completely or at least partially, within the main memory 1004, the static memory 1006, and/or within the processors 1002 during execution thereof by the computer system 1000. The main memory 1004 and the processors 1002 also constitute machine-readable media.
  • The instructions 1022 can further be transmitted or received over the network 110 via the network interface device 1018 utilizing any one of a number of well-known transfer protocols (e.g., Hyper Text Transfer Protocol (HTTP), CAN, Serial, and Modbus).
  • While the computer-readable medium 1020 is shown in an example embodiment to be a single medium, the term “computer-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present application, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions. The term “computer-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media. Such media can also include, without limitation, hard disks, floppy disks, flash memory cards, digital video disks, random access memory (RAM), read only memory (ROM), and the like.
  • The example embodiments described herein can be implemented in an operating environment comprising computer-executable instructions (e.g., software) installed on a computer, in hardware, or in a combination of software and hardware. The computer-executable instructions can be written in a computer programming language or can be embodied in firmware logic. If written in a programming language conforming to a recognized standard, such instructions can be executed on a variety of hardware platforms and for interfaces to a variety of operating systems. Although not limited thereto, computer software programs for implementing the present method can be written in any number of suitable programming languages such as, for example, Hypertext Markup Language (HTML), Dynamic HTML, Extensible Markup Language (XML), Extensible Stylesheet Language (XSL), Document Style Semantics and Specification Language (DSSSL), Cascading Style Sheets (CSS), Synchronized Multimedia Integration Language (SMIL), Wireless Markup Language (WML), Java™, Jini™, C, C++, Perl, UNIX Shell, Visual Basic or Visual Basic Script, Virtual Reality Markup Language (VRML), ColdFusion™ or other compilers, assemblers, interpreters or other computer languages or platforms.
  • Thus, methods and systems for aggregating and sharing digital content associated with social events via a network have been described. The disclosed technique provides a useful tool enabling people to easily aggregate and share digital content such as photos, videos, and the like associated with social events via a network. The aggregation can be performed from different sources in association with the same social event. The content can also be subjected to an image recognition process to identify one or more individuals appearing in the photos/videos. Shared content may also be filtered to show only those photos or videos in which certain participants appear. In addition, users may set privacy rules to hide those parts of photos or videos in which they appear.
  • Although embodiments have been described with reference to specific example embodiments, it will be evident that various modifications and changes can be made to these example embodiments without departing from the broader spirit and scope of the present application. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.

Claims (30)

What is claimed:
1. A computer-implemented method for generating a website to share digital media content by a server within a communication network comprising a set of user devices, the method comprising:
receiving digital media content from one or more user devices associated with one or more users;
determining one or more parts of the digital media content associated with a social event;
aggregating the one or more parts associated with the social event to produce aggregated digital media content; and
collecting the aggregated digital media content on a website.
2. The computer-implemented method of claim 1, wherein the digital media content comprises one or more of a digital photo, an image, a text, an audio, and a video.
3. The computer-implemented method of claim 1, wherein the user device comprises one or more of a digital camera, a video camera, a computer, a cellular phone, and a personal digital assistant (PDA).
4. The computer-implemented method of claim 1, wherein the social event comprises one or more of a meeting, a conference, a game play, and a leisure event.
5. The computer-implemented method of claim 1, further comprising obtaining contextual information related to the digital media content.
6. The computer-implemented method of claim 5, wherein the contextual information comprises one or more of a tag, a time, a date, a geographical location, a comment, social event data, and information related to one or more individuals or objects present in the content.
7. The computer-implemented method of claim 5, wherein the determining parts of the digital media content associated with the social event is based on the contextual information.
8. The computer-implemented method of claim 1, wherein the determining parts of the digital media content associated with the social event comprises receiving a user request to associate the digital content with the social event.
9. The computer-implemented method of claim 8, further comprising receiving privacy instructions from one or more users recognized on the photo or the video, wherein the privacy instructions comprise one or more of a restriction to share content and a modification of a photo or a video.
10. The computer-implemented method of claim 1, wherein the digital media content of the website originates from one source, or is aggregated from different sources.
11. The computer-implemented method of claim 1, further comprising:
implementing an image recognition process for the received digital media content; and
recognizing one or more individuals captured in a photo or a video, wherein the photo or the video is part of the digital media content.
12. The computer-implemented method of claim 11, wherein the image recognition process is based on the contextual information.
13. The computer-implemented method of claim 1, further comprising filtering parts of the aggregated digital media content based on one or more of contextual information, a user selection, a privacy instruction, and user personal information.
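The filtering of claim 13 might combine a per-user privacy instruction with an explicit user selection. The sketch below is a minimal illustration under assumed data shapes (`items` as dicts with `id` and recognized `users`), not the specification's implementation.

```python
from typing import Optional

def filter_content(items: list, privacy: dict, selection: Optional[set] = None) -> list:
    """Drop items in which any recognized user restricted sharing; if an
    explicit user selection is supplied, keep only the selected items."""
    kept = [i for i in items
            if not any(privacy.get(u) == "restrict" for u in i["users"])]
    if selection is not None:
        kept = [i for i in kept if i["id"] in selection]
    return kept
```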
14. The computer-implemented method of claim 1, further comprising determining one or more users associated with the social event, wherein the determination is based on one or more of the following: received digital media content, contextual information, and image recognition results.
15. The computer-implemented method of claim 14, further comprising prompting a user associated with the social event to provide one or more of digital content, a comment, and feedback.
16. A system for generating a website to share digital media content, the system comprising:
at least one subsystem configured to receive digital media content from one or more user devices;
at least one subsystem configured to determine one or more parts of the digital media content associated with a social event;
at least one subsystem configured to aggregate the one or more parts associated with the social event to produce aggregated digital media content;
at least one subsystem configured to collect the aggregated digital media content on a website; and
a memory coupled to the at least one subsystem, the memory comprising computer code for the at least one subsystem.
17. A computer-readable medium having instructions stored thereon, which when executed by one or more computers, cause the one or more computers to:
receive digital media content from one or more user devices associated with one or more users;
determine one or more parts of the digital media content associated with a social event;
aggregate the one or more parts associated with the social event to produce aggregated digital media content; and
collect the aggregated digital media content on a website.
18. A computer-implemented method for generating a website to share digital media content within a communication network comprising a set of user devices, the method comprising:
receiving digital media content captured by a user device;
determining that the digital media content is associated with a social event;
associating the digital media content with the social event;
collecting the associated digital media content on a website; and
transmitting the associated digital media content to a remote server.
19. The computer-implemented method of claim 18, wherein the determination that the digital media content is associated with the social event comprises receiving a user selection indicating that the captured media, or the media to be captured, relates to the social event.
20. The computer-implemented method of claim 18, further comprising:
implementing an image recognition process of the digital media content; and
identifying one or more individuals captured in a photo or a video, wherein the photo or the video is part of the digital media content.
21. The computer-implemented method of claim 20, further comprising identifying one or more attendees of the social event.
22. The computer-implemented method of claim 21, wherein the identification of the one or more attendees includes identifying one or more user devices capturing digital media content associated with the social event.
23. The computer-implemented method of claim 21, wherein the identification of the one or more attendees of the social event comprises receiving user input related to the one or more users associated with the same social event.
24. The computer-implemented method of claim 23, wherein the identification of one or more attendees of the social event comprises identifying one or more individuals recognized during the image recognition process.
25. The computer-implemented method of claim 18, wherein the captured digital media content is associated with the social event when at least one attendee is identified.
26. The computer-implemented method of claim 25, wherein the captured digital media content is associated with the social event when at least one individual is recognized in photos or videos within the media content.
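The association rule of claims 25 and 26 amounts to checking whether at least one individual recognized in the content is an identified attendee. A hypothetical one-line sketch:

```python
def associate_with_event(recognized: list, attendees: list) -> bool:
    """Associate captured content with the social event when at least one
    individual recognized in its photos or videos is a known attendee."""
    return bool(set(recognized) & set(attendees))
```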
27. The computer-implemented method of claim 18, further comprising prompting the attendees to share digital media content captured by respective user devices.
28. The computer-implemented method of claim 18, further comprising generating contextual information associated with the digital media content, wherein the contextual information includes one or more of a tag, a time, a date, a geographical location, a comment, and social event data.
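The contextual record enumerated in claim 28 (tag, time, date, geographical location, social event data) might be assembled as follows. The field names and schema here are assumptions for illustration, not taken from the specification.

```python
from datetime import datetime

def make_context(tags, captured_at: datetime, location, event_id):
    """Assemble contextual information for one piece of captured media."""
    return {
        "tags": list(tags),
        "time": captured_at.strftime("%H:%M:%S"),
        "date": captured_at.strftime("%Y-%m-%d"),
        "location": location,   # e.g. a (latitude, longitude) pair
        "event": event_id,      # social event data
    }
```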
29. A system for generating a website to share digital media content, the system comprising:
at least one processor configured to receive digital media content captured by a user device;
at least one processor configured to determine that the captured digital media content is associated with a social event;
at least one processor configured to associate the captured digital media content with the social event;
at least one processor configured to collect the associated digital media content on a website;
at least one processor configured to transmit the digital media content to a remote server; and
a memory coupled to the at least one processor, the memory comprising computer code for the at least one processor.
30. A computer-readable medium having instructions stored thereon, which when executed by one or more computers, cause the one or more computers to:
receive digital media content captured by a user device;
determine that the captured digital media content is associated with a social event;
associate the captured digital media content with the social event;
collect the associated digital media content on a website; and
transmit the digital media content to a remote server.
US15/156,146 2011-07-07 2016-05-16 Generating a Website to Share Aggregated Content Abandoned US20160261669A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/156,146 US20160261669A1 (en) 2011-07-07 2016-05-16 Generating a Website to Share Aggregated Content

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201161505505P 2011-07-07 2011-07-07
US13/220,536 US9342817B2 (en) 2011-07-07 2011-08-29 Auto-creating groups for sharing photos
US15/156,146 US20160261669A1 (en) 2011-07-07 2016-05-16 Generating a Website to Share Aggregated Content

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/220,536 Continuation US9342817B2 (en) 2011-07-07 2011-08-29 Auto-creating groups for sharing photos

Publications (1)

Publication Number Publication Date
US20160261669A1 true US20160261669A1 (en) 2016-09-08

Family

ID=47437401

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/220,536 Active US9342817B2 (en) 2011-07-07 2011-08-29 Auto-creating groups for sharing photos
US15/156,146 Abandoned US20160261669A1 (en) 2011-07-07 2016-05-16 Generating a Website to Share Aggregated Content

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/220,536 Active US9342817B2 (en) 2011-07-07 2011-08-29 Auto-creating groups for sharing photos

Country Status (3)

Country Link
US (2) US9342817B2 (en)
CN (2) CN107491701B (en)
WO (1) WO2013006584A1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140019542A1 (en) * 2003-08-20 2014-01-16 Ip Holdings, Inc. Social Networking System and Behavioral Web
US20160092732A1 (en) 2014-09-29 2016-03-31 Sony Computer Entertainment Inc. Method and apparatus for recognition and matching of objects depicted in images
IT201700116131A1 (en) * 2017-10-16 2018-01-16 Alessandro Chicone Process for managing content in a wireless telecommunications network
WO2018089379A1 (en) * 2016-11-14 2018-05-17 Leyefe, Inc. Time-sensitive image data management systems and methods for enriching social events
US20180189521A1 (en) * 2017-01-05 2018-07-05 Microsoft Technology Licensing, Llc Analyzing data to determine an upload account
US10284505B2 (en) * 2017-05-03 2019-05-07 International Business Machines Corporation Social media interaction aggregation for duplicate image posts
US10786736B2 (en) 2010-05-11 2020-09-29 Sony Interactive Entertainment LLC Placement of user information in a game space
US10831823B2 (en) 2014-11-18 2020-11-10 Huawei Technologies Co., Ltd. Photo distribution method and terminal
IT202000013630A1 (en) * 2020-06-08 2021-12-08 Pica Group S P A METHOD OF ACCESSING MULTIMEDIA CONTENT

Citations (4)

Publication number Priority date Publication date Assignee Title
US20080052349A1 (en) * 2006-08-27 2008-02-28 Michael Lin Methods and System for Aggregating Disparate Batches of Digital Media Files Captured During an Event for the Purpose of Inclusion into Public Collections for Sharing
US20100050090A1 (en) * 2006-09-14 2010-02-25 Freezecrowd, Inc. System and method for facilitating online social networking
US7800646B2 (en) * 2008-12-24 2010-09-21 Strands, Inc. Sporting event image capture, processing and publication
US20110066743A1 (en) * 2009-09-14 2011-03-17 Fergus Gerard Hurley Method for providing event based media streams

Family Cites Families (419)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB958588A (en) 1960-12-08 1964-05-21 Gen Electric Co Ltd Improvements in or relating to colour television receivers
US3147341A (en) 1961-04-24 1964-09-01 Gen Electric Automatic brightness-contrast control using photoresistive element to control brightness and agc voltages in response to ambinent light
FR1419481A (en) 1964-01-07 1965-11-26 Bicycle
US3943277A (en) 1969-02-20 1976-03-09 The United States Of America As Represented By The Secretary Of The Navy Digital memory area correlation tracker
US3717345A (en) 1971-03-15 1973-02-20 Prayfel Inc Computerized race game
GB1419481A (en) 1973-06-20 1975-12-31 Ciba Geigy Ag Dithiole derivatives useful as additives for lubricating oils and other organic materials
JPS5164923A (en) 1974-12-03 1976-06-04 Nippon Kogaku Kk Kamerano roshutsukeino hyojikairo
NL7509871A (en) 1975-08-20 1977-02-22 Philips Nv COLOR TV CHROMA KEY SIGNAL GENERATOR.
US4090216A (en) 1976-05-26 1978-05-16 Gte Sylvania Incorporated Ambient light contrast and color control circuit
US4068847A (en) 1976-06-23 1978-01-17 The Magnavox Company Chroma and luminance signal generator for video games
US4116444A (en) 1976-07-16 1978-09-26 Atari, Inc. Method for generating a plurality of moving objects on a video display screen
US4166430A (en) 1977-09-14 1979-09-04 Amerace Corporation Fluid pressure indicator
US4166429A (en) 1977-09-14 1979-09-04 Amerace Corporation Fluid pressure indicator
US4133004A (en) 1977-11-02 1979-01-02 Hughes Aircraft Company Video correlation tracker
US4448200A (en) 1978-03-27 1984-05-15 University Of Southern California System and method for dynamic background subtraction
JPS5513582A (en) 1978-07-13 1980-01-30 Sanyo Electric Co Ltd Color television receiver
US4203385A (en) 1978-12-12 1980-05-20 Amerace Corporation Fluid pressure indicator
US4241341A (en) 1979-03-05 1980-12-23 Thorson Mark R Apparatus for scan conversion
US4321635A (en) 1979-04-20 1982-03-23 Teac Corporation Apparatus for selective retrieval of information streams or items
US4355334A (en) 1981-05-29 1982-10-19 Zenith Radio Corporation Dimmer and dimmer override control for a display device
JPS5846783A (en) 1981-09-12 1983-03-18 Sony Corp Chromakey device
JPS58195957A (en) 1982-05-11 1983-11-15 Casio Comput Co Ltd Program starting system by voice
US4514727A (en) 1982-06-28 1985-04-30 Trw Inc. Automatic brightness control apparatus
US4757525A (en) 1982-09-29 1988-07-12 Vmx, Inc. Electronic audio communications system with voice command features
FI68131C (en) 1983-06-30 1985-07-10 Valtion Teknillinen REFERENCE FOR A WINDOW MACHINE WITH A GLASS LED WITH A LASER INDICATOR
IL69327A (en) 1983-07-26 1986-11-30 Elscint Ltd Automatic misregistration correction
US4675562A (en) 1983-08-01 1987-06-23 Fairchild Semiconductor Corporation Method and apparatus for dynamically controlling the timing of signals in automatic test systems
IL72685A (en) 1983-08-30 1988-08-31 Gen Electric Advanced video object generator
US4646075A (en) 1983-11-03 1987-02-24 Robert Bosch Corporation System and method for a data processing pipeline
US4649504A (en) 1984-05-22 1987-03-10 Cae Electronics, Ltd. Optical position and orientation measurement techniques
US5555532A (en) 1984-05-23 1996-09-10 The United States Of America As Represented By The Secretary Of The Navy Method and apparatus for target imaging with sidelooking sonar
US4658247A (en) 1984-07-30 1987-04-14 Cornell Research Foundation, Inc. Pipelined, line buffered real-time color graphics display system
JPH0746391B2 (en) 1984-09-14 1995-05-17 株式会社日立製作所 Graphic seeding device
US4672564A (en) 1984-11-15 1987-06-09 Honeywell Inc. Method and apparatus for determining location and orientation of objects
US4683466A (en) 1984-12-14 1987-07-28 Honeywell Information Systems Inc. Multiple color generation on a display
US4737921A (en) 1985-06-03 1988-04-12 Dynamic Digital Displays, Inc. Three dimensional medical image display system
EP0229849B1 (en) 1985-07-05 1996-03-06 Dai Nippon Insatsu Kabushiki Kaisha Method and apparatus for designing three-dimensional container
JPH0814854B2 (en) 1985-10-11 1996-02-14 株式会社日立製作所 3D graphic display device
IL77610A (en) 1986-01-15 1994-01-25 Technion Res & Dev Foundation Single camera three-dimensional head position sensing system
US4843568A (en) 1986-04-11 1989-06-27 Krueger Myron W Real time perception of and response to the actions of an unencumbered participant/user
JP2603947B2 (en) 1986-09-26 1997-04-23 オリンパス光学工業株式会社 Apparatus for detecting corresponding areas between primary color images
US4807158A (en) 1986-09-30 1989-02-21 Daleco/Ivex Partners, Ltd. Method and apparatus for sampling images to simulate movement within a multidimensional space
US4764727A (en) 1986-10-14 1988-08-16 Mcconchie Sr Noel P Circuit continuity and voltage tester
US4905168A (en) 1986-10-15 1990-02-27 Atari Games Corporation Object processing for video system using slips and linked list
US4905147A (en) 1986-10-15 1990-02-27 Logg George E Collision detection system for video system
JPS63177193A (en) 1987-01-19 1988-07-21 株式会社日立製作所 Display device
US4864515A (en) 1987-03-30 1989-09-05 Honeywell Inc. Electronic sensing screen for measuring projectile parameters
FR2613572B1 (en) 1987-04-03 1993-01-22 Thomson Csf LIGHT DATA VISUALIZATION SYSTEM WITH IMPROVED READABILITY
US5014327A (en) 1987-06-15 1991-05-07 Digital Equipment Corporation Parallel associative memory having improved selection and decision mechanisms for recognizing and sorting relevant patterns
US4980823A (en) 1987-06-22 1990-12-25 International Business Machines Corporation Sequential prefetching with deconfirmation
US4860197A (en) 1987-07-31 1989-08-22 Prime Computer, Inc. Branch cache system with instruction boundary determination independent of parcel boundary
US5162781A (en) 1987-10-02 1992-11-10 Automated Decisions, Inc. Orientational mouse computer input system
US5363120A (en) 1987-10-14 1994-11-08 Wang Laboratories, Inc. Computer input device using orientation sensor
US4866637A (en) 1987-10-30 1989-09-12 International Business Machines Corporation Pipelined lighting model processing system for a graphics workstation's shading function
US4901064A (en) 1987-11-04 1990-02-13 Schlumberger Technologies, Inc. Normal vector shading for 3-D graphics display system
US4992972A (en) 1987-11-18 1991-02-12 International Business Machines Corporation Flexible context searchable on-line information system with help files and modules for on-line computer system documentation
US4942538A (en) 1988-01-05 1990-07-17 Spar Aerospace Limited Telerobotic tracker
US5369737A (en) 1988-03-21 1994-11-29 Digital Equipment Corporation Normalization of vectors associated with a display pixels of computer generated images
GB8808608D0 (en) 1988-04-12 1988-05-11 Boc Group Plc Dry pump with booster
US4991223A (en) 1988-06-30 1991-02-05 American Innovision, Inc. Apparatus and method for recognizing image features using color elements
US5448687A (en) 1988-09-13 1995-09-05 Computer Design, Inc. Computer-assisted design system for flattening a three-dimensional surface and for wrapping a flat shape to a three-dimensional surface
US4933864A (en) 1988-10-04 1990-06-12 Transitions Research Corporation Mobile robot navigation employing ceiling light fixtures
US5045843B1 (en) 1988-12-06 1996-07-16 Selectech Ltd Optical pointing device
US5222203A (en) 1989-01-20 1993-06-22 Daikin Industries, Ltd. Method and apparatus for displaying translucent surface
US5034986A (en) 1989-03-01 1991-07-23 Siemens Aktiengesellschaft Method for detecting and tracking moving objects in a digital image sequence having a stationary background
US4969036A (en) 1989-03-31 1990-11-06 Bir Bhanu System for computing the self-motion of moving images devices
AU631661B2 (en) 1989-06-20 1992-12-03 Fujitsu Limited Method for measuring position and posture of object
US5367615A (en) 1989-07-10 1994-11-22 General Electric Company Spatial augmentation of vertices and continuous level of detail transition for smoothly varying terrain polygon density
FR2652972B1 (en) 1989-10-06 1996-11-29 Thomson Video Equip METHOD AND DEVICE FOR INTEGRATING SELF-ADAPTIVE COLOR VIDEO IMAGES.
GB9001468D0 (en) 1990-01-23 1990-03-21 Sarnoff David Res Center Computing multiple motions within an image region
US5668646A (en) 1990-02-06 1997-09-16 Canon Kabushiki Kaisha Apparatus and method for decoding differently encoded multi-level and binary image data, the later corresponding to a color in the original image
ATE137377T1 (en) 1990-02-06 1996-05-15 Canon Kk IMAGE PROCESSING DEVICE
US5064291A (en) 1990-04-03 1991-11-12 Hughes Aircraft Company Method and apparatus for inspection of solder joints utilizing shape determination from shading
US5128671A (en) 1990-04-12 1992-07-07 Ltv Aerospace And Defense Company Control device having multiple degrees of freedom
EP0461577B1 (en) 1990-06-11 1998-12-02 Hitachi, Ltd. Apparatus for generating object motion path
US5265888A (en) 1990-06-22 1993-11-30 Nintendo Co., Ltd. Game apparatus and memory cartridge used therefor
US5253339A (en) 1990-07-26 1993-10-12 Sun Microsystems, Inc. Method and apparatus for adaptive Phong shading
US5354202A (en) 1990-08-01 1994-10-11 Atari Games Corporation System and method for driver training with multiple driver competition
US5269687A (en) 1990-08-01 1993-12-14 Atari Games Corporation System and method for recursive driver training
US5208763A (en) 1990-09-14 1993-05-04 New York University Method and apparatus for determining position and orientation of mechanical objects
US5274560A (en) 1990-12-03 1993-12-28 Audio Navigation Systems, Inc. Sensor free vehicle navigation system utilizing a voice input/output interface for routing a driver from his source point to his destination point
US5268996A (en) 1990-12-20 1993-12-07 General Electric Company Computer image generation method for determination of total pixel illumination due to plural light sources
US5128794A (en) 1990-12-31 1992-07-07 Honeywell Inc. Scanning laser helmet mounted sight
US5534917A (en) 1991-05-09 1996-07-09 Very Vivid, Inc. Video image based control system
US5548667A (en) 1991-05-24 1996-08-20 Sony Corporation Image processing system and method thereof in which three dimensional shape is reproduced from two dimensional image data
US5227985A (en) 1991-08-19 1993-07-13 University Of Maryland Computer vision system for position monitoring in three dimensions using non-coplanar light sources attached to a monitored object
US5305389A (en) 1991-08-30 1994-04-19 Digital Equipment Corporation Predictive cache system
US5212888A (en) 1991-09-16 1993-05-25 Calcomp Inc. Dual function sensor for a pen plotter
US5537638A (en) 1991-10-25 1996-07-16 Hitachi, Ltd. Method and system for image mapping
US5335557A (en) 1991-11-26 1994-08-09 Taizo Yasutake Touch sensitive input control device
US5631697A (en) 1991-11-27 1997-05-20 Hitachi, Ltd. Video camera capable of automatic target tracking
US5734384A (en) 1991-11-29 1998-03-31 Picker International, Inc. Cross-referenced sectioning and reprojection of diagnostic image volumes
US5230623A (en) 1991-12-10 1993-07-27 Radionics, Inc. Operating pointer with interactive computergraphics
US5680487A (en) 1991-12-23 1997-10-21 Texas Instruments Incorporated System and method for determining optical flow
US5590248A (en) 1992-01-02 1996-12-31 General Electric Company Method for reducing the complexity of a polygonal mesh
US5377313A (en) 1992-01-29 1994-12-27 International Business Machines Corporation Computer graphics display method and system with shadow generation
US5953485A (en) 1992-02-07 1999-09-14 Abecassis; Max Method and system for maintaining audio during video control
DE69301308T2 (en) 1992-02-18 1996-05-23 Evans & Sutherland Computer Co IMAGE TEXTURING SYSTEM WITH THEME CELLS.
US5577179A (en) 1992-02-25 1996-11-19 Imageware Software, Inc. Image editing system
US5307137A (en) 1992-03-16 1994-04-26 Mark F. Jones Terrain imaging apparatus and method
JP3107452B2 (en) 1992-04-28 2000-11-06 株式会社日立製作所 Texture mapping method and apparatus
US5450504A (en) 1992-05-19 1995-09-12 Calia; James Method for finding a most likely matching of a target facial image in a data base of facial images
US5366376A (en) 1992-05-22 1994-11-22 Atari Games Corporation Driver training system and method with performance data feedback
JP3391405B2 (en) 1992-05-29 2003-03-31 株式会社エフ・エフ・シー Object identification method in camera image
US5473736A (en) 1992-06-08 1995-12-05 Chroma Graphics Method and apparatus for ordering and remapping colors in images of real two- and three-dimensional objects
IL102289A (en) 1992-06-24 1997-08-14 R Technologies Ltd B V Method and system for processing moving images
EP0580361B1 (en) 1992-07-21 2000-02-02 Pioneer Electronic Corporation Disc player and method of reproducing information of the same
EP0582875B1 (en) 1992-07-27 2001-10-31 Matsushita Electric Industrial Co., Ltd. Apparatus for parallel image generation
US5361385A (en) 1992-08-26 1994-11-01 Reuven Bakalash Parallel computing system for volumetric modeling, data processing and visualization
US5982352A (en) 1992-09-18 1999-11-09 Pryor; Timothy R. Method for providing human input to a computer
JP2682559B2 (en) 1992-09-30 1997-11-26 インターナショナル・ビジネス・マシーンズ・コーポレイション Apparatus and method for displaying image of object on display device and computer graphics display system
GB2271259A (en) 1992-10-02 1994-04-06 Canon Res Ct Europe Ltd Processing image data
US5469193A (en) 1992-10-05 1995-11-21 Prelude Technology Corp. Cordless pointing apparatus
EP0598295B1 (en) 1992-11-17 1998-10-14 Matsushita Electric Industrial Co., Ltd. Video and audio signal multiplexing apparatus and separating apparatus
US5405151A (en) 1992-11-20 1995-04-11 Sega Of America, Inc. Multi-player video game with cooperative mode and competition mode
US5387943A (en) 1992-12-21 1995-02-07 Tektronix, Inc. Semiautomatic lip sync recovery system
US5890122A (en) 1993-02-08 1999-03-30 Microsoft Corporation Voice-controlled computer simultaneously displaying application menu and list of available commands
US5660547A (en) 1993-02-17 1997-08-26 Atari Games Corporation Scenario development system for vehicle simulators
JP3679426B2 (en) 1993-03-15 2005-08-03 マサチューセッツ・インスティチュート・オブ・テクノロジー A system that encodes image data into multiple layers, each representing a coherent region of motion, and motion parameters associated with the layers.
EP0622747B1 (en) 1993-04-01 2000-05-31 Sun Microsystems, Inc. Method and apparatus for an adaptive texture mapping controller
GB9308952D0 (en) 1993-04-30 1993-06-16 Philips Electronics Uk Ltd Tracking objects in video sequences
US5297061A (en) 1993-05-19 1994-03-22 University Of Maryland Three dimensional pointing device monitored by computer vision
EP0633546B1 (en) 1993-07-02 2003-08-27 Siemens Corporate Research, Inc. Background recovery in monocular vision
JPH0757117A (en) 1993-07-09 1995-03-03 Silicon Graphics Inc Forming method of index to texture map and computer control display system
US5550960A (en) 1993-08-02 1996-08-27 Sun Microsystems, Inc. Method and apparatus for performing dynamic texture mapping for complex surfaces
US5598514A (en) 1993-08-09 1997-01-28 C-Cube Microsystems Structure and method for a multistandard video encoder/decoder
JP2916076B2 (en) 1993-08-26 1999-07-05 シャープ株式会社 Image display device
EP0641993B1 (en) 1993-09-03 1999-06-30 Canon Kabushiki Kaisha Shape measuring apparatus
JP2552427B2 (en) 1993-12-28 1996-11-13 コナミ株式会社 TV play system
FR2714502A1 (en) 1993-12-29 1995-06-30 Philips Laboratoire Electroniq An image processing method and apparatus for constructing from a source image a target image with perspective change.
US5559950A (en) 1994-02-02 1996-09-24 Video Lottery Technologies, Inc. Graphics processor enhancement unit
US5699497A (en) 1994-02-17 1997-12-16 Evans & Sutherland Computer Corporation Rendering global macro texture, for producing a dynamic image, as on computer generated terrain, seen from a moving viewpoint
US5611000A (en) 1994-02-22 1997-03-11 Digital Equipment Corporation Spline-based image registration
CA2198611A1 (en) 1994-09-06 1996-03-14 Arie E. Kaufman Apparatus and method for real-time volume visualization
US5526041A (en) 1994-09-07 1996-06-11 Sensormatic Electronics Corporation Rail-based closed circuit T.V. surveillance system with automatic target acquisition
US5920842A (en) 1994-10-12 1999-07-06 Pixel Instruments Signal synchronization
JP2642070B2 (en) 1994-11-07 1997-08-20 インターナショナル・ビジネス・マシーンズ・コーポレイション Method and system for generating quadrilateral mesh
US5649032A (en) 1994-11-14 1997-07-15 David Sarnoff Research Center, Inc. System for automatically aligning images to form a mosaic image
EP0715453B1 (en) 1994-11-28 2014-03-26 Canon Kabushiki Kaisha Camera controller
JP3578498B2 (en) 1994-12-02 2004-10-20 株式会社ソニー・コンピュータエンタテインメント Image information processing device
GB2295936B (en) 1994-12-05 1997-02-05 Microsoft Corp Progressive image transmission using discrete wavelet transforms
GB9501832D0 (en) 1995-01-31 1995-03-22 Videologic Ltd Texturing and shading of 3-d images
US5818553A (en) 1995-04-10 1998-10-06 Norand Corporation Contrast control for a backlit LCD
IL113572A (en) 1995-05-01 1999-03-12 Metalink Ltd Symbol decoder
US5757360A (en) 1995-05-03 1998-05-26 Mitsubishi Electric Information Technology Center America, Inc. Hand held computer control device
US5672820A (en) 1995-05-16 1997-09-30 Boeing North American, Inc. Object location identification system for providing location data of an object being pointed at by a pointing device
US5913727A (en) 1995-06-02 1999-06-22 Ahdoot; Ned Interactive movement and contact simulation game
US5572261A (en) 1995-06-07 1996-11-05 Cooper; J. Carl Automatic audio to video timing measurement device and method
US5617407A (en) 1995-06-21 1997-04-01 Bareis; Monica M. Optical disk having speech recognition templates for information access
US5805745A (en) 1995-06-26 1998-09-08 Lucent Technologies Inc. Method for locating a subject's lips in a facial image
US6002738A (en) 1995-07-07 1999-12-14 Silicon Graphics, Inc. System and method of performing tomographic reconstruction and volume rendering using texture mapping
US5704024A (en) 1995-07-20 1997-12-30 Silicon Graphics, Inc. Method and an apparatus for generating reflection vectors which can be unnormalized and for using these reflection vectors to index locations on an environment map
US6199093B1 (en) 1995-07-21 2001-03-06 Nec Corporation Processor allocating method/apparatus in multiprocessor system, and medium for storing processor allocating program
US5864342A (en) 1995-08-04 1999-01-26 Microsoft Corporation Method and system for rendering graphical objects to image chunks
US5808617A (en) 1995-08-04 1998-09-15 Microsoft Corporation Method and system for depth complexity reduction in a graphics rendering system
US5852443A (en) 1995-08-04 1998-12-22 Microsoft Corporation Method and system for memory decomposition in a graphics rendering system
US6016150A (en) 1995-08-04 2000-01-18 Microsoft Corporation Sprite compositor and method for performing lighting and shading operations using a compositor to combine factored image layers
US5977977A (en) 1995-08-04 1999-11-02 Microsoft Corporation Method and system for multi-pass rendering
US5870097A (en) 1995-08-04 1999-02-09 Microsoft Corporation Method and system for improving shadowing in a graphics rendering system
JP3203160B2 (en) 1995-08-09 2001-08-27 三菱電機株式会社 Volume rendering apparatus and method
US5856844A (en) 1995-09-21 1999-01-05 Omniplanar, Inc. Method and apparatus for determining position and orientation
US5825929A (en) 1995-10-05 1998-10-20 Microsoft Corporation Transformation block optimization method
US5818424A (en) 1995-10-19 1998-10-06 International Business Machines Corporation Rod shaped device and data acquisition apparatus for determining the position and orientation of an object in space
GB2306834B8 (en) 1995-11-03 2000-02-01 Abbotsbury Software Ltd Tracking apparatus for use in tracking an object
KR100261076B1 (en) 1995-11-09 2000-07-01 윤종용 Rendering method and apparatus of performing bump mapping and phong shading at the same time
US5825308A (en) 1996-11-26 1998-10-20 Immersion Human Interface Corporation Force feedback interface having isotonic and isometric functionality
FR2741769B1 (en) 1995-11-23 1997-12-19 Thomson Broadcast Systems Process for processing the signal constituted by a subject evolving before a colored background and device implementing this method
FR2741770B1 (en) 1995-11-23 1998-01-02 Thomson Broadcast Systems Method for calculating a cutting key of a subject evolving in front of a colored background and device implementing this method
US6028593A (en) 1995-12-01 2000-02-22 Immersion Corporation Method and apparatus for providing simulated physical interactions within computer generated environments
US5963209A (en) 1996-01-11 1999-10-05 Microsoft Corporation Encoding and progressive transmission of progressive meshes
US5574836A (en) 1996-01-22 1996-11-12 Broemmelsiek; Raymond M. Interactive display apparatus and method with viewer position compensation
JP3668547B2 (en) 1996-01-29 2005-07-06 ヤマハ株式会社 Karaoke equipment
AU2123297A (en) 1996-02-12 1997-08-28 Golf Age Technologies Golf driving range distancing apparatus and methods
US6049619A (en) 1996-02-12 2000-04-11 Sarnoff Corporation Method and apparatus for detecting moving objects in two- and three-dimensional scenes
US6009188A (en) 1996-02-16 1999-12-28 Microsoft Corporation Method and system for digital plenoptic imaging
US5982390A (en) 1996-03-25 1999-11-09 Stan Stoneking Controlling personality manifestations by objects in a computer-assisted animation environment
US5764803A (en) 1996-04-03 1998-06-09 Lucent Technologies Inc. Motion-adaptive modelling of scene content for very low bit rate model-assisted coding of video sequences
US5889505A (en) 1996-04-04 1999-03-30 Yale University Vision-based six-degree-of-freedom computer input device
US6018347A (en) 1996-04-12 2000-01-25 Multigen Paradigm, Inc. Methods and apparatus for rendering three-dimensional images
US6348921B1 (en) 1996-04-12 2002-02-19 Ze Hong Zhao System and method for displaying different portions of an object in different levels of detail
US5923318A (en) 1996-04-12 1999-07-13 Zhai; Shumin Finger manipulatable 6 degree-of-freedom input device
US5894308A (en) 1996-04-30 1999-04-13 Silicon Graphics, Inc. Interactively reducing polygon count in three-dimensional graphic objects
GB9609197D0 (en) 1996-05-02 1996-07-03 Philips Electronics Nv Process control with evaluation of stored referential expressions
US5805170A (en) 1996-05-07 1998-09-08 Microsoft Corporation Systems and methods for wrapping a closed polygon around an object
US5769718A (en) 1996-05-15 1998-06-23 Rieder; William R. Video game apparatus and medium readable by a computer stored with video game program
US6034693A (en) 1996-05-28 2000-03-07 Namco Ltd. Image synthesizing apparatus, image synthesizing method and information storage medium
JP3664336B2 (en) 1996-06-25 2005-06-22 株式会社日立メディコ Method and apparatus for setting viewpoint position and gaze direction in 3D image construction method
WO1998002223A1 (en) 1996-07-11 1998-01-22 Sega Enterprises, Ltd. Voice recognizer, voice recognizing method and game machine using them
GB9616184D0 (en) 1996-08-01 1996-09-11 Philips Electronics Nv Virtual environment navigation
US5933150A (en) 1996-08-06 1999-08-03 Interval Research Corporation System for image manipulation and animation using embedded constraint graphics
US5781194A (en) 1996-08-29 1998-07-14 Animatek International, Inc. Real-time projection of voxel-based object
CN1480903A (en) 1996-08-29 2004-03-10 Specificity information assignment, object extraction and 3-D model generation method and apparatus thereof
JP3358169B2 (en) 1996-08-30 2002-12-16 インターナショナル・ビジネス・マシーンズ・コーポレーション Mirror surface rendering method and apparatus
JP3387750B2 (en) 1996-09-02 2003-03-17 株式会社リコー Shading processing equipment
US5786801A (en) 1996-09-06 1998-07-28 Sony Corporation Back light control apparatus and method for a flat display system
US5854632A (en) 1996-10-15 1998-12-29 Real 3D Apparatus and method for simulating specular reflection in a computer graphics/imaging system
US5886702A (en) 1996-10-16 1999-03-23 Real-Time Geometry Corporation System and method for computer modeling of 3D objects or surfaces by mesh constructions having optimal quality characteristics and dynamic resolution capabilities
US5812136A (en) 1996-10-30 1998-09-22 Microsoft Corporation System and method for fast rendering of a three dimensional graphical object
US5935198A (en) 1996-11-22 1999-08-10 S3 Incorporated Multiplier with selectable booth encoders for performing 3D graphics interpolations with two multiplies in a single pass through the multiplier
US6157386A (en) 1997-10-10 2000-12-05 Cirrus Logic, Inc. MIP map blending in a graphics processor
US5899810A (en) 1997-01-24 1999-05-04 Kaon Interactive Corporation Distributed game architecture to overcome system latency
JPH10211358A (en) 1997-01-28 1998-08-11 Sega Enterp Ltd Game apparatus
US6121953A (en) 1997-02-06 2000-09-19 Modern Cartoons, Ltd. Virtual reality system for sensing facial movements
CA2252871C (en) 1997-02-20 2008-04-29 Sony Corporation Video signal processing device and method, image synthesizing device, and editing device
US5870098A (en) 1997-02-26 1999-02-09 Evans & Sutherland Computer Corporation Method for rendering shadows on a graphical display
US5880736A (en) 1997-02-28 1999-03-09 Silicon Graphics, Inc. Method system and computer program product for shading
US5949424A (en) 1997-02-28 1999-09-07 Silicon Graphics, Inc. Method, system, and computer program product for bump mapping in tangent space
US6803964B1 (en) 1997-03-21 2004-10-12 International Business Machines Corporation Method and apparatus for processing digital data
US5796952A (en) 1997-03-21 1998-08-18 Dot Com Development, Inc. Method and apparatus for tracking client interaction with a network resource and creating client profiles and resource database
US6137492A (en) 1997-04-03 2000-10-24 Microsoft Corporation Method and system for adaptive refinement of progressive meshes
JP4244391B2 (en) 1997-04-04 2009-03-25 ソニー株式会社 Image conversion apparatus and image conversion method
US6058397A (en) 1997-04-08 2000-05-02 Mitsubishi Electric Information Technology Center America, Inc. 3D virtual environment creation management and delivery system
US5864742A (en) 1997-04-11 1999-01-26 Eastman Kodak Company Copy restrictive system using microdots to restrict copying of color-reversal documents
US5917937A (en) 1997-04-15 1999-06-29 Microsoft Corporation Method for performing stereo matching to recover depths, colors and opacities of surface elements
US6130673A (en) 1997-04-18 2000-10-10 Silicon Graphics, Inc. Editing a surface
US6175367B1 (en) 1997-04-23 2001-01-16 Silicon Graphics, Inc. Method and system for real time illumination of computer generated images
US5912830A (en) 1997-04-30 1999-06-15 Hewlett-Packard Co. System and method for conditionally calculating exponential values in a geometry accelerator
US6331851B1 (en) 1997-05-19 2001-12-18 Matsushita Electric Industrial Co., Ltd. Graphic display apparatus, synchronous reproduction method, and AV synchronous reproduction apparatus
GB2326781B (en) 1997-05-30 2001-10-10 British Broadcasting Corp Video and audio signal processing
NL1006177C2 (en) 1997-05-30 1998-12-07 Delaval Stork V O F Rotor shaft for a rotating machine and rotating machine provided with such a rotor shaft.
US5964660A (en) 1997-06-18 1999-10-12 Vr-1, Inc. Network multiplayer game
US6222555B1 (en) 1997-06-18 2001-04-24 Christofferson Enterprises, Llc Method for automatically smoothing object level of detail transitions for regular objects in a computer graphics display system
US6072504A (en) 1997-06-20 2000-06-06 Lucent Technologies Inc. Method and apparatus for tracking, storing, and synthesizing an animated version of object motion
US6208347B1 (en) 1997-06-23 2001-03-27 Real-Time Geometry Corporation System and method for computer modeling of 3D objects and 2D images by mesh constructions that incorporate non-spatial data such as color or texture
US6049636A (en) 1997-06-27 2000-04-11 Microsoft Corporation Determining a rectangular box encompassing a digital picture within a digital image
US5990901A (en) 1997-06-27 1999-11-23 Microsoft Corporation Model based image editing and correction
US6226006B1 (en) 1997-06-27 2001-05-01 C-Light Partners, Inc. Method and apparatus for providing shading in a graphic display system
US5914724A (en) 1997-06-30 1999-06-22 Sun Microsystems, Inc. Lighting unit for a three-dimensional graphics accelerator with improved handling of incoming color values
JP3372832B2 (en) 1997-07-25 2003-02-04 コナミ株式会社 Game device, game image processing method, and computer-readable recording medium containing game image processing program
US6009190A (en) 1997-08-01 1999-12-28 Microsoft Corporation Texture map construction method and apparatus for displaying panoramic image mosaics
US6044181A (en) 1997-08-01 2000-03-28 Microsoft Corporation Focal length estimation method and apparatus for construction of panoramic mosaic images
US6018349A (en) 1997-08-01 2000-01-25 Microsoft Corporation Patch-based alignment method and apparatus for construction of image mosaics
US5986668A (en) 1997-08-01 1999-11-16 Microsoft Corporation Deghosting method and apparatus for construction of image mosaics
US5987164A (en) 1997-08-01 1999-11-16 Microsoft Corporation Block adjustment method and apparatus for construction of image mosaics
US6720949B1 (en) 1997-08-22 2004-04-13 Timothy R. Pryor Man machine interfaces and applications
AUPO894497A0 (en) 1997-09-02 1997-09-25 Xenotech Research Pty Ltd Image processing method and apparatus
US6112240A (en) 1997-09-03 2000-08-29 International Business Machines Corporation Web site client information tracker
US6496189B1 (en) 1997-09-29 2002-12-17 Skyline Software Systems Ltd. Remote landscape display and pilot training
US6072494A (en) 1997-10-15 2000-06-06 Electric Planet, Inc. Method and apparatus for real-time gesture recognition
US6031934A (en) 1997-10-15 2000-02-29 Electric Planet, Inc. Computer vision system for subject characterization
US6101289A (en) 1997-10-15 2000-08-08 Electric Planet, Inc. Method and apparatus for unencumbered capture of an object
US6037947A (en) 1997-10-16 2000-03-14 Sun Microsystems, Inc. Graphics accelerator with shift count generation for handling potential fixed-point numeric overflows
US5905894A (en) 1997-10-29 1999-05-18 Microsoft Corporation Meta-programming methods and apparatus
JP3890781B2 (en) 1997-10-30 2007-03-07 株式会社セガ Computer-readable storage medium, game device, and game image display method
US6320580B1 (en) 1997-11-07 2001-11-20 Sega Enterprises, Ltd. Image processing apparatus
JP3119608B2 (en) 1997-11-19 2000-12-25 コナミ株式会社 Competitive video game device, character movement control method in competitive video game, and recording medium storing character movement control program
JPH11151376A (en) 1997-11-20 1999-06-08 Nintendo Co Ltd Video game device and its storage medium
US6162123A (en) 1997-11-25 2000-12-19 Woolston; Thomas G. Interactive electronic sword game
US6010403A (en) 1997-12-05 2000-01-04 Lbe Technologies, Inc. System and method for displaying an interactive event
DE69712078T2 (en) 1997-12-19 2002-12-12 Esab AB, Gothenburg Welding device
US6356288B1 (en) 1997-12-22 2002-03-12 U.S. Philips Corporation Diversion agent uses cinematographic techniques to mask latency
JP3818769B2 (en) 1997-12-24 2006-09-06 株式会社バンダイナムコゲームス Information storage medium, game device, and game system
JPH11203501A (en) 1998-01-14 1999-07-30 Sega Enterp Ltd Picture processor and picture processing method
US6172354B1 (en) 1998-01-28 2001-01-09 Microsoft Corporation Operator input device
US6014144A (en) 1998-02-03 2000-01-11 Sun Microsystems, Inc. Rapid computation of local eye vectors in a fixed point lighting unit
US6850236B2 (en) 1998-02-17 2005-02-01 Sun Microsystems, Inc. Dynamically adjusting a sample-to-pixel filter in response to user input and/or sensor input
US6577312B2 (en) 1998-02-17 2003-06-10 Sun Microsystems, Inc. Graphics system configured to filter samples using a variable support filter
US6275187B1 (en) 1998-03-03 2001-08-14 General Electric Company System and method for directing an adaptive antenna array
US6512507B1 (en) 1998-03-31 2003-01-28 Seiko Epson Corporation Pointing position detection device, presentation system, and method, and computer-readable medium
US6181988B1 (en) 1998-04-07 2001-01-30 Raytheon Company Guidance system having a body fixed seeker with an adjustable look angle
US6578197B1 (en) 1998-04-08 2003-06-10 Silicon Graphics, Inc. System and method for high-speed execution of graphics application programs including shading language instructions
US6100898A (en) 1998-04-08 2000-08-08 Webtv Networks, Inc. System and method of selecting level of detail in texture mapping
US6313841B1 (en) 1998-04-13 2001-11-06 Terarecon, Inc. Parallel volume rendering system with a resampling module for parallel and perspective projections
US6171190B1 (en) 1998-05-27 2001-01-09 Act Labs, Ltd. Photosensitive input peripheral device in a personal computer-based video gaming platform
JP3829014B2 (en) 1998-06-03 2006-10-04 株式会社コナミデジタルエンタテインメント Video game equipment
US6606095B1 (en) 1998-06-08 2003-08-12 Microsoft Corporation Compression of animated geometry using basis decomposition
US6141041A (en) 1998-06-22 2000-10-31 Lucent Technologies Inc. Method and apparatus for determination and visualization of player field coverage in a sporting event
JP2000090289A (en) 1998-07-13 2000-03-31 Sony Corp Device and method for processing image and medium
US6563499B1 (en) 1998-07-20 2003-05-13 Geometrix, Inc. Method and apparatus for generating a 3D region from a surrounding imagery
US6646639B1 (en) 1998-07-22 2003-11-11 Nvidia Corporation Modified method and apparatus for improved occlusion culling in graphics systems
GB9817834D0 (en) 1998-08-14 1998-10-14 British Telecomm Predicting avatar movement in a distributed virtual environment
US6771264B1 (en) 1998-08-20 2004-08-03 Apple Computer, Inc. Method and apparatus for performing tangent space lighting and bump mapping in a deferred shading graphics processor
US6288730B1 (en) 1998-08-20 2001-09-11 Apple Computer, Inc. Method and apparatus for generating texture
US7251315B1 (en) 1998-09-21 2007-07-31 Microsoft Corporation Speech processing for telephony API
TW495710B (en) 1998-10-15 2002-07-21 Primax Electronics Ltd Voice control module for control of game controller
GB2343598B (en) 1998-11-06 2003-03-19 Videologic Ltd Image processing apparatus
US6342885B1 (en) 1998-11-12 2002-01-29 Tera Recon Inc. Method and apparatus for illuminating volume data in a rendering pipeline
US6127936A (en) 1998-11-20 2000-10-03 Texas Instruments Israel Ltd. Apparatus for and method of providing an indication of the magnitude of a quantity
US6396490B1 (en) 1998-12-04 2002-05-28 Intel Corporation Efficient representation of connectivity information in progressive mesh update record
JP3748172B2 (en) 1998-12-09 2006-02-22 富士通株式会社 Image processing device
JP2000181676A (en) 1998-12-11 2000-06-30 Nintendo Co Ltd Image processor
US6738059B1 (en) 1998-12-18 2004-05-18 Kabushiki Kaisha Sega Enterprises Apparatus and methods for image processing using mixed display objects
US6414960B1 (en) 1998-12-29 2002-07-02 International Business Machines Corp. Apparatus and method of in-service audio/video synchronization testing
US6356263B2 (en) 1999-01-27 2002-03-12 Viewpoint Corporation Adaptive subdivision of mesh models
JP2000229172A (en) 1999-02-10 2000-08-22 Konami Co Ltd Game system and computer readable storage medium on which game program is recorded
JP3972230B2 (en) 1999-02-15 2007-09-05 株式会社セガ Game device, game device control method, and recording medium
US6313842B1 (en) 1999-03-03 2001-11-06 Discreet Logic Inc. Generating image data
DE19917660A1 (en) 1999-04-19 2000-11-02 Deutsch Zentr Luft & Raumfahrt Method and input device for controlling the position of an object to be graphically represented in a virtual reality
US6226007B1 (en) 1999-05-21 2001-05-01 Sun Microsystems, Inc. Method and apparatus for modeling specular reflection
JP2000330902A (en) 1999-05-25 2000-11-30 Sony Corp Device and method for information processing, and medium
US6917692B1 (en) 1999-05-25 2005-07-12 Thomson Licensing S.A. Kalman tracking of color objects
JP3431535B2 (en) 1999-05-26 2003-07-28 株式会社ナムコ Game system and information storage medium
US6489955B1 (en) 1999-06-07 2002-12-03 Intel Corporation Ray intersection reduction using directionally classified target lists
US6717579B1 (en) 1999-06-10 2004-04-06 Dassault Systemes Reflection line control
US6504538B1 (en) 1999-07-01 2003-01-07 Microsoft Corporation Method and system for generating light values for a set of vertices
US6421057B1 (en) 1999-07-15 2002-07-16 Terarecon, Inc. Configurable volume rendering pipeline
US6488505B1 (en) 1999-07-15 2002-12-03 Midway Games West Inc. System and method of vehicle competition with enhanced ghosting features
US6417836B1 (en) 1999-08-02 2002-07-09 Lucent Technologies Inc. Computer input device having six degrees of freedom for controlling movement of a three-dimensional object
JP2001086486A (en) 1999-09-14 2001-03-30 Matsushita Electric Ind Co Ltd Monitor camera system and display method for monitor camera
US6554707B1 (en) 1999-09-24 2003-04-29 Nokia Corporation Interactive voice, wireless game system using predictive command input
JP3270928B2 (en) 1999-09-30 2002-04-02 コナミ株式会社 Field map generation method, video game system, and recording medium
US6611265B1 (en) 1999-10-18 2003-08-26 S3 Graphics Co., Ltd. Multi-stage fixed cycle pipe-lined lighting equation evaluator
US6273814B1 (en) 1999-10-25 2001-08-14 Square Co., Ltd. Game apparatus and method for controlling timing for executive action by game character
US6798411B1 (en) 1999-10-29 2004-09-28 Intel Corporation Image processing
US6571208B1 (en) 1999-11-29 2003-05-27 Matsushita Electric Industrial Co., Ltd. Context-dependent acoustic models for medium and large vocabulary speech recognition with eigenvoice training
JP2001198350A (en) 2000-01-20 2001-07-24 Square Co Ltd Method for providing strategy information of video game on line, computer readable recording medium for program to realize the method and game system and game
US6686924B1 (en) 2000-02-02 2004-02-03 Ati International, Srl Method and apparatus for parallel processing of geometric aspects of video graphics data
US6664955B1 (en) 2000-03-15 2003-12-16 Sun Microsystems, Inc. Graphics system configured to interpolate pixel values
US6426755B1 (en) 2000-05-16 2002-07-30 Sun Microsystems, Inc. Graphics system using sample tags for blur
US6594388B1 (en) 2000-05-25 2003-07-15 Eastman Kodak Company Color image reproduction of scenes with preferential color mapping and scene-dependent tone scaling
US6864895B1 (en) 2000-05-30 2005-03-08 Hewlett-Packard Development Company, L.P. Pseudo-linear frame buffer mapping system and method
US6690372B2 (en) 2000-05-31 2004-02-10 Nvidia Corporation System, method and article of manufacture for shadow mapping
US6717599B1 (en) 2000-06-29 2004-04-06 Microsoft Corporation Method, system, and computer program product for implementing derivative operators with graphics hardware
US6795068B1 (en) 2000-07-21 2004-09-21 Sony Computer Entertainment Inc. Prop input device and method for mapping an object from a two-dimensional camera image to a three-dimensional space for controlling action in a game program
JP2002052256A (en) 2000-08-07 2002-02-19 Konami Co Ltd Supporting device of game winning, terminal device, and recording medium
US6825851B1 (en) 2000-08-23 2004-11-30 Nintendo Co., Ltd. Method and apparatus for environment-mapped bump-mapping in a graphics system
US6316974B1 (en) 2000-08-26 2001-11-13 Rgb Systems, Inc. Method and apparatus for vertically locking input and output signals
US6744442B1 (en) 2000-08-29 2004-06-01 Harris Corporation Texture mapping system used for creating three-dimensional urban models
GB2367471B (en) 2000-09-29 2002-08-14 Pixelfusion Ltd Graphics system
US6853382B1 (en) 2000-10-13 2005-02-08 Nvidia Corporation Controller for a memory system having multiple partitions
US6765573B2 (en) 2000-10-26 2004-07-20 Square Enix Co., Ltd. Surface shading using stored texture map based on bidirectional reflectance distribution function
JP2002153676A (en) 2000-11-17 2002-05-28 Square Co Ltd Game machine, information providing server, record medium, and information providing method and program
JP2002222436A (en) 2000-11-22 2002-08-09 Sony Computer Entertainment Inc Object control method, object control processing program executed by computer, computer readable-recording medium with recorded object control processing program executed by computer, and program executing device executing object control processing program
US6850243B1 (en) 2000-12-07 2005-02-01 Nvidia Corporation System, method and computer program product for texture address operations based on computations involving other textures
US6778181B1 (en) 2000-12-07 2004-08-17 Nvidia Corporation Graphics processing system having a virtual texturing array
US6928433B2 (en) 2001-01-05 2005-08-09 Creative Technology Ltd Automatic hierarchical categorization of music by metadata
US6646640B2 (en) 2001-02-06 2003-11-11 Sony Computer Entertainment Inc. System and method for creating real-time shadows of complex transparent objects
US6995788B2 (en) 2001-10-10 2006-02-07 Sony Computer Entertainment America Inc. System and method for camera navigation
US7162314B2 (en) 2001-03-05 2007-01-09 Microsoft Corporation Scripting solution for interactive audio generation
JP3581835B2 (en) 2001-03-14 2004-10-27 株式会社イマジカ Color conversion method and apparatus in chroma key processing
US6493858B2 (en) 2001-03-23 2002-12-10 The Board Of Trustees Of The Leland Stanford Jr. University Method and system for displaying VLSI layout data
US6741259B2 (en) 2001-03-30 2004-05-25 Webtv Networks, Inc. Applying multiple texture maps to objects in three-dimensional imaging processes
US6961055B2 (en) 2001-05-09 2005-11-01 Free Radical Design Limited Methods and apparatus for constructing virtual environments
US7085722B2 (en) 2001-05-14 2006-08-01 Sony Computer Entertainment America Inc. System and method for menu-driven voice control of characters in a game environment
US20020174036A1 (en) * 2001-05-21 2002-11-21 Coyle Timothy L. Method and system for fundraising including image transfer services
US6639594B2 (en) 2001-06-03 2003-10-28 Microsoft Corporation View-dependent image synthesis
US7006101B1 (en) 2001-06-08 2006-02-28 Nvidia Corporation Graphics API with branching capabilities
US7162716B2 (en) 2001-06-08 2007-01-09 Nvidia Corporation Software emulator for optimizing application-programmable vertex processing
US6884166B2 (en) 2001-07-13 2005-04-26 Gameaccount Limited System and method for establishing a wager for a gaming application
US6781594B2 (en) 2001-08-21 2004-08-24 Sony Computer Entertainment America Inc. Method for computing the intensity of specularly reflected light
JP2003338972A (en) 2001-09-12 2003-11-28 Fuji Photo Film Co Ltd Image processing system, imaging unit, apparatus and method for processing image and program
AU2002335799A1 (en) 2001-10-10 2003-04-22 Sony Computer Entertainment America Inc. System and method for environment mapping
US7081893B2 (en) 2001-10-10 2006-07-25 Sony Computer Entertainment America Inc. System and method for point pushing to render polygons in environments with changing levels of detail
KR100737632B1 (en) 2001-10-10 2007-07-10 소니 컴퓨터 엔터테인먼트 아메리카 인코포레이티드 System and method for dynamically loading game software for smooth game play
JP3493189B2 (en) 2001-10-11 2004-02-03 コナミ株式会社 Game progress control program, game progress control method, and video game apparatus
JP2003225469A (en) 2001-11-30 2003-08-12 Konami Co Ltd Game server device, game management method, game management program and game device
US7443401B2 (en) 2001-10-18 2008-10-28 Microsoft Corporation Multiple-level graphics processing with animation interval generation
WO2003039142A1 (en) 2001-10-29 2003-05-08 Matsushita Electric Industrial Co., Ltd. Video/audio synchronization apparatus
GB0126908D0 (en) 2001-11-09 2002-01-02 Ibm Method and system for display of activity of users
JP3732168B2 (en) 2001-12-18 2006-01-05 株式会社ソニー・コンピュータエンタテインメント Display device, display system and display method for objects in virtual world, and method for setting land price and advertising fee in virtual world where they can be used
US6657624B2 (en) 2001-12-21 2003-12-02 Silicon Graphics, Inc. System, method, and computer program product for real-time shading of computer generated images
US6753870B2 (en) 2002-01-30 2004-06-22 Sun Microsystems, Inc. Graphics system configured to switch between multiple sample buffer contexts
KR100926469B1 (en) 2002-01-31 2009-11-13 톰슨 라이센싱 Audio/video system providing variable delay and method for synchronizing a second digital signal relative to a first delayed digital signal
US7159212B2 (en) 2002-03-08 2007-01-02 Electronic Arts Inc. Systems and methods for implementing shader-driven compilation of rendering assets
US7009605B2 (en) 2002-03-20 2006-03-07 Nvidia Corporation System, method and computer program product for generating a shader program
US6912010B2 (en) 2002-04-15 2005-06-28 Tektronix, Inc. Automated lip sync error correction
US6956871B2 (en) 2002-04-19 2005-10-18 Thomson Licensing Apparatus and method for synchronization of audio and video streams
JP3690672B2 (en) 2002-05-17 2005-08-31 任天堂株式会社 Game system and game program
US6903738B2 (en) 2002-06-17 2005-06-07 Mitsubishi Electric Research Laboratories, Inc. Image-based 3D modeling rendering system
US6831641B2 (en) 2002-06-17 2004-12-14 Mitsubishi Electric Research Labs, Inc. Modeling and rendering of surface reflectance fields of 3D objects
GB0220138D0 (en) 2002-08-30 2002-10-09 Kaydara Inc Matte extraction using fragment processors
US7212248B2 (en) 2002-09-09 2007-05-01 The Directv Group, Inc. Method and apparatus for lipsync measurement and correction
US7339589B2 (en) 2002-10-24 2008-03-04 Sony Computer Entertainment America Inc. System and method for video choreography
US7180529B2 (en) 2002-12-19 2007-02-20 Eastman Kodak Company Immersive image viewing system and method
US7072792B2 (en) 2002-12-24 2006-07-04 Daniel Freifeld Racecourse lap counter and racecourse for radio controlled vehicles
US20040219976A1 (en) 2003-04-29 2004-11-04 Scott Campbell System and method for displaying video game information embedded in a dividing bar
CN100472503C (en) * 2003-05-01 2009-03-25 J·朗 Network meeting system
US7214133B2 (en) 2003-05-09 2007-05-08 Microsoft Corporation Method and apparatus for retrieving recorded races for use in a game
WO2004107749A1 (en) 2003-05-29 2004-12-09 Eat.Tv, Llc System for presentation of multimedia content
US7428000B2 (en) 2003-06-26 2008-09-23 Microsoft Corp. System and method for distributed meetings
US20080274798A1 (en) 2003-09-22 2008-11-06 Walker Digital Management, Llc Methods and systems for replaying a player's experience in a casino environment
US7382369B2 (en) 2003-10-10 2008-06-03 Microsoft Corporation Systems and methods for robust sampling for real-time relighting of objects in natural lighting environments
EP1541208A1 (en) 2003-10-22 2005-06-15 Sony Computer Entertainment America Inc. System and method for utilizing vectors in a video game
US8133115B2 (en) 2003-10-22 2012-03-13 Sony Computer Entertainment America Llc System and method for recording and displaying a graphical path in a video game
CN100456328C (en) 2003-12-19 2009-01-28 Td视觉有限公司 Three-dimensional video game system
US20050246638A1 (en) 2004-04-30 2005-11-03 Microsoft Corporation Presenting in-game tips on a video game system
US7570267B2 (en) 2004-05-03 2009-08-04 Microsoft Corporation Systems and methods for providing an enhanced graphics pipeline
US7333150B2 (en) 2004-05-14 2008-02-19 Pixel Instruments Corporation Method, system, and program product for eliminating error contribution from production switchers with internal DVEs
KR100703334B1 (en) 2004-08-20 2007-04-03 삼성전자주식회사 Apparatus and method for displaying image in mobile terminal
EP1810182A4 (en) 2004-08-31 2010-07-07 Kumar Gopalakrishnan Method and system for providing information services relevant to visual imagery
US20060071933A1 (en) 2004-10-06 2006-04-06 Sony Computer Entertainment Inc. Application binary interface for multi-pass shaders
US20060209210A1 (en) 2005-03-18 2006-09-21 Ati Technologies Inc. Automatic audio and video synchronization
CN101160580B (en) * 2005-03-31 2016-09-21 英国电讯有限公司 The virtual network of the computer of link whose users share similar interests
TWI280051B (en) 2005-06-10 2007-04-21 Coretronic Corp Determination system and method for determining the type of received video signal within the video device
US7636126B2 (en) 2005-06-22 2009-12-22 Sony Computer Entertainment Inc. Delay matching in audio/video systems
US7589723B2 (en) 2005-07-25 2009-09-15 Microsoft Corporation Real-time rendering of partially translucent objects
US20070094335A1 (en) 2005-10-20 2007-04-26 Sony Computer Entertainment Inc. Systems and methods for providing a visual indicator of magnitude
US9697230B2 (en) 2005-11-09 2017-07-04 Cxense Asa Methods and apparatus for dynamic presentation of advertising, factual, and informational content using enhanced metadata in search-driven media applications
US20070168309A1 (en) 2005-12-01 2007-07-19 Exent Technologies, Ltd. System, method and computer program product for dynamically extracting and sharing event information from an executing software application
KR100641791B1 (en) * 2006-02-14 2006-11-02 (주)올라웍스 Tagging Method and System for Digital Data
US8850316B2 (en) 2006-02-16 2014-09-30 Microsoft Corporation Presenting community and information interface concurrent to a multimedia experience that is contextually relevant on a multimedia console system
US7827289B2 (en) * 2006-02-16 2010-11-02 Dell Products, L.P. Local transmission for content sharing
US7965338B2 (en) 2006-04-06 2011-06-21 Microsoft Corporation Media player audio video synchronization
US7965859B2 (en) 2006-05-04 2011-06-21 Sony Computer Entertainment Inc. Lighting control of a user environment via a display device
US7880746B2 (en) 2006-05-04 2011-02-01 Sony Computer Entertainment Inc. Bandwidth management through lighting control of a user environment via a display device
US7896733B2 (en) 2006-09-14 2011-03-01 Nintendo Co., Ltd. Method and apparatus for providing interesting and exciting video game play using a stability/energy meter
JP2008155140A (en) 2006-12-25 2008-07-10 Nidec Sankyo Corp Tube cleaning device, liquid level gage, flowmeter, and float type flowmeter
JP4553907B2 (en) 2007-01-05 2010-09-29 任天堂株式会社 Video game system and storage medium for video game
JP5285234B2 (en) 2007-04-24 2013-09-11 任天堂株式会社 Game system, information processing system
JP5203630B2 (en) 2007-05-08 2013-06-05 株式会社タイトー GAME DEVICE, GAME DEVICE CONTROL METHOD, AND SERVER
US9731202B2 (en) 2007-06-26 2017-08-15 Gosub 60, Inc. Methods and systems for updating in-game content
US9126116B2 (en) 2007-09-05 2015-09-08 Sony Computer Entertainment America Llc Ranking of user-generated game play advice
US8100756B2 (en) 2007-09-28 2012-01-24 Microsoft Corporation Dynamic problem solving for games
US20090118015A1 (en) 2007-11-07 2009-05-07 International Business Machines Corporation Solution for enhancing the user experience of an electronic game by making user-created game data available in context during gameplay
US8566158B2 (en) 2008-01-28 2013-10-22 At&T Intellectual Property I, Lp System and method for harvesting advertising data for dynamic placement into end user data streams
US20090209337A1 (en) 2008-02-15 2009-08-20 Microsoft Corporation User-Powered Always Available Contextual Game Help
US20090227368A1 (en) 2008-03-07 2009-09-10 Arenanet, Inc. Display of notational object in an interactive online environment
JP5225008B2 (en) 2008-10-08 2013-07-03 株式会社ソニー・コンピュータエンタテインメント GAME CONTROL PROGRAM, GAME DEVICE, GAME SERVER, AND GAME CONTROL METHOD
US20110052012A1 (en) * 2009-03-31 2011-03-03 Myspace Inc. Security and Monetization Through Facial Recognition in Social Networking Websites
US10217085B2 (en) * 2009-06-22 2019-02-26 Nokia Technologies Oy Method and apparatus for determining social networking relationships
US20110013810A1 (en) * 2009-07-17 2011-01-20 Engstroem Jimmy System and method for automatic tagging of a digital image
US20110066431A1 (en) * 2009-09-15 2011-03-17 Mediatek Inc. Hand-held input apparatus and input method for inputting data to a remote receiving device
US8810684B2 (en) * 2010-04-09 2014-08-19 Apple Inc. Tagging images in a mobile communications device using a contacts list
US10786736B2 (en) 2010-05-11 2020-09-29 Sony Interactive Entertainment LLC Placement of user information in a game space
US8270684B2 (en) * 2010-07-27 2012-09-18 Google Inc. Automatic media sharing via shutter click
US8341145B2 (en) * 2010-12-20 2012-12-25 Microsoft Corporation Face recognition using social data
US9317530B2 (en) * 2011-03-29 2016-04-19 Facebook, Inc. Face recognition based on spatial and temporal proximity
US9342817B2 (en) 2011-07-07 2016-05-17 Sony Interactive Entertainment LLC Auto-creating groups for sharing photos
US20140087877A1 (en) 2012-09-27 2014-03-27 Sony Computer Entertainment Inc. Compositing interactive video game graphics with pre-recorded background video content

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080052349A1 (en) * 2006-08-27 2008-02-28 Michael Lin Methods and System for Aggregating Disparate Batches of Digital Media Files Captured During an Event for the Purpose of Inclusion into Public Collections for Sharing
US20100050090A1 (en) * 2006-09-14 2010-02-25 Freezecrowd, Inc. System and method for facilitating online social networking
US7800646B2 (en) * 2008-12-24 2010-09-21 Strands, Inc. Sporting event image capture, processing and publication
US20110066743A1 (en) * 2009-09-14 2011-03-17 Fergus Gerard Hurley Method for providing event based media streams

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140019542A1 (en) * 2003-08-20 2014-01-16 Ip Holdings, Inc. Social Networking System and Behavioral Web
US10786736B2 (en) 2010-05-11 2020-09-29 Sony Interactive Entertainment LLC Placement of user information in a game space
US20160092732A1 (en) 2014-09-29 2016-03-31 Sony Computer Entertainment Inc. Method and apparatus for recognition and matching of objects depicted in images
US11182609B2 (en) 2014-09-29 2021-11-23 Sony Interactive Entertainment Inc. Method and apparatus for recognition and matching of objects depicted in images
US11113524B2 (en) 2014-09-29 2021-09-07 Sony Interactive Entertainment Inc. Schemes for retrieving and associating content items with real-world objects using augmented reality and object recognition
US10216996B2 (en) 2014-09-29 2019-02-26 Sony Interactive Entertainment Inc. Schemes for retrieving and associating content items with real-world objects using augmented reality and object recognition
US11003906B2 (en) 2014-09-29 2021-05-11 Sony Interactive Entertainment Inc. Schemes for retrieving and associating content items with real-world objects using augmented reality and object recognition
US10943111B2 (en) 2014-09-29 2021-03-09 Sony Interactive Entertainment Inc. Method and apparatus for recognition and matching of objects depicted in images
US10831823B2 (en) 2014-11-18 2020-11-10 Huawei Technologies Co., Ltd. Photo distribution method and terminal
WO2018089379A1 (en) * 2016-11-14 2018-05-17 Leyefe, Inc. Time-sensitive image data management systems and methods for enriching social events
US20200057887A1 (en) * 2016-11-14 2020-02-20 Leyefe, Inc. Time-sensitive image data management systems and methods for enriching social events
US20180189521A1 (en) * 2017-01-05 2018-07-05 Microsoft Technology Licensing, Llc Analyzing data to determine an upload account
US10795952B2 (en) 2017-01-05 2020-10-06 Microsoft Technology Licensing, Llc Identification of documents based on location, usage patterns and content
WO2018128953A1 (en) * 2017-01-05 2018-07-12 Microsoft Technology Licensing, Llc Analyzing data to determine an upload account
US10291564B2 (en) * 2017-05-03 2019-05-14 International Business Machines Corporation Social media interaction aggregation for duplicate image posts
US10284505B2 (en) * 2017-05-03 2019-05-07 International Business Machines Corporation Social media interaction aggregation for duplicate image posts
IT201700116131A1 (en) * 2017-10-16 2018-01-16 Alessandro Chicone Process for managing content in a wireless telecommunications network
IT202000013630A1 (en) * 2020-06-08 2021-12-08 Pica Group S P A METHOD OF ACCESSING MULTIMEDIA CONTENT
WO2021250564A1 (en) * 2020-06-08 2021-12-16 Pica Group S.P.A. Method for accessing multimedia content

Also Published As

Publication number Publication date
CN103635892A (en) 2014-03-12
CN103635892B (en) 2017-10-13
US9342817B2 (en) 2016-05-17
WO2013006584A1 (en) 2013-01-10
CN107491701B (en) 2020-10-16
US20130013683A1 (en) 2013-01-10
CN107491701A (en) 2017-12-19

Similar Documents

Publication Publication Date Title
US9342817B2 (en) Auto-creating groups for sharing photos
US10733226B2 (en) Systems and methods for a scalable collaborative, real-time, graphical life-management interface
EP3641238B1 (en) Information exchange method and terminal
CN107710197B (en) Sharing images and image albums over a communication network
JP6311030B2 (en) System and method for generating a shared virtual space
US9047584B2 (en) Web-based user interface tool for social network group collaboration
WO2020012220A1 (en) In the event of selection of message, invoking camera to enabling to capture media and relating, attaching, integrating, overlay message with/on/in captured media and send to message sender
JP6607539B2 (en) System and method for multiple photo feed articles
US20150356121A1 (en) Position location-enabled, event-based, photo sharing software and service
KR101468294B1 (en) System and method for generating album based on web services dealing with social information
JP2014503091A (en) Friends and family tree for social networking
CN102089776A (en) Managing personal digital assets over multiple devices
CN113132344B (en) Broadcasting and managing call participation
US20220043559A1 (en) Interfaces for a messaging inbox
JP5052696B1 (en) Movie publishing apparatus, method, and computer program
US10614116B2 (en) Systems and methods for determining and providing event media and integrated content in connection with events associated with a social networking system
WO2015061696A1 (en) Social event system
US20160249166A1 (en) Live Content Sharing Within A Social or Non-Social Networking Environment With Rating System
US20150074073A1 (en) Apparatus, system, and method for event-identified content exchange and management
US20200210136A1 (en) Content server, information sharing system, communication control method, and non-transitory computer-readable medium
Sarvas Media content metadata and mobile picture sharing
KR101898820B1 (en) Image business card service system
US20230315685A1 (en) System and method for digital information management
US20140279273A1 (en) Method and System for Multimedia Distribution
KR20140130974A (en) Service method of e-card in mobile

Legal Events

Date Code Title Description
AS Assignment

Owner name: SONY COMPUTER ENTERTAINMENT AMERICA LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ELLIOTT, MAX;REEL/FRAME:041563/0058

Effective date: 20110822

Owner name: SONY INTERACTIVE ENTERTAINMENT AMERICA LLC, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:SONY COMPUTER ENTERTAINMENT AMERICA LLC;REEL/FRAME:041992/0309

Effective date: 20160331

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: SONY INTERACTIVE ENTERTAINMENT LLC, CALIFORNIA

Free format text: MERGER;ASSIGNOR:SONY INTERACTIVE ENTERTAINMENT AMERICA LLC;REEL/FRAME:053323/0567

Effective date: 20180315