US20080297599A1 - Video data storage, search, and retrieval using meta-data and attribute data in a video surveillance system - Google Patents


Info

Publication number
US20080297599A1
Authority
US
United States
Prior art keywords
data
meta
video
attribute
weights
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US11/754,335
Other versions
US7460149B1
Inventor
John J. Donovan
Daniar Hussain
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TIERRA VISTA GROUP LLC
Original Assignee
KD Secure LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by KD Secure LLC
Priority to US11/754,335
Assigned to KD SECURE, LLC (assignment of assignors interest; see document for details). Assignors: SECURENET HOLDINGS, LLC
Assigned to SECURENET HOLDINGS, LLC (assignment of assignors interest; see document for details). Assignors: DONOVAN, JOHN; HUSSAIN, DANIAR
Application granted
Publication of US7460149B1
Publication of US20080297599A1
Assigned to TIERRA VISTA GROUP, LLC (assignment of assignors interest; see document for details). Assignors: KD SECURE, LLC
Assigned to SECURENET SOLUTIONS GROUP, LLC (corrective assignment to correct the assignee previously recorded on reel 032948, frame 0401; assignor confirms the correct assignee is SECURENET SOLUTIONS GROUP, LLC). Assignors: KD SECURE, LLC
Legal status: Expired - Fee Related
Anticipated expiration

Classifications

    • H04N 5/76: Television signal recording
    • G06F 16/7343: Querying video data using a query language or query format
    • G06F 16/78: Retrieval of video data characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/7837: Retrieval using metadata automatically derived from objects detected or recognised in the video content
    • G06F 16/784: Retrieval where the detected or recognised objects are people
    • G06F 16/786: Retrieval using low-level visual features of the video content, e.g. object motion or camera motion
    • G06F 16/7867: Retrieval using manually generated information, e.g. tags, keywords, comments, time, location and usage information, user ratings
    • G06F 16/787: Retrieval using geographical or spatial information, e.g. location
    • G11B 27/105: Indexing and addressing via programmed access in sequence to addressed parts of tracks of operating discs
    • G11B 27/329: Indexing via a table of contents on a disc [VTOC]
    • H04N 7/181: Closed-circuit television [CCTV] systems receiving images from a plurality of remote sources
    • H04N 2005/91364: Television signal processing for copy protection, the video signal being scrambled
    • H04N 5/765: Interface circuits between an apparatus for recording and another apparatus
    • H04N 5/772: Interface circuits where the recording apparatus and the television camera are in the same enclosure
    • H04N 5/781: Television signal recording using magnetic recording on disks or drums
    • H04N 9/8205: Recording of colour television signals involving the multiplexing of an additional signal and the colour video signal

Definitions

  • The present invention is generally related to video data storage in security and surveillance systems and applications. More specifically, this invention relates to the storage of video data and associated meta-data and attribute data, and the subsequent search and retrieval of the video data using the meta-data.
  • The present invention may be used to store, search, and retrieve video data and meta-data that has been obtained from surveillance cameras in various security and safety applications.
  • The present invention may be used to help fight crime, detect and possibly prevent terrorist activity, and help ensure safety procedures are followed.
  • A VHS tape must be rewound multiple times to search for a particular occurrence, which can damage the tape by stretching it and scraping its polymer coating.
  • Digital video data from digital cameras may be stored in digital, random-access media, such as disk.
  • Unfortunately, the vast amount of data generated by digital video cameras is also difficult to store, search, and retrieve from disk.
  • A typical 3 Megapixel digital surveillance camera generates images of approximately 280 KB per frame. If this camera were running at 5 frames per second, it would generate approximately 60 GB per day. If an organization wanted to archive the data for one month, it would take approximately 1.8 TB, and for one year, approximately 22 TB. In a typical application having 100 surveillance cameras around a particular facility, this translates into approximately 6 TB per day, 180 TB per month, or over 2,000 TB per year.
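The arithmetic above can be sketched as a small back-of-envelope calculator. Note that the raw, uncompressed-stream figure for a 280 KB frame at 5 frames per second comes to roughly 121 GB per day per camera; a per-day total nearer the 60 GB cited above additionally assumes compression or a partial recording duty cycle. The function name is illustrative.

```python
# Back-of-envelope calculator for the storage figures above. These are raw,
# uncompressed-stream numbers; real totals depend on compression and on how
# much of the day the camera actually records.
def daily_storage_bytes(frame_bytes: int, fps: float,
                        seconds_per_day: int = 86_400) -> int:
    """Raw bytes generated per day by one camera streaming continuously."""
    return int(frame_bytes * fps * seconds_per_day)

per_camera_day = daily_storage_bytes(280_000, 5)   # one 3 MP camera at 5 fps
facility_day = per_camera_day * 100                # 100-camera facility
print(f"{per_camera_day / 1e9:.0f} GB/day per camera")
print(f"{facility_day / 1e12:.1f} TB/day for 100 cameras")
```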
  • Another drawback with conventional video storage is that all video data is weighted equally. For example, motion detected in an ammunition storage area of an army base would be weighted equally to motion detected in the basement of a dining hall on the same base. In addition, video data from an old, low-quality camera would receive the same weight as video data from a new, high-quality camera.
  • Another drawback with conventional video storage is the inability to audit the video data, for example, to determine who has viewed it.
  • Tips, that is, information from informants, are increasingly received as video clips captured at the scene of a crime by well-meaning citizens using video phones (cell phones with integrated cameras).
  • What is also needed is a method for monitoring and auditing the stored video data as well as live video data. What is also needed is a method for intelligent alerting of appropriate individuals based on stored video data as well as the live video data.
  • The present invention is a method, a system, and an apparatus for video data storage, search, auditing, and retrieval.
  • As used here, "meta-data" shall mean data about events that have been captured and detected in the video.
  • For example, meta-data could include the number of people detected in a video, motion detected, loud noises detected, etc.
  • "Attribute data" shall mean data about the data, such as the integrity of the data, the source of the data, the reliability of the data, and so on. For example, maintenance attribute data would have a different weight for a camera that was not maintained in the last 5 years compared to a camera that is regularly maintained every 6 months.
  • Attribute data includes "attributes," which are attributes of the data, and their associated "weights" or "weight functions," which are probabilistic weights attached to the video data. For example, an attribute would be "age of the video data," and an associated weight function would be a function decreasing with age. Some weights may also change with external events, such as maintenance, time, and so on. For example, a weight associated with a camera may go down if the camera was not maintained for a period of time.
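The attribute-and-weight scheme above can be sketched with simple weight functions. The particular formulas below (exponential decay with age, a floor-limited maintenance penalty) are assumptions for illustration; the patent describes weights that decrease with age or lapse in maintenance but specifies no formula.

```python
import math

# Sketch of attribute weights as weight functions. The decay and penalty
# formulas below are illustrative assumptions, not taken from the patent.
def age_weight(age_days: float, half_life_days: float = 180.0) -> float:
    """Weight for the "age of the video data" attribute: decays with age."""
    return math.exp(-math.log(2) * age_days / half_life_days)

def maintenance_weight(days_since_service: float,
                       service_interval_days: float = 180.0) -> float:
    """Full weight while within the service interval, degraded after it."""
    if days_since_service <= service_interval_days:
        return 1.0
    return max(0.1, service_interval_days / days_since_service)

# A camera serviced every 6 months keeps weight 1.0, while one left
# unmaintained for 5 years bottoms out at the 0.1 floor.
```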
  • One embodiment of the present invention stores meta-data, indexed to the video data, in order to facilitate search and retrieval.
  • The meta-data may be generated by one or more video detection components, such as a motion detection module or a gunshot detection module, or may be generated by a human operator, such as a security guard.
  • In one embodiment, the meta-data is added approximately contemporaneously with the capture and storage of the video data. In an alternate embodiment, the meta-data is added subsequent to the capture and storage of the video data.
  • The video data may be stored in a video data storage module (a logical unit).
  • The video data storage module may be organized as a hierarchical storage module, in which data that is less frequently used is migrated to slower and/or less expensive storage media.
  • The meta-data may be stored in a meta-data storage module (a logical unit), which may be logically part of the video data storage module, or may be logically separate from it.
  • Attribute data, including the weights associated with the meta-data, may be stored in an attribute storage module (another logical unit).
  • The video data storage module, the meta-data storage module, and the attribute data storage module may be located on the same physical media, or they may be located on different physical media.
  • The video data storage module, the meta-data storage module, and the attribute storage module may be stored on hard disk, optical disk, magnetic disk, flash memory, tape memory, a RAID array, NAS (Network Attached Storage), a SAN (Storage Area Network), or any other physical or virtual storage media.
  • One embodiment of the present invention is a method for storing video data (“the method”).
  • This method includes the following steps.
  • Video data is received from one or more video sources, such as network-attached IP cameras.
  • The video data is stored in a video storage module, which could be located on a RAID disk or tape.
  • The meta-data, indexed to the video data, is stored in a meta-data storage module, which could be located on the same disk as the video data, or on a different disk.
  • Another embodiment of the present invention is the method described above that also includes storing attribute data, which is either entered manually or determined heuristically.
  • Another embodiment of the present invention is the method described above that also includes the step of performing video analysis on the video data from the one or more video sources to generate the meta-data.
  • The video analysis could include motion detection, gunshot detection, or any other video/image analysis function or component that can generate meta-data.
  • Various video detection components are described below.
  • Yet another embodiment of the present invention is the method described above that also includes the steps of assigning a time-stamp to the meta-data, the time-stamp providing an index into the video data; and storing the meta-data with the time-stamp in the meta-data storage module.
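The time-stamp indexing above can be sketched as a store that keeps meta-data sorted by time-stamp, so a time range maps directly back into the video data. Class, method, and field names here are illustrative, not from the patent.

```python
import bisect

class MetaDataStore:
    """Meta-data records kept sorted by time-stamp; the time-stamp serves
    as the index into the corresponding video data."""

    def __init__(self):
        self._times = []      # sorted time-stamps (seconds since epoch)
        self._records = []    # records parallel to self._times

    def add(self, timestamp, camera_id, event):
        """Insert one meta-data record, preserving time-stamp order."""
        i = bisect.bisect_right(self._times, timestamp)
        self._times.insert(i, timestamp)
        self._records.insert(i, {"timestamp": timestamp,
                                 "camera": camera_id,
                                 "event": event})

    def query(self, start, end):
        """Return meta-data records whose time-stamps fall in [start, end]."""
        lo = bisect.bisect_left(self._times, start)
        hi = bisect.bisect_right(self._times, end)
        return self._records[lo:hi]
```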
  • Yet another embodiment of the present invention is the method described above that also includes the steps of receiving input data from one or more data sources, which could be legacy systems; generating additional meta-data based on one or more functions of the input data; and storing the additional meta-data, indexed to the video data, in the meta-data storage module.
  • Yet another embodiment of the present invention is the method described above that also includes the steps of generating additional meta-data based on an intersection of one or more functions of the video data from two or more video sources; and storing the additional meta-data, indexed to the video data, in the meta-data storage module.
  • Yet another embodiment of the present invention is the method described above that also includes the step of providing additional meta-data generated by a human operator; and storing the additional human generated meta-data, indexed to the video data, in the meta-data storage module.
  • Yet another embodiment of the present invention is the method described above that also includes the steps of receiving historical video data from the video storage module; evaluating a set of rules based on the historical video data and the generated meta-data; and performing one or more actions based on the evaluation of the set of rules.
  • Yet another embodiment of the present invention is the method described above that also includes the steps of retrieving historical meta-data from the meta-data storage module, evaluating a set of rules based on the historical meta-data and the generated meta-data, and performing one or more actions based on the evaluation of the set of rules.
  • Yet another embodiment of the present invention is the method described above where the one or more actions include an alert.
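The rule-evaluation embodiments above (historical meta-data plus newly generated meta-data in, actions such as alerts out) can be sketched as follows. The rule structure and the example threshold are illustrative assumptions, not taken from the patent.

```python
# Sketch of evaluating a set of rules over historical and newly generated
# meta-data, returning the actions that fire.
def evaluate_rules(historical_events, new_event, rules):
    """Return the actions fired by rules over past and present meta-data."""
    return [rule["action"] for rule in rules
            if rule["condition"](historical_events, new_event)]

def gunshot_after_motion(history, event):
    """Fire when a gunshot follows 3+ motion events on one camera in an hour."""
    if event["event"] != "gunshot":
        return False
    recent_motion = [e for e in history
                     if e["camera"] == event["camera"]
                     and e["event"] == "motion"
                     and event["timestamp"] - e["timestamp"] <= 3600]
    return len(recent_motion) >= 3

rules = [{"condition": gunshot_after_motion, "action": "alert security officer"}]
```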
  • Yet another embodiment of the present invention is the method described above where the video storage module is a hierarchical storage module.
  • Meta-data may be added automatically by various sensory devices or video detection components. For example, a motion detection component generates meta-data that is indexed to the video data where the motion was detected. In another example, a gunshot detection component generates meta-data that is indexed to the video data where the gunshot was detected. The meta-data may also be generated by a human operator.
  • The meta-data detection components are configurable by a system administrator.
  • The system administrator may customize the types of video detection components that are activated and the meta-data that is generated and recorded.
  • A human operator may also add meta-data. For example, a human operator may add meta-data indicating, “suspicious activity was observed at this location.” In another example, a human operator may transcribe the voice associated with the video data, and the transcriptions serve as meta-data associated with the video data.
  • Attribute data is also stored and associated with the video data.
  • Attribute data is information about the video data, such as its source, reliability, etc.
  • One type of attribute data is the identity of the camera from which the video data was acquired.
  • Another example of attribute data is the quality of the camera that was used to acquire the video data (e.g., a 3 Megapixel camera would be weighted higher than a VGA camera for purposes of face recognition).
  • Another example of attribute data is a camera's historical pattern of susceptibility to tampering.
  • One embodiment of the present invention provides a user interface for a system administrator to enter and customize the attribute data.
  • A particular user of the present invention would customize the present system by entering weights that are associated with attribute data. For example, the system administrator would select the attribute data that corresponds with each camera.
  • A system administrator may assign a lower attribute weight to a low-hanging camera that may be easily tampered with, and a higher attribute weight to a high-hanging camera that is difficult to tamper with.
  • One embodiment of the present invention automatically upgrades or downgrades the weights associated with attributes, for example, decreasing a weight as a camera ages.
  • Another embodiment of the present invention is a user interface that allows for updating the attributes and associated weights.
  • Another embodiment of the present invention heuristically learns and updates the weights. For example, it may learn that certain cameras are degrading in their reliability.
  • In one embodiment, video data is captured and stored in a remote location.
  • The video data may be sent via a network, such as the Internet, or a dedicated fiber optic line, to a remote, secure location. If the local copy of the data is damaged, destroyed, or tampered with, the copy in the remote location may be accessed and analyzed. All video data may be automatically archived to the remote location.
  • In another embodiment, video data is archived in a hierarchical storage module.
  • A hierarchy of storage modules with varying speeds, locations, and reliabilities is provided. For example, a high-reliability, fast, local RAID disk is provided. In addition, a lower-reliability, slower tape drive may also be provided. Additionally, an off-site storage module, which may be connected by a dedicated fiber optic line or via the Internet, may also be provided.
  • Video data may be cascaded through the storage hierarchy based on such factors as time, access frequency, and its associated meta-data. For example, video data that is older than 30 days may be moved from the RAID disk to the tape drive.
  • Video data that has been accessed frequently, even though it may be older than 30 days, may be kept on the RAID disk.
  • Video data may also be cascaded through the storage hierarchy based on its associated meta-data. That is, video data whose meta-data indicates a gunshot was detected will be stored in more reliable, faster storage no matter how old the data is or how little it has been accessed. Video data whose meta-data indicates that virtually nothing happened may be immediately moved to tape or off-site storage.
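The cascade policy above can be sketched as a single decision function over age, access frequency, and associated meta-data. The tier names and thresholds are illustrative assumptions; only the 30-day example comes from the text.

```python
# Sketch of cascading video data through the storage hierarchy by age,
# access frequency, and meta-data. Tier names and thresholds are
# illustrative assumptions.
def choose_tier(age_days: int, accesses_last_30d: int, meta_events: list) -> str:
    """Pick a storage tier for a video segment."""
    if "gunshot" in meta_events:
        return "raid"        # high-value meta-data: fast, reliable storage
    if age_days <= 30 or accesses_last_30d > 10:
        return "raid"        # fresh, or frequently accessed despite age
    if meta_events:
        return "tape"        # something happened, but the data has aged out
    return "offsite"         # virtually nothing happened: archive remotely
```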
  • One embodiment of the present invention provides an audit trail for the data.
  • An audit trail is generated, indicating who has viewed or accessed the data, and when.
  • An audit trail is also generated indicating which cameras the video data was captured with, and whether there are any unusual circumstances associated with those cameras, for example, weather conditions, power outages, or tampering.
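The audit trail above can be sketched as an append-only log of access entries. Field and function names are illustrative assumptions, not from the patent.

```python
import time

# Sketch of an append-only audit trail recording who accessed which video,
# when, and under what camera circumstances. Field names are illustrative.
audit_trail = []

def record_access(user: str, video_id: str, camera_id: str,
                  circumstances=None) -> dict:
    """Append one audit entry and return it."""
    entry = {
        "user": user,
        "video_id": video_id,
        "camera_id": camera_id,
        "circumstances": circumstances or [],  # e.g. ["power outage"]
        "accessed_at": time.time(),
    }
    audit_trail.append(entry)
    return entry

def accesses_by(user: str) -> list:
    """All audit entries for one user: who viewed what, and when."""
    return [e for e in audit_trail if e["user"] == user]
```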
  • One embodiment of the present invention provides data integrity and security by encrypting the video data, and only allowing authorized individuals access to the encryption key.
  • Video tips may be video clips recorded by video phones (cell phones with integrated cameras), digital cameras, handheld video cameras, etc. that are sent in by well-meaning citizens.
  • FIG. 1 illustrates a system architecture for storage and retrieval of video data according to one embodiment of the present invention.
  • FIG. 2 shows an illustrative meta-data table according to one embodiment of the present invention.
  • FIG. 3 shows an illustrative attribute data table in accordance with one embodiment of the present invention.
  • FIG. 4 illustrates a mathematical representation of an illustrative operation of the present invention.
  • FIG. 5 illustrates a system architecture for intelligent alerting based on meta-data, according to another embodiment of the present invention.
  • FIG. 6 illustrates a software architecture used with one embodiment of the present invention.
  • FIG. 7 illustrates a hardware architecture used with one embodiment of the present invention.
  • FIG. 8 illustrates a flowchart of a process for storing video data and associated meta-data and attribute data according to one embodiment of the present invention.
  • FIG. 9 illustrates a flowchart of a process for retrieving video data and associated meta-data and attribute data according to another embodiment of the present invention.
  • FIG. 10 illustrates a flowchart of a process for intelligent alerting based on past and present meta-data according to yet another embodiment of the present invention.
  • FIG. 11 illustrates another example of a hardware architecture according to one embodiment of the present invention.
  • FIG. 12 illustrates another example of a software architecture according to one embodiment of the present invention.
  • The present invention is a system, a method, and an apparatus for storing, searching, and retrieving video data.
  • The video data is received from one or more cameras, which could be digital IP cameras.
  • Meta-data is generated by one or more detection components, or manually entered by a human operator.
  • The video data and the corresponding meta-data, indexed to the video data, are stored.
  • Attribute data, which relates to such things as the reliability of the meta-data and the video data, and its associated weights, or weight functions, is also stored. Attribute data may be determined by a system administrator, and/or determined heuristically.
  • FIG. 1 shows an example of a system architecture 100 of one embodiment of the present invention.
  • One or more cameras 104, 106, 108, or other video capture devices capture one or more streams of video data.
  • One or more additional sensory devices such as temperature probe 110 , pressure probe 112 , and other sensory device 114 provide sensory data that complements the video data.
  • A hierarchical storage manager 122, which could be software running on a dedicated server, stores, or records, the video data to one or more storage media 124, 126, 128.
  • Storage media 128 may be a remote storage media connected by transmission media 127 .
  • Transmission media 127 may be a dedicated fiber optic line or a public network such as the Internet.
  • Storage media 124, 126, and 128 may be hard disk, magnetic tape, and the like.
  • The cameras 104-108 and other sensory devices 110-114 may themselves generate meta-data in hardware.
  • For example, digital surveillance cameras generate motion meta-data that indicates when motion was detected in a particular field of view of the camera.
  • The meta-data server 116 may also process video data in software, for example by using detection component(s) 118, and generate meta-data corresponding to the video data.
  • For example, a people-counting detection component may count the number of people detected in a video stream, and generate meta-data indicating the number of people detected.
  • The meta-data server 116 stores the meta-data in meta-data storage module, or area, 120.
  • Attribute data, which is information about the meta-data, is also stored.
  • Attribute data may include such things as the reliability of the meta-data, the reliability of the source of the meta-data, the age of the meta-data, and so on.
  • An audit trail, containing information about who has accessed the video data, how frequently, and so on, is stored in audit trail storage area 131. Each time someone accesses or views a video file from the video storage module, audit information is stored in audit storage module 131.
  • Access control storage area 132 stores access rights and privileges. Access to view the video data is only given to those authorized individuals who are listed in the access control storage area. Access may be restricted based on the video data, or its associated meta-data. For example, any security officer may be able to view the video data taken at night, but only security officers assigned to investigate a particular case may be given access to the video data where a gunshot was detected.
  • Access to attribute data may also be restricted. For example, only certain high-level security officers may have access to high quality video data from behind a bank teller that may show checks and amounts, whereas any security officer may see the video data from the bank's lobby. Access may also be modulated based on the quality of the video data. For example, anybody may be able to login and view a VGA resolution view of the lobby of their building, but only the security officer can see the mega-pixel resolution video.
  • the access control may be implemented using an authentication scheme provided by the operating system, such as Microsoft Active Directory™.
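As a sketch of how such meta-data-based access control might be implemented in software (the role names, hierarchy, and `can_view` helper are illustrative assumptions, not from the specification):

```python
# Hypothetical sketch: restricting video access by the event's
# associated meta-data, as in the gunshot/night-motion example above.

def can_view(user_roles, event):
    """Return True if any of the user's roles meets the event's
    required access level (cf. the access privileges of column 214)."""
    required = event.get("access_level", "officer")
    # A gunshot event demands a case-investigator role; ordinary
    # motion events only require a generic officer role.
    hierarchy = {"officer": 0, "investigator": 1, "admin": 2}
    user_max = max(hierarchy.get(r, -1) for r in user_roles)
    return user_max >= hierarchy[required]

motion = {"event": "motion", "access_level": "officer"}
gunshot = {"event": "gunshot", "access_level": "investigator"}

print(can_view(["officer"], motion))        # True
print(can_view(["officer"], gunshot))       # False
print(can_view(["investigator"], gunshot))  # True
```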
  • Cameras used in the present invention may be digital IP cameras, digital PC cameras, web-cams, analog cameras, cameras attached to camera servers, analog cameras attached to DVRs, etc. Any camera device is within the scope of the present invention, as long as the camera device can capture video. Some cameras may have an integrated microphone; alternatively, a separate microphone may be used to capture audio data along with video data.
  • the terms “video,” “video data,” “video source,” etc. are meant to include video without audio, as well as video with interlaced audio (audiovisual information). Of course, it is to be understood that the present invention may also be implemented using audio data without accompanying video data by replacing cameras with microphones.
  • the system diagram shown in FIG. 1 is illustrative of only one implementation of the present invention.
  • the meta-data server and the hierarchical storage module may be on dedicated servers, as shown in FIG. 1 for clarity.
  • a common server may provide the functionality of the meta-data server and the hierarchical storage module.
  • the meta-data and the video data may be stored on different media.
  • the meta-data and the video data may be stored on the same physical storage media.
  • the attribute data is shown stored in a separate attribute data storage area.
  • the attribute data may be stored on a dedicated storage area, as illustrated, or may be stored on the same storage as the meta-data and/or the video data.
  • FIG. 2 shows a table 200 of possible meta-data that may be stored.
  • Column 202 corresponds to events that were either generated by sensory devices, or by the meta-data server of FIG. 1 .
  • Illustrative events could be motion detected, gunshot detected, number of people in an area exceeds a threshold, speed of an object in a given area exceeds a threshold, and similar events. The sensory devices themselves, the meta-data server, or both, could generate these events, as described previously.
  • Column 204 represents locations corresponding to those events. For example, locations could be the camera names or locations, such as “Camera 1,” “Parking Lot,” “Lobby,” etc.
  • Column 206 represents the dates the events occurred. For example, a motion event was detected on May 15, 2007.
  • Columns 208 and 210 represent the start and end times of the events, and are one form of indices into the video data. For example, a motion event occurred in Camera 1 on May 15, 2007 from 10:00 AM through 10:23 AM.
  • Column 212 provides a pointer, or an index, to the video data that corresponds to the occurrence of that event.
  • the first event is stored by the hierarchical storage module on a local disk, while the second event is stored on a remote disk, for example, a NAS or a disk attached to a server.
  • Column 214 stores access privileges associated with the event. For example, events where gunshots were detected may have a higher security level than ordinary motion events.
  • meta-data is indexed to the video data, and stored in the meta-data storage module.
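A minimal sketch of the meta-data table of FIG. 2 as a relational table, using Python's built-in sqlite3; the schema, table name, and pointer format are assumptions for illustration:

```python
import sqlite3

# Columns mirror FIG. 2: event (202), location (204), date (206),
# start/end times (208, 210), video pointer (212), access level (214).
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE metadata (
        event        TEXT,
        location     TEXT,
        event_date   TEXT,
        start_time   TEXT,
        end_time     TEXT,
        video_ptr    TEXT,   -- index into the stored video data
        access_level TEXT
    )""")
con.execute(
    "INSERT INTO metadata VALUES "
    "('motion', 'Camera 1', '2007-05-15', '10:00', '10:23', "
    "'local:/video/cam1/20070515.mp4#600', 'officer')")

# Searching the meta-data first, then following video_ptr into the
# video store, is what makes retrieval fast.
rows = con.execute(
    "SELECT video_ptr FROM metadata WHERE event = 'motion'").fetchall()
print(rows[0][0])
```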
  • the meta-data may be generated by one or more sensory devices, including the cameras themselves, or may be entered manually by a human operator, such as a security guard.
  • the present invention provides a user interface by which a human operator may enter meta-data.
  • a user interface is provided for a security officer to monitor one or more cameras.
  • the cameras automatically generate meta-data, as noted above.
  • the human operator may add meta-data manually. For example, if the human operator observes suspicious activity going on in a particular camera, the human operator may add meta-data corresponding to suspicious activity, and the meta-data server in the meta-data storage module would store the meta-data.
  • the human operator may select from a set of possible meta-data tags, as well as add “free-form” meta-data by typing into a text-entry box. For example, a human operator may transcribe speech in the video data. The transcribed speech serves as meta-data to the video data. After the video data has been tagged by meta-data, it is possible to use the present system to search for keywords, such as all the times when a judge said “Order, Order” in a courtroom surveillance camera.
  • the present invention also provides a user interface by which a human operator may enter attribute data.
  • Attribute data is information about the video data and its associated meta-data, such as its source, reliability, etc.
  • one type of attribute data is the camera that the video data was acquired from.
  • Another example of attribute data is the quality of the camera that was used to acquire the video data (e.g., a 3 Megapixel camera would be weighted higher than a VGA camera for purposes of face recognition).
  • another example of attribute data is the historical pattern of a camera's susceptibility to tampering.
  • other examples of attribute data include, but are not limited to, the time the camera was repaired or installed, reliability of power to the camera, reliability of transmission, bandwidth, susceptibility to noise, interference, overexposure, weather conditions, age of the camera, and type of camera (night, IR, etc.).
  • FIG. 3 illustrates an example of attribute data 300 , which includes attributes about the meta-data and their associated weights, or weighing functions.
  • Column 302 shows various sensory devices and column 304 shows associated attributes.
  • the weights, or weighing functions, associated with the attributes are shown in column 306 .
  • Column 308 indicates whether the weight is dynamic, that is, whether the weight changes with time, external events, and so on.
  • column 310 indicates access privileges of individuals authorized to change the attribute data.
  • Each attribute determines a weight, which could be a constant, or the weight could be a weighing function of the attribute. For example, consider camera 1, which is not designed to detect gunshots but has a low-quality integrated microphone, so a gunshot detection component may use its audio to classify loud sounds as gunshots. When a motion event is detected on such a camera, it would be assigned a high weight (for example, 0.85 or 85%).
  • gunshot detector 1 may have the opposite attribute-weight profile, in that motion events from the gunshot detector may be weighted low (say, 0.15 or 15%) while gunshot events may be weighted high (say, 0.70 or 70%).
  • Camera 1 may have an age attribute, indicating the age of the camera, and an associated weighting function that weights any data from the camera with a function that decreases with the age of the camera.
  • the time since the last maintenance of the camera may also serve to generate a weight. This could be, for example, a step-function dropping to zero after 1 year of no maintenance on the camera.
  • the frequency of failure may also serve to weigh any data from the camera, again using a function that weights data lower from a camera that has a high frequency of failure.
  • the resolution of the camera may also serve as attribute data to assign a weight to the data; data from a high-resolution camera would be assigned a higher weight than data from a lower resolution camera.
  • attribute data and associated weights that are tied to particular meta-data includes weights assigned to meta-data indicating the number of people in a particular area.
  • This meta-data may be assigned a high weight (0.80) if it comes from camera 2, which may have high resolution, high frame-rate, and other qualities that make it amenable to high reliability for people counting purposes. Conversely, if the same meta-data comes from camera 3, which has low resolution, low frame-rate, or other qualities that make it unreliable when it comes to counting people, the meta-data may be assigned a low weight (0.40).
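The attribute-weight examples above might be sketched as simple weighing functions; the exact functional forms below are assumptions, since the text gives only the qualitative behavior (decay with age, a maintenance step-function, and the 0.80/0.40 people-counting weights):

```python
# Assumed weighing functions for the attributes discussed above.

def age_weight(age_years, half_life=5.0):
    """Weight that decreases with camera age (half-life is assumed)."""
    return 0.5 ** (age_years / half_life)

def maintenance_weight(years_since_maintenance):
    """Step-function: weight drops to zero after 1 year of no maintenance."""
    return 1.0 if years_since_maintenance <= 1.0 else 0.0

def people_count_weight(resolution_mp, frame_rate):
    """High-resolution, high-frame-rate cameras are reliable people
    counters (0.80, like camera 2); others are not (0.40, like camera 3)."""
    return 0.80 if resolution_mp >= 2.0 and frame_rate >= 15 else 0.40

print(maintenance_weight(0.5))        # 1.0
print(maintenance_weight(2.0))        # 0.0
print(people_count_weight(3.0, 30))   # 0.8
print(people_count_weight(0.3, 5))    # 0.4
```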
  • a system administrator may enter and customize the attribute data.
  • a system administrator would customize the present system by entering weights that are associated with attribute data. For example, the system administrator would select the attribute data that corresponds with each camera.
  • a system administrator may assign a lower attribute weight to a low-hanging camera that may be easily tampered with, and a higher attribute weight to a high-hanging camera that is difficult to tamper with.
  • the system administrator would customize the attribute data for different image qualities. For example, the system administrator would select the weights associated with video data, and the corresponding meta-data, associated with different resolutions of cameras. That is, a higher resolution camera and its associated meta-data would be weighted higher than a lower resolution camera, and the system administrator would select the relative weights.
  • attribute data that the system administrator may set would be based on the past evidence of usefulness of video data coming from each camera. For example, a camera that has been useful in the past for detecting, preventing, or prosecuting crimes would be assigned a higher weight by the system administrator using this user interface.
  • the meta-data may be used to significantly enhance search and retrieval of the video data. That is, in order to perform a search of the video data, the meta-data may be searched first, and the video data may be indexed by the meta-data.
  • meta-data was recorded in the meta-data storage module during detection of a motion event in a particular camera. If at a later time it were desired to locate all places in the video data where motion was detected, a database query would be performed on the meta-data table to retrieve all events where motion was detected.
  • the pointers to the video data and the indices into the video data would provide a mechanism by which to retrieve the video data that corresponds to those occurrences of motion.
  • FIG. 4 shows a possible set-theoretic explanation of the operation of the present invention.
  • Sets of video data V 1 , V 2 , . . . , V i are shown in FIG. 4 ; sets V 1 (element 402 ) and V 2 (element 428 ) represent video data from camera 1 and camera 2, respectively, and so on.
  • Each set of video data V i has subsets of video data, for example, subsets for a particular date range, for a particular time range, for a particular event, etc.
  • video set 402 has subsets of video data identified as elements 404 , 406 , 408 , and 410 in FIG. 4 .
  • Each set of video data V i has a corresponding set of meta-data M i associated with it.
  • Each element in the set of meta-data M i has an index, or a pointer, to a corresponding portion of the video data V i .
  • meta-data set M 1 , shown as element 412 in FIG. 4 , has corresponding subsets of meta-data, shown as elements 414 , 416 , 418 , and 420 .
  • Each subset of meta-data is indexed, or points to, a corresponding subset of video data.
  • subset 414 of meta-data M 1 is indexed, or points to, subset 406 of video data V 1 from camera 1 (not shown).
  • Note that a one-to-one relationship between video data and meta-data is illustrated in FIG. 4 for clarity.
  • the relationship between video-data and meta-data is not restricted to being one-to-one.
  • the relationship may be one-to-many, many-to-one, as well as many-to-many.
  • sets W i of attribute weight data are weight vectors associated with each set of meta-data M i for camera i (not shown).
  • the sets W i of attribute weight data are sets of vectors w i,j which represent weights associated with subsets of the meta-data M i .
  • weight vector w i,j represented as element 424 , represents the weights associated with meta-data subset 416 .
  • the weight vectors w i,j may be n-dimensional vectors representing the weights in one of a number of dimensions, each dimension representing a weight in a particular attribute of the data.
  • a 2-dimensional weight vector [w 11 , w 12 ] may represent the attribute weights associated with the reliability of a particular video camera for both motion detection reliability as well as gunshot detection reliability.
  • One camera may have high motion detection reliability and low gunshot detection reliability, while another camera may have high gunshot detection reliability and low motion detection reliability.
  • the attribute weight vectors w ij may be arbitrarily fine-grained with respect to subsets of the meta-data and subsets of the video data.
  • attribute weight vectors w ij are constant over large subsets of the meta-data and the video data, and may have large discontinuities between subsets.
  • gunshot detection devices may have a very low motion detection reliability weight, and very high gunshot detection reliability, and vice versa for typical motion detection cameras.
  • the set-theoretic description has been shown here for ease of understanding and explanation of the present invention.
  • the meta-data and video data may or may not be stored as sets; the data may be stored in matrices, tables, relational databases, etc.
  • the set description is shown for clarity only.
  • the present invention is not limited to this particular mathematical representation, and one of ordinary skill will recognize numerous alternative and equivalent mathematical representations of the present invention.
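One minimal way to sketch the set-theoretic indexing of FIG. 4 in code; the data structures and segment names are illustrative, not part of the specification:

```python
# Video data subsets for camera 1 (set V1, element 402), keyed by
# segment id.
V1 = {"seg_a": "frames 0-999", "seg_b": "frames 1000-1999"}

# Meta-data set M1 (element 412): each entry indexes a subset of V1,
# the way subset 414 points to subset 406 in FIG. 4.
M1 = [
    {"event": "motion",  "video_ref": "seg_b"},
    {"event": "gunshot", "video_ref": "seg_a"},
]

# Retrieving video for all motion events means following the indices
# from the meta-data into the video data.
motion_video = [V1[m["video_ref"]] for m in M1 if m["event"] == "motion"]
print(motion_video)  # ['frames 1000-1999']
```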
  • Query (1) would retrieve all events where motion was detected. In the set-theoretic notation described above, the query (1) would correspond to:
  • Event searches may be restricted by particular locations or date-ranges. For example, a security analyst may only wish to search a particular camera, or location, where 3 or more people were detected, for example:
  • Here, V 1 denotes the video data from camera 1.
  • the security analyst may also restrict searches by date and/or time. For example, the security analyst may only wish to search a particular date range where 3 or more people were detected, for example:
  • Query (12) may be represented in set-theoretic notation as:
  • Combinations of events may also be searched.
  • a security analyst may want to search historical video data for all occurrences where a gunshot was detected at the same time as 3 or more people were detected in the video frame.
  • a possible query to accomplish this would be:
  • Query (14) may be represented in set theoretic notation as:
  • Events may also be correlated and analyzed across multiple cameras, or multiple locations. For example, a security analyst may want to see all events where 1 or more people were detected in a particular lobby, and a gunshot was heard in a parking lot camera. To perform such a search, the security analyst could search by:
  • Query (16) may be interpreted in set-theoretic notation as:
  • the security analyst is not required to use a query language.
  • a query language may be used for sophisticated searches.
  • a user interface is provided for the security analyst, which allows the officer to select, using a visual tool, the meta-data criteria by which to search.
  • the user interface automatically generates the query language and sends it to the meta-data server for retrieval.
  • a possible structured query language was shown here. However, the present invention is not limited to the query language shown or described here. Any number of query languages are within the scope of the present invention, including SQL, IBM BS12, HQL, EJB-QL, Datalog, etc.
  • the query languages described here are not meant to be an exhaustive list, and are listed for illustrative purposes only.
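Since the patent's query text is elided above, the following is an assumed SQL rendering of a search like queries (1) and (12): all events in a given camera and date range where 3 or more people were detected. Table and column names are illustrative:

```python
import sqlite3

# Build a small meta-data table and run the assumed query against it.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE metadata
               (event TEXT, location TEXT, event_date TEXT,
                people_count INTEGER, video_ptr TEXT)""")
con.executemany("INSERT INTO metadata VALUES (?,?,?,?,?)", [
    ("people_count", "Camera 1", "2007-05-15", 4, "ptr1"),
    ("people_count", "Camera 1", "2007-05-20", 2, "ptr2"),
    ("people_count", "Camera 2", "2007-05-16", 5, "ptr3"),
])

# Restrict by location, event criterion, and date range, then follow
# the returned pointers into the video store.
rows = con.execute("""
    SELECT video_ptr FROM metadata
    WHERE location = 'Camera 1'
      AND people_count >= 3
      AND event_date BETWEEN '2007-05-01' AND '2007-05-31'
""").fetchall()
print([r[0] for r in rows])  # ['ptr1']
```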
  • attribute weights may be recalculated. For example, to recalculate the attribute weights for an intersection of two subsets of meta-data, the attribute weights would be multiplied together, as shown:
  • the weight associated with both motion events substantially simultaneously is 45% (0.45).
  • W ( M 1 ∪ M 2 ) = W ( M 1 ) + W ( M 2 ) − W ( M 1 )·W ( M 2 ) (19)
  • the weight associated with either one of two motion events occurring substantially simultaneously is 95% (0.95).
  • a correlation engine correlates meta-data, both present and historical, across multiple sensory devices and multiple locations, and activates one or more actions in response to the correlation exceeding a particular threshold.
  • the correlation engine may evaluate various rules, such as “issue an alert to person A when one or more people are present in location B AND a gunshot was detected in location B in the past 24 hours.”
  • Video detection components are used to extract relevant meta-data (also called video parameters), from the video sources; the meta-data is input into the correlation engine.
  • Input components may be used to receive inputs from other systems, for example sensory devices, such as temperature probes.
  • Action components represent various actions that may be taken under certain conditions, and may be activated by the correlation engine.
  • service components provide interfaces for services performed by human beings, for example meta-data addition by human operators.
  • past and present video data, past and present meta-data, and past and present data from sensory devices are used to generate real-time alerts.
  • One or more data inputs 502 are received via one or more input components 504 (only one input component is illustrated for clarity).
  • the data inputs could be data from police reports, anonymous tips, sensory devices, etc.
  • data inputs could come from a personnel database in storage and from a temperature probe (not shown).
  • the input components, such as input component 504 provide interfaces between the system 500 and various input devices.
  • the data inputs 502 are assigned a weight by data attribute engine 506 based on the attributes associated with the data inputs.
  • the weights may be a function of the input data, the source of the input data (such as its reliability), external events (such as the National Terror alerts in the United States), or any other information. (Only one input data is shown being processed by data attribute engine 506 for clarity.)
  • One or more video inputs 507 are received and processed by one or more detection components 508 (only one video detection component is illustrated for clarity).
  • the video inputs could be historical, archived video data, such as video from storage 512 , or could be video data from live video cameras (not shown).
  • the detection components, such as detection component 508 , determine one or more video parameters from the video inputs 507 .
  • detection component 508 may detect whether or not there is a person in a particular region of video input 507 .
  • the one or more video parameters that are determined by the detection component 508 are assigned a weight by video attribute engine 510 .
  • the weights may be a function of the video data, the video source (such as its reliability), external events (such as the National Terror alerts in the United States), or any other information. (Only one video parameter is shown being processed by video attribute engine 510 for clarity.)
  • the detection components also store meta-data, which represent events detected by the detection component, in meta-data storage 513 .
  • For example, a motion detection component, when detecting motion, stores meta-data in meta-data storage 513 indicating that motion was detected by a certain camera during a certain period.
  • the meta-data may be represented and stored in a table as illustrated in FIG. 2 , or the meta-data may be stored and represented in some other manner.
  • the historical meta-data stored in metadata storage 513 is weighted by attribute weights by metadata attribute engine 514 .
  • the correlation engine 520 evaluates one or more rules, or triggers, based on the weighted metadata from metadata attribute engine 514 .
  • the weighted input data, the weighted video data, and the weighted meta-data (outputs from the data attribute engine 506 , the video attribute engine 510 , and the metadata attribute engine 514 ) are processed by correlation engine 520 .
  • Correlation engine 520 evaluates a set of rules based on the weighted input data, the weighted video data, and the weighted meta-data.
  • the correlation engine may also be considered to correlate two or more events together.
  • the correlation engine 520 activates one or more actions via one or more action components 522 .
  • the correlation engine 520 may contain a rule stating: “Issue an alert to the police (Action Component 1) if ten or more people gather in a given area (Video Detection Component 1) and within the last 48 hours there was a gunshot detected in that area (historical Metadata 1).” If the preconditions of the rule are satisfied, the action is performed. As discussed previously, the preconditions may be weighted based on the data, the source of the data, external events, and other information. For example, a more recent shooting may receive a higher weight than an older shooting.
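The quoted rule might be sketched as follows; the function name, field names, and time representation are hypothetical stand-ins for the correlation engine's internal form:

```python
import time

# Sketch of the rule: alert the police if ten or more people gather
# in a given area AND a gunshot was detected in that area within the
# last 48 hours (historical meta-data).

def evaluate_rule(people_count, gunshot_events, now, area):
    recent_gunshot = any(
        e["area"] == area and now - e["time"] <= 48 * 3600
        for e in gunshot_events)
    return people_count >= 10 and recent_gunshot

now = time.time()
history = [{"area": "lot A", "time": now - 10 * 3600}]  # 10 hours ago
print(evaluate_rule(12, history, now, "lot A"))  # True
print(evaluate_rule(12, history, now, "lot B"))  # False
print(evaluate_rule(5, history, now, "lot A"))   # False
```

In the full system each precondition would carry an attribute weight (e.g. a more recent shooting weighted higher), with the weighted sum compared against a threshold as in Eqs. 20-22.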
  • data may also come from a service component 518 .
  • Service components, such as service component 518 , are interfaces to human operators.
  • a service component may provide an interface for human operators to monitor a given area for suspicious activity, and to send a signal to the correlation engine 520 that suspicious activity is going on in a given area.
  • the correlation engine 520 will activate an action if a corresponding rule is activated.
  • the human operator may force an action to be performed by directly activating an action component, such as action component 522 .
  • Equations 20 to 22 show possible rules that may be evaluated by correlation engine 520 .
  • action component a 1 will be activated if the expression on the left-hand side is greater than a predetermined threshold θ 1 .
  • Eqs. 20-22 “a” stands for action component, “f, g, and h” are predetermined functions, “w” stands for weight, “x” stands for the input data, and “v” stands for video data.
  • Eqs. 20-22 could represent a hierarchy of actions that would be activated for different threshold scenarios. Alternatively, Eqs. 20-22 could represent several rules being evaluated in parallel.
  • Eqs. 20-22 are illustrative of only one embodiment of the present invention, and the present invention may be implemented using other equations, other expressions, or even by using heuristic rules rather than equations.
  • Equation 23 shows an example of a calculation of determining a weight that may be performed by data attribute engine 506 , video attribute engine 510 , or metadata attribute engine 514 .
  • the weight “w” may be based on attribute data, including the source of the data “s” (for example, the reliability of the source), the time that the data was received “t” (for example, older data would be assigned a lower weight), and the frequency that the data was received “f” (for example, the same data received multiple times would be assigned a higher weight).
  • Other weighting factors may also be used; the weighting factors described here are illustrative only and are not intended to limit the scope of the invention.
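Equation 23 leaves the function unspecified, saying only that the weight "w" depends on source reliability "s", age "t", and frequency "f". One assumed concrete form, combining a reliability factor, a recency decay, and a repetition boost:

```python
import math

# Assumed instance of Eq. 23, w = f(s, t, f): the decay and boost
# shapes below are illustrative choices, not from the specification.

def weight(source_reliability, age_hours, times_received):
    recency = math.exp(-age_hours / 24.0)              # older data weighs less
    repetition = min(1.0, 0.5 + 0.1 * times_received)  # repeated data weighs more
    return source_reliability * recency * repetition

fresh = weight(0.9, age_hours=1, times_received=3)
stale = weight(0.9, age_hours=72, times_received=3)
print(fresh > stale)  # True: recent data is weighted higher
```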
  • Equation 24 shows an example of a calculation that may be performed by detection component 508 to determine a video parameter “v i ” from the video data “v(t)”.
  • the video parameter “v i ” may be obtained as a function “f i ” of the integral.
  • a detection component for counting the number of people that enter a region over a period of time may perform face detection in a given frame, count the number of faces detected, and then integrate over several frames to obtain a final count.
  • the function “f i ” of Eq. 24 may be a composition of several functions, as shown in Equation 25.
  • a detection component may count the number of people wearing a safety helmet that enter a given area by composing a safety helmet detection function with a people counting function.
  • f i = f 1 ∘ f 2 ∘ . . . ∘ f n (25)
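The function composition of Eq. 25 can be sketched directly; the frame representation and the detector functions below are illustrative stand-ins for real video analytics:

```python
from functools import reduce

# Sketch of Eq. 25: a detection component built by composing simpler
# functions, e.g. a safety-helmet filter composed with a people counter.

def detect_people(frame):
    return [p for p in frame if p["is_person"]]

def filter_helmets(people):
    return [p for p in people if p["helmet"]]

def compose(*fns):
    # f_i = f_1 ∘ f_2 ∘ ... ∘ f_n (rightmost function applied first)
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

count_helmeted = compose(len, filter_helmets, detect_people)

frame = [{"is_person": True,  "helmet": True},
         {"is_person": True,  "helmet": False},
         {"is_person": False, "helmet": False}]
print(count_helmeted(frame))  # 1
```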
  • the new, or future, weights “w j ” may be based on the past weights “w i ” and external events “e i ”. Examples of external events could be “Amber Alerts” for missing children, “National Terror Alerts” for terrorist activity in the United States, etc.
  • Eq. 26 shows an example of a calculation for determining new, or future, weights “w j ” by composing a matrix of past weights “w i ” with external events “e i ”.
  • FIG. 6 shows an example of software architecture 600 of one embodiment of the present invention.
  • a presentation layer 602 provides the front-end interface to users of the system 100 of FIG. 1 .
  • a user interface is provided for an administrator, who can modify various system parameters, such as the data input components, the detection components, the data and video weights, the rules, as well as the action components.
  • Another user interface is provided for an officer, such as a security guard, to monitor the activity of the system 100 .
  • a user interface for the security officer would allow the officer to monitor alerts system-wide, turn on and off appropriate cameras, and notify authorities.
  • An interface is also provided for an end-user, such as an executive.
  • the interface for the end-user allows, for example, the end-user to monitor those alerts relevant to him or her, as well as to view those cameras and video sources he or she has permission to view.
  • Various user interfaces may be created for various users of the present invention, and the present invention is not limited to any particular user interface shown or described here.
  • Other user interface screens, for adding meta-data and for modifying attribute data, were discussed above.
  • a middle layer 604 provides the middleware logic for the system 100 .
  • the middle layer 604 includes the weight engines 506 , 510 as well as the correlation engine 520 of FIG. 5 .
  • the middle layer interfaces with the user interface 602 and evaluates the logic of Equations 20-26.
  • a database layer 606 is provided for storing the input data and the video data.
  • the database layer 606 may be implemented using a hierarchical storage architecture, in which older data, or less frequently used data, is migrated to slower and less expensive storage media.
  • the database layer 606 provides the input data and the video data to the middle layer 604 , which in turn processes the data for display by the presentation layer 602 .
  • FIG. 7 shows an example of hardware architecture 700 of one embodiment of the present invention.
  • the software architecture 600 may be implemented using any hardware architecture, of which FIG. 7 is illustrative.
  • a bus 714 connects the various hardware subsystems.
  • a display 702 is used to present the output of the presentation layer 602 of FIG. 6 .
  • An I/O interface 704 provides an interface to input devices, such as keyboard and mouse (not shown).
  • a network interface 705 provides connectivity to a network, such as an Ethernet network, a Local Area Network (LAN), a Wide Area Network (WAN), an IP network, the Internet, etc.
  • RAM 706 provides working memory while executing a process according to system architecture 100 of FIG. 1 .
  • CPU 709 executes program code in RAM 706 , and controls the other system components.
  • Meta-data is stored in metadata storage module 708 , and attribute data is stored in attribute storage module 709 .
  • Hierarchical storage manager 710 provides an interface to one or more storage modules 712 on which video data is stored.
  • Audit information including data about who, when, and how often someone accessed particular video data is stored in audit storage module 711 .
  • the separation between meta-data storage, attribute storage, and video storage is logical only, and all three storage modules, or areas, may be implemented on one physical media, as well as on multiple physical media.
  • FIG. 8 shows a flowchart of a process for storing video data according to one embodiment of the present invention.
  • Process 800 begins in step 802 .
  • Video data is captured from one or more surveillance cameras, as shown in step 804 .
  • Meta-data is generated by performing video analysis on the captured video data, as shown in step 806 .
  • Attribute data and associated weights, representing information about the relevance of the meta-data, are received, as shown in step 808 .
  • a video tip may be received from a well-meaning citizen, and associated meta-data and attribute data may be received or generated, as shown in step 810 .
  • Unions and intersections of meta-data may be used to generate additional meta-data, as shown in step 812 .
  • the video data is stored in a hierarchical storage module, as shown in step 814 .
  • the meta-data indexed by date and time stamp to the video data, is stored in a meta-data storage module, as shown in step 816 .
  • Process 800 ends in step 818 .
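Steps 804 through 816 of process 800 can be condensed into a short sketch; the storage backends and the analysis function are hypothetical stand-ins:

```python
# Condensed sketch of process 800: capture (804), analyze to produce
# meta-data (806), store video (814), store indexed meta-data (816).

def store_video(camera_frames, analyze, video_store, metadata_store):
    for cam_id, frames in camera_frames.items():   # step 804: capture
        events = analyze(cam_id, frames)           # step 806: video analysis
        video_store[cam_id] = frames               # step 814: store video
        for ev in events:                          # step 816: index meta-data
            ev["video_ptr"] = cam_id               # pointer back into video
            metadata_store.append(ev)

video_store, metadata_store = {}, []
analyze = lambda cam, fr: [{"event": "motion", "camera": cam}]
store_video({"cam1": ["f0", "f1"]}, analyze, video_store, metadata_store)
print(metadata_store[0]["video_ptr"])  # cam1
```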
  • FIG. 9 shows a flowchart of a process for retrieving video data and associated meta-data and attribute data according to another embodiment of the present invention.
  • Process 900 begins in step 902 .
  • search criteria are entered, as shown in step 904 .
  • Meta-data which was previously generated by video detection components and indexed to the video data, is searched, as shown in step 906 .
  • Meta-data matching the search criteria is retrieved from a meta-data storage module, as shown in step 908 .
  • Video data, indexed by the meta-data by date and time, is retrieved from the video data storage module, as shown in step 910 . If the video data was encrypted, the video data is decrypted as shown in step 912 .
  • Attribute data, representing the reliability of the meta-data, may also be retrieved.
  • Audit information may be stored about who accessed the video data and when, as shown in step 916 .
  • Process 900 ends in step 918 .
  • FIG. 10 shows a flowchart of a process for intelligent alerting based on past and present meta-data according to yet another embodiment of the present invention.
  • Process 1000 may be stored in RAM 706 , and may be executed on CPU 709 of FIG. 7 .
  • Process 1000 begins in step 1002 .
  • Video data is captured from one or more surveillance cameras, as shown in step 1004 .
  • Meta-data is generated by performing video analysis on the captured video data, as shown in step 1006 .
  • Attribute data and associated weights, representing information about the relevance of the meta-data, are received, as shown in step 1008 .
  • Historical meta-data is retrieved from a meta-data storage module, as shown in step 1010 .
  • Attribute data associated with the meta-data is retrieved from an attribute storage module, as shown in step 1012 .
  • a set of rules is evaluated based on the generated meta-data, the historical meta-data, and the associated attribute data, as shown in step 1014 .
  • One or more actions are performed based on the evaluation of the rules, as shown in step 1016 .
  • Process 1000 ends in step 1018 .
  • FIG. 11 shows another example of a hardware architecture 1100 according to another embodiment of the present invention.
  • A network 1120, such as an IP network over Ethernet, interconnects all system components.
  • Digital IP cameras 1115, running integrated servers that serve the video from an IP address, may be attached directly to the network.
  • Analogue cameras 1117 may also be attached to the network via analogue encoders 1116 that encode the analogue signal and serve the video from an IP address.
  • Cameras may be attached to the network via DVRs (Digital Video Recorders) or NVRs (Network Video Recorders), identified as element 1111.
  • The video data is recorded and stored on data storage server 1108.
  • Data storage server 1108 may be used to store the video data, the meta-data, as well as the attribute data and associated weights.
  • Data is also archived by data archive server 1113 on enterprise tape library 1114 .
  • Data may also be sent to remote storage 1106 via a dedicated transmission media such as a fiber optic line, or via a public network such as the Internet.
  • Legacy systems, such as external security systems 1109, may be interfaced via appropriate input components, as described above.
  • A central management server 1110 manages the system 1100 and provides system administration, access control, and management functionality.
  • Enterprise master and slave servers 1112 provide additional common system functionality.
  • Video analytics server 1107 runs the video detection modules described below, and provides the interface to search, retrieve, and analyze the video data and meta-data stored on data server 1108.
  • The video, including live feeds as well as recorded video, may be viewed on smart display matrix 1105.
  • The display matrix includes one or more monitors, each monitor capable of displaying multiple camera or video views simultaneously.
  • One or more clients are provided to view live video data, as well as to analyze historical video data.
  • Supported clients include PDA 1101 , central client 1102 , and smart client 1103 .
  • A remote client 1104 may be connected from anywhere on the network, or even over the public Internet, due to the open IP backbone of the present invention.
  • FIG. 12 illustrates another example of the software architecture 1200 of another embodiment of the present invention.
  • Data is collected in data collection software layer 1201 .
  • A web interface for a security officer allows the officer to view video data, add meta-data, and view the status of any alerts.
  • Also in data collection software layer 1201 is a file map interface for interfacing to a map of a building, location, corporate facility, campus, etc. This allows the video data to be correlated to precise locations.
  • A voice interface allows for tips to be received via phone or a voice recording.
  • Data is filtered, weighted, integrated, and correlated in filter software layer 1202 by the collaboration engine, as described previously.
  • Data analysis software layer 1203 provides an interface for a security analyst or data analyst to search, analyze, and review recorded and live video data, as described above.
  • Dissemination software layer 1204 issues reports, alerts, and notifications based on the video data, the meta-data, and the attribute data, as described above.
  • Action software layer 1205 performs actions in response to alerts, including turning systems on or off, notifying the police, fire, and so on, as described above.
  • The software layers may communicate using XML (eXtensible Markup Language).
  • The present invention is not limited to using XML to communicate between software layers, and other communication techniques may be used, including open APIs, etc.
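A hedged sketch of an inter-layer XML message, using Python's standard xml.etree.ElementTree module. The element names (alert, event, camera, weight) are invented for illustration and are not specified by the disclosure; they only show how one software layer could serialize an alert for another.

```python
import xml.etree.ElementTree as ET

# Hypothetical message from the filter layer 1202 to the dissemination
# layer 1204; the schema below is illustrative only.
alert = ET.Element("alert")
ET.SubElement(alert, "event").text = "motion"
ET.SubElement(alert, "camera").text = "12"
ET.SubElement(alert, "weight").text = "0.8"

message = ET.tostring(alert, encoding="unicode")
print(message)

# The receiving layer parses the same message back out.
parsed = ET.fromstring(message)
print(parsed.find("event").text)  # motion
```

Because both sides agree only on a text format, the layers stay decoupled; an open API would serve the same role, as the text notes.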
  • Video tips are short video clips captured by well-intentioned citizens.
  • Video tips would be received by the present invention via a user interface.
  • A person would log into the system via the Internet and upload a video of a crime that he or she captured.
  • The system would process the video tip in a manner analogous to the way it would process video from a surveillance camera.
  • The video detection components would be used to detect one or more events in the video, such as motion, people counting, etc., and generate meta-data about the video tip.
  • The citizen submitting the video tip would also submit associated meta-data, such as the date and time it was captured, its relevance, the names of people in the video, the occurrence of any crime in the video, etc.
  • Attribute data would be assigned to the video tip based on such factors as the identity of the informant, the quality of the video, the reliability of the source, other tips that are coming in, etc.
  • Once the video tip has entered the system, it is processed in a similar manner to the way video data from the surveillance cameras is processed, as detailed above.
  • The video tip would be archived in the video storage module, and its associated meta-data and attribute data would be stored. It would serve as one additional input into the correlation engine and would be weighted and factored in when generating alerts. In addition, it would be available for later search and retrieval by its associated meta-data and attribute data.
  • Various detection components may be used to generate meta-data, or video parameters, from the video inputs. These detection components may be configured to record meta-data along with an occurrence of each event. For example, as shown in FIG. 2 , whenever a motion event is detected, meta-data corresponding to the motion event is recorded along with the video data. In another example, if a person is detected in an area by a face detection component, meta-data may be stored along with each occurrence of that person in the video.
  • Some illustrative detection components are listed below. However, the present invention is not limited to these detection components, and various detection components may be used to determine one or more video parameters (meta-data), and are all within the scope of the present invention.
  • Various sensory devices may be integrated into system 100 of FIG. 1 by adding an input component for receiving and processing the input from the sensory device.
  • Some illustrative input components are listed below. However, the present invention is not limited to these input components, and various other input components associated with various other sensory and other devices are within the scope of the present invention.
  • Action components may be used to perform one or more actions in response to a rule being activated.
  • The rules engine may activate one or more action components under certain conditions defined by the rules.
  • Some illustrative action components are listed below. However, the present invention is not limited to these particular action components, and other action components are within the scope of the present invention.
  • Service components may be used to integrate human intelligence into system 500 of FIG. 5.
  • A service component may provide a user interface for remote security guards who may monitor the video inputs.
  • Some illustrative examples of what the security guards could monitor for and detect are listed below.
  • A human operator may detect some events, such as “suspicious behavior,” which may be difficult for a computer to detect.
  • The human operators may also add meta-data for each occurrence of an event.
  • A security guard may add meta-data to each portion of a video where he or she noticed suspicious activity.
  • The present invention is not limited to the examples described here, and is intended to cover all such service components that may be added to detect various events using a human operator.
  • The present invention may be implemented using any number of detection, input, action, and service components. Some illustrative components are presented here, but the present invention is not limited to this list of components.
  • An advantage of the present invention is the open architecture, in which new components may be added as they are developed.
  • The components listed above may be reused and combined to create advanced applications. Using various combinations and sub-combinations of components, it is possible to assemble many advanced applications.
  • The weight associated with the attribute data (motion after 8:00 PM) would be high.
  • The correlation engine would have retrieved the stored motion meta-data of Professor Donovan entering the building, and the meta-data associated with two men moving in the parking lot, and would have issued an alert to all people still in the building, including Professor Donovan, using their Blackberries or cell phones.
  • The email alert would have contained a picture of the parking lot, and Professor Donovan would not have entered the parking lot and would possibly not have been shot.
  • Different weights would be associated with the detected method of entrance into the parking lot. For example, motion detected in the fence area would have a higher weight than motion near the entrance gate. Motion meta-data combined with meta-data for people loitering at the entrance gate would also have a higher weight.
  • The video data would have been searched using meta-data for the precise time when those two men entered the parking lot, and for all previous occurrences when two men were detected in the parking lot.
  • The assailants may have been identified scoping the area as well as committing the crime of attempted murder, which could have led to their identification and capture.
  • A system administrator may set the rules.
  • The system administrator may hold an ordered, procedural workshop with the users and key people of the organization to determine the weighting criteria and the alerting levels.
  • The rules may be heuristically updated. For example, the rules may be learned based on past occurrences. In one embodiment, a learning component may be added which can recognize missing rules. If an alert was not issued when it should have been, an administrator of the system may note this, and a new rule may be automatically generated. For example, if too many alerts were being generated for motion in the parking lot, the weights associated with the time would be adjusted.

Abstract

One embodiment is a method of storing video data from a video surveillance system having one or more cameras. Video data is captured from one or more surveillance cameras. Meta-data is automatically generated by performing video analysis on the captured video data from the surveillance cameras. A human operator may manually enter additional meta-data. Attribute data and associated weights, representing information about the relevance of the meta-data, are received. The video data is stored in a hierarchical video storage area; the meta-data, indexed by date and time stamp to the video data, is stored in a meta-data storage area; and the attribute data is stored in an attribute storage area. One or more alerts may be issued based on the past and present meta-data. The video data is secured by encrypting and storing the video data remotely, and audit trails are generated recording who viewed the video data and when.

Description

    REFERENCE TO RELATED APPLICATIONS
  • This application claims priority from co-pending U.S. application Ser. No. 11/746,043 entitled “Apparatus, methods, and systems for intelligent security and safety” to John Donovan et al., filed on May 8, 2007, the entirety of which is hereby incorporated by reference herein.
  • FIELD OF THE INVENTION
  • The present invention is generally related to video data storage in security and surveillance systems and applications. More specifically, this invention relates to storage of video data and associated meta-data and attribute data, and subsequent search and retrieval of the video data using the meta-data. The present invention may be used to store, search, and retrieve video data and meta-data that has been obtained from surveillance cameras in various security and safety applications. The present invention may be used to help fight crime, detect and possibly prevent terrorist activity, and help ensure safety procedures are followed.
  • BACKGROUND OF THE INVENTION
  • As citizens of a dangerous world, we all face security and safety risks. Every day, 30 people die by gunshot in the U.S.—one every 48 minutes. A police officer dies from a gunshot wound every ten days. Analysis of past video data may save lives.
  • A recently foiled terrorist attack on Ft. Dix Army Base in New Jersey involved five terrorists planning to kill U.S. soldiers at the army base. They were observed in video cameras surveying the army base on numerous occasions prior to the planned attack. A well-meaning citizen notified the police and FBI by submitting a “video tip” which started an investigation. The video tip was a video of the men training for the terrorist attack and plotting to kill as many American soldiers in as short a time as possible. Accordingly, the military is concerned about historical analysis of past video data, as well as data from video tips.
  • Muggings and home intrusions are another threat to citizens. In Seattle, Wash., one in every 60 homes was burglarized in 2006. In Boston, Mass., in 2007, an 87-year-old woman returned home only to find a burglar inside. Proactive alerts based on past video data may deter such crimes.
  • Vandalism and damage to property decreases property values. One study conducted by the London School of Economics found that “a one-tenth standard deviation increase in the recorded density of incidents of criminal damage has a capitalized cost of just under 1% of property values, or £2,200 on the average Inner London property” (Steve Gibbons, The Costs of Urban Property Crime, 2003). Analysis of current and past video data may prevent such vandalism.
  • Violence in schools and on college campuses continues to rise, and has increased concern among students, parents, and teachers. A shooting at Virginia Tech University in 2007 resulted in the killing of 32 people and injured 24 others. In 2005, a professor at MIT was shot four times in a parking lot on campus. If the video data was stored and analyzed using meta-data, the assailants could have been apprehended. The shooting may have even been thwarted.
  • Serious accidents at corporate facilities have resulted in enormous damage to personal lives and to corporate property. For example, an explosion in a Texas oil refinery killed 15 people and injured 180 others. The U.S. Chemical Safety Board determined that various factors, one of which was the absence of adequate experience in the refinery, contributed to the accident: “As the unit was being heated, the Day Supervisor, an experienced ISOM operator, left the plant at 10:47 a.m. due to a family emergency. The second Day Supervisor was devoting most of his attention to the final stages of the ARU startup; he had very little ISOM experience and, therefore, did not get involved in the ISOM startup. No experienced supervisor or ISOM technical expert was assigned to the raffinate section startup after the Day Supervisor left, although BP's safety procedures required such oversight.” (Chemical Safety Board, Investigation Report: Refinery Explosion and Fire, March 2007, pg. 52.) Video surveillance, storage, and analysis could have prevented these deaths and injuries.
  • As a result of terrorist activity (such as the attempted terrorist attack on Ft. Dix), violence on college campuses (such as the shooting at Virginia Tech University), and major accidents (such as the oil refinery explosion in Texas), governments, corporations, universities, other institutions, and individuals are increasingly concerned about security and safety. To address this problem, many of these institutions are installing security and surveillance cameras around their facilities, campuses, and military installations.
  • Once the video data is captured by these cameras, which could be analog or digital cameras, the video data has to be stored, and subsequently retrieved, and information about the quality of the images also has to be stored. There are numerous problems with conventional video data storage and retrieval systems. For example, conventional video data from analogue cameras that is stored on VHS tape is difficult to store and retrieve. The VHS tape has to be rewound multiple times to search for a particular occurrence. This can damage the VHS tape, by stretching the VHS tape and scraping the polymer coating.
  • Digital video data from digital cameras may be stored in digital, random-access media, such as disk. Unfortunately, the vast amount of data generated by digital video cameras is also difficult to store, search, and retrieve from disk. For example, a typical 3 Megapixel digital surveillance camera generates images of approximately 280 Kbytes per frame. If this camera were running at 5 frames per second, it would generate approximately 60 GB per day. If an organization wanted to archive the data for one month, it would take approximately 1.8 TB, and if the organization wanted to archive the data for one year, it would take approximately 22 TB. In a typical application having 100 surveillance cameras around a particular facility, this translates into approximately 6 TB per day, or approximately 180 TB per month, or over approximately 2,000 TB per year! This is a large amount of data to store, search, and retrieve by traditional mechanisms. Present systems cannot store, archive, search, and retrieve such large amounts of data effectively and intelligently. When a pro-active alert that depends on past video data needs to be issued to deter a crime or other dangerous event, or past video data needs to be forensically analyzed for a past crime or other dangerous event, the inadequacies of present systems are even more apparent.
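The storage figures above can be reproduced from the stated per-camera rate of roughly 60 GB per day. The short sketch below shows the arithmetic, using no assumptions beyond that stated figure and a 100-camera facility.

```python
# Reproduce the storage estimates in the text, starting from the
# stated figure of roughly 60 GB per camera per day.
GB_PER_CAMERA_PER_DAY = 60

per_camera_month_tb = GB_PER_CAMERA_PER_DAY * 30 / 1000   # ~1.8 TB per month
per_camera_year_tb = GB_PER_CAMERA_PER_DAY * 365 / 1000   # ~22 TB per year

cameras = 100
site_day_tb = GB_PER_CAMERA_PER_DAY * cameras / 1000      # ~6 TB per day
site_month_tb = site_day_tb * 30                          # ~180 TB per month
site_year_tb = site_day_tb * 365                          # over 2,000 TB per year

print(per_camera_month_tb, per_camera_year_tb)
print(site_day_tb, site_month_tb, site_year_tb)
```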
  • One drawback with conventional video storage is that the video data is only indexed by date and time. Therefore, an operator must know the date and time of events of interest before being able to search for those events.
  • Once the video data has been stored, another drawback with conventional video storage is the inability to perform intelligent search. For example, present systems cannot perform search by various meta-data criteria, such as “show all times when 2 or more people were detected in a given area.” Another drawback with conventional video storage is the inability to perform a search that retrieves video data across multiple locations and cameras. For example, present systems cannot perform a search such as “show all times when there was a gunshot detected at this location, and 2 or more people were detected in an adjacent area.”
  • Another drawback with conventional video storage is that all video data is weighted equally. For example, motion detected in an ammunition storage area of an army base would be weighted equally to motion detected in the basement of a dining hall of the army base. In addition, video data from an old, low quality camera would receive the same weight as video data from a new, high quality camera.
  • Once the video data is stored, another drawback with conventional video storage is data security and integrity. Anyone who has physical access to the disk or tape can damage it, destroying potentially valuable evidence. For example, after a shooting on MIT's campus, the District Attorney's office gained access to the surveillance tape, deleted the video of the shooting, deleted date and time stamps from the tape, and rearranged the remaining images to portray a different set of actions, as well as permanently damaging the original tape.
  • Another drawback with conventional video storage is the difficulties associated with archiving the video data.
  • Another drawback with conventional video storage is the inability to audit the video data, for example, to determine who viewed the video data and when.
  • Another drawback with conventional video storage and analysis is the inability to utilize tips. Tips, that is, information from informants, are an important source of data. With the proliferation of video phones (cell phones with integrated cameras), tips are increasingly received as video clips captured at the scene of a crime by well-meaning citizens.
  • These drawbacks can be overcome with the attendant features and advantages of the present invention. Therefore, as recognized by the present inventors, what are needed are a method, apparatus, and system for storing, searching, archiving, protecting, auditing, and retrieving video data and associated meta-data and attribute data.
  • What is also needed is a method for monitoring and auditing the stored video data as well as live video data. What is also needed is a method for intelligent alerting of appropriate individuals based on stored video data as well as the live video data.
  • Accordingly, it would be an advancement in the state of the art to provide an apparatus, system, and method for storing, searching, auditing, and retrieving video data received from multiple cameras, and for generating intelligent alerts based on the stored video data.
  • It is against this background that various embodiments of the present invention were developed.
  • BRIEF SUMMARY OF THE INVENTION
  • The present invention is a method, a system, and an apparatus for video data storage, search, auditing, and retrieval. As used herein, the term “meta-data” shall mean data about events that have been captured and detected in the video. For example, meta-data could include the number of people detected in a video, motion detected, loud noises detected, etc. As used herein, the term “attribute data” shall mean data about the data, such as integrity of the data, source of the data, reliability of the data, and so on. For example, maintenance attribute data would have a different weight for a camera that was not maintained in the last 5 years compared to a camera that is regularly maintained every 6 months. Attribute data includes “attributes,” which are attributes of the data, and their associated “weights” (or weight functions), which are probabilistic weights attached to the video data. For example, an attribute would be “age of the video data,” and an associated weight function would be a function decreasing with age. Some weights may also change with external events, such as maintenance, time, and so on. For example, a weight associated with a camera may go down if the camera was not maintained for a period of time.
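As an illustration of the weight functions described above, the sketch below implements two hypothetical examples: an age attribute whose weight decays with the age of the video data, and a maintenance attribute whose weight drops for cameras that have gone unserviced. The exponential form, the half-life, and the maintenance thresholds are arbitrary assumptions, not values specified by the disclosure.

```python
import math

def age_weight(age_days, half_life_days=180.0):
    """Hypothetical weight function for the 'age of the video data'
    attribute: the weight decays exponentially, halving every
    half_life_days."""
    return math.exp(-math.log(2) * age_days / half_life_days)

def maintenance_weight(days_since_maintenance):
    """Hypothetical maintenance attribute: a camera serviced within the
    last 6 months keeps full weight; one left unserviced for years is
    discounted down to a floor of 0.1."""
    if days_since_maintenance <= 180:
        return 1.0
    return max(0.1, 1.0 - (days_since_maintenance - 180) / 1825)

print(age_weight(0))            # 1.0 for fresh data
print(age_weight(180))          # 0.5 after one half-life
print(maintenance_weight(30))   # 1.0 for a recently serviced camera
```

Either function could be re-evaluated as external events occur (a service visit, the passage of time), matching the statement that weights may change with maintenance and age.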
  • One embodiment of the present invention stores meta-data, indexed to the video data, in order to facilitate search and retrieval. The meta-data may be generated by one or more video detection components, such as a motion detection module or a gunshot detection module, or may be generated by a human operator, such as a security guard. In one embodiment, the meta-data is added approximately contemporaneously to the capture and storage of the video data. In an alternate embodiment, the meta-data is added subsequent to the capture and storage of the video data.
  • In one embodiment, the video data may be stored in a video data storage module (a logical unit). The video data storage module may be organized as a hierarchical storage module, in which data that is less frequently used is migrated to slower and/or less expensive storage media. The meta-data may be stored in a meta-data storage module (a logical unit), which may be logically part of the video data storage module, or may be logically separate from the video data storage module. Attribute data, including the weights associated with the meta-data, may be stored in an attribute storage module (another logical unit). The video data storage module, the meta-data storage module, and the attribute data storage module may be located on the same physical media, or they may be located on different physical media. The video data storage module, the meta-data storage module, and the attribute storage module may be stored on hard disk, optical disk, magnetic disk, flash memory, tape memory, RAID array, NAS (Network Attached Storage), SAN (Storage Area Network), or any other physical or virtual storage media.
  • One embodiment of the present invention is a method for storing video data (“the method”). This method includes the following steps. Video data is received from one or more video sources, such as network-attached IP cameras. Evaluating one or more functions of the video data, such as a motion detection function or a gunshot detection function, generates meta-data. The video data is stored in a video storage module, which could be located on a RAID disk or tape. The meta-data, indexed to the video data, is stored in a meta-data storage module, which could be located on the same disk as the video data, or on a different disk.
  • Another embodiment of the present invention is the method described above that also includes storing attribute data, which is either entered manually or determined heuristically.
  • Another embodiment of the present invention is the method described above that also includes the step of performing video analysis on the video data from the one or more video sources to generate the meta-data. The video analysis could include motion detection, gunshot detection, or any other video/image analysis function, or component, which can generate meta-data. Various video detection components are described below.
  • Yet another embodiment of the present invention is the method described above that also includes the steps of assigning a time-stamp to the meta-data, the time-stamp providing an index into the video data; and storing the meta-data with the time-stamp in the meta-data storage module.
  • Yet another embodiment of the present invention is the method described above that also includes the steps of receiving input data from one or more data sources, which could be legacy systems; generating additional meta-data based on one or more functions of the input data; and storing the additional meta-data, indexed to the video data, in the meta-data storage module.
  • Yet another embodiment of the present invention is the method described above that also includes the steps of generating additional meta-data based on an intersection of one or more functions of the video data from two or more video sources; and storing the additional meta-data, indexed to the video data, in the meta-data storage module.
  • Yet another embodiment of the present invention is the method described above that also includes the steps of providing additional meta-data generated by a human operator; and storing the additional human-generated meta-data, indexed to the video data, in the meta-data storage module.
  • Yet another embodiment of the present invention is the method described above that also includes the steps of receiving historical video data from the video storage module; evaluating a set of rules based on the historical video data and the generated meta-data; and performing one or more actions based on the evaluation of the set of rules.
  • Yet another embodiment of the present invention is the method described above that also includes the steps of retrieving historical meta-data from the meta-data storage module, evaluating a set of rules based on the historical meta-data and the generated meta-data, and performing one or more actions based on the evaluation of the set of rules.
  • Yet another embodiment of the present invention is the method described above where the one or more actions include an alert.
  • Yet another embodiment of the present invention is the method described above where the video storage module is a hierarchical storage module.
  • Meta-data may be added automatically by various sensory devices or video detection components. For example, a motion detection component generates meta-data that is indexed to the video data where the motion was detected. In another example, a gunshot detection component generates meta-data that is indexed to the video data where the gunshot was detected. The meta-data may also be generated by a human operator.
  • The meta-data detection components are configurable by a system administrator. The system administrator may customize the types of video detection components that are activated and the meta-data that is generated and recorded. In one embodiment, a human operator adds meta-data. For example, a human operator may add meta-data indicating, “suspicious activity was observed at this location.” In another example, a human operator may transcribe the voice associated with the video data, and the transcriptions serve as meta-data associated with the video data.
  • In addition, attribute data is also stored, and associated with the video data. Attribute data is information about the video data, such as its source, reliability, etc. For example, one type of attribute data is the camera that the video data was acquired from. Another example of attribute data is the quality of the camera that was used to acquire the video data (e.g., a 3 Megapixel camera would be weighted higher than a VGA camera for purposes of face recognition). Another example of attribute data is the historical pattern of a camera being susceptible to being tampered with.
  • One embodiment of the present invention provides a user interface for a system administrator to enter and customize the attribute data. A particular user of the present invention would customize the present system by entering weights that are associated with attribute data. For example, the system administrator would select the attribute data that corresponds with each camera. A system administrator may assign a lower attribute weight to a low-hanging camera that may be easily tampered with, and a higher attribute weight to a high-hanging camera that is difficult to tamper with.
  • One embodiment of the present invention automatically upgrades or downgrades the weights associated with attributes. For example, the weight associated with a camera may decrease with its age. Another embodiment of the present invention is a user interface that allows for updating the attributes and associated weights. Another embodiment of the present invention heuristically learns and updates the weights. For example, it may learn that certain cameras are degrading in their reliability.
  • In one embodiment of the present invention, video data is captured and stored in a remote location. The video data may be sent via a network, such as the Internet, or a dedicated fiber optic line, to a remote, secure location. If the local copy of the data is damaged, destroyed, or tampered with, the copy in the remote location may be accessed and analyzed. All video data may be automatically archived to the remote location.
  • In one embodiment of the present invention, video data is archived in a hierarchical storage module. A hierarchy of storage modules, with varying speeds, locations, and reliabilities, is provided. For example, a high reliability, fast, local RAID disk is provided. In addition, a lower reliability, slower tape drive may also be provided. Additionally, an off-site storage module, which may be connected by a dedicated fiber optic line or via the Internet, may also be provided. Video data may be cascaded through the storage hierarchy based on such factors as time, access frequency, as well as its associated meta-data. For example, video data that is older than 30 days may be moved from the RAID disk to the tape drive. Conversely, video data that has been accessed frequently, even though the data may be older than 30 days, may be stored on the RAID disk. Most importantly, video data may be cascaded through the storage hierarchy based on its associated meta-data. That is, video data that has meta-data indicating a gunshot was detected will be stored in more reliable, faster storage no matter how old or how little the data was accessed. Video data that has meta-data indicating that virtually nothing happened may be immediately moved to tape or off-site storage.
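The cascade policy described above might be sketched as follows. The gunshot rule (high-value meta-data pins video to fast storage) follows the text; the tier names and the numeric thresholds (30 days, 10 accesses, one year) are illustrative assumptions.

```python
def storage_tier(age_days, access_count, meta_events):
    """Hypothetical cascade policy for the hierarchical storage module:
    high-value meta-data keeps video on fast storage, while old and
    rarely accessed video migrates to tape or off-site storage."""
    if "gunshot" in meta_events:       # high-value events stay fast,
        return "raid"                  # regardless of age or access
    if age_days <= 30 or access_count > 10:
        return "raid"                  # recent or frequently accessed
    if age_days <= 365:
        return "tape"                  # older, quiet footage
    return "offsite"                   # long-term archive

print(storage_tier(400, 0, ["gunshot"]))  # stays on RAID despite age
print(storage_tier(45, 2, []))            # migrates to tape
print(storage_tier(700, 1, []))           # cascades off-site
```

In practice the policy would run periodically over the meta-data storage module, moving clips between tiers as their age, access counts, and event history change.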
  • One embodiment of the present invention provides an audit trail for the data. An audit trail is generated indicating who viewed or accessed the data and when. An audit trail is also generated indicating which cameras captured the video data, and whether there are any unusual circumstances associated with a camera, for example, weather conditions, power outages, or tampering.
  • One embodiment of the present invention provides data integrity and security by encrypting the video data, and only allowing authorized individuals access to the encryption key.
  • One embodiment of the present invention allows the receipt and storage of tips, including video tips. Video tips may be video clips recorded by video phones (cell phones with integrated cameras), digital cameras, handheld video cameras, etc. that are sent in by well-meaning citizens.
  • Other embodiments of the present invention include the methods described here but implemented in computer-readable media and/or embedded in hardware. Other features and advantages of the various embodiments of the present invention will be apparent from the following more particular description of embodiments of the invention as illustrated in the accompanying drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates a system architecture for storage and retrieval of video data according to one embodiment of the present invention;
  • FIG. 2 shows an illustrative meta-data table according to one embodiment of the present invention;
  • FIG. 3 shows an illustrative attribute data table in accordance with one embodiment of the present invention;
  • FIG. 4 illustrates a mathematical representation of an illustrative operation of the present invention;
  • FIG. 5 illustrates a system architecture for intelligent alerting based on meta-data, according to another embodiment of the present invention;
  • FIG. 6 illustrates a software architecture used with one embodiment of the present invention;
  • FIG. 7 illustrates a hardware architecture used with one embodiment of the present invention;
  • FIG. 8 illustrates a flowchart of a process for storing video data and associated meta-data and attribute data according to one embodiment of the present invention;
  • FIG. 9 illustrates a flowchart of a process for retrieving video data and associated meta-data and attribute data according to another embodiment of the present invention;
  • FIG. 10 illustrates a flowchart of a process for intelligent alerting based on past and present meta-data according to yet another embodiment of the present invention;
  • FIG. 11 illustrates another example of a hardware architecture according to one embodiment of the present invention; and
  • FIG. 12 illustrates another example of a software architecture according to one embodiment of the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • The present invention is a system, a method, and an apparatus for storing, searching, and retrieving video data. The video data is received from one or more cameras, which could be digital IP cameras. Meta-data is generated by one or more detection components, or manually entered by a human operator. The video data and corresponding meta-data, indexed to the video data, are stored. Attribute data, which relates to such things as the reliability of the meta-data and the video data, and associated weights, or weight functions, is also stored. Attribute data may be determined by a system administrator, and/or determined heuristically.
  • FIG. 1 shows an example of a system architecture 100 of one embodiment of the present invention. One or more cameras 104, 106, 108, or other video capture devices capture one or more streams of video data. One or more additional sensory devices, such as temperature probe 110, pressure probe 112, and other sensory device 114 provide sensory data that complements the video data. A hierarchical storage manager 122, which could be software running on a dedicated server, stores, or records, the video data to one or more storage media 124, 126, 128. Storage media 128 may be a remote storage media connected by transmission media 127. Transmission media 127 may be a dedicated fiber optic line or a public network such as the Internet. Storage media 124, 126, and 128 may be hard disk, magnetic tape, and the like. The cameras 104-108 and other sensory devices 110-114 may themselves generate meta-data in the hardware. For example, digital surveillance cameras generate motion meta-data that indicates when motion was detected in a particular field of view of the camera. In addition, meta-data server 116 may process video data in software, for example by using detection component(s) 118, and generate meta-data corresponding to the video data. For example, a people counting detection component may count the number of people that were detected in a video stream, and generate meta-data indicating the number of people detected. The meta-data server 116 stores the meta-data in meta-data storage module, or area, 120.
  • In addition, attribute data, which is information about the meta-data, is stored in attribute data storage 130. Attribute data may include such things as the reliability of the meta-data, the reliability of the source of the meta-data, the age of the meta-data, and so on.
  • In addition, an audit trail, containing information about who has accessed the video data, how frequently, and so on, is stored in audit trail storage area 131. Each time someone accesses or views a video file from the video storage module, audit information is stored in audit storage module 131.
  • Access control storage area 132 stores access rights and privileges. Access to view the video data is only given to those authorized individuals who are listed in the access control storage area. Access may be restricted based on the video data, or its associated meta-data. For example, any security officer may be able to view the video data taken at night, but only security officers assigned to investigate a particular case may be given access to the video data where a gunshot was detected.
  • Access may also be restricted based on attribute data. For example, only certain high-level security officers may have access to the high-quality video data from behind a bank teller that may show checks and amounts, whereas any security officer may see the video data from the bank's lobby. Access may also be modulated based on the quality of the video data. For example, anybody may be able to log in and view a VGA-resolution view of the lobby of their building, but only the security officer can see the mega-pixel-resolution video. The access control may be implemented using an authentication scheme provided by the operating system, such as Microsoft ActiveDirectory™.
  • Cameras used in the present invention may be digital IP cameras, digital PC cameras, web-cams, analog cameras, cameras attached to camera servers, analog cameras attached to DVRs, etc. Any camera device is within the scope of the present invention, as long as the camera device can capture video. Some cameras may have an integrated microphone; alternatively, a separate microphone may be used to capture audio data along with video data. As used herein, the terms “video,” “video data,” “video source,” etc. are meant to include video without audio, as well as video with interlaced audio (audiovisual information). Of course, it is to be understood that the present invention may also be implemented using audio data without accompanying video data by replacing cameras with microphones.
  • The system diagram shown in FIG. 1 is illustrative of only one implementation of the present invention. For example, the meta-data server and the hierarchical storage module may be on dedicated servers, as shown in FIG. 1 for clarity. Alternatively, a common server may provide the functionality of the meta-data server and the hierarchical storage module. Likewise, as shown in FIG. 1 for clarity, the meta-data and the video data may be stored on different media. Alternatively, the meta-data and the video data may be stored on the same physical storage media. Similarly, the attribute data is shown stored in a separate attribute data storage area. The attribute data may be stored on a dedicated storage area, as illustrated, or may be stored on the same storage as the meta-data and/or the video data.
  • FIG. 2 shows a table 200 of possible meta-data that may be stored. Column 202 corresponds to events that were either generated by sensory devices, or by the meta-data server of FIG. 1. Illustrative events could be motion detected, gunshot detected, number of people in an area exceeds a threshold, speed of an object in a given area exceeds a threshold, and similar events. The sensory devices themselves, the meta-data server, or both, could generate these events, as described previously. Column 204 represents locations corresponding to those events. For example, locations could be the camera names or locations, such as “Camera 1,” “Parking Lot,” “Lobby,” etc. Column 206 represents the dates the events occurred. For example, a motion event was detected on May 15, 2007. Columns 208 and 210 represent the start and end times of the events, and are one form of indices into the video data. For example, a motion event occurred in Camera 1 on May 15, 2007 from 10:00 AM through 10:23 AM. Column 212 provides a pointer, or an index, to the video data that corresponds to the occurrence of that event. For example, the first event is stored by the hierarchical storage module on a local disk, while the second event is stored on a remote disk, for example, a NAS or a disk attached to a server. Finally, Column 214 stores access privileges associated with the event. For example, events where gunshots were detected may have a higher security level than ordinary motion events.
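  • By way of illustration, the meta-data table of FIG. 2 maps naturally onto a relational schema. The following sketch uses an in-memory SQLite database; the column names and pointer format are assumptions for illustration rather than values taken from the figure:

```python
import sqlite3

# Columns mirror FIG. 2: event, location, date, start/end times,
# a pointer into the video store, and an access-privilege level.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE meta_data (
        event        TEXT,
        location     TEXT,
        event_date   TEXT,
        start_time   TEXT,
        end_time     TEXT,
        video_ptr    TEXT,
        access_level INTEGER
    )
""")
# The motion event from the example: Camera 1, May 15, 2007,
# 10:00 AM through 10:23 AM, stored on a local disk.
conn.execute(
    "INSERT INTO meta_data VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("motion", "Camera 1", "2007-05-15", "10:00", "10:23",
     "local_disk/cam1/20070515_1000", 1),
)
rows = conn.execute(
    "SELECT location, start_time FROM meta_data WHERE event = 'motion'"
).fetchall()
```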
  • As video data is captured by the cameras, and stored in the hierarchical storage module, meta-data is indexed to the video data, and stored in the meta-data storage module. As noted previously, the meta-data may be generated by one or more sensory devices, including the cameras themselves, or may be entered manually by a human operator, such as a security guard.
  • The present invention provides a user interface by which a human operator may enter meta-data. For example, a user interface is provided for a security officer to monitor one or more cameras. The cameras automatically generate meta-data, as noted above. In addition, the human operator may add meta-data manually. For example, if the human operator observes suspicious activity going on in a particular camera, the human operator may add meta-data corresponding to suspicious activity, and the meta-data server in the meta-data storage module would store the meta-data.
  • The human operator may select from a set of possible meta-data tags, as well as add “free-form” meta-data by typing into a text-entry box. For example, a human operator may transcribe speech in the video data. The transcribed speech serves as meta-data to the video data. After the video data has been tagged by meta-data, it is possible to use the present system to search for keywords, such as all the times when a judge said “Order, Order” in a courtroom surveillance camera.
  • The present invention also provides a user interface by which a human operator may enter attribute data. Attribute data is information about the video data and its associated meta-data, such as its source, reliability, etc. For example, one type of attribute data is the camera that the video data was acquired from. Another example of attribute data is the quality of the camera that was used to acquire the video data (e.g., a 3 Megapixel camera would be weighted higher than a VGA camera for purposes of face recognition). Another example of attribute data is the historical pattern of a camera being susceptible to being tampered with.
  • Other examples of attribute data include, but are not limited to, time the camera was repaired or installed, reliability of power to the camera, reliability of transmission, bandwidth, susceptibility to noise, interference, overexposure, weather conditions, age of the camera, type of camera (night, IR, etc.).
  • FIG. 3 illustrates an example of attribute data 300, which includes attributes about the meta-data and their associated weights, or weighing functions. Column 302 shows various sensory devices and column 304 shows associated attributes. The weights, or weighing functions, associated with the attributes are shown in column 306. Column 308 indicates whether the weight is dynamic, that is, whether the weight changes with time, external events, and so on. Finally, column 310 indicates access privileges of individuals authorized to change the attribute data.
  • Different sensory devices, including different cameras, may have different attributes associated with them. Each attribute determines a weight, which could be a constant, or the weight could be a weighing function of the attribute. For example, consider camera 1, which is not designed to detect gunshots but has a low-quality, integrated microphone, so that a gunshot detection component may use the audio to detect loud sounds as gunshots. When a motion event is detected on such a camera, it would be assigned a high weight (for example, 0.85 or 85%). On the other hand, if a gunshot was detected on this camera by a gunshot detection component, the gunshot event would be assigned a low weight (0.05, or 5%) because the camera is known to have a low-quality microphone, and what was detected as a gunshot may merely have been a dropped metal object. In contrast, gunshot detector 1 may have the opposite attribute-weight profile, in that motion events from the gunshot detector may be weighted low (say, 0.15 or 15%) while gunshot events may be weighted high (say, 0.70 or 70%).
  • Other examples of attribute data and associated weights are shown in FIG. 3. Camera 1 may have an age attribute, indicating the age of the camera, and an associated weighting function that weights any data from the camera with a function that decreases with the camera's age. The time since the last maintenance of the camera may also serve to generate a weight. This could be, for example, a step function that drops to zero after 1 year of no maintenance on the camera. The frequency of failure may also serve to weigh any data from the camera, again using a function that weights data lower from a camera that has a high frequency of failure. The resolution of the camera may also serve as attribute data to assign a weight to the data; data from a high-resolution camera would be assigned a higher weight than data from a lower-resolution camera.
  • Another example of attribute data and associated weights tied to particular meta-data is the weight assigned to meta-data indicating the number of people in a particular area. This meta-data may be assigned a high weight (0.80) if it comes from camera 2, which may have high resolution, a high frame rate, and other qualities that make it highly reliable for people-counting purposes. Conversely, if the same meta-data comes from camera 3, which has low resolution, a low frame rate, or other qualities that make it unreliable when it comes to counting people, the meta-data may be assigned a low weight (0.40).
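  • The attribute-weight profiles in the examples above might be represented as a simple per-device lookup. The device names and the neutral default weight are assumptions for illustration:

```python
# Per-device attribute weights drawn from the examples in the text:
# camera 1 has a reliable motion channel but a low-quality
# microphone, while gunshot detector 1 has the opposite profile.
WEIGHTS = {
    "camera_1":           {"motion": 0.85, "gunshot": 0.05},
    "gunshot_detector_1": {"motion": 0.15, "gunshot": 0.70},
}

def weight_for(device, event_type, default=0.5):
    """Look up the attribute weight for an event type on a device,
    falling back to an assumed neutral default."""
    return WEIGHTS.get(device, {}).get(event_type, default)
```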
  • A system administrator may enter and customize the attribute data. A system administrator would customize the present system by entering weights that are associated with attribute data. For example, the system administrator would select the attribute data that corresponds to each camera. A system administrator may give a low-hanging camera that may be easily tampered with a lower attribute weight, and a high-hanging camera that is difficult to tamper with a higher attribute weight.
  • In another example, the system administrator would customize the attribute data for different image qualities. For example, the system administrator would select the weights associated with video data, and the corresponding meta-data, associated with different resolutions of cameras. That is, a higher resolution camera and its associated meta-data would be weighted higher than a lower resolution camera, and the system administrator would select the relative weights.
  • Another example of attribute data that the system administrator may set would be based on the past evidence of usefulness of video data coming from each camera. For example, a camera that has been useful in the past for detecting, preventing, or prosecuting crimes would be assigned a higher weight by the system administrator using this user interface.
  • After the meta-data has been stored in the meta-data storage module, the meta-data may be used to significantly enhance search and retrieval of the video data. That is, in order to perform a search of the video data, the meta-data may be searched first, and the video data may be indexed by the meta-data.
  • For example, suppose meta-data was recorded in the meta-data storage module during detection of a motion event in a particular camera. If at a later time it were desired to locate all places in the video data where motion was detected, a database query would be performed on the meta-data table to retrieve all events where motion was detected. The pointers to the video data and the indices into the video data would provide a mechanism by which to retrieve the video data that corresponds to those occurrences of motion.
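  • A minimal sketch of this search pattern, querying the meta-data first and returning the pointers into the video data (the record fields and pointer values are assumed for illustration):

```python
# Each meta-data record carries a pointer into the video store.
meta_data = [
    {"event": "motion",  "camera": "cam1", "video_ptr": "v1"},
    {"event": "gunshot", "camera": "cam2", "video_ptr": "v2"},
    {"event": "motion",  "camera": "cam3", "video_ptr": "v3"},
]

def find_events(records, **criteria):
    """Search the meta-data for records matching every criterion,
    and return the pointers into the corresponding video data."""
    return [
        r["video_ptr"]
        for r in records
        if all(r.get(k) == v for k, v in criteria.items())
    ]
```

  • In this sketch, find_events(meta_data, event="motion") returns the pointers for both motion events; the video data itself is then retrieved through those pointers, so the video streams are never scanned directly.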
  • FIG. 4 shows a possible set-theoretic explanation of the operation of the present invention. Consider the sets of video data V1, V2, . . . , Vi shown as elements 402, 428 in FIG. 4 respectively. Sets V1 (element 402) and V2 (element 428) represent video data from camera 1 and camera 2, respectively, and so on. Each set of video data Vi has subsets of video data, for example, subsets for a particular date range, for a particular time range, for a particular event, etc. For example, video set 402 has subsets of video data identified as elements 404, 406, 408, and 410 in FIG. 4.
  • Each set of video data Vi has a corresponding set of meta-data Mi associated with it. Each element in the set of meta-data Mi has an index, or a pointer, to a corresponding portion of the video data Vi. For example, meta-data set M1, shown as element 412 in FIG. 4, has corresponding subsets of meta-data, shown as elements 414, 416, 418, and 420. Each subset of meta-data is indexed to, or points to, a corresponding subset of video data. For example, subset 414 of meta-data M1 is indexed to, or points to, subset 406 of video data V1 from camera 1 (not shown). Note that a one-to-one relationship between video data and meta-data is illustrated in FIG. 4 for clarity. The relationship between video data and meta-data is not restricted to being one-to-one. The relationship may be one-to-many, many-to-one, as well as many-to-many.
  • In addition, sets Wi of attribute weight data are weight vectors associated with each set of meta-data Mi for camera i (not shown). The sets Wi are sets of vectors w_i,j that represent weights associated with subsets of the meta-data Mi. For example, weight vector w_i,j, represented as element 424, represents the weights associated with meta-data subset 416. The weight vectors w_i,j may be n-dimensional, each dimension representing a weight for a particular attribute of the data. For example, a 2-dimensional weight vector [w_11, w_12] may represent the attribute weights associated with a particular video camera's motion detection reliability and gunshot detection reliability. One camera may have high motion detection reliability and low gunshot detection reliability, while another camera may have the reverse. In principle, the attribute weight vectors w_i,j may be arbitrarily fine-grained with respect to subsets of the meta-data and subsets of the video data. In practice, attribute weight vectors w_i,j are constant over large subsets of the meta-data and the video data, and may have large discontinuities between subsets. For example, gunshot detection devices may have a very low motion detection reliability weight and a very high gunshot detection reliability weight, and vice versa for typical motion detection cameras.
  • The set-theoretic description has been shown and described here for ease of understanding and explanation of the present invention. The meta-data and video data may or may not be stored as sets; the data may be stored in matrices, tables, relational databases, etc. The set description is shown for clarity only. The present invention is not limited to this particular mathematical representation, and one of ordinary skill will recognize numerous alternative and equivalent mathematical representations of the present invention.
  • For example, a possible query to retrieve those events in which motion was detected would be:

  • SELECT ALL EVENTS WHERE MOTION=TRUE  (1)
  • Query (1) would retrieve all events where motion was detected. In the set-theoretic notation described above, the query (1) would correspond to:

  • x_j ∈ V_i | M_i,j(motion = true)  (2)
  • In order to view the video data corresponding to a particular event, a possible query would be:

  • VIEW EVENT 1 WHERE MOTION=TRUE  (3)
  • Similar queries could be used to retrieve other events. For example, in order to retrieve all events in which a gunshot was detected, a possible query would be:

  • SELECT ALL EVENTS WHERE GUNSHOT=TRUE  (4)
  • Query (4) would be represented in set-theoretic notation as:

  • x_j ∈ V_i | M_i,j(gunshot = true)  (5)
  • To view the first 3 events where gunshots were detected, a possible query would be:

  • VIEW EVENT 1-3 WHERE GUNSHOT=TRUE  (6)
  • To search for all video data where three or more people were detected in a single frame, a possible query would be:

  • SELECT ALL EVENTS WHERE NUMBER_OF_PEOPLE>=3  (7)
  • Query (7) would be represented in set-theoretic notation as:

  • x_j ∈ V_i | M_i,j(number_of_people ≥ 3)  (8)
  • Similarly, in order to view the video data corresponding to the first two events where three or more people were detected, a possible query would be:

  • VIEW EVENT 1-2 WHERE NUMBER_OF_PEOPLE>=3  (9)
  • Event searches may be restricted by particular locations or date-ranges. For example, a security analyst may only wish to search a particular camera, or location, where 3 or more people were detected, for example:

  • SELECT ALL EVENTS WHERE NUMBER_OF_PEOPLE>=3 IN CAMERA1  (10)
  • Query (10) would be represented in set-theoretic notation by restricting the search to V1 (video data from camera 1) as follows:

  • x_j ∈ V_1 | M_1,j(number_of_people ≥ 3)  (11)
  • The security analyst may also restrict searches by date and/or time. For example, the security analyst may only wish to search a particular date range where 3 or more people were detected, for example:

  • SELECT ALL EVENTS WHERE NUMBER_OF_PEOPLE>=3 ON 05-15-2007  (12)
  • Query (12) may be represented in set-theoretic notation as:

  • x_j ∈ V_i | {M_i,j(number_of_people ≥ 3) ∩ M_i,j(date = 20070515)}  (13)
  • Combinations of events may also be searched. For example, a security analyst may want to search historical video data for all occurrences where a gunshot was detected at the same time as 3 or more people were detected in the video frame. A possible query to accomplish this would be:

  • SELECT ALL EVENTS WHERE GUNSHOT=TRUE AND NUMBER_OF_PEOPLE>=3  (14)
  • Query (14) may be represented in set theoretic notation as:

  • x_j ∈ V_i | {M_i,j(number_of_people ≥ 3) ∩ M_i,j(gunshot = true)}  (15)
  • Any number of combinations and sub-combinations of events may be searched using the query language, including unions and intersections (conjunctions and disjunctions) of events using AND/OR operators, as well as other logical operators.
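  • Such conjunctive searches, mirroring queries (14) and (15), can be sketched as an intersection of predicates over the meta-data; the field names and pointer values are illustrative assumptions:

```python
records = [
    {"gunshot": True,  "people": 4, "video_ptr": "a"},
    {"gunshot": True,  "people": 1, "video_ptr": "b"},
    {"gunshot": False, "people": 5, "video_ptr": "c"},
]

def select(records, *predicates):
    """AND-combination (intersection): every predicate must hold.
    An OR-combination (union) would use any() in place of all()."""
    return [r["video_ptr"] for r in records
            if all(p(r) for p in predicates)]

# Gunshot detected AND three or more people in the frame.
hits = select(records,
              lambda r: r["gunshot"],
              lambda r: r["people"] >= 3)
```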
  • Events may also be correlated and analyzed across multiple cameras, or multiple locations. For example, a security analyst may want to see all events where 1 or more people were detected in a particular lobby, and a gunshot was heard in a parking lot camera. To perform such a search, the security analyst could search by:

  • SELECT ALL EVENTS WHERE NUMBER_OF_PEOPLE>=1 IN LOBBYCAMERA1 AND GUNSHOT=TRUE IN PARKINGCAMERA3  (16)
  • Query (16) may be interpreted in set-theoretic notation as:

  • x_j ∈ V_1 ∪ V_3 | {M_1,j(number_of_people ≥ 1) ∩ M_3,j(gunshot = true)}  (17)
  • The security analyst is not required to use a query language. A query language may be used for sophisticated searches. For more basic searches, a user interface is provided that allows the security analyst to select the meta-data criteria by which to search using a visual tool. The user interface automatically generates the query language and sends it to the meta-data server for retrieval.
  • A possible structured query language was shown here. However, the present invention is not limited to the query language shown or described here. Any number of query languages are within the scope of the present invention, including SQL, IBM BS12, HQL, EJB-QL, Datalog, etc. This list of query languages is not exhaustive, and they are listed here for illustrative purposes only.
  • When performing queries on meta-data, such as unions and intersections, attribute weights may be recalculated. For example, to recalculate the attribute weights for an intersection of two subsets of meta-data, the attribute weights would be multiplied together, as shown:

  • W(M 1 ∪M 2)=W(M 1W(M 2),  (18)
  • For example, to calculate the weight associated with two motion events occurring substantially simultaneously, where the first motion event has a reliability of 90% (0.90) and the second motion event has a reliability of 50% (0.50), the weight associated with both motion events occurring substantially simultaneously is 45% (0.45).
  • To recalculate the attribute weights for a union of two subsets of meta-data, the law of addition of probabilities would be applied, as shown:

  • W(M 1 ∪M 2)=W(M 1)+W(M 2)−W(M 1W(M 2)  (19)
  • For example, to calculate the weight associated with either one of two motion events occurring substantially simultaneously, where the first motion event has a reliability of 90% (0.90) and the second motion event has a reliability of 50% (0.50), the weight associated with either one of the events occurring is 95% (0.95).
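  • The two recalculation rules, the product rule for intersections and the law of addition of probabilities for unions, can be sketched directly; the worked figures in the text (0.90 and 0.50 yielding 0.45 and 0.95) serve as a check:

```python
def weight_and(w1, w2):
    """Weight of both events occurring together: the product
    of the individual attribute weights."""
    return w1 * w2

def weight_or(w1, w2):
    """Weight of either event occurring: the law of addition
    of probabilities (Eq. 19)."""
    return w1 + w2 - w1 * w2
```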
  • One embodiment of the present invention allows real-time alerts to be issued based on the present and historical video data, and especially the present and historical meta-data. A correlation engine correlates meta-data, both present and historical, across multiple sensory devices and multiple locations, and activates one or more actions in response to the correlation exceeding a particular threshold. The correlation engine may evaluate various rules, such as “issue an alert to person A when one or more people are present in location B AND a gunshot was detected in location B in the past 24 hours.” Video detection components are used to extract relevant meta-data (also called video parameters), from the video sources; the meta-data is input into the correlation engine. Input components may be used to receive inputs from other systems, for example sensory devices, such as temperature probes. Action components represent various actions that may be taken under certain conditions, and may be activated by the correlation engine. Finally, service components provide interfaces for services performed by human beings, for example meta-data addition by human operators.
  • In one embodiment, illustrated in FIG. 5, past and present video data, past and present meta-data, and past and present data from sensory devices are used to generate real-time alerts. One or more data inputs 502 are received via one or more input components 504 (only one input component is illustrated for clarity). The data inputs could be data from police reports, anonymous tips, sensory devices, etc. In one embodiment, data inputs could come from a personnel database in storage and from a temperature probe (not shown). The input components, such as input component 504, provide interfaces between the system 500 and various input devices. The data inputs 502 are assigned a weight by data attribute engine 506 based on the attributes associated with the data inputs. As described above, the weights may be a function of the input data, the source of the input data (such as its reliability), external events (such as the National Terror alerts in the United States), or any other information. (Only one data input is shown being processed by data attribute engine 506 for clarity.)
  • One or more video inputs 507 are received and processed by one or more detection components 508 (only one video detection component is illustrated for clarity). The video inputs could be historical, archived video data, such as video from storage 512, or could be video data from live video cameras (not shown). The detection components, such as detection component 508, determine one or more video parameters from the video inputs 507. For example, detection component 508 may detect whether or not there is a person in a particular region of video input 507. The one or more video parameters that are determined by the detection component 508 are assigned a weight by video attribute engine 510. As described above, the weights may be a function of the video data, the video source (such as its reliability), external events (such as the National Terror alerts in the United States), or any other information. (Only one video parameter is shown being processed by video attribute engine 510 for clarity.)
  • The detection components also store meta-data, which represent events detected by the detection component, in meta-data storage 513. For example, a motion detection component, when detecting motion, stores meta-data indicating that motion was detected in a certain camera in a certain period in meta-data storage 513. The meta-data may be represented and stored in a table as illustrated in FIG. 2, or the meta-data may be stored and represented in some other manner.
  • The historical meta-data stored in metadata storage 513 is weighted by attribute weights by metadata attribute engine 514. The correlation engine 520 evaluates one or more rules, or triggers, based on the weighted metadata from metadata attribute engine 514.
  • The weighted input data, the weighted video data, and the weighted meta-data (outputs from the data attribute engine 506, the video attribute engine 510, and the metadata attribute engine 514) are processed by correlation engine 520. Correlation engine 520 evaluates a set of rules based on the weighted input data, the weighted video data, and the weighted meta-data. The correlation engine may also be considered to correlate two or more events together. The correlation engine 520 activates one or more actions via one or more action components 522. For example, the correlation engine 520 may contain a rule stating: “Issue an alert to the Police (Action Component 1) if ten or more people gather in a given area (Video Detection Component 1) and within the last 48 hours there was a gunshot detected in that area (historical Metadata 1).” If the preconditions of the rule are satisfied, the action is performed. As discussed previously, the preconditions may be weighted based on the data, the source of the data, external events, and other information. For example, a more recent shooting may receive a higher weight than an older shooting.
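  • The example rule above might be sketched as follows; the function signature, thresholds, and callback are assumptions for illustration, not the correlation engine's actual interface:

```python
def evaluate_rule(people_count, gunshot_age_hours, issue_alert):
    """Fire the action component if ten or more people are present
    AND a gunshot was detected in the same area within 48 hours."""
    if (people_count >= 10
            and gunshot_age_hours is not None
            and gunshot_age_hours <= 48):
        issue_alert("police")
        return True
    return False

alerts = []
# Twelve people present, gunshot 5 hours ago: both preconditions hold.
fired = evaluate_rule(12, 5, alerts.append)
```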
  • In FIG. 5, data may also come from a service component 518. Service components, such as service component 518, are interfaces to human operators. For example, a service component may provide an interface for human operators to monitor a given area for suspicious activity, and to send a signal to the correlation engine 520 that suspicious activity is going on in a given area. The correlation engine 520 will activate an action if a corresponding rule is activated. Alternatively, the human operator may force an action to be performed by directly activating an action component, such as action component 522.
  • Equations 20 to 22 show possible rules that may be evaluated by correlation engine 520. For example, as shown in Eq. 20, action component a1 will be activated if the expression on the left-hand side is greater than a predetermined threshold τ1. In Eqs. 20-22, "a" stands for an action component, "f," "g," and "h" are predetermined functions, "w" stands for a weight, "x" stands for the input data, and "v" stands for the video data. Eqs. 20-22 could represent a hierarchy of actions that would be activated for different threshold scenarios. Alternatively, Eqs. 20-22 could represent several rules being evaluated in parallel. Eqs. 20-22 are illustrative of only one embodiment of the present invention, and the present invention may be implemented using other equations, other expressions, or even heuristic rules rather than equations.
  • a₁ = f₁(∑_{i=1}^{n} wᵢ·xᵢ) + g₁(∑_{i=1}^{m} wᵢ·vᵢ) + h₁(∫_{t=1}^{tₙ} w(v)·v(t) dt) ≥ τ₁  (20)
  • a₂ = f₂(∑_{i=1}^{n} wᵢ·xᵢ) + g₂(∑_{i=1}^{m} wᵢ·vᵢ) + h₂(∫_{t=1}^{tₙ} w(v)·v(t) dt) ≥ τ₂  (21)
  • a_j = f_j(∑_{i=1}^{n} wᵢ·xᵢ) + g_j(∑_{i=1}^{m} wᵢ·vᵢ) + h_j(∫_{t=1}^{tₙ} w(v)·v(t) dt) ≥ τ_j  (22)
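As a minimal sketch of the rule form in Eqs. 20-22, assuming illustrative function and variable names that are not in the original: each rule sums a function of the weighted input data, a function of the weighted video parameters, and a function of the time-integrated video term, then compares the total against its threshold.

```python
# Hypothetical sketch of an Eq. 20-style rule: f, g, h are the predetermined
# functions; weights_x/x are input-data weights and values; weights_v/v are
# video-parameter weights and values; integral_term stands in for the
# time-integrated video term; tau is the activation threshold.

def evaluate_rule(f, g, h, weights_x, x, weights_v, v, integral_term, tau):
    """Return True if the weighted combination meets or exceeds tau."""
    data_term = f(sum(w * xi for w, xi in zip(weights_x, x)))
    video_term = g(sum(w * vi for w, vi in zip(weights_v, v)))
    history_term = h(integral_term)
    return data_term + video_term + history_term >= tau

# Example with identity functions: a gunshot flag weighted by recency (0.9)
# and a people-count video parameter (12 people detected).
identity = lambda s: s
fired = evaluate_rule(identity, identity, identity,
                      weights_x=[0.9], x=[1.0],   # gunshot within last 48 h
                      weights_v=[1.0], v=[12.0],  # 12 people in the area
                      integral_term=0.0, tau=10.0)
print(fired)  # True: 0.9 + 12.0 + 0.0 >= 10.0
```

The threshold τ and the weights are the tunable parts; a hierarchy of thresholds τ₁ < τ₂ < … would activate progressively stronger actions.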
  • Equation 23 shows an example of a calculation of determining a weight that may be performed by data attribute engine 506, video attribute engine 510, or metadata attribute engine 514. The weight "w" may be based on attribute data, including the source of the data "s" (for example, the reliability of the source), the time that the data was received "t" (for example, older data would be assigned a lower weight), and the frequency that the data was received "f" (for example, the same data received multiple times would be assigned a higher weight). Other weighting factors may also be used, and the weighting factors described here are illustrative only and are not intended to limit the scope of the invention.

  • wᵢ = sᵢ · tᵢ · … · ƒᵢ  (23)
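A sketch of Eq. 23, assuming the attribute factors (source reliability sᵢ, recency tᵢ, frequency ƒᵢ) are available as plain numbers; the factor values below are invented for illustration.

```python
from functools import reduce
from operator import mul

def attribute_weight(factors):
    """Eq. 23 sketch: the weight w_i is the product of its attribute
    factors, e.g. source reliability s_i, recency t_i, frequency f_i.
    Any number of factors may be multiplied in; names are illustrative."""
    return reduce(mul, factors, 1.0)

# A fairly reliable source (0.9), fully recent data (1.0), seen twice (1.2):
w = attribute_weight([0.9, 1.0, 1.2])
print(w)
```

Because the factors multiply, any single low factor (e.g. an unreliable source) pulls the whole weight down, which matches the engine's intent of discounting questionable data.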
  • Equation 24 shows an example of a calculation that may be performed by detection component 508 to determine a video parameter “vi” from the video data “v(t)”. Eq. 24 shows a video stream “v(t)” weighted by a weighting function “w(v)” and integrated over time from time t=1 to t=tn. The video parameter “vi” may be obtained as a function “fi” of the integral. For example, a detection component for counting the number of people that enter a region over a period of time may perform face detection in a given frame, count the number of faces detected, and then integrate over several frames to obtain a final count.
  • vᵢ = fᵢ(∫_{t=1}^{tₙ} w(v)·v(t) dt)  (24)
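Eq. 24 can be sketched as a loop that accumulates weighted per-frame detections over time; the frames and the `detect_faces` stand-in below are hypothetical, not part of the original.

```python
def count_people_over_time(frames, weight_fn, detect_faces):
    """Eq. 24 sketch: weight each frame, run detection on it, and
    accumulate ('integrate') the weighted counts over frames t = 1..t_n.
    detect_faces is a stand-in for a real face-detection routine."""
    total = 0.0
    for frame in frames:
        total += weight_fn(frame) * detect_faces(frame)
    return total

# Toy frames: each 'frame' here is just a precomputed face count.
frames = [2, 3, 1]
v_i = count_people_over_time(frames, weight_fn=lambda f: 1.0,
                             detect_faces=lambda f: f)
print(v_i)  # 6.0
```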
  • In one embodiment, the function “fi” of Eq. 24 may be a composition of several functions, as shown in Equation 25. For example, a detection component may count the number of people wearing a safety helmet that enter a given area by composing a safety helmet detection function with a people counting function.

  • ƒi1∘ƒ2∘ . . . ∘ƒn  (25)
  • In one embodiment, the new, or future, weights “wj” may be based on the past weights “wi” and external events “ei”. Examples of external events could be “Amber Alerts” for missing children, “National Terror Alerts” for terrorist activity in the United States, etc. Eq. 26 shows an example of a calculation for determining new, or future, weights “wj” by composing a matrix of past weights “wi” with external events “ei”.
  • [w₁, w₂, …, w_j]ᵀ = [e₁, e₂, …, eₙ] · [w₁, w₂, …, wᵢ]ᵀ  (26)
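Eq. 26 as printed composes a row of external-event coefficients with a column of past weights; one natural reading (an assumption here, not stated in the original) is that the external events supply a matrix of coefficients mapping past weights to new weights.

```python
def update_weights(event_matrix, past_weights):
    """Eq. 26 sketch (one possible reading): new weights w_j are a linear
    composition of past weights w_i with external-event coefficients.
    event_matrix[j][i] scales past weight i's contribution to new weight j."""
    return [sum(e * w for e, w in zip(row, past_weights))
            for row in event_matrix]

# Illustrative numbers: an 'Amber Alert' doubles the weight of
# person-detection meta-data and leaves vehicle-detection weight unchanged.
E = [[2.0, 0.0],
     [0.0, 1.0]]
print(update_weights(E, [0.5, 0.8]))  # [1.0, 0.8]
```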
  • FIG. 6 shows an example of software architecture 600 of one embodiment of the present invention. A presentation layer 602 provides the front-end interface to users of the system 100 of FIG. 1. Several user interfaces are provided. For example, a user interface is provided for an administrator, who can modify various system parameters, such as the data input components, the detection components, the data and video weights, the rules, as well as the action components. Another user interface is provided for an officer, such as a security guard, to monitor the activity of the system 100. For example, a user interface for the security officer would allow the officer to monitor alerts system-wide, turn on and off appropriate cameras, and notify authorities. An interface is also provided for an end-user, such as an executive. The interface for the end-user allows, for example, the end-user to monitor those alerts relevant to him or her, as well as to view those cameras and video sources he or she has permission to view. Various user interfaces may be created for various users of the present invention, and the present invention is not limited to any particular user interface shown or described here. Other user interface screens, for adding meta-data and for modifying attribute data, were discussed above.
  • A middle layer 604 provides the middleware logic for the system 100. The middle layer 604 includes the weight engines 506, 510 as well as the correlation engine 520 of FIG. 5. The middle layer interfaces with the user interface 602 and evaluates the logic of Equations 20-26.
  • A database layer 606 is provided for storing the input data and the video data. In one embodiment, the database layer 606 may be implemented using a hierarchical storage architecture, in which older data, or less frequently used data, is migrated to slower and less expensive storage media. The database layer 606 provides the input data and the video data to the middle layer 604, which in turn processes the data for display by the presentation layer 602.
  • FIG. 7 shows an example of hardware architecture 700 of one embodiment of the present invention. The software architecture 600 may be implemented using any hardware architecture, of which FIG. 7 is illustrative. A bus 714 connects the various hardware subsystems. A display 702 is used to present the output of the presentation layer 602 of FIG. 6. An I/O interface 704 provides an interface to input devices, such as a keyboard and mouse (not shown). A network interface 705 provides connectivity to a network, such as an Ethernet network, a Local Area Network (LAN), a Wide Area Network (WAN), an IP network, the Internet, etc. RAM 706 provides working memory while executing a process according to system architecture 100 of FIG. 1. Program code for execution of a process according to system architecture 100 of FIG. 1 may be stored on a hard disk, a removable storage medium, a network location, or other location (not shown). CPU 709 executes program code in RAM 706, and controls the other system components. Meta-data is stored in metadata storage module 708, and attribute data is stored in attribute storage module 709. Hierarchical storage manager 710 provides an interface to one or more storage modules 712 on which video data is stored. Audit information, including data about who, when, and how often someone accessed particular video data, is stored in audit storage module 711. As stated previously, the separation between meta-data storage, attribute storage, and video storage is logical only, and all three storage modules, or areas, may be implemented on one physical medium, as well as on multiple physical media. It is to be understood that this is only an illustrative hardware architecture on which the present invention may be implemented, and the present invention is not limited to the particular hardware shown or described here.
It is also understood that numerous hardware components have been omitted for clarity, and that various hardware components may be added without departing from the spirit and scope of the present invention.
  • FIG. 8 shows a flowchart of a process for storing video data according to one embodiment of the present invention. Process 800 begins in step 802. Video data is captured from one or more surveillance cameras, as shown in step 804. Meta-data is generated by performing video analysis on the captured video data, as shown in step 806. Attribute data and associated weights, representing information about the relevance of the meta-data, are received, as shown in step 808. Optionally, a video tip may be received from a well-meaning citizen, and associated meta-data and attribute data may be received or generated, as shown in step 810. Unions and intersections of meta-data may be used to generate additional meta-data, as shown in step 812. The video data is stored in a hierarchical storage module, as shown in step 814. The meta-data, indexed by date and time stamp to the video data, is stored in a meta-data storage module, as shown in step 816. Attribute data, indexed to the meta-data, is stored in an attribute storage area, as shown in step 818. Process 800 ends in step 820.
  • FIG. 9 shows a flowchart of a process for retrieving video data and associated meta-data and attribute data according to another embodiment of the present invention. Process 900 begins in step 902. Search criteria are entered, as shown in step 904. Meta-data, which was previously generated by video detection components and indexed to the video data, is searched, as shown in step 906. Meta-data matching the search criteria is retrieved from a meta-data storage module, as shown in step 908. Video data, indexed to the meta-data by date and time, is retrieved from the video data storage module, as shown in step 910. If the video data was encrypted, the video data is decrypted, as shown in step 912. Attribute data, representing reliability of the meta-data, is retrieved from the attribute data storage module, as shown in step 914. Audit information, recording who accessed the video data and when, may be stored, as shown in step 916. Process 900 ends in step 918.
  • FIG. 10 shows a flowchart of a process for intelligent alerting based on past and present meta-data according to yet another embodiment of the present invention. Process 1000 may be stored in RAM 706, and may be executed on CPU 709 of FIG. 7. Process 1000 begins in step 1002. Video data is captured from one or more surveillance cameras, as shown in step 1004. Meta-data is generated by performing video analysis on the captured video data, as shown in step 1006. Attribute data and associated weights, representing information about the relevance of the meta-data, are received, as shown in step 1008. Historical meta-data is retrieved from a meta-data storage module, as shown in step 1010. Attribute data associated with the meta-data is retrieved from an attribute storage module, as shown in step 1012. A set of rules is evaluated based on the generated meta-data, the historical meta-data, and the associated attribute data, as shown in step 1014. One or more actions, which could include issuing an alert, are performed based on the evaluation of the rules, as shown in step 1016. Process 1000 ends in step 1018.
  • FIG. 11 shows another example of a hardware architecture 1100 according to another embodiment of the present invention. A network 1120, such as an IP network over Ethernet, interconnects all system components. Digital IP cameras 1115, running integrated servers that serve the video from an IP address, may be attached directly to the network. Analogue cameras 1117 may also be attached to the network via analogue encoders 1116 that encode the analogue signal and serve the video from an IP address. In addition, cameras may be attached to the network via DVRs (Digital Video Recorders) or NVRs (Network Video Recorders), identified as element 1111. The video data is recorded and stored on data storage server 1108. Data storage server 1108 may be used to store the video data, the meta-data, as well as the attribute data and associated weights. Data is also archived by data archive server 1113 on enterprise tape library 1114. Data may also be sent to remote storage 1106 via a dedicated transmission media such as a fiber optic line, or via a public network such as the Internet.
  • Legacy systems, such as external security systems 1109, may be interfaced via appropriate input components, as described above. A central management server 1110 manages the system 1100, providing system administration, access control, and management functionality. Enterprise master and slave servers 1112 provide additional common system functionality. Video analytics server 1107 runs the video detection modules described below, as well as providing the interface to search, retrieve, and analyze the video data and meta-data stored on data server 1108.
  • The video, including live feeds, as well as recorded video, may be viewed on smart display matrix 1105. The display matrix includes one or more monitors, each monitor capable of displaying multiple camera or video views simultaneously. One or more clients are provided to view live video data, as well as to analyze historical video data. Supported clients include PDA 1101, central client 1102, and smart client 1103. A remote client 1104 may be connected remotely from anywhere on the network or even over the public Internet, due to the open IP backbone of the present invention.
  • FIG. 12 illustrates another example of the software architecture 1200 of another embodiment of the present invention. Data is collected in data collection software layer 1201. For example, a web interface for a security officer allows the officer to view video data, add meta-data, and view the status of any alerts. Also in data collection software layer 1201 is a file map interface for interfacing to a map of a building, location, corporate facility, campus, etc. This allows the video data to be correlated to precise locations. A voice interface allows for tips to be received via phone or a voice recording. A video interface provides an interface to the video data from numerous cameras. (COTS = commercial off-the-shelf.)
  • Data is filtered, weighted, integrated, and correlated in filter software layer 1202 by the correlation engine, as described previously. Data analysis software layer 1203 provides an interface for a security analyst or data analyst to search, analyze, and review recorded and live video data, as described above. Dissemination software layer 1204 issues reports, alerts, and notifications based on the video data, the meta-data, and the attribute data, as described above. Action software layer 1205 performs actions in response to alerts, including turning systems on or off, notifying the police, fire, and so on, as described above. In one embodiment of the present invention, the software layers may communicate using XML (eXtensible Markup Language). However, the present invention is not limited to using XML to communicate between software layers, and other communication techniques may be used, including open APIs, etc.
  • One embodiment of the present invention allows for the receipt and storage of “video tips,” which are short video clips captured by well-intentioned citizens. Video tips would be received by the present invention via a user interface. For example, a person would log into the system via the Internet and upload a video of a crime that the person caught on video. The system would process the video tip in a manner analogous to the way it would process video from a surveillance camera. The video detection components would be used to detect one or more events in the video, such as motion, people counting, etc., and generate meta-data about the video tip. In addition, the citizen submitting the video tip would also submit associated meta-data, such as the date and time it was captured, its relevance, the names of people in the video, the occurrence of any crime in the video, etc.
  • Attribute data would be assigned to the video tip based on such factors as the identity of the informant, the quality of the video, the reliability of the source, other tips that are coming in, etc. Once the video tip has entered the system, it is processed in a similar manner to the way video data from the surveillance cameras is processed, as detailed above. The video tip would be archived in the video storage module, and its associated meta-data and attribute data would be stored. It would serve as one additional input into the correlation engine and would be weighted and factored in when generating alerts. In addition, it would be available for later search and retrieval by its associated meta-data and attribute data.
  • According to the present invention, various detection components may be used to generate meta-data, or video parameters, from the video inputs. These detection components may be configured to record meta-data along with each occurrence of an event. For example, as shown in FIG. 2, whenever a motion event is detected, meta-data corresponding to the motion event is recorded along with the video data. In another example, if a person is detected in an area by a face detection component, meta-data may be stored along with each occurrence of that person in the video. Some illustrative detection components are listed below. However, the present invention is not limited to these detection components, and various detection components may be used to determine one or more video parameters (meta-data), and are all within the scope of the present invention.
  • 1. Detect presence of intruder in designated area
  • 2. Detect presence of intruder in designated area during designated time
  • 3. Detect whether it is a person in designated area (excluding pets, wind, etc.)
  • 4. Detect number of people in designated area
  • 5. Detect if more people entered a designated area than left the designated area
  • 6. Detect voice (sound) volume
  • 7. Recognize certain sound patterns, such as gunshots or shouts
  • 8. Detect certain key words
  • 9. Detect speed of motion of an object
  • 10. Detect size of object
  • 11. Detect area of motion
  • 12. Detect acceleration
  • 13. Detect if person is too short in designated area
  • 14. Detect if person is too tall in designated area
  • 15. Detect a face
  • 16. Recognize a certain face
  • 17. Detect object left in a given area for a certain period of time
  • 18. Count number of vehicles
  • 19. Detect if vehicle crossed lane
  • 20. Detect if vehicle is driving the wrong way in a lane
  • 21. Determine type of vehicle
  • 22. Detect license plate of vehicle
  • 23. Detect percent of lane occupied
  • 24. Detect speed of vehicle
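The detection components above share a common shape: examine video and, on an event, emit meta-data indexed by date and time stamp for the meta-data storage module. A hypothetical minimal interface is sketched below; all names and the toy frame format are illustrative, not from the original.

```python
import datetime

class DetectionComponent:
    """Hypothetical minimal interface for a detection component: each
    examines a frame and, when its event occurs, emits meta-data indexed
    by date and time stamp, ready for the meta-data storage module."""

    def __init__(self, name, predicate):
        self.name = name
        self.predicate = predicate  # frame -> bool

    def process(self, frame, timestamp):
        """Return a meta-data record if the event fired, else None."""
        if self.predicate(frame):
            return {"event": self.name, "timestamp": timestamp.isoformat()}
        return None

# Toy frame: number of people visible in a designated area.
crowding = DetectionComponent("crowd>=10", lambda frame: frame["people"] >= 10)
meta = crowding.process({"people": 12},
                        datetime.datetime(2007, 5, 29, 20, 10))
print(meta)  # {'event': 'crowd>=10', 'timestamp': '2007-05-29T20:10:00'}
```

Because every component emits the same record shape, new detectors can be added to the open architecture without changing the storage or correlation layers.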
  • Additionally, various sensory devices may be integrated into system 100 of FIG. 1 by adding an input component for receiving and processing the input from the sensory device. Some illustrative input components are listed below. However, the present invention is not limited to these input components, and various other input components associated with various other sensory and other devices are within the scope of the present invention.
  • 1. Measure temperature
  • 2. Measure pressure
  • 3. Measure height
  • 4. Measure speed
  • 5. Measure revolutions per minute
  • 6. Measure blood pressure
  • 7. Measure heart rate
  • 8. Measure RFID signal
  • 9. Measure Chlorine level
  • 10. Measure radon level
  • 11. Measure Dust particle level
  • 12. Measure pollution level
  • 13. Measure CO2 emission level
  • 14. Measure bacteria level in water
  • 15. Measure water meter
  • 16. Measure electrical meter
  • As described above, various action components may be used to perform one or more actions in response to a rule being activated. The rules engine may activate one or more action components under certain conditions defined by the rules. Some illustrative action components are listed below. However, the present invention is not limited to these particular action components, and other action components are within the scope of the present invention.
  • 1. Send email alert to designated person
  • 2. Send SMS alert to designated phone number
  • 3. Send message to designated blackberry
  • 4. Send alert to public address system
  • 5. Send message or picture to police
  • 6. Send alert email to mass mailing list
  • 7. Send text message (SMS) to mass list
  • 8. Send alert to PC or PocketPC
  • 9. Call designated phone
  • 10. Turn lights on or off in designated area
  • 11. Turn thermostat up or down
  • 12. Turn camera on or off
  • 13. Issue a forced alert (with automatic escalation if no response)
  • 14. Follow a person using a Pan-Tilt-Zoom (PTZ) camera
  • 15. Follow a person from camera to camera
  • According to the present invention, service components may be used to integrate human intelligence into system 500 of FIG. 5. For example, a service component may provide a user interface for remote security guards who may monitor the video inputs. Some illustrative examples of what the security guards could monitor for and detect are listed below. A human operator may detect some events, such as "suspicious behavior," which may be difficult for a computer to detect. The human operators may also add meta-data for each occurrence of an event. For example, a security guard may add meta-data to each portion of a video where he or she noticed suspicious activity. The present invention is not limited to the examples described here, and is intended to cover all such service components that may be added to detect various events using a human operator.
  • 1. Detect people going into building but not coming out
  • 2. Detect people carrying packages in and not carrying out
  • 3. Detect people carrying packages out but not carrying in
  • 4. Detect people wearing different clothes
  • 5. Detect people acting suspiciously
  • 6. Detect people carrying guns
  • 7. Detect people tampering with locks
  • 8. Detect people being mugged
  • 9. Detect a shooting
  • 10. Detect people being bullied
  • The present invention may be implemented using any number of detection, input, action, and service components. Some illustrative components are presented here, but the present invention is not limited to this list of components. An advantage of the present invention is the open architecture, in which new components may be added as they are developed.
  • The components listed above may be reused and combined to create advanced applications; using various combinations and sub-combinations of components, many advanced applications can be assembled.
  • The following discussion illustrates just one advanced application that may be created using the above components, describing a real shooting that might have been prevented, and the assailants apprehended, had the present invention been in use.
  • On Dec. 16, 2005, in a parking lot on MIT's campus, Professor Donovan, co-author of the present patent application, was shot at 14 times at night while in a moving car, and was hit 4 times. If the invention described here had been in place, the following would have occurred. Surveillance cameras would have detected Professor Donovan entering the building at 8:00 PM, and would have stored the video data and associated meta-data (namely, motion detection); a high weight would have been calculated based on the attribute data (an executive swiping in late at night, obtained from the legacy access system). At approximately 8:10 PM, the motion of two men would have been detected in the parking lot. The video data and associated motion meta-data would be stored locally, as well as remotely. The weight associated with the attribute data (motion after 8:00 PM at night) would be high. The correlation engine would retrieve the stored motion meta-data of Professor Donovan entering the building and the meta-data associated with two men moving in the parking lot, and would have issued an alert to all people still in the building, including Professor Donovan, via their BlackBerries or cell phones. The email alert would have contained a picture of the parking lot; Professor Donovan would not have entered the parking lot and would possibly not have been shot.
  • Different weights would be associated with the detected method of entrance into the parking lot. For example, motion detected in the fence area would receive a higher weight than motion near the entrance gate. Motion meta-data combined with meta-data indicating people loitering at the entrance gate would also receive a higher weight.
  • For later (after the crime) criminal and forensic analysis, the video data would have been searched using meta-data for the precise time when the two men entered the parking lot, and for all previous occurrences when two men were detected in the parking lot. The assailants might thus have been spotted scoping the area as well as committing the attempted murder, which could have led to their identification and capture.
  • Only one illustrative scenario in which the present invention could be applied was described here. However, as will be immediately recognized by one of ordinary skill in the art, the present invention is not limited to this particular scenario. The present invention could be used to help prevent and fight crime and terrorist activity, as well as to ensure safety procedures are followed, by integrating the components described here.
  • In one embodiment, a system administrator may set the rules. The system administrator may hold an ordered, procedural workshop with the users and key people of the organization to determine the weighting criteria and the alerting levels.
  • In another embodiment, the rules may be heuristically updated. For example, the rules may be learned based on past occurrences. In one embodiment, a learning component may be added which can recognize missing rules. If an alert was not issued when it should have been, an administrator of the system may note this, and a new rule may be automatically generated. For example, if too many alerts were being generated for motion in the parking lot, the weights associated with the time would be adjusted.
  • While the methods disclosed herein have been described and shown with reference to particular operations performed in a particular order, it will be understood that these operations may be combined, sub-divided, or re-ordered to form equivalent methods without departing from the teachings of the present invention. Accordingly, unless specifically indicated herein, the order and grouping of the operations is not a limitation of the present invention.
  • While the invention has been particularly shown and described with reference to embodiments thereof, it will be understood by those skilled in the art that various other changes in the form and details may be made without departing from the spirit and scope of the invention.

Claims (23)

1-22. (canceled)
23. A method of storing video data, associated meta-data, and associated attribute weights from a video surveillance system, the method comprising:
capturing video data from one or more surveillance cameras;
generating meta-data by performing video analysis on the video data from the surveillance cameras, the meta-data representing events detected in the video data;
determining attribute weights, representing information about the relevance of the meta-data;
generating intersections of two or more subsets of the meta-data to generate intersection meta-data;
determining attribute weights associated with the intersection meta-data by multiplying the attribute weights for each subset of meta-data;
generating unions of two or more subsets of the meta-data to generate union meta-data;
determining attribute weights associated with the union meta-data by adding the attribute weights for each subset of meta-data and subtracting the product of the attribute weights of each subset of meta-data;
changing the attribute weights based on external events by computing future attribute weights from past attribute weights by composing past attribute weights with external event weights;
storing the video data in a video storage area;
storing the meta-data, indexed by date and time stamp to the video data, in a meta-data storage area; and
storing the attribute weights in an attribute storage area,
wherein attribute weights for the intersection meta-data are calculated using the equation:

W(M1∩M2)=W(M1)•W(M2),
wherein attribute weights for the union meta-data are calculated using the equation:

W(M1∪M2)=W(M1)+W(M2)−W(M1)•W(M2), and
wherein M1 and M2 are two subsets of meta-data, W(M1) is an attribute weight associated with subset M1, W(M2) is an attribute weight associated with subset M2, W(M1∩M2) is a calculated attribute weight associated with the intersection meta-data of subset M1 and subset M2, and W(M1∪M2) is a calculated attribute weight associated with the union meta-data of subset M1 and subset M2.
24. The method of claim 23, wherein the attribute weights are changed based on external events by computing future attribute weights from past attribute weights by composing past attribute weights with external event weights as shown in the equation:
[w₁, w₂, …, w_j]ᵀ = [e₁, e₂, …, eₙ] · [w₁, w₂, …, wᵢ]ᵀ,
where wj are future attribute weights, wi are past attribute weights, and en are external event weights.
25. The method of claim 23, further comprising:
receiving video tips from one or more anonymous sources, the video tips being short video clips captured by citizens;
generating tip meta-data based on the video tips, the tip meta-data representing events detected in the video tips; and
determining tip attribute weights for the tip meta-data, representing information about the relevance of the tip meta-data.
26. The method of claim 23, further comprising:
providing additional meta-data generated by a human operator; and
storing the additional human generated meta-data, indexed to the video data by date and time stamp, in the meta-data storage module.
27. The method of claim 23, further comprising:
retrieving historical meta-data from the meta-data storage module;
evaluating a set of rules based on the historical meta-data and the generated meta-data; and
performing one or more actions based on the evaluation of the set of rules.
28. The method of claim 23, wherein the video storage module is a hierarchical storage module that archives the video data based at least on meta-data and attribute weights associated with the video data.
29. The method of claim 23, further comprising:
storing access privileges for the video data, the meta-data, and the attribute weights.
30. The method of claim 23, further comprising:
encrypting the captured video data before storing the video data.
31. The method of claim 23, wherein the video data is stored off-site.
32. A video surveillance system, comprising:
one or more surveillance cameras for capturing video data;
one or more video storage areas for storing video data;
a meta-data storage area for storing meta-data;
an attribute storage area for storing attribute weights; and
a processor, the processor coupled to the video storage areas, the meta-data storage area, and the attribute storage area, the processor adapted to execute program code to:
capture video data from one or more surveillance cameras;
generate meta-data by performing video analysis on the video data from the surveillance cameras, the meta-data representing events detected in the video data;
determine attribute weights, representing information about the relevance of the meta-data;
generate intersections of two or more subsets of the meta-data to generate intersection meta-data;
determine attribute weights associated with the intersection meta-data by multiplying the attribute weights for each subset of meta-data;
generate unions of two or more subsets of the meta-data to generate union meta-data;
determine attribute weights associated with the union meta-data by adding the attribute weights for each subset of meta-data and subtracting the product of the attribute weights of each subset of meta-data;
change the attribute weights based on external events by computing future attribute weights from past attribute weights by composing past attribute weights with external event weights;
store the video data in a video storage area;
store the meta-data, indexed by date and time stamp to the video data, in a meta-data storage area; and
store the attribute weights in an attribute storage area,
wherein attribute weights for the intersection meta-data are calculated using the equation:

W(M1∩M2)=W(M1)•W(M2),

wherein attribute weights for the union meta-data are calculated using the equation:

W(M1∪M2)=W(M1)+W(M2)−W(M1)•W(M2), and

wherein M1 and M2 are two subsets of meta-data, W(M1) is an attribute weight associated with subset M1, W(M2) is an attribute weight associated with subset M2, W(M1∩M2) is a calculated attribute weight associated with the intersection meta-data of subset M1 and subset M2, and W(M1∪M2) is a calculated attribute weight associated with the union meta-data of subset M1 and subset M2.
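The two weight-combination rules in claim 32 are the standard product and inclusion-exclusion forms. A minimal sketch of both (the function names are illustrative assumptions, not part of the claims):

```python
# Illustrative sketch of the claimed weight-combination rules:
#   W(M1 ∩ M2) = W(M1) * W(M2)
#   W(M1 ∪ M2) = W(M1) + W(M2) - W(M1) * W(M2)

def intersection_weight(w1: float, w2: float) -> float:
    """Attribute weight for the intersection of two meta-data subsets."""
    return w1 * w2

def union_weight(w1: float, w2: float) -> float:
    """Attribute weight for the union of two meta-data subsets."""
    return w1 + w2 - w1 * w2

# Example: two detectors report the same event with weights 0.8 and 0.5.
print(intersection_weight(0.8, 0.5))  # 0.4
print(union_weight(0.8, 0.5))         # 0.9
```

If the weights are read as independent probabilities of the meta-data being reliable, these are exactly the probabilities of "both subsets reliable" and "at least one subset reliable."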
33. The system of claim 32, wherein the attribute weights are changed based on external events by computing future attribute weights from past attribute weights by composing past attribute weights with external event weights as shown in the equation:
[w1 w2 … wj] = [e1, e2, …, en] · [w1 w2 … wi],
where wj are future attribute weights, wi are past attribute weights, and en are external event weights.
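The composition in claim 33 can be read as a linear map: a matrix of external event weights transforms the vector of past attribute weights into a vector of future attribute weights. A minimal sketch under that reading (the matrix shape and the example values are assumptions for illustration):

```python
# Sketch: future weights w' = E · w, reading the claimed composition
# as a matrix-vector product. E is a j-by-i matrix of external event
# weights; w is the vector of past attribute weights.

def compose_weights(event_matrix, past_weights):
    """Compute future attribute weights by composing past weights
    with external event weights (matrix-vector product)."""
    return [
        sum(e * w for e, w in zip(row, past_weights))
        for row in event_matrix
    ]

# Hypothetical example: an external alert doubles the weight of motion
# meta-data while leaving face-detection meta-data unchanged.
E = [[2.0, 0.0],
     [0.0, 1.0]]
past = [0.3, 0.7]
print(compose_weights(E, past))  # [0.6, 0.7]
```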
34. The system of claim 32, wherein the processor further comprises program code to:
receive video tips from one or more sources, the video tips being short video clips captured by citizens;
generate tip meta-data based on the video tips;
determine tip attribute weights for the tip meta-data; and
store the video tips in the video storage areas.
35. The system of claim 32, wherein the processor further comprises program code to:
provide additional meta-data generated by a human operator; and
store the additional human generated meta-data, indexed to the video data by date and time stamp, in the meta-data storage module.
36. The system of claim 32, wherein the processor further comprises program code to:
retrieve historical meta-data from the meta-data storage module;
evaluate a set of rules based on the historical meta-data and the generated meta-data; and
perform one or more actions based on the evaluation of the set of rules.
37. The system of claim 32, further comprising:
a hierarchical video storage module adapted to archive the video data based at least on meta-data and attribute weights associated with the video data.
38. The system of claim 32, further comprising:
a fiber optic line to an off-site location for archiving the video data off-site.
39. A method of searching and retrieving video data from a video surveillance system, the method comprising:
entering search criteria;
searching meta-data associated with the video data, the meta-data generated by one or more video detection components and indexed to the video data;
retrieving meta-data matching the search criteria from a meta-data module;
retrieving video data indexed by the meta-data from a video storage module; and
retrieving attribute weights associated with the meta-data, the attribute weights representing reliability of the meta-data,
wherein attribute weights for intersection meta-data of two subsets of meta-data are calculated using the equation:

W(M1∩M2)=W(M1)•W(M2),

wherein attribute weights for union meta-data of two subsets of meta-data are calculated using the equation:

W(M1∪M2)=W(M1)+W(M2)−W(M1)•W(M2),

wherein M1 and M2 are two subsets of meta-data, W(M1) is an attribute weight associated with subset M1, W(M2) is an attribute weight associated with subset M2, W(M1∩M2) is a calculated attribute weight associated with the intersection meta-data of subset M1 and subset M2, and W(M1∪M2) is a calculated attribute weight associated with the union meta-data of subset M1 and subset M2.
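The search-and-retrieval flow of claim 39 — meta-data indexed to the video data by date and time stamp, matched against search criteria, with the associated attribute weights retrieved alongside — can be sketched with a toy in-memory model. All structures, field names, and file names below are illustrative assumptions:

```python
from datetime import datetime

# Toy in-memory model of the claimed stores: each meta-data record is
# indexed to a video segment by time stamp and carries an attribute
# weight representing the reliability of that meta-data.
meta_store = [
    {"ts": datetime(2007, 5, 28, 9, 0), "event": "motion", "weight": 0.8},
    {"ts": datetime(2007, 5, 28, 9, 5), "event": "face",   "weight": 0.6},
]
video_store = {
    datetime(2007, 5, 28, 9, 0): "segment_0900.mp4",
    datetime(2007, 5, 28, 9, 5): "segment_0905.mp4",
}

def search(criteria: str):
    """Return (video segment, meta-data, weight) tuples whose
    meta-data matches the search criteria."""
    return [
        (video_store[m["ts"]], m["event"], m["weight"])
        for m in meta_store
        if m["event"] == criteria
    ]

print(search("motion"))  # [('segment_0900.mp4', 'motion', 0.8)]
```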
40. The method of claim 39, wherein the attribute weights include data about the source of the meta-data.
41. The method of claim 39, further comprising:
storing audit information about who retrieved the video data and when.
42. An apparatus for storing video data, associated meta-data, and associated attribute weights from a video surveillance system, the apparatus comprising:
means for capturing video data from one or more surveillance cameras;
means for generating meta-data by performing video analysis on the video data from the surveillance cameras, the meta-data representing events detected in the video data;
means for determining attribute weights, representing information about the relevance of the meta-data;
means for generating intersections of two or more subsets of the meta-data to generate intersection meta-data;
means for determining attribute weights associated with the intersection meta-data by multiplying the attribute weights for each subset of meta-data;
means for generating unions of two or more subsets of the meta-data to generate union meta-data;
means for determining attribute weights associated with the union meta-data by adding the attribute weights for each subset of meta-data and subtracting a multiple of the attribute weights of each subset of meta-data;
means for changing the attribute weights based on external events by computing future attribute weights from past attribute weights by composing past attribute weights with external event weights;
means for storing the video data in a video storage area;
means for storing the meta-data, indexed by date and time stamp to the video data, in a meta-data storage area; and
means for storing the attribute weights in an attribute storage area,
wherein attribute weights for the intersection meta-data are calculated using the equation:

W(M1∩M2)=W(M1)•W(M2),

wherein attribute weights for the union meta-data are calculated using the equation:

W(M1∪M2)=W(M1)+W(M2)−W(M1)•W(M2), and

wherein M1 and M2 are two subsets of meta-data, W(M1) is an attribute weight associated with subset M1, W(M2) is an attribute weight associated with subset M2, W(M1∩M2) is a calculated attribute weight associated with the intersection meta-data of subset M1 and subset M2, and W(M1∪M2) is a calculated attribute weight associated with the union meta-data of subset M1 and subset M2.
43. The apparatus of claim 42, wherein the attribute weights are changed based on external events by computing future attribute weights from past attribute weights by composing past attribute weights with external event weights as shown in the equation:
[w1 w2 … wj] = [e1, e2, …, en] · [w1 w2 … wi],
where wj are future attribute weights, wi are past attribute weights, and en are external event weights.
44. The apparatus of claim 42, further comprising:
means for receiving video tips from one or more anonymous sources, the video tips being short video clips captured by citizens;
means for generating tip meta-data based on the video tips, the tip meta-data representing events detected in the video tips; and
means for determining tip attribute weights for the tip meta-data, representing information about the relevance of the tip meta-data.
US11/754,335 2007-05-28 2007-05-28 Video data storage, search, and retrieval using meta-data and attribute data in a video surveillance system Expired - Fee Related US7460149B1 (en)

Publications (2)

Publication Number Publication Date
US7460149B1 US7460149B1 (en) 2008-12-02
US20080297599A1 true US20080297599A1 (en) 2008-12-04

Family ID: 40073801



Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010004739A1 (en) * 1999-09-27 2001-06-21 Shunichi Sekiguchi Image retrieval system and image retrieval method
US20020073079A1 (en) * 2000-04-04 2002-06-13 Merijn Terheggen Method and apparatus for searching a database and providing relevance feedback
US6424370B1 (en) * 1999-10-08 2002-07-23 Texas Instruments Incorporated Motion based event detection system and method
US20050123267A1 (en) * 2003-11-14 2005-06-09 Yasufumi Tsumagari Reproducing apparatus and reproducing method
US20060053153A1 (en) * 2004-09-09 2006-03-09 Kabushiki Kaisha Toshiba Data structure of metadata, and reproduction apparatus and method of the metadata
US20060198554A1 (en) * 2002-11-29 2006-09-07 Porter Robert M S Face detection
US20060217990A1 (en) * 2002-12-20 2006-09-28 Wolfgang Theimer Method and device for organizing user provided information with meta-information
US7212650B2 (en) * 2002-06-14 2007-05-01 Mitsubishi Denki Kabushiki Kaisha Monitoring system and monitoring method
US20070156677A1 (en) * 1999-07-21 2007-07-05 Alberti Anemometer Llc Database access system
US20080092168A1 (en) * 1999-03-29 2008-04-17 Logan James D Audio and video program recording, editing and playback systems using metadata

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080092168A1 (en) * 1999-03-29 2008-04-17 Logan James D Audio and video program recording, editing and playback systems using metadata
US20070156677A1 (en) * 1999-07-21 2007-07-05 Alberti Anemometer Llc Database access system
US20010004739A1 (en) * 1999-09-27 2001-06-21 Shunichi Sekiguchi Image retrieval system and image retrieval method
US6665442B2 (en) * 1999-09-27 2003-12-16 Mitsubishi Denki Kabushiki Kaisha Image retrieval system and image retrieval method
US6424370B1 (en) * 1999-10-08 2002-07-23 Texas Instruments Incorporated Motion based event detection system and method
US20020073079A1 (en) * 2000-04-04 2002-06-13 Merijn Terheggen Method and apparatus for searching a database and providing relevance feedback
US7212650B2 (en) * 2002-06-14 2007-05-01 Mitsubishi Denki Kabushiki Kaisha Monitoring system and monitoring method
US20060198554A1 (en) * 2002-11-29 2006-09-07 Porter Robert M S Face detection
US20060217990A1 (en) * 2002-12-20 2006-09-28 Wolfgang Theimer Method and device for organizing user provided information with meta-information
US20050123267A1 (en) * 2003-11-14 2005-06-09 Yasufumi Tsumagari Reproducing apparatus and reproducing method
US20060053153A1 (en) * 2004-09-09 2006-03-09 Kabushiki Kaisha Toshiba Data structure of metadata, and reproduction apparatus and method of the metadata

Cited By (222)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10559193B2 (en) 2002-02-01 2020-02-11 Comcast Cable Communications, Llc Premises management systems
US10979389B2 (en) 2004-03-16 2021-04-13 Icontrol Networks, Inc. Premises management configuration and control
US11625008B2 (en) 2004-03-16 2023-04-11 Icontrol Networks, Inc. Premises management networking
US10447491B2 (en) 2004-03-16 2019-10-15 Icontrol Networks, Inc. Premises system management using status signal
US11811845B2 (en) 2004-03-16 2023-11-07 Icontrol Networks, Inc. Communication protocols over internet protocol (IP) networks
US11810445B2 (en) 2004-03-16 2023-11-07 Icontrol Networks, Inc. Cross-client sensor user interface in an integrated security network
US11916870B2 (en) 2004-03-16 2024-02-27 Icontrol Networks, Inc. Gateway registry methods and systems
US11782394B2 (en) 2004-03-16 2023-10-10 Icontrol Networks, Inc. Automation system with mobile interface
US11310199B2 (en) 2004-03-16 2022-04-19 Icontrol Networks, Inc. Premises management configuration and control
US10692356B2 (en) 2004-03-16 2020-06-23 Icontrol Networks, Inc. Control system user interface
US10691295B2 (en) 2004-03-16 2020-06-23 Icontrol Networks, Inc. User interface in a premises network
US10735249B2 (en) 2004-03-16 2020-08-04 Icontrol Networks, Inc. Management of a security system at a premises
US11677577B2 (en) 2004-03-16 2023-06-13 Icontrol Networks, Inc. Premises system management using status signal
US11656667B2 (en) 2004-03-16 2023-05-23 Icontrol Networks, Inc. Integrated security system with parallel processing architecture
US11277465B2 (en) 2004-03-16 2022-03-15 Icontrol Networks, Inc. Generating risk profile using data of home monitoring and security system
US11626006B2 (en) 2004-03-16 2023-04-11 Icontrol Networks, Inc. Management of a security system at a premises
US10754304B2 (en) 2004-03-16 2020-08-25 Icontrol Networks, Inc. Automation system with mobile interface
US11601397B2 (en) 2004-03-16 2023-03-07 Icontrol Networks, Inc. Premises management configuration and control
US11244545B2 (en) 2004-03-16 2022-02-08 Icontrol Networks, Inc. Cross-client sensor user interface in an integrated security network
US11588787B2 (en) 2004-03-16 2023-02-21 Icontrol Networks, Inc. Premises management configuration and control
US11537186B2 (en) 2004-03-16 2022-12-27 Icontrol Networks, Inc. Integrated security system with parallel processing architecture
US10796557B2 (en) 2004-03-16 2020-10-06 Icontrol Networks, Inc. Automation system user interface with three-dimensional display
US11489812B2 (en) 2004-03-16 2022-11-01 Icontrol Networks, Inc. Forming a security network including integrated security system components and network devices
US10156831B2 (en) 2004-03-16 2018-12-18 Icontrol Networks, Inc. Automation system with mobile interface
US11449012B2 (en) 2004-03-16 2022-09-20 Icontrol Networks, Inc. Premises management networking
US10142166B2 (en) 2004-03-16 2018-11-27 Icontrol Networks, Inc. Takeover of security network
US10890881B2 (en) 2004-03-16 2021-01-12 Icontrol Networks, Inc. Premises management networking
US10992784B2 (en) 2004-03-16 2021-04-27 Icontrol Networks, Inc. Communication protocols over internet protocol (IP) networks
US11410531B2 (en) 2004-03-16 2022-08-09 Icontrol Networks, Inc. Automation system user interface with three-dimensional display
US11378922B2 (en) 2004-03-16 2022-07-05 Icontrol Networks, Inc. Automation system with mobile interface
US11368429B2 (en) 2004-03-16 2022-06-21 Icontrol Networks, Inc. Premises management configuration and control
US11037433B2 (en) 2004-03-16 2021-06-15 Icontrol Networks, Inc. Management of a security system at a premises
US11343380B2 (en) 2004-03-16 2022-05-24 Icontrol Networks, Inc. Premises system automation
US11757834B2 (en) 2004-03-16 2023-09-12 Icontrol Networks, Inc. Communication protocols in integrated systems
US11893874B2 (en) 2004-03-16 2024-02-06 Icontrol Networks, Inc. Networked touchscreen with integrated interfaces
US11043112B2 (en) 2004-03-16 2021-06-22 Icontrol Networks, Inc. Integrated security system with parallel processing architecture
US11201755B2 (en) 2004-03-16 2021-12-14 Icontrol Networks, Inc. Premises system management using status signal
US11184322B2 (en) 2004-03-16 2021-11-23 Icontrol Networks, Inc. Communication protocols in integrated systems
US11182060B2 (en) 2004-03-16 2021-11-23 Icontrol Networks, Inc. Networked touchscreen with integrated interfaces
US11175793B2 (en) 2004-03-16 2021-11-16 Icontrol Networks, Inc. User interface in a premises network
US11159484B2 (en) 2004-03-16 2021-10-26 Icontrol Networks, Inc. Forming a security network including integrated security system components and network devices
US11153266B2 (en) 2004-03-16 2021-10-19 Icontrol Networks, Inc. Gateway registry methods and systems
US11082395B2 (en) 2004-03-16 2021-08-03 Icontrol Networks, Inc. Premises management configuration and control
US11113950B2 (en) 2005-03-16 2021-09-07 Icontrol Networks, Inc. Gateway integrated with premises security system
US10091014B2 (en) 2005-03-16 2018-10-02 Icontrol Networks, Inc. Integrated security network with security alarm signaling system
US10062245B2 (en) 2005-03-16 2018-08-28 Icontrol Networks, Inc. Cross-client sensor user interface in an integrated security network
US11367340B2 (en) 2005-03-16 2022-06-21 Icontrol Networks, Inc. Premise management systems and methods
US11595364B2 (en) 2005-03-16 2023-02-28 Icontrol Networks, Inc. System for data routing in networks
US10999254B2 (en) 2005-03-16 2021-05-04 Icontrol Networks, Inc. System for data routing in networks
US10127801B2 (en) 2005-03-16 2018-11-13 Icontrol Networks, Inc. Integrated security system with parallel processing architecture
US10841381B2 (en) 2005-03-16 2020-11-17 Icontrol Networks, Inc. Security system with networked touchscreen
US10380871B2 (en) 2005-03-16 2019-08-13 Icontrol Networks, Inc. Control system user interface
US10930136B2 (en) 2005-03-16 2021-02-23 Icontrol Networks, Inc. Premise management systems and methods
US11424980B2 (en) 2005-03-16 2022-08-23 Icontrol Networks, Inc. Forming a security network including integrated security system components
US11824675B2 (en) 2005-03-16 2023-11-21 Icontrol Networks, Inc. Networked touchscreen with integrated interfaces
US10156959B2 (en) 2005-03-16 2018-12-18 Icontrol Networks, Inc. Cross-client sensor user interface in an integrated security network
US11451409B2 (en) 2005-03-16 2022-09-20 Icontrol Networks, Inc. Security network integrating security system and network devices
US11792330B2 (en) 2005-03-16 2023-10-17 Icontrol Networks, Inc. Communication and automation in a premises management system
US11496568B2 (en) 2005-03-16 2022-11-08 Icontrol Networks, Inc. Security system with networked touchscreen
US11706045B2 (en) 2005-03-16 2023-07-18 Icontrol Networks, Inc. Modular electronic display platform
US11615697B2 (en) 2005-03-16 2023-03-28 Icontrol Networks, Inc. Premise management systems and methods
US11700142B2 (en) 2005-03-16 2023-07-11 Icontrol Networks, Inc. Security network integrating security system and network devices
US10721087B2 (en) 2005-03-16 2020-07-21 Icontrol Networks, Inc. Method for networked touchscreen with integrated interfaces
US10785319B2 (en) 2006-06-12 2020-09-22 Icontrol Networks, Inc. IP device discovery systems and methods
US10616244B2 (en) 2006-06-12 2020-04-07 Icontrol Networks, Inc. Activation of gateway device
US11418518B2 (en) 2006-06-12 2022-08-16 Icontrol Networks, Inc. Activation of gateway device
US7936372B2 (en) * 2006-06-30 2011-05-03 Sony Corporation Image processing method and system for generating and analyzing metadata and server for such system
US20080016541A1 (en) * 2006-06-30 2008-01-17 Sony Corporation Image processing system, server for the same, and image processing method
US11418572B2 (en) 2007-01-24 2022-08-16 Icontrol Networks, Inc. Methods and systems for improved system performance
US11412027B2 (en) 2007-01-24 2022-08-09 Icontrol Networks, Inc. Methods and systems for data communication
US10142392B2 (en) 2007-01-24 2018-11-27 Icontrol Networks, Inc. Methods and systems for improved system performance
US10225314B2 (en) 2007-01-24 2019-03-05 Icontrol Networks, Inc. Methods and systems for improved system performance
US11706279B2 (en) 2007-01-24 2023-07-18 Icontrol Networks, Inc. Methods and systems for data communication
US11809174B2 (en) 2007-02-28 2023-11-07 Icontrol Networks, Inc. Method and system for managing communication connectivity
US11194320B2 (en) 2007-02-28 2021-12-07 Icontrol Networks, Inc. Method and system for managing communication connectivity
US10747216B2 (en) 2007-02-28 2020-08-18 Icontrol Networks, Inc. Method and system for communicating with and controlling an alarm system from a remote server
US10657794B1 (en) 2007-02-28 2020-05-19 Icontrol Networks, Inc. Security, monitoring and automation controller access and use of legacy security control panel information
US11132888B2 (en) 2007-04-23 2021-09-28 Icontrol Networks, Inc. Method and system for providing alternate network access
US10140840B2 (en) 2007-04-23 2018-11-27 Icontrol Networks, Inc. Method and system for providing alternate network access
US11663902B2 (en) 2007-04-23 2023-05-30 Icontrol Networks, Inc. Method and system for providing alternate network access
US10672254B2 (en) 2007-04-23 2020-06-02 Icontrol Networks, Inc. Method and system for providing alternate network access
US10237237B2 (en) 2007-06-12 2019-03-19 Icontrol Networks, Inc. Communication protocols in integrated systems
US11582065B2 (en) 2007-06-12 2023-02-14 Icontrol Networks, Inc. Systems and methods for device communication
US10339791B2 (en) 2007-06-12 2019-07-02 Icontrol Networks, Inc. Security network integrated with premise security system
US10616075B2 (en) 2007-06-12 2020-04-07 Icontrol Networks, Inc. Communication protocols in integrated systems
US10365810B2 (en) 2007-06-12 2019-07-30 Icontrol Networks, Inc. Control system user interface
US10051078B2 (en) 2007-06-12 2018-08-14 Icontrol Networks, Inc. WiFi-to-serial encapsulation in systems
US11089122B2 (en) 2007-06-12 2021-08-10 Icontrol Networks, Inc. Controlling data routing among networks
US11212192B2 (en) 2007-06-12 2021-12-28 Icontrol Networks, Inc. Communication protocols in integrated systems
US11894986B2 (en) 2007-06-12 2024-02-06 Icontrol Networks, Inc. Communication protocols in integrated systems
US10666523B2 (en) 2007-06-12 2020-05-26 Icontrol Networks, Inc. Communication protocols in integrated systems
US11722896B2 (en) 2007-06-12 2023-08-08 Icontrol Networks, Inc. Communication protocols in integrated systems
US10523689B2 (en) 2007-06-12 2019-12-31 Icontrol Networks, Inc. Communication protocols over internet protocol (IP) networks
US10313303B2 (en) 2007-06-12 2019-06-04 Icontrol Networks, Inc. Forming a security network including integrated security system components and network devices
US10498830B2 (en) 2007-06-12 2019-12-03 Icontrol Networks, Inc. Wi-Fi-to-serial encapsulation in systems
US11218878B2 (en) 2007-06-12 2022-01-04 Icontrol Networks, Inc. Communication protocols in integrated systems
US11646907B2 (en) 2007-06-12 2023-05-09 Icontrol Networks, Inc. Communication protocols in integrated systems
US10079839B1 (en) 2007-06-12 2018-09-18 Icontrol Networks, Inc. Activation of gateway device
US11632308B2 (en) 2007-06-12 2023-04-18 Icontrol Networks, Inc. Communication protocols in integrated systems
US10444964B2 (en) 2007-06-12 2019-10-15 Icontrol Networks, Inc. Control system user interface
US11625161B2 (en) 2007-06-12 2023-04-11 Icontrol Networks, Inc. Control system user interface
US11611568B2 (en) 2007-06-12 2023-03-21 Icontrol Networks, Inc. Communication protocols over internet protocol (IP) networks
US10423309B2 (en) 2007-06-12 2019-09-24 Icontrol Networks, Inc. Device integration framework
US11601810B2 (en) 2007-06-12 2023-03-07 Icontrol Networks, Inc. Communication protocols in integrated systems
US11237714B2 (en) 2007-06-12 2022-02-01 Icontrol Networks, Inc. Control system user interface
US10200504B2 (en) 2007-06-12 2019-02-05 Icontrol Networks, Inc. Communication protocols over internet protocol (IP) networks
US11316753B2 (en) 2007-06-12 2022-04-26 Icontrol Networks, Inc. Communication protocols in integrated systems
US10382452B1 (en) 2007-06-12 2019-08-13 Icontrol Networks, Inc. Communication protocols in integrated systems
US10389736B2 (en) 2007-06-12 2019-08-20 Icontrol Networks, Inc. Communication protocols in integrated systems
US11423756B2 (en) 2007-06-12 2022-08-23 Icontrol Networks, Inc. Communication protocols in integrated systems
US10142394B2 (en) 2007-06-12 2018-11-27 Icontrol Networks, Inc. Generating risk profile using data of home monitoring and security system
US11815969B2 (en) 2007-08-10 2023-11-14 Icontrol Networks, Inc. Integrated security system with parallel processing architecture
US11831462B2 (en) 2007-08-24 2023-11-28 Icontrol Networks, Inc. Controlling data routing in premises management systems
US9773120B1 (en) 2007-09-20 2017-09-26 United Services Automobile Association (Usaa) Forensic investigation tool
US10970403B1 (en) 2007-09-20 2021-04-06 United Services Automobile Association (Usaa) Forensic investigation tool
US10380357B1 (en) 2007-09-20 2019-08-13 United Services Automobile Association (Usaa) Forensic investigation tool
US8990583B1 (en) * 2007-09-20 2015-03-24 United Services Automobile Association (Usaa) Forensic investigation tool
US20090089845A1 (en) * 2007-09-28 2009-04-02 William Rex Akers Video storage and retrieval system
US11916928B2 (en) 2008-01-24 2024-02-27 Icontrol Networks, Inc. Communication protocols over internet protocol (IP) networks
US10848716B2 (en) 2008-03-03 2020-11-24 Avigilon Analytics Corporation Content-aware computer networking devices with video analytics for reducing video storage and video communication bandwidth requirements of a video surveillance network camera system
US8736701B2 (en) * 2008-03-03 2014-05-27 Videoiq, Inc. Video camera having relational video database with analytics-produced metadata
US20110050947A1 (en) * 2008-03-03 2011-03-03 Videoiq, Inc. Video camera having relational video database with analytics-produced metadata
US20090219411A1 (en) * 2008-03-03 2009-09-03 Videolq, Inc. Content aware storage of video data
US8872940B2 (en) * 2008-03-03 2014-10-28 Videoiq, Inc. Content aware storage of video data
US9325951B2 (en) 2008-03-03 2016-04-26 Avigilon Patent Holding 2 Corporation Content-aware computer networking devices with video analytics for reducing video storage and video communication bandwidth requirements of a video surveillance network camera system
US9756294B2 (en) 2008-03-03 2017-09-05 Avigilon Analytics Corporation Content-aware computer networking devices with video analytics for reducing video storage and video communication bandwidth requirements of a video surveillance network camera system
US11816323B2 (en) 2008-06-25 2023-11-14 Icontrol Networks, Inc. Automation system user interface
US11616659B2 (en) 2008-08-11 2023-03-28 Icontrol Networks, Inc. Integrated cloud system for premises automation
US11729255B2 (en) 2008-08-11 2023-08-15 Icontrol Networks, Inc. Integrated cloud system with lightweight gateway for premises automation
US10522026B2 (en) 2008-08-11 2019-12-31 Icontrol Networks, Inc. Automation system user interface with three-dimensional display
US10530839B2 (en) 2008-08-11 2020-01-07 Icontrol Networks, Inc. Integrated cloud system with lightweight gateway for premises automation
US11711234B2 (en) 2008-08-11 2023-07-25 Icontrol Networks, Inc. Integrated cloud system for premises automation
US11190578B2 (en) 2008-08-11 2021-11-30 Icontrol Networks, Inc. Integrated cloud system with lightweight gateway for premises automation
US11316958B2 (en) 2008-08-11 2022-04-26 Icontrol Networks, Inc. Virtual device systems and methods
US11641391B2 (en) 2008-08-11 2023-05-02 Icontrol Networks, Inc. Integrated cloud system with lightweight gateway for premises automation
US11258625B2 (en) 2008-08-11 2022-02-22 Icontrol Networks, Inc. Mobile premises automation platform
US11792036B2 (en) 2008-08-11 2023-10-17 Icontrol Networks, Inc. Mobile premises automation platform
US11368327B2 (en) 2008-08-11 2022-06-21 Icontrol Networks, Inc. Integrated cloud system for premises automation
US11758026B2 (en) 2008-08-11 2023-09-12 Icontrol Networks, Inc. Virtual device systems and methods
US10375253B2 (en) 2008-08-25 2019-08-06 Icontrol Networks, Inc. Security system with networked touchscreen and gateway
US20160274759A1 (en) 2008-08-25 2016-09-22 Paul J. Dawes Security system with networked touchscreen and gateway
US20100106707A1 (en) * 2008-10-29 2010-04-29 International Business Machines Corporation Indexing and searching according to attributes of a person
US9342594B2 (en) * 2008-10-29 2016-05-17 International Business Machines Corporation Indexing and searching according to attributes of a person
US20130091109A1 (en) * 2008-12-23 2013-04-11 At&T Intellectual Property I, L.P. System and Method for Representing Media Assets
US20220269661A1 (en) * 2008-12-23 2022-08-25 At&T Intellectual Property I, L.P. System and method for representing media assets
US9547684B2 (en) * 2008-12-23 2017-01-17 At&T Intellectual Property I, L.P. System and method for representing media assets
US10691660B2 (en) 2008-12-23 2020-06-23 At&T Intellectual Property I, L.P. System and method for representing media assets
US11360956B2 (en) * 2008-12-23 2022-06-14 At&T Intellectual Property I, L.P. System and method for representing media assets
US20100223466A1 (en) * 2009-02-27 2010-09-02 Third Iris Corp Shared scalable server to control confidental event traffic among recordation terminals, analysis engines, and a storage farm coupled via a public network
US20140104420A1 (en) * 2009-02-27 2014-04-17 Barracuda Networks, Inc. Point of recordation terminal apparatus and method of operation
US11223998B2 (en) 2009-04-30 2022-01-11 Icontrol Networks, Inc. Security, monitoring and automation controller access and use of legacy security control panel information
US11665617B2 (en) 2009-04-30 2023-05-30 Icontrol Networks, Inc. Server-based notification of alarm event subsequent to communication failure with armed security system
US11356926B2 (en) 2009-04-30 2022-06-07 Icontrol Networks, Inc. Hardware configurable security, monitoring and automation controller having modular communication protocol interfaces
US10275999B2 (en) 2009-04-30 2019-04-30 Icontrol Networks, Inc. Server-based notification of alarm event subsequent to communication failure with armed security system
US10674428B2 (en) 2009-04-30 2020-06-02 Icontrol Networks, Inc. Hardware configurable security, monitoring and automation controller having modular communication protocol interfaces
US11284331B2 (en) 2009-04-30 2022-03-22 Icontrol Networks, Inc. Server-based notification of alarm event subsequent to communication failure with armed security system
US11778534B2 (en) 2009-04-30 2023-10-03 Icontrol Networks, Inc. Hardware configurable security, monitoring and automation controller having modular communication protocol interfaces
US10237806B2 (en) 2009-04-30 2019-03-19 Icontrol Networks, Inc. Activation of a home automation controller
US10332363B2 (en) 2009-04-30 2019-06-25 Icontrol Networks, Inc. Controller and interface for home security, monitoring and automation having customizable audio alerts for SMA events
US11129084B2 (en) 2009-04-30 2021-09-21 Icontrol Networks, Inc. Notification of event subsequent to communication failure with security system
US10813034B2 (en) 2009-04-30 2020-10-20 Icontrol Networks, Inc. Method, system and apparatus for management of applications for an SMA controller
US11601865B2 (en) 2009-04-30 2023-03-07 Icontrol Networks, Inc. Server-based notification of alarm event subsequent to communication failure with armed security system
US11553399B2 (en) 2009-04-30 2023-01-10 Icontrol Networks, Inc. Custom content for premises management
US11856502B2 (en) 2009-04-30 2023-12-26 Icontrol Networks, Inc. Method, system and apparatus for automated inventory reporting of security, monitoring and automation hardware and software at customer premises
US9342744B2 (en) * 2009-07-22 2016-05-17 Hitachi Kokusai Electric Inc. Surveillance image retrieval apparatus and surveillance system
US20110019003A1 (en) * 2009-07-22 2011-01-27 Hitachi Kokusai Electric Inc. Surveillance image retrieval apparatus and surveillance system
US20110055895A1 (en) * 2009-08-31 2011-03-03 Third Iris Corp. Shared scalable server to control confidential sensory event traffic among recordation terminals, analysis engines, and a storage farm coupled via a non-proprietary communication channel
US8380891B2 (en) 2009-09-14 2013-02-19 International Business Machines Corporation Data migration to high speed storage in accordance with I/O activity over time
US20110066767A1 (en) * 2009-09-14 2011-03-17 International Business Machines Corporation Data migration to high speed storage in accordance with i/o activity over time
US8230131B2 (en) 2009-09-14 2012-07-24 International Business Machines Corporation Data migration to high speed storage in accordance with I/O activity over time
US20120300065A1 (en) * 2010-01-27 2012-11-29 Photonita Ltda Optical device for measuring and identifying cylindrical surfaces by deflectometry applied to ballistic identification
US8817094B1 (en) * 2010-02-25 2014-08-26 Target Brands, Inc. Video storage optimization
US9367617B2 (en) * 2010-05-13 2016-06-14 Honeywell International Inc. Surveillance system with direct database server storage
US20140333777A1 (en) * 2010-05-13 2014-11-13 Honeywell International Inc. Surveillance system with direct database server storage
US20120062732A1 (en) * 2010-09-10 2012-03-15 Videoiq, Inc. Video system with intelligent visual display
US10645344B2 (en) * 2010-09-10 2020-05-05 Avigilon Analytics Corporation Video system with intelligent visual display
US10223903B2 (en) 2010-09-28 2019-03-05 Icontrol Networks, Inc. Integrated security system with parallel processing architecture
US10127802B2 (en) 2010-09-28 2018-11-13 Icontrol Networks, Inc. Integrated security system with parallel processing architecture
US10062273B2 (en) 2010-09-28 2018-08-28 Icontrol Networks, Inc. Integrated security system with parallel processing architecture
US11900790B2 (en) 2010-09-28 2024-02-13 Icontrol Networks, Inc. Method, system and apparatus for automated reporting of account and sensor zone information to a central station
US11398147B2 (en) 2010-09-28 2022-07-26 Icontrol Networks, Inc. Method, system and apparatus for automated reporting of account and sensor zone information to a central station
US11545031B2 (en) 2010-10-14 2023-01-03 Conduent Business Services, Llc System and method for providing distributed on-street valet parking with the aid of a digital computer
US10839685B2 (en) 2010-10-14 2020-11-17 Conduent Business Services, Llc System and method for providing information through a display of parking devices with the aid of a digital computer
US10546495B2 (en) 2010-10-14 2020-01-28 Conduent Business Services, Llc Computer-implemented system and method for offering customer priority parking reservations
US10964212B2 (en) 2010-10-14 2021-03-30 Conduent Business Services, Llc Computer-implemented system and method for facilitating rental of private parking space by an urban resident
US10417912B2 (en) 2010-10-14 2019-09-17 Conduent Business Services, Llc System and method for providing distributed on-street valet parking with the aid of a digital computer
US20150138362A1 (en) * 2010-10-14 2015-05-21 Palo Alto Research Center Incorporated Computer-Implemented System And Method For Providing Emergency Services Notification Through A Centralized Parking Services Server
US11308804B2 (en) 2010-10-14 2022-04-19 Conduent Business Services, Llc Computer-implemented system and method for providing management of motor vehicle parking spaces during scheduled street sweeping
US10242573B2 (en) 2010-10-14 2019-03-26 Conduent Business Services, Llc Computer-implemented system and method for offering merchant and shopper-friendly parking reservations through tourist privileges
US10621866B2 (en) 2010-10-14 2020-04-14 Conduent Business Services, Llc Computer-implemented system and method for providing guest parking reservations
US11750414B2 (en) 2010-12-16 2023-09-05 Icontrol Networks, Inc. Bidirectional security sensor communication for a premises security system
US10078958B2 (en) 2010-12-17 2018-09-18 Icontrol Networks, Inc. Method and system for logging security event data
US11341840B2 (en) 2010-12-17 2022-05-24 Icontrol Networks, Inc. Method and system for processing security event data
US10741057B2 (en) 2010-12-17 2020-08-11 Icontrol Networks, Inc. Method and system for processing security event data
US11240059B2 (en) 2010-12-20 2022-02-01 Icontrol Networks, Inc. Defining and implementing sensor triggered response rules
US20120237188A1 (en) * 2011-03-16 2012-09-20 Ingrasys Technology Inc. Network video recorder and method for recording video data in the network video recorder
US8526796B2 (en) * 2011-03-16 2013-09-03 Ingrasys Technology Inc. Network video recorder and method for recording video data in the network video recorder
US8810688B2 (en) * 2011-04-08 2014-08-19 Sony Corporation Information processing apparatus and information processing method
US20120257083A1 (en) * 2011-04-08 2012-10-11 Sony Computer Entertainment Inc. Information Processing Apparatus and Information Processing Method
US20130286211A1 (en) * 2012-04-26 2013-10-31 Jianhua Cao Method and apparatus for live capture image-live streaming camera utilizing personal portable device
US10999556B2 (en) * 2012-07-03 2021-05-04 Verint Americas Inc. System and method of video capture and search optimization
US11011058B2 (en) 2013-03-01 2021-05-18 Conduent Business Services, Llc Computer-implemented system and method for providing available parking spaces
CN103295355A (en) * 2013-05-28 2013-09-11 国家电网公司 Power transmission line corridor hazard source point management and control system capable of automatically recognizing potential safety hazards at night
US11296950B2 (en) 2013-06-27 2022-04-05 Icontrol Networks, Inc. Control system user interface
US10348575B2 (en) 2013-06-27 2019-07-09 Icontrol Networks, Inc. Control system user interface
US10122794B2 (en) 2013-10-17 2018-11-06 Hewlett Packard Enterprise Development Lp Storing data at a remote location based on predetermined criteria
EP3058490A4 (en) * 2013-10-17 2017-04-26 Hewlett-Packard Enterprise Development LP Storing data at a remote location based on predetermined criteria
WO2015057229A1 (en) * 2013-10-17 2015-04-23 Hewlett-Packard Development Company, L.P. Storing data at a remote location based on predetermined criteria
CN105659224A (en) * 2013-10-17 2016-06-08 慧与发展有限责任合伙企业 Storing data at a remote location based on predetermined criteria
US10643271B1 (en) * 2014-01-17 2020-05-05 Glenn Joseph Bronson Retrofitting legacy surveillance systems for traffic profiling and monetization
US11405463B2 (en) 2014-03-03 2022-08-02 Icontrol Networks, Inc. Media content management
US11943301B2 (en) 2014-03-03 2024-03-26 Icontrol Networks, Inc. Media content management
US11146637B2 (en) 2014-03-03 2021-10-12 Icontrol Networks, Inc. Media content management
US20150358537A1 (en) * 2014-06-09 2015-12-10 Verizon Patent And Licensing Inc. Adaptive camera setting modification based on analytics data
US9811748B2 (en) * 2014-06-09 2017-11-07 Verizon Patent And Licensing Inc. Adaptive camera setting modification based on analytics data
US11080977B2 (en) 2015-02-24 2021-08-03 Hiroshi Aoyama Management system, server, management device, and management method
SE543534C2 (en) * 2015-03-04 2021-03-23 Hitachi Systems Ltd Situation ascertainment system using camera picture data, control device, and situation ascertainment method using camera picture data
DE102016109125B3 (en) * 2016-05-18 2017-10-12 TCO GmbH Monitoring and encryption procedures
US11321570B2 (en) * 2017-07-03 2022-05-03 Nec Corporation System and method for determining event
US10990827B2 (en) * 2018-08-29 2021-04-27 Ooo Itv Group Imported video analysis device and method
US20220198893A1 (en) * 2019-11-26 2022-06-23 Ncr Corporation Asset tracking and notification processing
US11962672B2 (en) 2023-05-12 2024-04-16 Icontrol Networks, Inc. Virtual device systems and methods

Also Published As

Publication number Publication date
US7460149B1 (en) 2008-12-02

Similar Documents

Publication Publication Date Title
US7460149B1 (en) Video data storage, search, and retrieval using meta-data and attribute data in a video surveillance system
US11929870B2 (en) Correlation engine for correlating sensory events
US9619984B2 (en) Systems and methods for correlating data from IP sensor networks for security, safety, and business productivity applications
US7999847B2 (en) Audio-video tip analysis, storage, and alerting system for safety, security, and business productivity
US20080273088A1 (en) Intelligent surveillance system and method for integrated event based surveillance
US11829389B2 (en) Correlating multiple sources
US20160358017A1 (en) In-vehicle user interfaces for law enforcement
US10030986B2 (en) Incident response analytic maps
Bowling et al. Automated policing: the case of body-worn video
Lin Police body worn cameras and privacy: Retaining benefits while reducing public concerns
CN103731639A (en) Method for providing picture access records through intelligent video security and protection system
WO2015173836A2 (en) An interactive system that enhances video surveillance systems by enabling ease of speedy review of surveillance video and/or images and providing means to take several next steps, backs up surveillance video and/or images, as well as enables to create standardized intelligent incident reports and derive patterns
CN117354469B (en) District monitoring video target tracking method and system based on security precaution
GB2600405A (en) A computer implemented method, apparatus and computer program for privacy masking video surveillance data

Legal Events

Date Code Title Description
AS Assignment

Owner name: SECURENET HOLDINGS, LLC, MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DONOVAN, JOHN;HUSSAIN, DANIAR;REEL/FRAME:021197/0467

Effective date: 20071112

Owner name: KD SECURE, LLC, MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SECURENET HOLDINGS, LLC;REEL/FRAME:021197/0511

Effective date: 20080702

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: TIERRA VISTA GROUP, LLC, FLORIDA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KD SECURE, LLC;REEL/FRAME:032948/0401

Effective date: 20140501

AS Assignment

Owner name: SECURENET SOLUTIONS GROUP, LLC, FLORIDA

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNEE PREVIOUSLY RECORDED ON REEL 032948 FRAME 0401. ASSIGNOR(S) HEREBY CONFIRMS THE CORRECT ASSIGNEE IS SECURENET SOLUTIONS GROUP, LLC;ASSIGNOR:KD SECURE, LLC;REEL/FRAME:033012/0669

Effective date: 20140501

FPAY Fee payment

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: PAT HOLDER NO LONGER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: STOL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20201202