|Publication number||WO2003047258 A1|
|Publication type||Application|
|Application number||PCT/US2002/035144|
|Publication date||5 Jun 2003|
|Filing date||31 Oct 2002|
|Priority date||21 Nov 2001|
|Also published as||EP1457050A1|
|Inventors||Dennis L. Montgomery|
|Applicant||Etreppid Technologies, Llc|
METHOD AND APPARATUS FOR STORING DIGITAL VIDEO CONTENT PROVIDED
FROM A PLURALITY OF CAMERAS
1. FIELD OF THE INVENTION
 The present invention relates to a method and apparatus for storing digital video content provided from a plurality of cameras.
2. BACKGROUND OF THE RELATED ART
 Surveillance cameras are extremely well known and used to help deter theft, crime and the like.
 It is also well known to store, as a sequence of images on some type of medium, the event that the surveillance camera is recording. Conventionally, such storage has been done using VCRs, storing the images obtained from the event on tape. Such a storage medium, however, has the disadvantage of requiring the removal of an old tape and insertion of a new tape every few hours, which, when many cameras are in use, is a time-consuming and tedious process. A further disadvantage is that it is difficult to authenticate the recording stored on the tape. Still further, because tapes degrade over time and are typically played back on a player different from the one that recorded them, the resulting sequence of images may not be clear.
 The storage of images obtained from such events in digital form has also been contemplated. In such a system, images from the event recorded by the camera are converted to digital form and compressed in some manner, such as with the known MPEG format. To date, however, such systems have not proved feasible for a variety of reasons. In installations with many simultaneous cameras, the bandwidth they collectively require has prevented storage of images over a period long enough, and at sufficient resolution, to bring the overall cost of information storage down to a point where digital storage is economical.
 One conventional approach to this problem is to begin recording only when movement of some type is detected, so that otherwise still images are not continuously recorded, thus reducing the storage requirements necessary for operation of the system. Usage of such event initiation indicators in this manner introduces problems of its own, such as, for example, not having a continuous record.
 Applications such as video surveillance monitor environments and, often, the activities of individuals within those environments. In video surveillance, data are transmitted from a delivery device, such as a digital or analog camera, to a repository where the data are stored, typically with many processing steps in between. Analog systems typically store data in analog form on videocassette recorder tapes, which tend to be bulky and cumbersome. In contrast, digital systems store the data in a digital format.
 Analog systems are widely used at present. This is in large part due to lower cost of analog equipment, in terms of cameras as well as overall cost per frame of image data. Accordingly, most surveillance systems that are currently in use, even if they have digital delivery devices such as digital cameras, at some point convert the digital information into an analog form, whether that analog form is required for real-time viewing on an analog monitor, or for analog storage. Thus, in conventional systems, there typically exists an analog switch or switches that configure the data being received from many different cameras to their respective monitor for viewing and/or videocassette recorder unit.
 All-digital systems, while available in prototype form, have not been widely implemented due to practical cost considerations, both in terms of the digital delivery units and the cost of memory necessary to store the digital data associated with digital images and sound. And those prototypes that have been proposed have significant limitations.
 The activity that is captured by a video surveillance system in general will depend on the environment that is being monitored. For many environments, there are often areas that require monitoring for activity over an extended period but that do not exhibit a great deal of activity over the course of the extended period. For example, a camera might be focused on the door to a bank vault for 24 hours a day, but might only capture relatively few individuals entering the vault or merely walking by the vault door. Under conventional arrangements, the surveillance data for the monitoring are typically stored in a number of manners. In analog systems, the surveillance data is typically stored in analog form on a videocassette recorder, as noted above.
 In digital systems, in order to reduce memory requirements, proposals have been made in which the system sends data to memory only upon the initiation of motion. While this has the effect of reducing memory requirements, it has the undesired effect of not providing a continuous capture of the events that the particular digital camera recorded.
 Thus, it would be desirable to have the capability to reduce the amount of storage necessary to house monitoring surveillance data without compromising the integrity of the monitoring system.
 In its simplest form, a surveillance system might consist essentially of an analog video camera hooked up to a remote video monitor as shown in FIG. 11A, or an audio device hooked to a speaker (the camera may also capture audio). Using the camera for purposes of this discussion, the camera is pointed at a spot of interest, e.g., a front door, an automated teller machine, etc., and provides an image of that scene to the monitor. An operator watches the monitor to look for unusual or unauthorized behavior at the scene. If such activity is perceived, the operator takes appropriate action - identifying the individual, notifying security police, etc.
 The system may have one or many cameras, each of which can be displayed in a predetermined area of the monitor. Alternatively, the operator may toggle through the scenes. Further, instead of one or more analog cameras, the system may use digital cameras such as CCD cameras and the like. Such digital cameras have the advantage of providing a high-quality, low-noise image when compared to analog images.
 Another possible video surveillance arrangement is shown in FIG. 11B. This system uses multiple cameras connected to the monitor via a controller. The controller can multiplex several camera signals and provide them to the monitor. Also, it can control the positions of the cameras. The operator uses an input device such as a keyboard, joystick or the like to direct the controller to control the motion of the cameras so they point to particular areas within their range, track interesting features in the images, etc. The operator may also use the input device to direct the controller to provide particular ones of the camera signals to the monitor.
 FIG. 11C shows another arrangement of a video surveillance system. Here, a video recording device is connected to the camera outputs, the monitor input, or both. The video recording device, e.g., a video cassette recorder for analog cameras, can record the camera signals for archival, later review, and the like. Further, it can record images displayed on the monitor as evidence of activities taking place in the environments being inspected. For digital systems, the video storage device may be a digital storage device, a mass storage device such as a hard disk drive, or the like. When a hard disk drive is used, it may be a separate unit from the user controller and camera controller, or it may be part of an integrated system.
 When the cameras are analog models, their signals may be stored on analog or digital storage devices. With an analog storage device or devices such as video cassette recorders, the camera signal or signals are stored on videotape much like a television signal. In a system using a digital storage device, e.g., a digital surveillance system or an analog camera system which digitizes the camera signal, the camera images are pixelated and stored in the digital storage device as data files. The files may be uncompressed, or they may be compressed using a compression algorithm to maximize use of the storage space.
 If camera images are continually stored in a digital storage device without any deletions, eventually the storage device (or the part of it allocated for camera image storage) will become full. At that point, the stored data and incoming data must be managed to accommodate the new data.
 In general, an improved surveillance system is thus desirable.
3. SUMMARY OF THE INVENTION
 The present invention provides a distributed surveillance system that allows for the digital storage of data, as well as the recognition of external patterns obtained in parallel operations. Further, digital data can be stored at different levels of compression, and pattern recognition achieved while the digital data is still in its compressed form.
 The present invention described herein provides advantageous techniques for data frame adaptation to minimize storage size, source noise cancellation, and data frame delivery device source authentication in, for example, a surveillance system.
 The present invention describes methods and systems for adapting the size of a digital data frame to minimize data storage, for cancelling source noise resident in a digital data frame, and for authenticating the source of a digital data frame.
 One embodiment of the present invention provides a control program which controls the digital storage device. The control program monitors the status of the digital storage device. When the storage device (or the portion thereof allocated for image storage) becomes full and new information needs to be added, the control program directs the storage device to delete information therein to make room for the new information. This is done based on various data parameters, such as the priority of individual messages or data units, the age of each message or data unit, and the like. For example, when information needs to be deleted from a digital storage device to make room for new information, older data from high-priority cameras may be saved instead of newer data from low-priority cameras. In this way, efficient use of the digital storage system can be made.
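The priority- and age-based deletion policy described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the 0-10 priority scale is taken from the description elsewhere in this document, while the class names, the eviction key, and the byte-counting scheme are assumptions.

```python
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class StoredUnit:
    # Sort key places the lowest-priority cameras (highest numbers)
    # first, with ties broken by age (oldest first), so heappop()
    # always yields the most expendable unit.
    sort_key: tuple = field(init=False)
    priority: int        # 0 = highest-priority camera, 10 = lowest
    timestamp: float     # capture time; smaller = older
    data: bytes = field(compare=False)

    def __post_init__(self):
        self.sort_key = (-self.priority, self.timestamp)


class StorageManager:
    """Evicts low-priority, then older, data when the store fills up."""

    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.used = 0
        self.heap: list[StoredUnit] = []

    def store(self, unit: StoredUnit) -> None:
        # Free space until the new unit fits, evicting the most
        # expendable stored units first.
        while self.used + len(unit.data) > self.capacity and self.heap:
            evicted = heapq.heappop(self.heap)
            self.used -= len(evicted.data)
        if self.used + len(unit.data) <= self.capacity:
            heapq.heappush(self.heap, unit)
            self.used += len(unit.data)
```

With this key, newer data from a priority-0 camera outlives older data from a priority-10 camera, matching the example given in the text.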
 The invention also includes a method of detecting an occurrence of an event that exists within a sequence of stored frames relating to a scene, as well as methods that operate upon the data relating to the scene once an event is detected.
 The invention also includes a method of detecting the occurrence of an event by comparing a first stored frame to a later stored frame to determine whether a change in size between the frames exists that is greater than a predetermined threshold. In a preferred embodiment, the first and later stored frames are compressed, and are operated upon while compressed to determine the occurrence of an event.
 Also provided are methods of operating upon the data relating to the scene once an event is detected, including providing a still image at intervals, typically every 5-10 frames, using the data from at least the recording device that detected the event. In a further embodiment, still images from other recording devices that are associated in a predetermined manner with the recording device that detected the event are also obtained at intervals.
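The two mechanisms above - size-change detection on compressed frames, then still images at intervals - can be sketched together. This assumes compressed frames are available as byte strings and that the threshold is expressed as a fractional change in compressed size; the patent does not specify the threshold's form, so both function names and defaults are illustrative.

```python
def detect_event(prev_frame: bytes, curr_frame: bytes,
                 threshold: float = 0.2) -> bool:
    """Flag an event when a later frame's compressed size differs from
    an earlier frame's by more than `threshold` (a fractional change).
    Works directly on compressed data; no decompression is needed."""
    if not prev_frame:
        return False
    change = abs(len(curr_frame) - len(prev_frame)) / len(prev_frame)
    return change > threshold


def stills_during_event(frames: list[bytes], interval: int = 5) -> list[bytes]:
    """Once an event is detected, keep a still image every `interval`
    frames (the text suggests every 5-10 frames)."""
    return frames[::interval]
```

A roughly constant scene compresses to a roughly constant size, so a significant size jump is a cheap proxy for motion entering the field of view.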
 Also provided is a monitor program which monitors images coming from one or more surveillance cameras. When an image or set of images satisfies certain conditions, the monitor program takes an appropriate action.
 A method of automatically monitoring a game of chance is described. The method includes operating a video camera to obtain a stream of data that includes a plurality of repetitive actions stored thereon relating to the game of chance, and automatically parsing the stream of data to count the plurality of repetitive actions, the count obtained providing an indicator usable to monitor the game of chance.
4. BRIEF DESCRIPTION OF THE DRAWINGS
 The above and other objects, features, and advantages of the present invention are further described in the detailed description which follows, with reference to the drawings by way of non-limiting exemplary embodiments of the present invention, wherein like reference numerals represent similar parts of the present invention throughout several views and wherein:
FIG. 1 illustrates a block diagram of the digital video content storage system according to at least one embodiment of the invention;
FIG. 2 illustrates a block diagram of software modules used by different processors according to at least one embodiment of the invention;
FIG. 3 is a block diagram illustrating a transmission system according to at least one embodiment of the invention;
FIGS. 4A through 4D are diagrams illustrating exemplary size adjustments to frames based on whether motion is or is not present in an area being monitored;
FIG. 5 is a flow diagram illustrating an exemplary noise pattern discovery process according to at least one embodiment of the invention;
FIG. 6 is a flow diagram illustrating an exemplary noise correction process according to at least one embodiment of the invention;
FIG. 7 illustrates an exemplary system according to at least one embodiment of the invention;
FIG. 8 illustrates compressed frames produced by a camera which has an object pass through its field of view;
FIG. 9A illustrates a process for detecting and reacting to a change in the size of frames of a source;
FIG. 9B illustrates several possible examples of operations that can be performed in reaction to the detection of an occurrence of an event according to at least one embodiment of the invention;
FIG. 10 illustrates four screen displays in which a person caused a change in the size of frames to occur;
FIGS. 11A-11C show various video surveillance system arrangements;
FIG. 12 shows a video surveillance system according to an embodiment of the present invention;
FIG. 13 shows a video surveillance system according to an embodiment of the present invention;
FIG. 14 illustrates a gaming table according to one embodiment of the present invention;
FIGS. 15A-15B illustrate a player place setting having a bet area and a play area;
FIG. 16 illustrates a sequence of repetitive actions that are possible in a game played in accordance with an embodiment of the present invention;
FIG. 17 illustrates a mask for a player place setting in which no cards and bets are present and a mask for a dealer setting in which no cards are present;
FIG. 18A illustrates a roulette layout and mask for usage with the roulette layout;
FIGS. 18B-18C illustrate a roulette wheel and a mask for usage with the roulette wheel;
FIG. 19 illustrates a sequence of repetitive actions for a roulette wheel and ball; and
FIGS. 20A-20B illustrate exemplary reports generated from repetitive actions being monitored.
5. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
 The present invention provides a distributed surveillance system and methods of using the system. Features included are digital storage of surveillance information, both visual and audible; pattern recognition of external patterns within streams of surveillance information obtained from different sources; distributing external patterns to various computer locations where such external patterns can be used for searching; automatically interpreting patterns of repetitive activity and generating reports therefrom; distributing individual images that are automatically determined to contain a particular image of interest; establishing pattern recognition rules to detect activity of interest; as well as others described herein.
 The present invention is implemented using a combination of hardware and software elements. FIG. 1 illustrates an exemplary system 100 according to the present invention, and various different devices that will allow the various permutations described herein to be understood, although it is understood that this exemplary system 100 should not be construed as limiting the present invention. FIG. 1 illustrates a plurality of conventional cameras 110-1 to 110-n, analog or digital, each of which is connected to a computer 120-1 to 120-m and preferably contains systems for detecting both images and sound. The connections between the cameras 110 and the computers 120 can take many forms, being shown both in a one-to-one correspondence and with a number of the cameras 110 connected to a single computer 120. Further, one computer 120 is shown not connected to any camera, to illustrate that digital data stored in any digital format, such as data already stored on a computer, can be operated upon by the present invention.
 The computers 120 are each preferably coupled, using a network 150 of some type such as the Internet or a private network, to a central server 130. Each of the computers 120, and the cameras 110 connected thereto, can be in any number of disparate locations. While a computer 120 and the cameras 110 connected to it are preferably within or close to the same building, in order to minimize the distance that signals from a camera 110 must travel to reach the computer 120, the cameras can be on different floors of the same building, and the computers 120 can be within different buildings. And while the system is illustrated for convenience herein as having a single central server 130, it will be appreciated that any or all of the computers 120 could be configured to act as the central server described herein. It should be understood, however, that it is advantageous to have the central server 130 at a location that is remote from the location where the surveillance is being performed, and even more preferable to have that remote location be in a physically separate building. In typical operation, a particular company may have its own server 130, and, if desired, different servers 130 can be connected together through the network 150 to permit sharing of certain data therebetween. In the discussion that follows, however, the system 100 will be described with reference to a single server 130, which can serve a multitude of different companies, to illustrate the most flexible aspects of the system 100 described herein.
 Furthermore, a variety of other computers 140-1 to 140-n, each containing a monitor 142, are also illustrated as being coupled to the server 130 through the network 150. Data received by the computers 140 can be decrypted and decoded for subsequent usage, such as being perceived visibly using the monitor 142 and audibly using the speaker 144 shown.
 This particular exemplary system 100 thus allows the cameras 110 to provide at least images, and sound if desired, which can be encoded and stored for surveillance purposes. This data can be stored in digital form on a computer 120 or a computer 130, as described further hereinafter. Further, patterns within images or sound obtained from cameras 110 that compare to external patterns can also be recognized, as will be described further herein, using either the computer 120 provided with the image data from the camera 110, the server 130, or both.
 Transmission of data between each computer 120 or 140 and server 130 is also preferably encrypted, as described further hereinafter. For data transmitted to computers 120, 140 or server 130, the data is decrypted in order to operate on it.
 While the above network is provided to illustrate one manner in which the advantages of the present invention described herein can be used, it is exemplary. For example, each of the different computers 120, 130 and 140 could be implemented as a single computer, and devices other than computers that contain processors able to implement the inventions described herein can be used instead. Many other variants are possible. And while a device such as mentioned will typically include a processor of some type, such as an Intel Pentium 4 microprocessor or a DSP, in conjunction with program instructions written based upon the teachings herein, other hardware that implements the present invention, such as a field programmable gate array, can also be used. The program instructions are preferably written in C++ or some other computer programming language. Routines that are used repeatedly can be written in assembler for faster processing.
 FIG. 1, illustrated and described previously, describes the hardware elements of the surveillance system 100 according to the present invention. The present invention also includes software 200 that is resident on each of the computers 120 and 140, as well as the server 130, to provide the variety of functions described herein. It will be appreciated that various modules 210 of the software 200 are resident on different computers, thereby allowing the functions described herein to be achieved.
 One aspect of software 200 is that the modules are preferably comprised of functionalities that allow interoperability therebetween, as discussed hereinafter. That this interoperability is possible will become apparent in the discussion that follows. In this regard, as shown by FIG. 2, modules 210 that are typically contained within and operate in conjunction with the computers 120 are:
-local front end processing module 210-1;
-local pattern recognition module 210-2, which, as described hereinafter, allows for both internal pattern recognition and external pattern recognition;
-local multi-pass module 210-3;
-local encryption/decryption module 210-4;
-local user interface module 210-5; and
-local priority data storage module 210-6.
Modules 210 that are typically contained within and operate in conjunction with the server 130 are:
-network pattern recognition module 210-12, which, as described hereinafter, allows for both internal pattern recognition and external pattern recognition, and includes network priority information, event alert generation, and repetitive action pattern analysis and associated report generation, as described hereinafter;
-network multi-pass module 210-13;
-network encryption/decryption module 210-14;
-network interface module 210-15; and
-network priority data storage module 210-16.
Each of the modules described above will be further discussed so that operation of the system 100 is understood.
 Local front end processing module 210-1, local user interface module 210-5, and network interface module 210-15
 The local front end processing module 210-1, local user interface module 210-5, and network interface module 210-15 will first be described. As described generally above, the surveillance system 100 according to the present invention allows recording devices, typically cameras 110, to be placed at various locations associated with a building, and allows buildings at disparate locations to be served. Accordingly, in the preferred embodiment, each building will contain at least one local computer 120, although the number of local computers needed will vary depending upon the amount of processing power desired relative to the number of recording devices and other processing operations that take place at the location itself. In this regard, the local user interface module 210-5 is used to set up and keep track of connections between cameras that are included on the local system. Thus, a single local interface module, typically associated with a single company, will keep track of company information, camera information, and user information.
 Company information tracked will typically include the company name and the address information for each building of which surveillance is required.
 Camera information, which is associated with a particular building, includes the model of the camera; a camera priority; a camera identifier that can be used by both the local system and the network system; the location of the camera, which can include not only the location it views but also a reference to a particular activity, such as a particular gaming table number in an embodiment for usage at a casino; a gaming type identifier if the camera is used for a gaming table; an operational status identifier (functioning or out-of-service); and a camera purpose identifier, which can be, for example in the context of usage at a casino, whether the camera is used for gaming or non-gaming purposes. How these are used will be further described hereinafter.
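The camera information enumerated above might be modeled as a simple record. All field names and types here are illustrative assumptions; the patent lists the tracked attributes but does not prescribe a data structure.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CameraRecord:
    """One camera entry tracked by the local user interface module.
    Field names are hypothetical, chosen to mirror the attributes the
    specification lists (model, priority, identifier, location, gaming
    type, operational status, and purpose)."""
    model: str
    priority: int                 # e.g. 0 (highest) through 10 (lowest)
    camera_id: str                # usable by local and network systems
    location: str                 # viewed location or activity, e.g. a table number
    gaming_type: Optional[str]    # set when the camera watches a gaming table
    in_service: bool              # operational status identifier
    purpose: str                  # e.g. "gaming" or "non-gaming"
```

Keeping the record explicit makes it straightforward for the local interface module to forward the same structure to the network interface module, which the text says holds a copy of each local configuration.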
 User information identifies users who can access the system, and the degree of access that they have to the system. Certain users will have view access only, which may be of only certain views or all views, while other users will have access to make changes, run reports, and the like as described further herein.
 With respect to the set-up of each camera, the individual camera 110 not only needs to be set up and connected with the system using the local user interface module 210-5, but also needs to be configured to operate properly. In this regard, the local front end processing module 210-1 is preferably used to configure the camera to operate as effectively as it can. Configuring the individual camera 110 using the local user interface module 210-5 is described below with respect to FIGS. 3-6 and in the U.S. Patent Application entitled "Method And System For Size Adaptation And Storage Minimization, Source Noise Correction, And Source Watermarking Of Digital Data Frames," filed on the same day as this application, which is assigned to the same assignee as the present invention, and which bears attorney reference 042503/0273340, the contents of which are hereby expressly incorporated by reference herein.
 The local user interface module 210-5 is also configured to transmit the information relating to the local system to the network interface module 210-15. It is understood that the network interface module 210-15 will preferably contain the same information as exists with respect to each different local user interface module 210-5. In addition, since the network interface module 210-15 receives information from a multitude of different computers 120, the network interface module 210-15 will also include network related features, such as network establishment of network external patterns desired for pattern recognition as described further herein, network camera priority rules, network back-up procedures, and network report generation. The manner in which these various local and network features are implemented will be described hereinafter, but it will be understood that the local user interface module 210-5 and the network interface module 210-15 together allow for access to the results of the operations performed by the other modules 210 described hereinafter.
 With respect to local and network camera priority rules, which for convenience are discussed with the local user interface module 210-5 and network interface module 210-15: as mentioned above, the local user interface module 210-5, when setting up a camera, will establish a priority for that camera. The priority scheme can take many different forms, one being a 0-10 scale, with 0 being the highest priority and 10 being the lowest. In addition to the local priority setting, there may be a separate network priority setting, thus ensuring that certain types of data are transmitted from the computer 120 to the server 130. Network camera priority settings thus also exist, which cause the network interface module 210-15 to initiate transfers of data from certain cameras 110 at a higher priority than from other cameras 110.
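The interaction between the local and network priority settings might be sketched as a simple transfer-ordering rule. The tuple layout and the rule that a network setting overrides the local one are assumptions for illustration; the patent says only that network priorities cause some cameras' data to be transferred ahead of others'.

```python
def transfer_order(cameras):
    """Order pending transfers so data from higher-priority cameras
    (lower numbers on the 0-10 scale) reaches the server first.

    `cameras` is a list of (camera_id, local_priority, network_priority)
    tuples, where network_priority is None when no network-level
    setting exists; a network setting, when present, takes precedence.
    """
    def effective(cam):
        cam_id, local_p, network_p = cam
        return network_p if network_p is not None else local_p

    return sorted(cameras, key=effective)
```

For example, a camera with local priority 3 but network priority 0 would jump ahead of a camera whose only setting is local priority 1.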
6. LOCAL PATTERN RECOGNITION MODULE 210-2 AND NETWORK PATTERN RECOGNITION MODULE 210-12
 The local pattern recognition module 210-2 and network pattern recognition module 210-12 each provide for the capability to recognize patterns within data. Typically, that data includes image data that is formatted in frames. Patterns recognized include patterns within the data, as well as external patterns that are externally generated and being searched for within the data, as described further hereinafter.
 Both the local pattern recognition module 210-2 and the network pattern recognition module 210-12 can use a variety of pattern recognition techniques, but preferably use the pattern recognition and compression techniques described in the U.S. Application bearing attorney reference number 042503/0259665, entitled "Method And Apparatus For Determining Patterns Within Adjacent Blocks Of Data," filed on October 31, 2001, which is assigned to the same assignee as the present invention, and the contents of which are expressly incorporated by reference herein.
 Differences between the local pattern recognition module 210-2 and the network pattern recognition module 210-12 exist. One significant difference, typically, is that while the local pattern recognition module 210-2 typically operates upon uncompressed data as described in the U.S. Application entitled "Method And Apparatus For Determining Patterns Within Adjacent Blocks Of Data" referenced above, the network pattern recognition module 210-12 will operate upon data in a compressed form.
 Specifically, the network pattern recognition module 210-12 can operate upon data that has been consistently compressed. Compression is preferably achieved using the techniques described in Appln. No. 09/727,096 entitled "Method And Apparatus For Encoding Information Using Multiple Passes And Decoding In A Single Pass" filed on November 29, 2000. In this compression scheme, as noted in the section below, recorded data can be subjected to multiple passes to achieve further compression.
 Both the local pattern recognition module 210-2 and the network pattern recognition module 210-12 also provide for the recognition of external patterns within images, that is, patterns that are externally generated. So that external patterns can be detected irrespective of which level of compression is used, as noted in further detail below, the external patterns are stored at each of the different levels of compression that exist.
 The network interface module 210-15, in conjunction with the local pattern recognition module 210-2 and the network pattern recognition module 210-12, is used to identify and keep track of the external patterns desired for pattern recognition and the priority of each external pattern. In particular, an external pattern will represent an object, which could be the face of a person, a particular article such as a purse or card, a vehicle, or a vehicle license plate. Whatever the object, the present invention will store a representation of that object and use that representation to compare against patterns within the recorded image.
 A limiting factor in the ability of the system 100 to track external patterns is that the system 100 is already obtaining data from various cameras 110 and compressing that data as described above. Performing that task alone requires substantial processing power, leaving only some percentage, based upon the computing power available, for tracking external patterns of objects. Thus, the network interface module 210-15 will keep track of the priority of each external pattern that will be searched for based upon the input from each camera 110. Certain of the highest-priority external patterns are distributed to computers 120 in an uncompressed form, using a sufficient number of points, such as 25, to allow sufficiently accurate pattern detection in vector form using (x,y) offsets as is known; this pattern recognition takes place on another processor thread dedicated to searching for a particular pattern, as described in the U.S. Application entitled "Method And Apparatus For Determining Patterns Within Adjacent Blocks Of Data" referenced above. Lower-priority external patterns are retained on the server 130 in compressed form, and are searched for within the server 130 in the manner discussed above.
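A pattern represented as roughly 25 (x,y) offset points, as described above, could be compared against candidate patterns with a simple distance test. The metric and tolerance here are assumptions purely for illustration; the patent does not specify how the vector comparison is performed.

```python
import math


def pattern_distance(pattern, candidate):
    """Sum of point-to-point distances between two patterns, each a
    list of (x, y) offsets (the text suggests about 25 points per
    pattern). Euclidean distance is an assumed choice of metric."""
    return sum(math.dist(p, q) for p, q in zip(pattern, candidate))


def matches(pattern, candidate, tolerance=5.0):
    """Declare a match when the average per-point deviation between
    the stored external pattern and the candidate is small."""
    return pattern_distance(pattern, candidate) / len(pattern) <= tolerance
```

A comparison like this is cheap enough to run on a separate processor thread per external pattern, consistent with the threading arrangement the text describes.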
 Since the computers 120 are typically operating upon real-time images that are received, it is apparent that the external patterns located by computers 120 will be obtained more quickly than external patterns found by server 130, which external patterns need not be searched for in real time. If, however, the computer 120 cannot complete its search for external patterns, it will so notify server 130, which will then preferably search for all desired external patterns and assume that the computer 120 did not find any.
 Operations using pattern detection, whether using the pattern detection techniques described in U.S. Appln. No. bearing attorney reference number 042503/0259665 entitled "Method And Apparatus For Determining Patterns Within Adjacent Blocks Of Data" referenced above or other conventional techniques are further described below.
 Each of the pattern recognition operations described can be implemented by the system 100 described herein, if desired.
7. LOCAL MULTI-PASS MODULE 210-3 AND NETWORK MULTI-PASS MODULE 210-13
 The local multi-pass module 210-3 and network multi-pass module 210-13 operate to provide further compression from that obtained by the local pattern recognition module 210-2 and network pattern recognition module 210-12. That further compression is preferably achieved using the techniques described in Appln. No. 09/727,096 entitled "Method And Apparatus For Encoding Information Using Multiple Passes And Decoding In A Single Pass" filed on November 29, 2000, and assigned to the same assignee as the present invention, the contents of which are expressly incorporated by reference herein, or other encoding/decoding processes. While each of the local multi-pass module 210-3 and network multi-pass module 210-13 can be configured to operate in the same manner, typically that is not the case. Rather, typically, the local multi-pass module 210-3 is configured to perform some predetermined number of passes on the data that it receives in order to partially compress that data before it is transmitted by the computer 120 to the server 130 for further operations, including further compression operations performed by the network multi-pass module 210-13 using further passes to further compress the data. In this regard, it is preferable that each of the local multi-pass modules 210-3 use the same number of passes and the same compression routines, so that the data received by the server and further operated upon using the network multi-pass module 210-13 is already consistently compressed between the various sources from which it receives data. Thus, as is apparent, the network multi-pass module 210-13 will preferably contain additional multi-pass compression routines not found in the local multi-pass modules 210-3 that allow for further passes to occur. Typically, passes 1 and 2 are performed by the computer 120, whereas further passes are performed by the server 130.
Further, since the more compression requested the greater the number of passes, and also the slower the system, data can be saved with a user-specified amount of compression. Since images may be recorded at different levels of compression, for each different compression level there must be an associated compression of all of the external patterns. Thus, if images are recorded at one of 1, 2, 5, 10, 15 or 20 passes, then external patterns must be obtained for each of 1, 2, 5, 10, 15 and 20 passes, so that the appropriately compressed external pattern can be used during comparison operations depending upon the compression of the image.
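The per-level pattern storage described above can be sketched as follows. This is a minimal illustration, not the multi-pass routine itself: the compress() function below is a hypothetical stand-in that merely tags data with its pass count, and all names are illustrative.

```python
# Sketch: keep an external pattern pre-compressed at every pass level used for
# recording, so comparisons always use a pattern compressed the same way as
# the image it is compared against.

PASS_LEVELS = (1, 2, 5, 10, 15, 20)  # the pass counts named in the text

def compress(data, passes):
    """Hypothetical stand-in for one multi-pass compression run.

    A real implementation would transform the bytes; here we only tag the
    data with its pass count so the idea is visible.
    """
    return f"{data}@{passes}"

def build_pattern_table(pattern):
    """Pre-compress an external pattern at every recording level."""
    return {p: compress(pattern, p) for p in PASS_LEVELS}

table = build_pattern_table("face-0042")
# When an image recorded at 5 passes is searched, the 5-pass pattern is used:
candidate = table[5]
```

The table lookup keyed by pass count is what lets comparison operations pick the appropriately compressed external pattern for the compression of the image, as the text describes.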
8. LOCAL ENCRYPTION/DECRYPTION MODULE 210-4 AND NETWORK
ENCRYPTION/DECRYPTION MODULE 210-14
 The local encryption/decryption module 210-4 and network encryption/decryption module 210-14 each perform the same functions—encrypting and transmitting data to a destination, and receiving and decrypting data that has been previously encrypted and transmitted. While many encryption/decryption techniques exist, one technique that is advantageously implemented is described in U.S. Application No. 09/823,278 entitled "Method And Apparatus For Streaming Data Using Rotating Cryptographic Keys," filed on March 29, 2001, and assigned to the same assignee as the present invention, the contents of which are expressly incorporated by reference herein.
9. LOCAL PRIORITY DATA STORAGE MODULE 210-6 AND NETWORK PRIORITY DATA STORAGE MODULE 210-16
 The local priority data storage module 210-6 and network priority data storage module 210-16 keep track of the data stored on each of the computers 120 and server 130, respectively. These priority data storage modules are different than data backup, and assume a worst-case scenario that no data backup has occurred. In essence, both the local priority data storage module 210-6 and network priority data storage module 210-16 operate in the same manner—to keep that data which is most important. Specifics on how data is differentiated and how these modules operate are described in U.S. Application No. bearing attorney reference number 042503/0273342 entitled "System And Method For Managing Memory In A Surveillance System" filed on the same day as this application, and which is assigned to the same assignee as the present invention, the contents of which are expressly incorporated by reference herein.
 Also, each local user interface module 210-5 can operate if not connected to the network, with the resident software continuing to perform the functions that it typically would in conjunction with its associated computer 120. The data saved, should prioritizing be necessary, will be in accordance with the patent application referred to in the immediately preceding paragraph.
10. OTHER ENVIRONMENTS
 The surveillance system described herein provides security in the context of a casino, which includes gaming tables, vault areas, and other common areas requiring surveillance. It will be understood, however, that the present invention can be implemented in any environment requiring security, including financial institutions, large commercial buildings, jewelry stores, airports, and the like.
 In certain buildings, airports and the like, there can be various levels of security established at various locations. At an airport, for instance, the initial gate entry area represents one level of security, and the actual gate can represent another level of security. In a casino, jewelry store, or financial institution, an entryway can represent one level of security and a vault within can represent another level of security. Comparisons of images from these related areas, and generating alerts and other information based thereupon, can be used in environments further described below. Certain characteristics of each environment can be used in making comparisons. For example, in a jewelry store, comparisons can be made between a mask of an unbroken glass display case, such that if the case breaks, an alert can be sounded.
11. ADAPTATION FOR WIRELESS ENVIRONMENT
 The present invention can also be adapted for surveillance in environments where a wired connection between the computer 120 and the server 130 is not possible, such as on an airplane. In such an environment, the available transmission bandwidth becomes even more of a limiting factor than in a wired network. Accordingly, in such an environment, the computer 120 will typically be limited to performing the compression operations described above and wirelessly transmitting the compressed information, and pattern recognition and other operations will take place at server 130 which is adapted to receive the wirelessly transmitted data.
 The embodiments described below provide advantageous techniques for data frame adaptation to minimize storage size, data frame noise correction to aid in pattern recognition, and data frame delivery device source authentication in, for example, a surveillance system.
 The embodiments describe methods and systems for adapting the size of a digital data frame to minimize data storage, for correcting source noise resident in a digital data frame, and for authenticating the source of a digital data frame.
 Referring first to FIG. 3, it is a block diagram illustrating an exemplary transmission system 300 according to the present invention, and various different devices that will allow for the various permutations described herein to be understood, although it is understood that this exemplary transmission system 300 should not be construed as limiting the present invention. The system 300 includes source data delivery devices 310, for example, conventional cameras 310-1 to 310-N, each of which is connected to a computer device 320 at a data interface 318 via respective transmission equipment 316-1 to 316-N.
 The source data delivery devices 310-1 to 310-N preferably contain systems for detecting both images and sound, although devices that can reproduce images or sound but not both are also within the scope of the present invention. The source data delivery devices 310 can be analog or digital. Further, source delivery devices 310 generate noise that becomes overlaid onto the recorded signal. The delivery devices most susceptible to producing large amounts of noise are those devices 310 that record images, in other words cameras. And while there exist high-quality cameras that produce only slight amounts of such noise, cameras used in many surveillance environments are often of a low-grade quality. As such, the cameras often generate a substantial amount of noise that is overlaid onto the actual image that is being recorded.
 For devices 310 that record images, this noise results from a combination of the internal elements that are used to record the image, including the optical systems, transducers, digital circuits, the power source and AC/DC converters, and the like. It has been found, however, that once a camera has been turned on for a period of time, it reaches a steady state operation, such that the noise will repeat in a cyclic noise pattern. The present invention, as described hereinafter, exploits this property to eliminate cyclic noise from the recorded image. Thus, certain aspects of the present invention correct for the noise signature of devices 310 such as cameras.
 The environment in which a device 310 is placed has also been found to be significant. If a device 310 is analog, the respective transmission equipment 316 is typically analog, and if device 310 is digital, the respective transmission equipment is typically digital. A conventional arrangement for an analog camera device 310 includes analog transmission equipment 316A that includes analog transmission lines and amplifiers placed at lengths along the analog transmission lines to refresh the analog signals as suitable. A typical arrangement for a digital camera device 310 includes digital transmission equipment 316D that typically includes only an optical transmission line, as digital signals can travel along an optical transmission line distances that are much greater than analog signals can travel, as is known.
 In most systems, however, within the data interface 318 is located an analog switch that allows for switching streams of data from various cameras to various monitors and/or recording equipment. As such, at this point, conversion of digital data, if it has previously been obtained, to analog form, is still required in many instances. Thus, irrespective of the type of camera used, analog or digital, the data interface 318 in many cases will include an analog to digital (A/D) converter after the analog switch so that analog signals output from the switch can be converted to digital form for input into the computer 320. And for such systems which contain digital recording devices 310, before the analog switch there exists a digital to analog (D/A) converter that converts the digital signals to analog form, so that they can be operated upon by the analog switch. It is apparent, therefore, that in addition to the source noise that is generated from the recording device 310, the transmission medium 316 will also contribute noise, particularly from signal degradation and amplifier distortion in the analog context, and from digital to analog conversion and analog to digital conversion in the digital context.
 And while the system as above-described will be used for the remainder of the discussion herein, it should be understood that these are illustrative examples and many other arrangements are possible.
 As mentioned previously, devices 310, and particularly low quality cameras, generate a cyclic noise pattern, which pattern is further altered as a result of the transmission medium 316. One obvious component of this noise tends to be from the power used to drive the electrical components. While a DC voltage is typically used to drive circuit components, this DC voltage is typically obtained as a result of a conversion from an AC source, which in the United States oscillates at 60 Hz. Thus, this AC noise becomes one component of the source noise, and can have a particularly severe effect since most image devices 310 record images at 30 frames/sec, a frequency that is relatively close to the oscillating frequency of the AC power signal.
 As mentioned above, the present invention exploits the existence of this cyclic noise property to eliminate cyclic noise from the recorded image, and how it does that will now be described with respect to the flowchart of FIG. 5. As indicated by step 510, an initial set-up is first preferably done, so that the recording device 310 and the transmission medium 316 associated with that device are in place. This ensures stability of the initializing routine. Once step 510 is complete, the initializing steps are begun, with the first initializing step 520 being to turn on the device 310 after the computer 320 is configured to record the output of the device 310. It is noted that in this initial configuration, the amount of time that the device 310 will require to heat up before it exhibits a cyclic noise pattern is unknown.
 When initially turned on, the camera records the image taken from a known color pattern, such as a known white blotter. Initially, as shown by step 530, the image is recorded for some number of frames, typically in the range of 200-300 frames. Each of these frames is then compared against a stored "white" image that contains pixel representations corresponding to the actual known color pattern to obtain a difference frame, as shown by step 540. In the following step 550, these difference frames are compared against one another to determine if there is any repetition of patterns between them. While conventional pattern recognition algorithms can be used, preferably the pattern recognition algorithm described in U.S. Application No. bearing attorney reference number 042503/0259665 entitled "Method And Apparatus For Determining Patterns Within Adjacent Blocks Of Data," filed on October 31, 2001, which is assigned to the same assignee as the present invention, is used. For purposes of using the pattern recognition described in this U.S. Application No. bearing attorney reference number 042503/0259665 and the nomenclature therein, each frame can be a reference frame, and be compared to each of the other frames, with each of the other frames being a target frame for purposes of that comparison. In order to maximize the comparisons, each frame can be designated a reference frame with the other frames being target frames, although it will be appreciated that such a number of comparisons leads to redundant comparisons, and thus a lesser number of comparisons is needed.
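A minimal sketch of steps 530 through 550 might look like the following, under simplifying assumptions: frames are flat tuples of pixel values, the benchmark "white" frame is subtracted pixel-wise to obtain difference frames, and the repeating period is found by exact comparison rather than by the referenced pattern recognition algorithm. All names are illustrative.

```python
# Sketch: subtract the stored benchmark from each recorded frame of a known
# color pattern (step 540), then search the difference frames for the
# smallest period at which they repeat (step 550).

def difference_frame(frame, benchmark):
    """Pixel-wise difference between a recorded frame and the benchmark."""
    return tuple(f - b for f, b in zip(frame, benchmark))

def find_cycle(diff_frames):
    """Return the smallest period at which difference frames repeat, or None."""
    n = len(diff_frames)
    for period in range(1, n // 2 + 1):
        if all(diff_frames[i] == diff_frames[i % period] for i in range(n)):
            return period
    return None

# Illustrative data: a 4-pixel "white" benchmark and a 3-frame noise cycle.
benchmark = (255, 255, 255, 255)
noise_cycle = [(1, 0, 0, 0), (0, 2, 0, 0), (0, 0, 3, 0)]
frames = [tuple(b + q for b, q in zip(benchmark, noise_cycle[i % 3]))
          for i in range(12)]
diffs = [difference_frame(f, benchmark) for f in frames]
print(find_cycle(diffs))  # → 3
```

If find_cycle() returns None, more frames would be recorded and the search repeated, which corresponds to the longer-recording retry described in the following paragraphs.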
 If, following step 550, a cyclic noise pattern is uncovered therein, that cyclic noise pattern can be stored in step 560.
 If, however, a cyclic noise pattern was not uncovered, then the recording device 310 is operated in step 570 for a period of time longer than it was previously, and the recording stored. Thereafter, step 550 is repeated, using the larger number of recorded frames to uncover the cyclic noise pattern. Steps 570 and 550 then repeat until a cyclic noise pattern is found.
 In terms of the typical length of time that it takes to uncover the cyclic noise pattern, it has been determined that in more recent digital cameras, such as Fujitsu Series XV, that the cyclic noise pattern will appear after a heat-up time of approximately two minutes, and that the cyclic noise pattern repeats in a range of typically every 250-350 frames. For older analog models, however, the heat up time required can be on the order of days, although the cyclic noise pattern once established will still be on the order of 2,000- 4,000 frames.
 Once the cyclic noise pattern is obtained, then, as shown in FIG. 6, it can be used to remove the noise from the recorded data, and thus obtain a better representation of that which is being detected, such as the image if the device 310 is a camera. As shown in step 610, once the recording device 310 is turned on, an initialization period corresponding to the previously determined heat-up period is preferably allowed to occur, so that the device 310 enters a steady state operation. Once this period of time passes, recording of the desired scene can begin, as shown by step 620. And once recording begins, each recorded frame is input into computer 320, and, as shown by step 630, is synchronized with a corresponding frame from the cyclic noise pattern to remove the cyclic noise therefrom. Accordingly, as shown by step 640, each frame with the cyclic noise removed therefrom is obtained. The frames can then be used as desired in subsequent surveillance operations.
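The synchronization and subtraction of steps 630 and 640 can be sketched as below. This assumes frames as flat lists of pixel values and simple subtraction clamped at zero; the function name and data layout are illustrative, not from the specification.

```python
# Sketch of steps 630-640: each incoming frame is synchronized by index with
# the corresponding frame of the stored cyclic noise pattern, and the noise
# is subtracted pixel-wise (clamped so pixel values stay non-negative).

def remove_cyclic_noise(frames, noise_cycle):
    cleaned = []
    for i, frame in enumerate(frames):
        noise = noise_cycle[i % len(noise_cycle)]  # synchronize by frame index
        cleaned.append([max(0, p - q) for p, q in zip(frame, noise)])
    return cleaned

# Illustrative data: three 2-pixel frames carrying a 3-frame noise cycle.
frames = [[101, 100], [100, 102], [103, 100]]
noise_cycle = [[1, 0], [0, 2], [3, 0]]
cleaned = remove_cyclic_noise(frames, noise_cycle)
print(cleaned)  # → [[100, 100], [100, 100], [100, 100]]
```

The essential point is the modulo indexing: because the noise repeats with a fixed period, only one cycle of noise frames needs to be stored to clean an arbitrarily long recording.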
 It should also be noted that it has been determined that this cyclic noise pattern is substantially frequency independent. Thus, while a known white blotter was indicated as being used above, any suitable solid material of known color may be used, as long as the known color is identical to the color of mathematically represented benchmark frames of data used for comparison.
 In another aspect of the present invention, the present invention exploits the obtained cyclic noise pattern. As described above, the cyclic noise pattern is preferably detected within each frame and eliminated or minimized. According to another aspect of the present invention, watermarking of particular frames generated by a source recording device 310 is performed using the noise signature. In a presently preferred embodiment, the camera noise is not removed for every nth frame to obtain a detectable watermark indicative that the frame actually comes from that particular source recording device 310. If a different source recording device 310' were instead used, a different noise pattern would exist, and the expected noise pattern would not be found. Thus, this noise creates a digital signature that will identify the frame as having come from the particular recording device 310 rather than from a different recording device 310', thus foiling any attempts to introduce a substitute stream of data. In this regard, in order to be able to later verify the specific camera that recorded a specific sequence, it is preferable that the cyclic noise pattern also be stored with the sequence, to ensure that such verification can be made later in time.
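The watermarking idea can be sketched as follows: noise is removed from all but every nth frame, and a verifier checks that the retained-noise frames carry the device's stored signature. The static-background assumption and all function names are illustrative simplifications, not the specification's method.

```python
# Sketch: leave the device's cyclic noise in every nth frame as a watermark,
# then verify that those frames still carry the expected noise signature.

def watermark_stream(raw_frames, noise_cycle, n):
    """Remove noise from all frames except every nth one."""
    out = []
    for i, frame in enumerate(raw_frames):
        noise = noise_cycle[i % len(noise_cycle)]
        if (i + 1) % n == 0:
            out.append(list(frame))                  # keep noise: watermark frame
        else:
            out.append([p - q for p, q in zip(frame, noise)])
    return out

def verify_watermark(stream, background, noise_cycle, n):
    """With a static background, every nth frame should equal the background
    plus the noise frame for its position; all others, the background alone."""
    for i, frame in enumerate(stream):
        noise = noise_cycle[i % len(noise_cycle)]
        if (i + 1) % n == 0:
            expected = [b + q for b, q in zip(background, noise)]
        else:
            expected = background
        if frame != expected:
            return False
    return True

# Illustrative data: a static 2-pixel background recorded with a 2-frame cycle.
background = [10, 10]
noise_cycle = [[1, 0], [0, 1]]
raw = [[b + q for b, q in zip(background, noise_cycle[i % 2])] for i in range(4)]
stream = watermark_stream(raw, noise_cycle, 2)
```

A substituted stream from a different camera would carry a different noise signature in the watermark positions, so verify_watermark() would return False, which is the detection property the text describes.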
 It should also be noted that although the cyclic noise removal of the present invention is described in terms of real-time elimination of the cyclic noise pattern, the cyclic noise removal can operate upon data that has been previously stored. And while having the actual recording device used to record the data is desirable, noise patterns can be detected in stored data even without having the actual camera.
 As described above, if a surveillance system attempts to store recorded data digitally, the memory requirements can be quite large and expensive. Minimizing the storage space required for storing data, for example, frames of digital data, is a common objective of data delivery systems. As noted previously, while systems exist which will not store data during periods when motion is not detected, the fact that a continuous record is unavailable is undesirable.
 And while compression routines exist which can operate to minimize the amount of recorded data that needs to be stored, that amount of data can still be quite large. In general, the amount of data that is recorded by the video surveillance system 300 depends on the environment that is being monitored. For many environments, there are often areas that require monitoring for activity over an extended period but that do not exhibit a great deal of activity over the course of the extended period. For example, the camera 310 might be focused on the door to a bank vault for 24 hours a day, but might only capture relatively few individuals entering the vault or merely walking by the vault door. This can easily be contrasted with the case of frames from a motion picture or from a video camera that is trained on a busy area with much traffic.
 Exemplary aspects of the present invention exploit monitoring of environments that do not exhibit a great deal of activity over the course of an extended period of monitoring. Rather than storing all of the surveillance data recorded, another aspect of the present invention reduces the amount of storage by reducing the stored image resolution for frames of data corresponding to no motion being detected.
 Frames of digital image data are typically made up of pixels, with each pixel having, for example, a 16, 24, or 32 bit RGB representation. Since the resolution of a particular frame increases as the number of pixels used to represent the frame increases, to conserve data storage space that would otherwise be taken up by filming of these environments exhibiting no activity for extended periods, after a predetermined period of time of storing a full-sized frame during which no motion is observed, the resolution of the stored frame is reduced to some fraction, for example, one-quarter, of the size of the full-sized frame. The smaller frame size is used until a frame with motion appears. Then, the stored frame size is increased to a larger frame size. It should be understood that the lower the fraction, the greater the reduction in storage space typically needed to store the data. While lesser or greater than 25% resolution can be stored, this amount has been found to be a good compromise between maintaining clarity of the image and reducing data stored, which, as will be appreciated, are competing requirements.
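The quarter-resolution reduction can be sketched as follows, assuming a frame represented as a list of pixel rows; here every other pixel in each dimension is kept, though a real implementation might instead average 2x2 blocks. The function name is illustrative.

```python
# Sketch: reduce a frame to one-quarter of its pixel count by halving each
# dimension, e.g. 640x480 down to 320x240.

def reduce_quarter(frame):
    """Keep every other row and every other pixel within each kept row."""
    return [row[::2] for row in frame[::2]]

frame = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
print(reduce_quarter(frame))  # → [[1, 3], [9, 11]]
```

Halving both dimensions keeps 1/4 of the pixels, matching the one-quarter storage fraction the text names; a 1/8th reduction, mentioned later for lights-out conditions, would follow the same pattern with a larger stride.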
 FIGS. 4A through 4D illustrate the various operations necessary to implement the reduced resolution frame storage. In FIGS. 4A through 4D, an exemplary frame storage size of 640x480 pixels (prior to any compression taking place) is used, with a reduced resolution frame storage size of 320x240 pixels (prior to any compression taking place) if no differences indicative of motion or activity occurring in the environment or area are detected. Preferably, the computer device 320 performs a frame-by-frame comparison for a particular camera of the cameras 310. It is understood that even with cyclic noise patterns removed, differences between images will still result, even if the actual scene recorded was the same. Accordingly, differences between frames that exceed a certain predetermined threshold, such as 3-5% of tolerated loss, are used to indicate the introduction of motion to a scene. It is noted that the predetermined threshold between adjacent frames containing motion will be exceeded because the new object contained in the frame will significantly alter certain bits within the frame. Further, it is preferable that the comparison operations operate upon the full resolution frame size, and that the reduced frame size be stored once it is determined that motion between adjacent frames does not exist.
 Whether adjacent frames are within the threshold can be determined using pattern recognition techniques, and preferably the pattern recognition technique described in the U.S. Appln. bearing attorney reference number 042503/0259665 mentioned above. Generally, and particularly for FIGS. 4A through 4D, the reference frame is initially set to an initial frame of a sequence of frames, while the target frame is initially set to a subsequent frame of the sequence of frames. Once the reference and the target frames are compared with one another, the subsequent frame that was the target frame is redesignated as a new reference frame, and another subsequent frame that follows the subsequent frame is redesignated as a new target frame. The process is preferably repeated for each successive frame in the sequence.
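The reference/target walk through a sequence, with a simple percentage-difference threshold standing in for the referenced pattern recognition technique, might be sketched as:

```python
# Sketch: compare each frame with its predecessor (reference vs. target),
# flagging motion when the fraction of differing pixels exceeds the
# predetermined threshold (3-5% in the text). Frames are flat pixel lists.

def fraction_changed(reference, target):
    """Fraction of pixel positions that differ between two frames."""
    differing = sum(1 for a, b in zip(reference, target) if a != b)
    return differing / len(reference)

def motion_flags(frames, threshold=0.04):
    """One motion flag per frame after the first; the target of one
    comparison becomes the reference of the next, as the text describes."""
    flags = []
    for reference, target in zip(frames, frames[1:]):
        flags.append(fraction_changed(reference, target) > threshold)
    return flags

frames = [[0] * 100, [0] * 100, [1] * 10 + [0] * 90]
print(motion_flags(frames))  # → [False, True]
```

The zip over adjacent pairs implements the redesignation step: after each comparison the former target frame serves as the new reference frame.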
 It should be noted that according to the preferred embodiment, the recording device 310 is fixed in position, does not zoom, and always records the same background scene. Thus, processing can be simplified from the situation where the recording device 310 is not fixed. If not fixed, then a no-motion reference frame 414 cannot be obtained, and a sequential comparison of frames is required. It is noted, however, that since a sequential comparison of frames may already be obtained if compression in addition to the frame size reduction described herein is being used, that comparison can be used rather than using a no-motion reference frame that is always the same.
 In FIG. 4A, a 640x480 reference frame 402 of digital data that has been previously recorded as a 640x480 size frame that captured a scene A is compared with a subsequent 640x480 target frame 404 of digital data. As shown, this subsequent frame contains a scene B that is different from scene A, thus indicating that there is activity or motion that occurs that engenders differences between the frames 402, 404 and causes the predetermined threshold to be exceeded. Since the predetermined threshold is exceeded, the scene B is recorded at the larger 640x480 frame size. Subsequent frames 406 continue to be sized at the larger 640x480 frame size until the predetermined threshold is not exceeded for some window of time, typically 200-300 frames of no activity.
 In FIG. 4B, a 640x480 reference frame 408 of digital data representing scene A that has previously been recorded as a 320x240 reduced frame is compared with a 640x480 target frame 410 of digital data, which captures a subsequent scene A that falls within the predetermined threshold. Since subsequent scene A falls within the predetermined threshold, it is also recorded as a reduced 320x240 frame size, indicative of there being no discernible activity or motion that occurs. Preferably, subsequent frames 412 continue to be sized at the smaller 320x240 frame size until differences between frames are recognized that cause the predetermined threshold to be exceeded.
 In FIG. 4C, a 640x480 reference frame 414 of digital data that was recorded at 640x480 of scene A is compared with a subsequent 640x480 target frame 416 of digital data, which captures a subsequent scene A that differs by less than the predetermined threshold. Since initial scene A and subsequent scene A are within the threshold, it is concluded that there is no discernible activity or motion that occurs. The recorded frame size is thus adjusted to the smaller 320x240 frame size if the window of time as referred to above has elapsed. If the window of time has not elapsed, the subsequent scene A is stored as a 640x480 frame, but a counter corresponding to the window of time is incremented. Subsequent frames 418 that also are within the predetermined threshold after the window of time has been exceeded are thus sized at the smaller 320x240 frame size until differences that cause the predetermined threshold to be exceeded are recognized.
 In FIG. 4D, a 640x480 reference frame 420 of digital data that captured scene A had been recorded at 320x240. This reference frame is compared with a 640x480 target frame 422 of digital data, which captures a subsequent frame of scene B that differs from scene A by more than the predetermined threshold, indicating that there is activity or motion that occurs that engenders differences between the frames 420, 422. Since the predetermined threshold is exceeded, the subsequent frame size is adjusted to the larger 640x480 frame size. Preferably, subsequent frames 406 are sized at the larger 640x480 frame size until the predetermined threshold is no longer exceeded, and the window of time has elapsed.
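The storage policy of FIGS. 4A through 4D can be summarized as a small state machine: frames are stored full size while motion is present, dropping to the reduced size only after a motionless window has elapsed, and returning to full size the moment motion reappears. The window length here is shortened for illustration (the text suggests 200-300 frames), and all names are illustrative.

```python
# Sketch: map per-frame motion flags to the size at which each frame is
# stored, per the FIG. 4A-4D policy.

FULL, REDUCED = (640, 480), (320, 240)

def storage_sizes(motion_flags, window=3):
    """Return the stored frame size for each frame in the sequence."""
    sizes, quiet = [], 0
    for moved in motion_flags:
        if moved:
            quiet = 0                 # motion: reset the no-activity counter
            sizes.append(FULL)
        else:
            quiet += 1                # the counter the text describes
            sizes.append(REDUCED if quiet > window else FULL)
    return sizes

sizes = storage_sizes([True, False, False, False, False, True], window=3)
```

The quiet counter corresponds to the window-of-time counter mentioned for FIG. 4C, and the reset on motion corresponds to the return to full size shown in FIGS. 4A and 4D.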
 In a modification of the embodiment described above, if the last recorded frame was recorded at the small 320x240 frame size, then the comparison operations, instead of comparing two different 640x480 frames, will compare two 320x240 frames, which reduces the number of comparison operations needed; if the predetermined threshold is exceeded, then the entire 640x480 size frame that was obtained but not used for the comparison operations is stored.
 Other modifications are also within the scope of the present invention. For example, the order that the steps are implemented can vary.
 Further, the cyclic noise that is detected can be used for other purposes. For example, in a typical installation the cameras, amplifiers, and the like will all be turned on and being used continuously, 24 hours a day. As a result, they tend to operate in a stable manner, and thus the cyclic noise pattern can be eliminated. If, however, the camera, amplifier or another component begins to drift from its stable operating characteristics, a new cyclic noise pattern will develop that is different from the originally obtained cyclic noise pattern. As a result, the watermark that is occasionally passed will differ, as described above. When this occurs, the difference will cause an alert, as noted above. While this alert may indicate suspicious circumstances, it could also indicate that one of the components, such as the camera or amplifier, may fail in the near future, since an early indicator that a device will fail is unstable operation, which can thus cause the drift. Accordingly, the present invention can be used as an early warning system that can indicate that a particular device may soon completely fail. If a particular device is found to be unstable and needs to be replaced, it is noted that the initial set-up as previously described will need to be performed again, since the new device will cause a different cyclic noise pattern to result.
 Further reductions in the size of the stored frame can also be made. One example of that is if some predetermined percentage of continuous frames, such as 98%, are entirely black, indicating lights are out and no image is detectable. In such circumstances a further reduction in stored frame size to 1/8th of the original frame size may be warranted.
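The lights-out reduction can be sketched as a check that a high fraction of a run of frames is entirely black; the 98% figure follows the text, while the pixel model and function names are illustrative.

```python
# Sketch: decide whether a run of frames qualifies for the further 1/8th-size
# reduction by counting frames that are entirely black.

def is_black(frame, threshold=0):
    """True if every pixel value in the frame is at or below the threshold."""
    return all(p <= threshold for p in frame)

def lights_out(frames, fraction=0.98):
    """True if at least `fraction` of the frames are entirely black."""
    black = sum(1 for f in frames if is_black(f))
    return black / len(frames) >= fraction

dark_run = [[0, 0]] * 99 + [[5, 0]]     # 99% black: qualifies
mixed_run = [[0, 0]] * 97 + [[5, 0]] * 3  # 97% black: does not
```

A small non-zero threshold could be passed to is_black() to tolerate residual sensor noise in otherwise dark frames.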
 One of the pattern recognition embodiments describes a method of detecting an occurrence of an event that exists within a sequence of stored frames relating to a scene, as well as methods that operate upon the data relating to the scene once an event is detected.
 The method of detecting occurrence of an event includes comparing a first stored frame to a later stored frame to determine whether a change in size between the frames exists that is greater than a predetermined threshold. In a preferred embodiment, the first and later stored frames are compressed, and operated upon while compressed to determine the occurrence of an event.
 Methods of operating upon the data relating to the scene once an event is detected include providing a still image at intervals, typically every 5-10 frames, using the data from at least the recording device that detected the event. In a further embodiment, still images from other recording devices that are associated in a predetermined manner with the recording device that detected the event are also obtained at intervals.
 Methods and apparatus for detecting and reacting to a change in the field of view of a video camera whose output is digitized, and typically compressed, are described. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be evident, however, to one skilled in the art that the present invention may be practiced in a variety of compressed data systems in which the occurrence of an event is reflected in the size of the compressed data. In other instances, well-known operations, steps, functions and elements are not shown in order to avoid obscuring the invention. Parts of the description will be presented using terminology commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art, such as compression, pattern recognition, frames, images, field of view, and so forth. Various operations will be described as multiple discrete steps performed in turn in a manner that is most helpful in understanding the present invention. However, the order of description should not be construed as to imply that these operations are necessarily performed in the order that they are presented, or even order dependent. Lastly, repeated usage of the phrases "in one embodiment," "an alternative embodiment," or an "alternate embodiment" does not necessarily refer to the same embodiment, although it may.
FIG. 7 illustrates an exemplary system 700 according to the present invention, which is shown as having a computer 720 that compresses and operates upon digitized data using the features of the present invention described herein. Computer 720 may also operate to compress the digitized data and transmit it to another device, shown as a server 730, so that server 730 operates upon the digitized data using the features of the present invention described herein. While compression may instead be achieved by server 730, in practice this is not efficient. A number of computers 720 are shown as providing digitized data to server 730, which aspect is illustrated in order to explain further how various related streams of digitized data can be operated upon according to one embodiment of the present invention, as described hereinafter.
While the system as described above illustrates one manner in which the advantages of the present invention described herein can be used, this is exemplary. For example, the computers 720 and 730 could be implemented as a network of computers, or as a device other than a computer that contains a processor able to implement the inventions described herein. Many other variants are possible. And while such a device will typically include a processor of some type, such as an Intel Pentium 4 microprocessor or a DSP in conjunction with program instructions that are written based upon the teachings herein, other hardware that implements the present invention can also be used, such as a field programmable gate array. The program instructions are preferably written in C++ or some other computer programming language. Routines that are used repeatedly can be written in assembler for faster processing. As mentioned above, the present invention operates upon data preferably formatted into a matrix array within a frame, as described further hereinafter. For data that is not temporally related, the blocks can be formatted into frames that may or may not have the ability to store the same amount of data. For data that is temporally related, such as image and audio data, each image and its related audio data will preferably have its own frame, although that is not a necessity, since image and audio data can be stored on separate tracks and analyzed independently.
According to one embodiment, the computer 720 or server 730 is assumed to have received digital image/audio frames that relate to a sequence, which sequence has been digitized into frames and compressed in some manner. These frames may be compressed in essentially real-time and operated upon, or compressed and transmitted for storage in another location. Further, there are situations in which compression is unnecessary, such as when pattern recognition between frames is performed during substantially real-time operations on frames.
In some instances, the data obtained corresponds to locations where the scenes being monitored and stored do not change often. For example, a camera may be stationed outside a vault or a door in a stairwell and record the scene, which is then received by the computer 720. It is only when someone enters the vault or crosses the door in the stairwell that the image frame changes substantially (since even between sequential frames that record the same scene, changes in the data representing the frame will exist due to at least noise effects).
For image frames corresponding to the scene that have been compressed, it is desirable to be able to further operate upon the compressed image frames, for example, to determine if an event has occurred in the image frames. For example, it is desirable to be able to detect that the image frames have changed due to a change in the field of view (e.g., someone crossing through the field of view) without having to decompress the images. The present invention provides a mechanism for detecting that a change has occurred in the field of view without having to decompress image frames, and also provides several mechanisms for reacting to the detection of the change.
FIG. 8 illustrates compressed frames that have been produced by a camera that has had an object pass through its field of view. Camera 802 is pointed at doorway 806 in a stairwell and a computer (not shown) attached to camera 802, such as computer 720 of FIG. 7, produces a sequence of digitized and compressed frames 808a-n, 810a-n, 812a-n, and 814a-n. Compression can be achieved using the techniques described in U.S. Patent Application bearing attorney reference 042503/0259665 entitled "Method And Apparatus For Determining Patterns Within Adjacent Blocks of Data" filed on October 31, 2001 and assigned to the assignee of the present application, and Appln. No. 09/727,096 entitled "Method And Apparatus For Encoding Information Using Multiple Passes And Decoding In A Single Pass" filed on November 29, 2000 and assigned to the assignee of the present application, both of which are expressly incorporated herein by reference. It is noted that when using these techniques, particularly for pattern recognition, rotational and diagonal traversals of search blocks are typically not necessary, particularly when a camera is fixed in position.
Whether the frames 808a-n, 810a-n, 812a-n, and 814a-n are operated upon in essentially real-time, stored and operated upon, compressed and then operated upon, compressed, stored and then operated upon, or compressed, transmitted, stored at another location and then operated upon, the inventions described herein are equally applicable. For a still scene, the uncompressed frames in the sequence will each have the same constant size, whether or not there is action or movement, because the same number of bits is used to represent each frame. Once compressed, however, the amount of data needed to represent each frame of the unchanged scene will be substantially less than that needed when other objects are superimposed on the scene, and the frames of the unchanged scene will compress to roughly the same size, though there will be some non-uniformity due to noise and other factors. Thus, as shown in FIG. 8, the compressed frames 812a-n are larger than the compressed frames 808a-n and 810a-n because person 804 has entered through doorway 806. When person 804 leaves the field of view of camera 802, the size of the compressed frames returns to the previous relatively small size, as shown in frames 814a-n.
FIG. 9A illustrates a process 900 for detecting and reacting to a change in the size of compressed frames obtained from the same source. It should be appreciated that the present invention is not limited to compressed video frames. Even though the frames have been described above in the context of video data, the frames could be audio data, a mixture of audio and video data, or even another form of data that provides an indication of the occurrence of an event. According to one embodiment, process 900 operates on compressed video frames from the field of view of a camera which has an initial period of substantially no activity in the field of view, followed by a period of activity caused by an object which enters the field of view, causing a change in the size of the compressed video frames. Each of the frames is preferably stored in compressed form as a block of data that has a size associated with it. In process 900, for each frame, it is determined at step 920 whether there are more following frames to process. If not, then process 900 stops at step 922. If there are, the size of adjacent compressed frames is compared in step 902. If the subsequent frame is larger than the previous frame by a certain threshold, as determined in step 904, a react step 905 follows to indicate detection of the occurrence of the event.
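The size-comparison logic of process 900 can be illustrated with the following minimal sketch. The function name, the fractional threshold, and the sample frame sizes are illustrative assumptions only and are not part of the disclosure:

```python
def detect_events(frame_sizes, threshold=0.5):
    """Return indices of compressed frames whose size exceeds the previous
    frame's size by more than `threshold` (a fraction of the previous size).
    Corresponds roughly to steps 902/904 of process 900 (hypothetical)."""
    events = []
    for i in range(1, len(frame_sizes)):
        prev, curr = frame_sizes[i - 1], frame_sizes[i]
        # Step 902: compare adjacent compressed frame sizes.
        if curr - prev > threshold * prev:
            # Step 904/905: size jump exceeds threshold -> event detected.
            events.append(i)
    return events

# A still scene compresses to a roughly constant size; an object entering
# the field of view makes the compressed frames noticeably larger.
sizes = [1000, 1020, 990, 2400, 2500, 1010]
print(detect_events(sizes))  # [3]
```

Only the jump at frame 3 fires; the return to small frames (the object leaving) is not flagged, matching the description that only an increase beyond the threshold indicates an event.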
FIG. 9B illustrates several possible examples of operations that can be performed in reaction to the detection of an occurrence of an event according to an embodiment of the present invention. Specifically, as shown in FIG. 9B, one or more of three possible operations 906, 910, 916 can be performed as a result of an event occurring, as determined by step 905.
If an event has been determined to occur, operation 906 can be instituted, which will cause a sequence of still images, obtained at some interval from each other following the initiation of the event, to be obtained. These still frames represent a significant reduction from the total number of frames that a particular camera has obtained. By setting the interval at between every 4th and 10th frame, preferably every 6th frame, one of the still images obtained will contain a "best view" of the object that has caused the increase in size of the compressed frames. With each still frame obtained, operations can be performed, such as transmitting, via email or other transmission mechanism shown in step 908, each still image to a predetermined location. If desired, image recognition can be performed on each still image, as shown by step 909, indicating that the still frame should be processed for external pattern recognition of an object, such as a person or article, as performed in and described by step 910, detailed hereinafter.
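The still-image sampling of operation 906 amounts to taking every Nth frame starting from the detected event. A short illustrative sketch (function name, frame representation, and parameters are assumptions for demonstration):

```python
def sample_stills(frames, start, interval=6, count=5):
    """Return up to `count` still frames, taken every `interval` frames,
    beginning at the frame where the event was detected (hypothetical
    rendering of operation 906; the patent prefers every 6th frame)."""
    return frames[start::interval][:count]

frames = list(range(100))               # stand-in for a sequence of video frames
print(sample_stills(frames, start=30))  # [30, 36, 42, 48, 54]
```

Each sampled still could then be emailed (step 908) or handed to external pattern recognition (steps 909/910).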
The step 910 external pattern recognition is directed to looking for previously obtained external patterns within either compressed or uncompressed data representing frames. This pattern recognition is shown as being initiated by step 905 after a compressed frame has been acted upon, which is preferred for previously compressed data, since external pattern recognition need not be performed on frames that have roughly the same size, which indicates that no motion is taking place. For frames that have not been compressed, external pattern recognition can occur on the uncompressed data to determine if external patterns of significance exist therein, using techniques such as described in the U.S. Patent Application entitled "Method And Apparatus For Determining Patterns Within Adjacent Blocks of Data" filed on October 31, 2001 and mentioned above. In that case, at least one separate thread is preferably initiated for each different external pattern being searched for, and instead of a reference frame being used to obtain search blocks, the external pattern is used to obtain search blocks that are searched for in the target frame. Of course, other conventional pattern recognition techniques can be used.
The external patterns of interest are contained in a table of preferably both uncompressed and compressed files, and which of the files is used will depend upon whether pattern recognition will be made based upon uncompressed data or compressed data, respectively. The compressed objects of interest are stored using the same compression technique that is used to obtain compression of the frames, thus allowing for more efficient pattern recognition.
If, as a result of the external pattern matching, a match is found to exist, as shown by step 912, a match indication will cause an alert of some type to be generated in step 914. This can occur at either computer 720 or server 730. An alert can be, for example, an indication on the monitor of a security guard or other authority indicating the identity of the person identified by the external pattern recognition or the location of the event, or it could be an audible alert over a wireless radio to a security guard to confront someone with a certain description at a certain location. An example of an alert will be described in connection with FIG. 10 below.
Additionally or alternatively, process 900 allows a security guard observing the field of view of the camera on a monitor to tag or mark an object/person that caused the change in the size of the frames so as to permit easy following of the person as the person navigates in front of the camera and appears on the monitor, as shown by step 916. Different shapes and colors of tags can be used to differentiate between different levels of scrutiny that should be applied to each object. For example, one shaped object, such as a triangle, can be used to designate an external pattern that has a high priority, whereas another shaped object, such as a circle, can be used to designate an external pattern that has a low priority. Similarly, or in combination with the shapes being used, one color, such as red, can be used to designate a high priority, and another color, such as green, can be used to designate a low priority.
 Once an object of interest is located, tagging that object can be used to cause the marker to appear adjacent to the image. Of course, this feature can be turned off if desired. FIG. 10 illustrates in greater detail a technique for facilitating following an image of an object, such as a person, among a group of objects displayed on a monitor by marking the image of the object on the monitor with a tag or mark. FIG. 10 illustrates four screen displays 1002, 1004, 1006, 1008 in which an object caused a change in the size of frames to occur. In screen display 1004 a tag 1001a is attached to the image of person 1001. As the image of person 1001 moves from display 1004, to display 1006, and then display 1008, tag 1001a facilitates observance of where the image of person 1001 is on the display.
Displays 1002, 1004, 1006, and 1008 also show an alerts region in a corner of the display. As described in connection with FIG. 9B, according to one embodiment, a visual alert is generated when the external pattern recognition process 910 produces an indication that a match has occurred, and identifies both the match name, in this case a person's name, and the location where the match occurred, thus describing the identity and location of the object identified.
 FIG. 12 shows a video surveillance system according to an embodiment of the present invention. This system is similar to the one shown in FIG. 11C, with the exception that in addition to controlling the positions of the cameras and supplying the camera signals to the monitor, the controller also manages information in the digital storage device.
Assuming the digital storage device is a hard drive system, many techniques are known for storing data therein. For purposes of discussion, assume that the disk drive stores a table listing all of the data units, e.g., files, stored thereon, the size of each file, its date of creation, its date of last access, and the sector (or other unit as appropriate) at which storage of the data unit begins. Each sector of the data unit includes a link to the next sector of the data unit. Possibly, it also includes a link back to the previous sector. The final sector of the data unit points to a null value as the next sector. When link-backs are included, the first sector's link-back similarly points to a null value.

Assume that the controller has received an image to be stored on the hard disk drive, and that the disk drive is full, or else has less free space than is required for storage of the image data. Some data must be deleted to make room for the new image data. One embodiment of the present invention scores individual data units based on their priority and age, and chooses data units for erasure in the order: low priority, old data; low priority, new data; high priority, old data; high priority, new data. In other words, assuming the table entry for this image associates a 1 or 0 with a Priority parameter of the image, 1 being high priority and 0 being low priority, and associates an age measurement from 0-255 with an Age parameter of the image, 0 being old and 255 being new, the controller can construct a score for the data unit as follows:
 Score = 256 * Priority + Age
 This will provide a score which can range from 0 (low priority, old data) through 255 (low priority, new data) and 256 (high priority, old data) to 511 (high priority, new data). This effectively groups the data units into four non-overlapping groups - high priority, new data; high priority, old data; low priority, new data; low priority, old data - in decreasing order. The controller can then, based on the file sizes associated with the images, select enough low-scoring data units for erasure so that there will be enough room for the new data. The controller can then instruct the hard disk unit to erase the selected files and store the new data therein.
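The scoring and selection just described can be sketched as follows. The record layout, file names, and sizes are hypothetical; only the formula Score = 256 * Priority + Age and the erase-lowest-score-first rule come from the text:

```python
def score(unit):
    # Score = 256 * Priority + Age, Priority in {0, 1}, Age in 0-255 (255 = newest).
    return 256 * unit["priority"] + unit["age"]

def select_for_erasure(units, needed_bytes):
    """Pick the lowest-scoring data units until enough space is freed
    for the incoming image (a sketch of the controller's selection step)."""
    chosen, freed = [], 0
    for unit in sorted(units, key=score):   # lowest score is erased first
        if freed >= needed_bytes:
            break
        chosen.append(unit["name"])
        freed += unit["size"]
    return chosen

units = [
    {"name": "vault_old.img", "priority": 1, "age": 10,  "size": 400},
    {"name": "stair_old.img", "priority": 0, "age": 10,  "size": 300},
    {"name": "stair_new.img", "priority": 0, "age": 250, "size": 300},
]
print(select_for_erasure(units, needed_bytes=500))
# ['stair_old.img', 'stair_new.img'] -- low-priority data goes first
```

High-priority footage (score 266 here) survives because both low-priority units (scores 10 and 250) free enough space first.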
 This order of desirability - high priority, new data; high priority, old data; low priority, new data; low priority, old data - is useful in situations where it is most important to retain image data that has high priority. Other arrangements may be used in other situations - for example, the score
 Score = 256 * Priority + Age
where 1 is high priority and 0 is low priority, and associates an age measurement from 0-255 with the image, 0 being new and 255 being old, will generate scores of desirability in the order low priority, new data; low priority, old data; high priority, new data; high priority, old data, with the last being the most desirable. This ordering might be useful when old data is more important than new data, e.g., in a data archival situation. Alternatively, the score
Score = 256 * Priority + Age
 where 1 is low Priority and 0 is high Priority, and associates an Age measurement from 0-255 with the image, 0 being new and 255 being old, produces an ordering in decreasing desirability of low priority, old data; low priority, new data; high priority, old data; high priority, new data.
 Further, the score
 Score = 256 * Priority + Age
 where 1 is low Priority and 0 is high Priority, and the Age measurement is from 0-255, 0 being old and 255 being new, produces an ordering in decreasing desirability of low priority, new data; low priority, old data; high priority, new data; high priority, old data.
Other arrangements are also possible. For example, rather than the Age parameter representing the age of creation of an image file as above, it could alternatively represent a time since the last access of the image.
 Further, both creation age and access age could be used. Additionally, other parameters could also be used. For example, a score such as
 Score = 512 * Priority + 256 * Subject + Age
 could be used where Subject could be 1 for the Vault and 0 for Stairwell, with Priority being 1 for high priority and 0 for low priority, and Age being 0 for old through 255 for new. This would order scores in the following way, from most desirable to least desirable: high priority, Vault, new; high priority, Vault, old; high priority, Stairwell, new; high priority, Stairwell, old; low priority, Vault, new; low priority, Vault, old; low priority, Stairwell, new; low priority, Stairwell, old. This scoring system would value images from Vault cameras more highly than images from the Stairwell.
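The three-parameter score can be sketched in the same way; the weights and parameter meanings follow the example in the text, while the function name is an assumption:

```python
def extended_score(priority, subject, age):
    """Score = 512 * Priority + 256 * Subject + Age, where Priority is 1 for
    high and 0 for low, Subject is 1 for the Vault camera and 0 for the
    Stairwell, and Age runs from 0 (old) to 255 (new)."""
    return 512 * priority + 256 * subject + age

# Vault footage outranks Stairwell footage at equal priority and age:
print(extended_score(1, 1, 200))  # 968: high priority, Vault, new
print(extended_score(1, 0, 200))  # 712: high priority, Stairwell, new
print(extended_score(0, 1, 10))   # 266: low priority, Vault, old
```

Because the Subject weight (256) exceeds the entire Age range, any Vault image of a given priority always scores above any Stairwell image of the same priority, producing the eight-group ordering described above.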
 Other numbering systems are of course possible. Further, the data units subject to potential erasure need not be limited to those already stored but may additionally include the unit intended to be stored. In this case, the new data unit may be designated for erasure - in which case, no erasure of stored information would be necessary. Also, rather than using two-valued parameters (0 and 1), the system may make use of parameters with more than two values. For example, the Priority parameter may have values for high, medium and low or a range such as 0-10, with 0 being the highest priority and 10 being the lowest priority.
FIG. 13 shows a video surveillance system according to an embodiment of the present invention. This system is similar to the one shown in FIG. 11C, with the exception that in addition to controlling the positions of the cameras and supplying the camera signals to the monitor, the controller also monitors images produced by the cameras for certain conditions as specified by rules set by the video surveillance system operator, and produces alerts, also called alarms, or the like when one of those conditions is met. Alternatively, the monitoring program need not be in the controller, but may be separate and monitor images in the digital storage device after storage.
Regardless of where the monitoring is done, the basis of the monitoring program lies in its pattern recognition of image features. Typically, pattern recognition as used herein is capable of identifying people based on a shot of their face in an image, etc. Further, the pattern recognition system can also resolve objects, such as purses, briefcases, individual cards, and betting chips. The degree of resolution, of course, depends upon many factors, as is known. All such things that might be the object of pattern recognition will sometimes be referred to as entities in the following discussion and claims.
 Pattern recognition can be based on a single image, e.g., "If the custodian is in the vault shot, notify the system operator", or it can be based on multiple images, e.g., "If John Doe and Joe Smith (two suspected bank robbers) are in the lobby shot at time TI and only John Doe is in the lobby shot at a later time T2, then notify the system operator and start looking for Joe Smith."
 The basic pattern recognition expressions are
IF P1 AND P2 AND P3 . . . THEN Q1 AND Q2 AND Q3 . . . (1)
IF P1 OR P2 OR P3 . . . THEN Q1 AND Q2 AND Q3 . . . (2)
For example, in the setting of a casino, the monitor program may have a rule such as
 IF (image shows a person on a list of known card counters) (3)
 THEN (notify system operator)
 where there is one P and one Q. Alternatively, a rule may be of the conjunctive form
IF (image at time T1 shows card counter A) AND (image at time T1 shows card counter B) AND (image at time T2 shows card counter A) AND (image at time T2 does not show card counter B) THEN (notify system operator)
 AND (notify casino security) (4)
 Alternatively, the rule may be a disjunctive one such as
 IF (image shows game dealer in cashier area) OR (image shows
 game dealer in vault area) THEN (notify system operator) (5)
 Some other alarm generation recognition rules that might arise in these situations include:
IF (at time T1 object or person A is in a first image shot) AND (at time T1 object or person B is in the first image shot) AND (at time T2 object or person A is in a second image shot) AND (at time T2 object or person B is not in the second image shot) THEN (generate an alarm) (6)
IF (at time T1 object A is in a first image shot) AND (at time T1 object B is not in the first image shot) AND (at time T2 object A is in a second image shot) AND (at time T2 object B is in the second image shot)
 THEN (generate an alarm) (7)
An example of this rule is at various airport security check-in locations. At an initial entry position, a person A is photographed carrying no objects. At another location, such as an entryway onto an airplane, another photograph shows person A with an object B, which can be used to generate an alarm showing a changed condition.
IF (object or person A is in an image shot of a place where the object or person is not permitted to be) THEN (generate an alarm) (8)
Examples of this rule are:
IF ($1000 betting chip is in area other than high stakes betting area) THEN (generate alarm) (9)
IF (kitchen worker is in area other than kitchen) THEN (generate an alarm) (10)
IF (person or object A is in an image of a scene) AND (person or object A is on a list of people or objects of a certain type) AND (person or object B is also on the list of people or objects of that certain type) THEN (generate an alarm) AND (look for person or object B in the image) (11)
With respect to the above rule (11), a modification of the rule also provides for the inclusion of alternative or alias information concerning a specific person or object. Accordingly, the group information can also include alternative or alias information. Thus, if photographs of the same person exist, one with a beard or sunglasses and one without, then both of the photographs can be associated with that person and searched when searching for that person. The same alternative information can also be stored with respect to objects. A modification of this rule, using only person identities for example purposes, is:
IF (person A is in an image of a scene) AND (person A is on a list of known or suspected terrorists) AND (person B is also on the list of known or suspected terrorists) THEN (generate an alarm) AND (look for person B in the image) (12)
 This pattern recognition process may be done on images in the video surveillance system a single time. Alternatively, it may be done periodically, or on a continuous basis. Further, the rules can have time limits. For example, a rule may specify that if a person A is recognized in an image, the system will search for a person B in images for 15 minutes therefrom and, if person B is found within that time, a certain action will be taken.
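The conjunctive and disjunctive rule forms (1) and (2) above can be sketched as a small rule evaluator. The predicates, the fact representation (sets of recognized entities per time), and the action strings are illustrative assumptions, not part of the disclosure:

```python
def eval_rule(predicates, actions, facts, mode="and"):
    """Fire all `actions` when all (mode='and', form (1)) or any
    (mode='or', form (2)) of the `predicates` hold over the recognized
    `facts`. Returns the list of action results, empty if the rule
    did not fire (a hypothetical sketch of the monitoring program)."""
    results = [p(facts) for p in predicates]
    fired = all(results) if mode == "and" else any(results)
    return [a() for a in actions] if fired else []

# Rule (6): A and B together at T1, but only A at T2 -> alarm.
facts = {"T1": {"A", "B"}, "T2": {"A"}}
alarms = eval_rule(
    predicates=[
        lambda f: "A" in f["T1"],
        lambda f: "B" in f["T1"],
        lambda f: "A" in f["T2"],
        lambda f: "B" not in f["T2"],
    ],
    actions=[lambda: "notify system operator"],
    facts=facts,
)
print(alarms)  # ['notify system operator']
```

The same evaluator handles the disjunctive rules such as (5) by passing mode="or", and time-limited rules could be modeled by predicates that compare timestamps in the facts.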
FIG. 14 illustrates a data gathering system according to one embodiment of the present invention. System 1400 includes player place settings 1410a-g, dealer setting 1412, camera 1414, computer 1415, network 1416 and terminal 1417. Camera 1414 films table 1400 and player place settings 1410a-g and dealer setting 1412 to obtain a stream of digital data that includes the repetitive actions that occur. The repetitive actions are activities that occur in the place settings 1410a-g and dealer setting 1412. The camera 1414 is preferably fixed, and is preferably set at the same zoom position for all comparison operations performed as described herein, so that as much consistency as possible between adjacent frames in the stream of digital data is obtained.
As shown in FIG. 15B, a player place setting 1410a-g has bet area 1502 and play area 1504. During a card game such as blackjack, for example, a player will place bets such as chips or jetons in bet area 1502 and cards of the player's hand in play area 1504. As the card game develops, and a card is added to the hand or the hand is split, activity takes place in play area 1504 and possibly in bet area 1502. Similarly, the dealer's hand will be placed in dealer's hand area 1506. As the card game develops, a card or cards may be added to dealer's hand area 1506. FIG. 16 illustrates a sequence of repetitive actions that are possible in a game played in accordance with an embodiment of the present invention. At 1602a, each of the player place settings 1410a-g is clear of any cards and bets, and at 1602b the dealer's setting 1412 is clear of any cards. As the card game develops from 1602a to 1612a and from 1602b to 1612b, cards are added to play area 1504 and to dealer's hand area 1506, and bets are placed on bet area 1502. The sequence of repetitive actions 1602a-1612a is representative of what happens at one of the player place settings 1410a-g. A sequence similar to that shown in FIG. 16 can occur for other player place settings.
 By taking the stream of data that emerges from the camera and parsing it at computer 1415 to determine the transitions that occur, the number of hands that are being played at a table can be determined. This is possible according to one embodiment of the present invention by using a mask, described further hereinafter, to detect transitions between hands (i.e., the end of a hand or the beginning of a new hand). According to one embodiment, the mask is indicative of a place setting in which no cards and bets are present. However, other indications of the beginning of a hand or the end of a hand can be used. FIG. 15A illustrates a mask for a gaming table according to one embodiment of the present invention. Mask 1500 includes masks for player place settings 1510a-g and mask for dealer place setting 1512.
Computer 1415 stores mask 1500 and uses it to detect transitions between hands. FIG. 17 illustrates the masks for the player place settings and the dealer place setting in greater detail. Mask 1702 is for a player place setting 1510a-g in which no cards and bets are present, and mask 1704 is for a dealer setting 1512 in which no cards are present. By comparing at computer 1415 each of the repetitive actions to the corresponding mask, it can be determined whether a hand has ended and/or a new hand is about to begin. In between hands, the continuous frames that match the mask are not each counted as a separate hand.
 For example, by comparing mask 1702 to 1602a it is clear that a hand is about to begin. If the progress of the game is followed and the stream of data from the camera is parsed and the mask is compared to subsequent frames, it is not until 1610a that mask 1702 and 1610a are identical, indicating that a hand has been completed. At that point, a counter that keeps track of hands being played at table 1400 can be incremented. The process of comparison continues with repetitive actions 1612a and 1612b.
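The hand-counting idea just described can be reduced to a simple state machine: count a hand each time the place-setting region returns to the empty-mask state after activity, without counting consecutive mask-matching frames more than once. In this illustrative sketch, the string values "empty" and "cards" stand in for the real per-frame pattern-matching result against mask 1702:

```python
def count_hands(frames, mask="empty"):
    """Count completed hands: a hand ends when the place-setting region
    matches the empty mask again after activity was observed. Consecutive
    mask-matching frames between hands are not each counted (hypothetical
    rendering of the comparison at computer 1415)."""
    hands, in_hand = 0, False
    for frame in frames:
        if frame != mask:
            in_hand = True          # cards/bets present: a hand is underway
        elif in_hand:
            hands += 1              # region matches the mask again: hand done
            in_hand = False
    return hands

sequence = ["empty", "empty", "cards", "cards", "empty",
            "empty", "cards", "empty"]
print(count_hands(sequence))  # 2
```

The counter increments only on the transition back to the mask state, so the idle frames between hands, which all match the mask, contribute nothing.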
Technically, the above-described pattern comparisons require pattern matching operations to be performed between the mask 1702 and that portion of the digital data stream corresponding to the location of the mask 1702 during the playing of the game of chance. The mask 1702, in such comparison operations, is essentially an external pattern that is being searched for in a particular location of each frame of the stream of digital data representing the image. Conventional pattern recognition systems can be used to operate upon the stream of digital data and obtain the indications of the mask 1702 being within the stream of digital data that is obtained. Further, the mask area can be further required to at least have recognized within it an object of significance, such as a card or a chip, in order to prevent an errant object, such as a hand, that appears in the mask area from incorrectly indicating that a game is underway or has been completed. And while a mask as described is a preferred manner of comparison for pattern recognition purposes, it is not the only manner in which the comparisons can be made. Comparisons between frames can also be made, such that continued durations of an activity can generate a count. For instance, white space on a dealer card area that exists for greater than a predetermined period of time could be used to generate a count, with another count not being generated until after that dealer card area has had cards placed thereon for another predetermined period of time.
The above description was made in the context of one player playing with a dealer. It should be appreciated that more than one player can be playing at one time with the dealer. In the event that multiple players are playing at a table and a player's hand finishes before those of the other players, the hand which finishes is detected when the player clears the player's bet area and play area. Each time it is detected that a player's hand has ended, a hand counter (a software register, not shown) can be incremented at computer 1415. The information about hands played at a gaming table and other information that can be gathered based on the present invention is provided via network 1416 to terminal 1417 or other terminals (not shown). Network 1416 can be the Internet, another distributed network, or a dedicated network.
FIG. 18A illustrates a roulette layout. Layout 1800 is divided into areas for placing bets. The fundamental area of layout 1800 is the alternating area of red and black numbers 1-36 and the digits 0 and 00, which are colored green. The remaining areas are permutations of the fundamental area: areas for even numbers, odd numbers, red numbers, black numbers, first 12 numbers, second 12 numbers, third 12 numbers, first 18 numbers, and last 18 numbers. One can bet on any single number (straight up), a combination of numbers, red, black, odd or even. Each of the one to six players at the roulette table is given different-colored chips so that the numbers on the layout that each player is betting on can be tracked by reference to the color.
 FIG. 18B illustrates a roulette wheel. Wheel 1810 is divided into 38 slots 1812 for a ball to land in, numbered 1 through 36, 0 and 00. Each roulette game begins when the dealer spins the wheel in one direction and then rolls a small ball along the inner edge 1814 of wheel 1810 in the opposite direction. The ball eventually falls into one of the numbered slots 1812. That number is declared the winner for that game.  FIG. 18C illustrates a mask 1820 for a roulette wheel, which can be as simple as tracking the slot area 1812 that the ball rolls into. Mask 1820 is stored in a computer such as computer 1415 of FIG. 14 and is used to detect the transitions between roulette games. During a single game, the wheel is spun, the ball is rolled, and the ball then lands in one of the slots 1812. A camera such as camera 1414 is placed to view the wheel 1810 and is used to capture the repetitive actions of the roulette wheel and ball. In particular, each time the ball rolls into a slot, this indicates that the game is complete, and can be recorded as a repetitive sequence. That camera, or another camera, can also be used to capture the repetitive action of chips being played on the table, with each of the separate betting areas having its own mask area, which can be queried for repetitive activity using the techniques described above. Similarly, the actions of the dealer taking away the chips from losing bets and providing other chips to the winner are repetitive activities that can be used to count the number of games that take place in a given period of time.
 FIG. 19 illustrates a sequence of repetitive actions for a roulette wheel and ball. By comparing, at a computer such as computer 1415, mask 1820 to the repetitive actions 1902-1908, it can be determined that two games have been played. This is known from the sequence of four frames (with other frames in between not shown), since the ball coming to rest in any slot 1812 can be used as an indication that a game has been completed, which action is shown by actions 1902 and 1908. Alternatively, each time the ball appears at the inner edge 1814 of wheel 1810 can be used to indicate that a new game is beginning. By keeping track of the time, the efficiency of the roulette dealer can be tracked. Further, by tracking both the mask 1820 and the mask associated with each separate betting area, it can be verified that the declared winner at the table corresponds to the actual winner as determined by which numbered slot 1812 the ball actually fell into.
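 The cross-check between the slot the ball actually fell into and the bets that were paid could look like the following sketch. The bet-area names, the red-number set, the 00-as-37 encoding, and the subset test are assumptions for illustration; payout odds are ignored.

```python
# Standard red numbers on a double-zero wheel (not from the patent text).
RED = {1, 3, 5, 7, 9, 12, 14, 16, 18, 19, 21, 23, 25, 27, 30, 32, 34, 36}


def winning_bets(slot):
    """Return the set of layout bet areas that win when the ball lands
    in `slot` (0-36, with 37 standing in for 00).  Area names are
    illustrative labels for the layout 1800 betting areas."""
    if slot in (0, 37):
        return {f"straight-{slot}"}
    bets = {f"straight-{slot}"}
    bets.add("red" if slot in RED else "black")
    bets.add("even" if slot % 2 == 0 else "odd")
    bets.add(f"dozen-{(slot - 1) // 12 + 1}")   # first/second/third 12
    bets.add("low" if slot <= 18 else "high")   # first/last 18
    return bets


def payout_consistent(slot, paid_areas):
    """True only if every area the dealer paid is a winner for `slot`."""
    return set(paid_areas) <= winning_bets(slot)
```

The actual slot would come from mask 1820 and the paid areas from the per-betting-area masks; the check then flags any payout to an area the winning number does not cover.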
 The present invention can be adapted for other repetitive games, such as poker, 3-card poker, pai-gow, Caribbean stud, baccarat, and other games.
 Further, reports can be generated based upon the statistics obtained by the present invention. By keeping track of the particular dealer at each table for a predetermined period, the number of hands dealt in the period can be obtained. And by combining periods for a particular dealer, that dealer's average efficiency can be determined. Further, statistics can be kept on a per-table-location basis, for example, so that it can be determined which tables are busiest during various periods of time, which can then allow, again for example, staffing of the busiest tables with the most efficient dealers. FIGS. 20A and 20B illustrate two different reports, directed to a dealer and a table location, respectively, illustrating the statistics obtained over a single shift of a predetermined duration. Added security is also obtained, since verification that payouts were made to actual winners can occur.
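 A per-dealer report of the kind described (total hands and average efficiency across combined periods) could be aggregated roughly as follows. The record format (dealer, table, hours, hands) is a hypothetical stand-in for whatever statistics the system actually stores.

```python
def dealer_report(shift_records):
    """Aggregate per-shift counts into a per-dealer summary.
    `shift_records`: list of (dealer, table, hours, hands) tuples,
    one per dealer-table period."""
    totals = {}
    for dealer, table, hours, hands in shift_records:
        prev_hours, prev_hands = totals.get(dealer, (0.0, 0))
        totals[dealer] = (prev_hours + hours, prev_hands + hands)
    return {
        dealer: {"hands": hands, "hands_per_hour": hands / hours}
        for dealer, (hours, hands) in totals.items()
    }
```

An analogous aggregation keyed on the table field instead of the dealer field would produce the per-table-location report of FIG. 20B.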
 Other repetitive activities can also be monitored using the techniques of the present invention.
 For example, in casinos, money is always counted in the same manner, with money being laid out on a table in exactly the same positions, typically in increments of $20,000 in the United States. Each action can be tracked if desired, such that the position at which each amount of money is laid is tracked, and it is then verified that the next amount of money is laid out at the next expected position. If there is a deviation from this sequence, which can also be time based, an alert can be triggered.
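 The position-sequence and timing check just described might be sketched as below. The event tuples, position labels, and alert strings are illustrative assumptions; in practice each position would be a mask area queried as described earlier.

```python
def check_count_sequence(events, expected_positions, max_gap_seconds):
    """events: list of (timestamp_seconds, position) pairs, one per
    stack of money detected being laid on the count table.  Returns
    alerts for out-of-order positions and for time-based deviations."""
    alerts = []
    for i, (t, pos) in enumerate(events):
        if i >= len(expected_positions) or pos != expected_positions[i]:
            alerts.append(f"unexpected position {pos} at event {i}")
        if i > 0:
            gap = t - events[i - 1][0]
            if gap > max_gap_seconds:
                alerts.append(f"gap of {gap}s before event {i}")
    return alerts
```

An empty return value means the count proceeded through the expected positions within the expected pacing.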
 As another example, cameras in hallways can be used to keep track of the period of time that a laundry cart is in front of a specific room, using a mask that contains a picture of the hallway without a cart in front of the room. When an object appears for a period of time greater than, for example, 3 minutes, the object can be interpreted to be the cart. The period of time until that object is removed from the scene can be used to monitor the amount of time the cart was in front of the room, and therefore obtain an estimate of the time that was needed to clean the room.
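 The dwell-time estimate above reduces to interval bookkeeping over the mask-comparison output, roughly as in this sketch. The sampled (timestamp, present) pairs and the 3-minute constant are assumptions taken from the example in the text.

```python
MIN_CART_SECONDS = 180  # 3 minutes, per the example above


def cart_dwell_times(samples):
    """samples: list of (timestamp_seconds, object_present) pairs from
    comparing each frame against the empty-hallway mask.  Returns the
    dwell time of each presence interval long enough to be interpreted
    as the laundry cart (i.e., an estimate of room-cleaning time)."""
    dwells = []
    start = None
    for t, present in samples:
        if present and start is None:
            start = t                      # object appeared
        elif not present and start is not None:
            if t - start >= MIN_CART_SECONDS:
                dwells.append(t - start)   # long enough: it was the cart
            start = None                   # object removed
    return dwells
```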
 In still another example, the repetitive action of a dealer making money payouts can be used to count the amount of money paid out. Since a camera is typically above a table, a perspective view of the rack that contains the chips used for payouts cannot be obtained. Since, however, it is typical to place a silver coin between every five chips, each appearance of a silver coin in an area corresponding to a particular column of chips being paid out can be used to estimate that five chips, times the value of those chips, has been paid out. Thus, counting the instances of recognizing that silver coin in an area corresponding to that column of chips allows a total estimate of the amount paid out to be obtained. The repetitive action is thus looking for the instances in which silver appears in a mask area corresponding to that column of chips.  Of course, other repetitive activities can also be monitored automatically using the techniques described herein.
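 The payout estimate above reduces to simple arithmetic once the silver-coin appearances have been counted per column: appearances times five chips times the chip value. The per-column dictionaries below are an assumed representation.

```python
CHIPS_PER_MARKER = 5  # a silver coin is placed between every five chips


def estimate_total_payout(coin_counts, chip_values):
    """coin_counts: detected silver-coin appearances per payout column;
    chip_values: chip denomination per column.  Returns the estimated
    total amount paid out across all columns."""
    return sum(
        coin_counts[col] * CHIPS_PER_MARKER * chip_values[col]
        for col in coin_counts
    )
```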
 The embodiments described above have been presented for purposes of explanation only, and the present invention should not be construed to be so limited. Variations on the present invention will become readily apparent to those skilled in the art after reading this description, and the present invention and appended claims are intended to encompass such variations as well.