WO2003096142A2 - A system and method for detection and analysis of data - Google Patents

A system and method for detection and analysis of data Download PDF

Info

Publication number
WO2003096142A2
WO2003096142A2 PCT/IL2003/000385
Authority
WO
WIPO (PCT)
Prior art keywords
complexity
video recording
analysis
values
recording
Prior art date
Application number
PCT/IL2003/000385
Other languages
French (fr)
Other versions
WO2003096142A3 (en)
Inventor
Goren Gordon
Hanna Gordon
Original Assignee
Gordonomics Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US10/146,499 external-priority patent/US20030105736A1/en
Priority claimed from US10/152,309 external-priority patent/US7227975B2/en
Application filed by Gordonomics Ltd. filed Critical Gordonomics Ltd.
Priority to AU2003224408A priority Critical patent/AU2003224408A1/en
Publication of WO2003096142A2 publication Critical patent/WO2003096142A2/en
Publication of WO2003096142A3 publication Critical patent/WO2003096142A3/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 9/00: Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/10: Terrestrial scenes
    • G06V 20/13: Satellite images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Television Signal Processing For Recording (AREA)
  • Automatic Analysis And Handling Materials Therefor (AREA)

Abstract

The present invention relates to the detection and analysis of data in general, and to an apparatus and method for the detection and analysis of files and of various types of recordings, such as audio, photo and video recordings, in particular.

Description

A SYSTEM AND METHOD FOR DETECTION AND ANALYSIS OF
DATA
RELATED APPLICATIONS PCT Application PCT/IL01/01074, for METHOD AND SYSTEM FOR CREATING MEANINGFUL SUMMARIES FROM INTERRELATED SETS OF INFORMATION UNITS, filed November 21, 2001, having publication serial number WO 02/46960 and publication date of 13 June 2002, is related to the present invention and is incorporated by reference herein.
PRIORITY APPLICATIONS Priority is claimed from the following U.S. applications: U.S. 10/146,499, for A SYSTEM AND METHOD FOR ANALYZING AND CLASSIFICATION OF FILES, filed on May 14, 2002; U.S. 10/152,309 for A SYSTEM AND METHOD FOR ANALYZING AERIAL PHOTOS, filed on May 20, 2002; U.S. 10/152,308 for A SYSTEM AND METHOD FOR DETECTION AND ANALYSIS OF AUDIO RECORDINGS, filed on May 20, 2002; U.S. 10/152,310 for A SYSTEM AND METHOD FOR DETECTION AND ANALYSIS OF VIDEO RECORDINGS, filed on May 20, 2002; U.S. 10/153,418 for A SYSTEM AND METHOD FOR MESSAGE INSERTION WITHIN VIDEO RECORDINGS, filed on May 21, 2002.
FIELD OF INVENTION The present invention relates to the detection and analysis of data, in general, and to a system and method for the detection and analysis of files and of various types of recordings, such as audio, photo and video recordings, in particular. BACKGROUND OF THE INVENTION Many files are transferred over the Internet and other communication lines on a daily basis for leisure, business, military and various other purposes. The present accessibility of files received over versatile communication lines by an ever-growing number of users around the world is a great advantage. However, this accessibility occasionally results in files that are addressed to a particular destination reaching other destinations. Consequently, files including private or confidential information can be inspected by unauthorized elements. Such inspection may cause mere inconvenience when the files contain personal private information. Business secrets exposed to competitors or dishonest persons can cause grave financial losses. Furthermore, military secrets inspected by unauthorized persons or hostile elements may damage relationships between states and endanger people's lives. This phenomenon of misdirected files, and its possible consequences, has created the need to encrypt files sent over communication lines.
To a person unaware of its encryption, an encrypted file can appear to be an unencrypted message. Thus, an unintended recipient of a file sent over a communication line can be misled into believing that the received file, as inspected, reveals all the data within it. However, this advantage carries a drawback: the intended recipient of an encrypted file cannot always be aware of having received an encrypted file. A further disadvantage arises in the case of military use: when downloading messages transferred over a communication line between hostile elements, one cannot be aware of the real data or message conveyed between those elements. A further existing need is for the selection of files received by end users over the Internet and other communication lines. There are many types of files that an end user can receive, such as text files, image files and others. An end user can process and manage each type of file in a different manner. Early knowledge of an incoming file's type can save processing time and storage space. While one end user may desire to receive only one particular type of file over the Internet and other communication lines, the connecting communication lines can deliver a variety of undesired files. There is a growing need to enable an end user to pre-select incoming files according to their type.
Another aspect of the present invention relates to aerial photos. Aerial photos are photos taken from an aircraft or a satellite along a typically predetermined flight route, thus providing a perspective view of a usually large area. Consequently, aerial photos provide a considerable amount of information regarding the surface captured by the photograph. Aerial photos capture the structural layout of the surface and the objects located on the surface within the photo's frame. The photos are used for mapping areas for civil engineering purposes such as planning roads, bridges and habitation locations. Another purpose is tracking changes over a period of time by comparing previously taken aerial photos to a current, updated photograph of the same surface location. Correspondingly, aerial photos can track the position of objects located on a surface and their advancement over a period of time. Hence, aerial photos are used for military surveillance of hostile objects such as vehicles, tanks, artillery guns, antiaircraft guns and the like. The accuracy and magnitude of objects that can be detected within an aerial photo are subject to the height of the aircraft and the resolution provided by the camera. The resolution and clarity of a photo are also subject to the camera used and to the clarity of the intermediate medium (e.g. air), which is affected by the time of day, weather and other environmental factors. Aerial photos are usually large and capture a large surface area, and consequently a considerable amount of data. This considerable amount of data raises the problem of comprehending and processing all of it within a reasonable period of time. Furthermore, the objects captured occasionally have low resolution and require expert recognition for determining the character of the object. One manner of interpreting an aerial photo is the manual way.
The manual way requires expert manpower that reviews the photo's frame in detail for pertinent objects and reports its findings to an interested party such as a mapping agency, military intelligence agency, etc. The manual way is time consuming and is generally insufficient and impractical when aerial photos are large and the period of time is limited. Another way of interpreting an aerial photo is by combining the manual way with a Pattern Recognition Method (PRM). According to this way of interpreting, the first stage is executing the PRM and the second stage is executing the said manual way. The PRM is operated within a computerized environment that has the ability to recognize the presence of an object in a photo that can be easily distinguished from its surroundings, such as a ship in the ocean or a regiment of tanks concentrated at one base in a desert and the like. The PRM operating within a computerized environment provides fragments of the aerial photo to be examined according to the manual way. The fragments received from the operation of the PRM reduce the size of the aerial photo to be examined within the next stage. Once the PRM is executed, the second stage of the manual way is executed upon the received fragments. However, the combination of the PRM and the manual way does not provide accurate results, owing to the limitation of the PRM. The limitation of the PRM is due to its method of recognizing important objects within an aerial photo, which often ignores important objects that are not recognized by this method.
A further aspect related to the invention is the field of audio analysis. Listening to people conversing, regardless of the contents of the conversation, can be very informative regarding the state of mind the conversing parties are in. A speaker raising his voice significantly in comparison to his normal manner of speech can indicate stress, anger or other exceptional discomfort. Similarly, a speaker talking significantly more slowly than normal can indicate some distress or fatigue of the conversing speaker. Eavesdropping, though invading the privacy of speakers, is performed for various reasons. One reason is protecting society from suspected felons such as drug traffickers, suicide attackers, etc. Another use of eavesdropping can be the medical monitoring of mentally ill patients. Eavesdropping for military purposes is probably the most widespread use made of eavesdropping. The military necessity for detailed information regarding the arsenal, ammunition and all military equipment, as well as the need to obtain information regarding movements, concepts and thoughts of opposing armies and other defined elements, has led to the extensive use of eavesdropping as a legitimate tool for acquiring information. The extensive use of eavesdropping for military purposes yields a considerable amount of audio recordings. An audio recording received as such has little value prior to processing and extracting the information within the recording. The audio information extraction process can include a number of stages: a preliminary evaluation of the potential importance of the information, and a technical process that can include an audio replay and/or an audio replay enabling the text to be inscribed in writing. The last stage includes the extraction of relevant information from the audio recording. Hence, the process of extracting valuable information is time consuming and requires professional personnel.
Not infrequently, when many hours of audio recordings accumulate, not all recordings can be processed; consequently, valuable information vanishes. Furthermore, the duration of the extraction of valuable information from audio recordings, even when they have not accumulated, can be critical. The loss of valuable information due to the inability to process all audio recordings happens occasionally within military intelligence agencies responsible for eavesdropping. There is a growing need to prevent the loss of valuable information concealed within unprocessed audio recordings. Furthermore, there is a need to evaluate audio recordings on-line for urgent purposes and for information of high importance. There is a further need to provide an immediate alert upon detecting an emergency situation or other urgent situation.
Still another aspect of the present invention relates to the analysis of data within video recordings. Overviewing particular areas using video recording can serve various applications such as supervision, control and the like. Guarding and tracking a zone captured by a video recording, for keeping unauthorized persons from entering a restricted area, is one broadly used application of video recording. Accordingly, one or more persons inspect the video recording displayed on a monitor that can be positioned in a location remote from the supervised area. The use of video recording for supervising, controlling and the like can be applied either on-line or off-line for later inspection. Frequently, the zone captured by one video camera is insufficient, and a number of cameras are required to provide efficient supervision of the observed area. A video camera used for tracking can capture a large zone, providing a wide perspective of the supervised area. Occasionally, the zone recorded has a plain pattern that enables a person viewing the monitor to track protruding changes. Thus, in some cases a change in the pattern can be perceptible immediately due to the sharp and obvious change within the presented video recording. One example may be a video recording of a seldom-used passage being trespassed by an intruder. However, the pattern changes in a video recording may not always be obvious and immediately tracked by a person monitoring the recording, or the recording may not be continuously attended. Furthermore, when the supervised areas are large or demand great skill to monitor, the process can require costly resources in the form of skillful personnel, who are limited by the perceiving ability of the human eye. Consequently, not infrequently, valuable information regarding changes captured by the video recording is left unnoticed and unprocessed, resulting in valuable information being lost.
One use of such supervision is within the military, where tracking the movements of hostile forces is vital, and failing to duly track changes can have harsh consequences. There is a growing need to prevent the loss of valuable information concealed within unprocessed video recordings. Furthermore, there is a need to evaluate video recordings on-line for urgent purposes and for information of high importance. There is a further need to provide an immediate alert upon detecting an emergency situation or other urgent situation. It is evident that there is a clear need to diminish the need for a large number of skillful personnel for supervising and analyzing video recordings.
The present invention further relates to the need to insert data within a video recording. The considerations regarding supervision and control by video recording discussed above apply here as well: valuable information concealed within unprocessed video recordings may be lost, there is a need to evaluate video recordings on-line and to provide immediate alerts, and supervising large areas demands costly skillful personnel limited by the perceiving ability of the human eye. There is therefore also a need to diminish the need for a large number of skillful personnel for supervising and analyzing video recordings.
There is therefore a need in the art for a method and apparatus for analyzing and classifying file types and for distinguishing between encrypted and unencrypted files transferred over communication lines. Furthermore, there is a need for a method and system for recognizing and analyzing important objects within aerial photos, providing rapid and accurate information. There is also a need in the art for a method and system for the detection and analysis of audio recordings, and for a system and method for the detection and analysis of video recordings.
In addition, video recordings have recently come into broad use for leisure, education and many other purposes. Video recordings provide a continuously changing two-dimensional picture over the time axis. Video recordings are projected on television sets as well as on computer screens. The growing market of TV broadcast on demand, and the increase in viewers of movies via the Internet, cable and satellite TV channels, etc., provide possibilities for transferring additional information to spectators. Due to the prohibition in most countries against deliberately implementing subliminal advertisements and commercial messages, the spectators must consciously visualize the messages. Said additional information can be advertisements. Advertisements are added to movies by stopping the video recording (e.g. a movie) in order to broadcast the advertisement. Commercial companies or government agencies that wish to convey their message, included within the advertisement, come up against a mental obstacle regarding the spectator. The mental obstacle of spectators hinders the ability of the advertiser to convey the message to the spectators of the video recordings. Consequently, the content of the advertisement reaches only a segment of the spectators of the video recording, who receive only a fraction of a multifaceted message. The mental obstacle is derived from the distinct separation between the video recording and the transferred message (i.e. the advertisement). A spectator decreases her or his reception concentration during the time interval designated for messages other than the observed video recording. Furthermore, only short, simple, attention-drawing messages are considered for broadcasting. Naturally, such short and simple attention-drawing messages limit the type of messages that can be broadcast. One way of confronting the requirements for broadcasting messages is by broadcasting "fast moving" messages easily viewed by spectators during the broadcast of the video recording.
The "fast moving" messages, which are broadcast concurrently with the broadcasting of the video recording, conflict with the spectator's wish to view the entire broadcast video recording. Undesirably, the "fast moving" messages override parts of the broadcast picture and consequently interrupt the spectator's leisure, learning experience, etc. Furthermore, because they interrupt the inherent broadcast, "fast moving" messages are compelled to be extremely brief. Moreover, "fast moving" messages can create feelings of antagonism towards the messages, which is a most undesirable result. There is therefore a need to facilitate a service that permits messages to be delivered within a video recording without the substantial drawbacks of the prior art. Furthermore, there is a need to provide a service that will enable the forwarding of multifaceted messages that can be received and assimilated by spectators of video recordings. There is a further need to provide a service that enables the broadcasting of video messages within video recordings without overriding important segments of the broadcast video recording. There is also, therefore, a need in the art for a method and system for message insertion within video recordings.
SUMMARY OF THE INVENTION
One aspect of the present invention regards a method for the analysis and classification of electronic data, the method comprising: receiving a file from an input device; calculating the complexity of the received file; classifying the file according to its complexity; displaying the file on a user interface; and storing the file with its given classification. It also regards a system for the analysis and classification of files, the system comprising: an input device for capturing files; a computing device for calculating the complexities of the captured files; a computing device for classifying the complexities of the files, interacting with the storage device, the user interface and the input devices; a storage device for providing the computing device, user interface devices and input devices with relevant information, and for storing the captured, analyzed and classified files; and a user interface device for displaying the files and their classifications to the user and for interaction between the user and the system.
A second aspect of the present invention regards a method for the analysis of aerial photos, the method comprising: receiving an aerial photo; calculating complexity values of the received aerial photo; sorting the complexity values of the aerial photo; and displaying the aerial photo together with an analysis of the complexity values. The method further comprises comparing the complexity values of at least two aerial photos. The same aspect of the present invention regards a system for the analysis of aerial photos, the system comprising: an input device for receiving aerial photos; a computing device for calculating complexity values of the captured aerial photos; a computing device for sorting the complexity values of the aerial photos; and a storage device for storing an internal database. The system further comprises a comparator device.
A third aspect of the present invention regards a method for the detection and analysis of audio recordings, the method comprising: receiving an audio recording; calculating complexity values of the received audio recording; comparing the complexity values within the audio recording; and displaying an analysis of the audio recording on a user interface. The same aspect regards a system for the detection and analysis of audio recordings, the system comprising: an input device for receiving the audio recording; a computing device for calculating complexity values of the received audio recording; a comparator device for comparing the complexity values of the audio recording; and a storage device for storing an internal database.
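As a rough illustration of comparing complexity values within a single audio recording, the sketch below splits a sample stream into non-overlapping windows, scores each window with a crude stand-in complexity measure (the count of distinct sample values — the patent's actual CDA measure is defined in PCT/IL01/01074 and is not reproduced here), and flags windows that deviate from the recording's mean, analogous to detecting an exceptional speech segment. The window size and deviation factor are illustrative assumptions, not parameters the patent specifies.

```python
def window_complexity(samples, win):
    """Distinct-value count per non-overlapping window: a crude stand-in for the
    per-segment complexity values computed by the patent's CDA engine."""
    return [len(set(samples[i:i + win])) for i in range(0, len(samples) - win + 1, win)]

def flag_segments(samples, win, factor=1.5):
    """Indices of windows whose complexity deviates from the recording's mean by
    more than `factor`: candidate segments for an alert or closer inspection."""
    vals = window_complexity(samples, win)
    mean = sum(vals) / len(vals)
    return [i for i, v in enumerate(vals) if v > factor * mean or factor * v < mean]
```

In this outline the comparator device of the claimed system corresponds to `flag_segments`, which compares each window's value against a statistic of the recording as a whole rather than against an external reference.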
A fourth aspect of the present invention regards a method for the detection and analysis of video recordings, the method comprising: receiving a video recording; calculating complexity values of the received video recording; comparing the calculated complexity values with reference complexity values; and displaying an analysis of the video recording on a user interface. The reference complexity values are internal complexity parameters, or complexity values calculated for a video recording recorded at a predetermined time. The same aspect of the invention regards a system for the detection and analysis of video recordings, the system comprising: an input device for receiving the video recording; a computing device for calculating complexity values of the received video recording; a comparator device for comparing the complexity values of the video recording; and a storage device for storing an internal database. A fifth aspect of the invention regards a method for the insertion of messages within a video recording, the method comprising: receiving a message; calculating complexity values of the received message; storing the message within a database; receiving a video recording; calculating complexity values of the received video recording; inserting the messages within the video recording; and displaying the video recording with the inserted messages. The received messages can be video messages. The same aspect regards a system for the insertion of messages within a video recording, the system comprising: an input device for receiving the video recording and the messages; a computing device for calculating complexity values of the received video recording and messages; an insertion computing device for inserting the messages within the video recording; and a storage device for storing commercials.
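The fourth aspect's comparison of calculated complexity values against stored reference values reduces, in outline, to the sketch below. The per-frame complexity values, the tolerance and the alert policy are all assumptions for illustration; the patent itself leaves the complexity computation to the CDA of PCT/IL01/01074.

```python
def detect_changes(frame_complexities, reference, tol):
    """Indices of frames whose complexity value deviates from the stored
    reference complexity value by more than tol."""
    return [i for i, (c, r) in enumerate(zip(frame_complexities, reference))
            if abs(c - r) > tol]

def alert(frame_complexities, reference, tol):
    """True when any frame deviates: a basis for the 'immediate alert' the
    invention calls for on detecting an emergency or other urgent situation."""
    return bool(detect_changes(frame_complexities, reference, tol))
```

The reference sequence here plays the role of either the internal complexity parameters or the complexity values of a recording made at a predetermined time, per the two alternatives stated above.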
BRIEF DESCRIPTION OF DRAWINGS Fig. 1 depicts a block diagram illustrating the process executed by the encryption and analysis classification system in accordance with a preferred embodiment of the present invention.
Fig. 2 depicts a screen shot presenting the unsorted incoming file column list and the sorted incoming files column list in accordance with a preferred embodiment of the present invention. Fig. 3 depicts a block diagram illustrating the object recognition analysis system in accordance with a second preferred embodiment of the present invention.
Fig. 4 depicts a block diagram illustrating the audio detection and analysis system and method in accordance with a third preferred embodiment of the present invention.
Fig. 5 depicts a block diagram illustrating the video detection and analysis system and method in accordance with a fourth preferred embodiment of the present invention. Fig. 6 depicts a block diagram illustrating the method and system for message insertion within video recordings in accordance with a fifth preferred embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION The present invention provides an encryption analysis and classification system (EACS) for analyzing and classifying files received by the EACS. The present invention makes use of the complexity data analysis (CDA) method and system presented within PCT Application PCT/IL01/01074, a patent application related to the present invention, which is incorporated herein by reference. Thus, the present invention provides accurate analysis and classification, defining for each file its type, whether it is encrypted and, if it is encrypted, the encryption level, using the CDA. The use of the CDA for the analysis and classification of files and their level of encryption is made possible by exploiting a characteristic attribute of all files transferred over communication lines: different file types have different levels of complexity. This characteristic attribute is detectable by the EACS. Furthermore, encrypted files differ from unencrypted files by having a substantially more complex structure, which is likewise detectable by the EACS.
The complexity value calculated by the EACS is used for the classification of files within the EACS. The files received as input to the EACS are analyzed and classified and are provided as output of the EACS. The complexity value given to each file is calculated using the complexity engine within the EACS (according to PCT Application PCT/IL01/01074). The complexity engine within the EACS provides each file with complexity values. The complexity values of files are computed using parameters pre-inserted into the EACS complexity engine database. According to one embodiment, said parameters can provide a complexity value for a text file by treating each byte as a letter and calculating the complexity over the file using a mean complexity, other complexity statistics, etc. Classification of files is performed by the EACS by comparing threshold parameters in the internal database to the received complexity values of the files. Thus, a received complexity value is classified according to the range of threshold values within the EACS. According to one embodiment, an encrypted text file will be distinguished from the same unencrypted text file by the complexity value given by the EACS complexity engine. Consequently, the EACS is applied according to the present invention to sort incoming files received over the Internet or other communication lines. One skilled in the art can appreciate that in a similar manner the EACS can analyze and classify image files, text files and the like. The EACS will be better understood with reference to Fig. 1.
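The threshold-comparison step described above can be sketched as follows. The patent's complexity measure is defined in PCT/IL01/01074 and is not reproduced in this document, so Shannon byte entropy is substituted here as a stand-in complexity value; the `classify` function and its threshold labels are illustrative assumptions, not the patent's actual internal-database parameters. The sketch nonetheless captures the stated principle: encrypted data has a substantially more complex structure (entropy near the maximum of 8 bits per byte) than the same data unencrypted.

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte (0..8).
    Stand-in for the complexity value of the EACS complexity engine."""
    n = len(data)
    if n == 0:
        return 0.0
    counts = Counter(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def classify(data: bytes,
             thresholds=((7.5, "encrypted/compressed"),
                         (4.5, "binary"),
                         (0.0, "plain text"))) -> str:
    """Map a complexity value onto labelled threshold ranges, mirroring the
    comparison of complexity values against internal-database thresholds."""
    h = byte_entropy(data)
    for lower, label in thresholds:
        if h >= lower:
            return label
    return "unknown"
```

In this analogy the `thresholds` tuple plays the role of the internal database 70, and `classify` the role of the classification device 80.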
Fig. 1 depicts a block diagram illustrating the process executed by the EACS 10. The EACS 10 consists of an input device 20, a user interface 40, an external database 50, an output device 60, an internal database 70, a complexity engine 30 and a classification device 80. The input device 20 is a device for capturing files. One example of an input device 20 can include a computing device including a browser connected to a communication device that can be connected to a data communication network, such as the Internet or other communication lines, that provides the transfer of files in digital form. The input device 20 transfers the file to the computing device serving as the complexity engine 30, which calculates the complexity of the received files. The complexity engine 30 is illustrated and explained within PCT Application PCT/IL01/01074, incorporated by reference into the present invention. The classification device 80 is a computing device that compares the complexity parameter values of the files to those within the internal database 70. The classification device 80 includes a classification handler (not shown) and is connected to the internal database 70, which contains the parameters to be compared with the complexity value given to a file by the complexity engine 30. After the classification device 80 performs the said comparison, the said file receives a classification number. The classification number given by the classification device 80 is used for storing the said file in the external database 50, and likewise for storing the said file within the internal database 70. The incoming files and their classification numbers can be presented at the user interface 40 for display. The user interface 40 can be a screen display unit or any other display unit.
The user interface 40 can include an input device (not shown) for adding and modifying the parameters and data required for the internal database (not shown) of the complexity engine 30, and for the modification of the internal database 70 of the classification device 80.
One preferred embodiment is depicted within Fig. 2. Fig. 2 depicts a screen shot 100 presenting the unsorted incoming files column list 101 and the sorted incoming files column list 102. The sorting of the incoming files within the present embodiment is performed by the EACS. Accordingly, the files received at the input device 20, as illustrated in Fig. 1, have their complexity values calculated within the complexity engine 30. The complexity values received from the complexity engine 30 are classified within the classification device 80 and are compared to thresholds received from the internal database 70, based on previous files (or parts thereof) received within the EACS or on predetermined data inserted by the user. The classification device 80 stores the received files with their calculated complexity values within the external database 50. The classification results received from the classification device 80 present to the user interface 40 the classification of all files according to their complexity calculation. Fig. 2 depicts the results presented to the user on the screen display of the user interface. The incoming files column list 101 is separated from the sorted incoming files column list 102. The sorted files column list 102 is sorted according to the complexity values given within the EACS. The present preferred embodiment provides the possibility of displaying the most "interesting" files in the highlighted files column list 103. The highlighted files column list 103 can present, on the screen display of the user interface, the files that have the highest complexity values.
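The three column lists of Fig. 2 can be mirrored in a short sketch: the unsorted incoming names, the names sorted by descending complexity, and the top entries highlighted as most "interesting". Byte entropy again stands in for the engine's complexity value, and the `top_n` highlight count is an assumed parameter rather than anything the patent specifies.

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; stand-in for the EACS complexity value."""
    n = len(data)
    counts = Counter(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values()) if n else 0.0

def sort_incoming(files, top_n=1):
    """Given {name: content}, return the three lists of Fig. 2: unsorted names,
    names sorted by descending complexity, and the top_n highlighted names."""
    ranked = sorted(files, key=lambda name: byte_entropy(files[name]), reverse=True)
    return list(files), ranked, ranked[:top_n]
```

A high-entropy file (e.g. an encrypted payload) lands at the head of the sorted and highlighted lists, which is exactly the behaviour the highlighted files column list 103 is described as providing.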
The present invention also provides an object recognition analysis system and method (ORAS) for recognizing and analyzing areas containing objects generally, and objects within aerial photos received by the ORAS specifically. The present invention provides the use of the complexity data analysis (CDA) system and method presented within PCT Application PCT/IL01/01074, a related patent application to the present invention, which is incorporated herein by reference. Thus, the ORAS provides the ability to recognize and analyze the existence of areas with objects, and of objects themselves, using the CDA. Furthermore, the ORAS provides the ability to compare two or more aerial photos, or aerial photo fragments, and to analyze changes between said photos. In a further aspect of the present invention, the ORAS may operate in a combined manner with the PRM, either for recognizing and analyzing objects or for tracing changes between two or more aerial photos. The use of the CDA for recognizing and analyzing objects is performed by exploiting characteristic attributes of known objects and landscapes as viewed on an aerial photo. The complexity values of said characteristic attributes are inserted into the internal database of the ORAS, providing threshold values used for comparison and recognition of said objects by the CDA according to PCT Application PCT/IL01/01074. The ORAS can be directed to a particular fragment within an aerial photo and can perform a CDA for providing information, in one case, on whether there was a change in the complexity value of the particular section of the aerial photo in comparison to another and, in another case, to recognize and analyze the existence of objects, and of particular objects, within an aerial photo.
Accordingly, the ORAS can be directed to fragment frames within an aerial photo by a user, be used as a stage following the PRM for recognition and analysis of objects within the fragment frames received from activating the PRM, or as a stage following the activation of any other method.
The ORAS activates complexity calculations on aerial photos, on fragment frames of aerial photos that were selected by a user, on fragments received from executing the PRM, or on fragments received from any other method. After being given a complexity value by the complexity engine within the ORAS (according to PCT Application PCT/IL01/01074), the complexity value is compared, if required, and sorted by the sorting device in accordance with suitable parameters received from the internal database of the ORAS. The ORAS provides recognition and analysis output of objects after the sorting device has processed the complexity metric values given to objects within the fragment of the aerial photo. The ORAS will be better understood in relation to Fig. 3.
Fig. 3 depicts a block diagram illustrating the ORAS, designated 10. The ORAS 10 includes an input device 20, a user interface 30, an external database 80, an output device 60, an internal database 70, a complexity engine 40 and a sorting device 50. The input device 20 is a device for capturing aerial photos. According to one preferred embodiment, the input device 20 can be a scanner and the like. According to the preferred embodiment, the input photo is presented to a user through the user interface 30. The user interface 30 according to the present embodiment can include a screen (not shown) and an input device (not shown) such as a pointing device. The user, according to the present embodiment, can indicate fragments within the said photo for recognition and analysis. Additionally, the user, within the present embodiment, can insert parameters regarding basic definitions, such as what percentage of the photo to present, the type of photos, etc. The parameters are inserted into the internal database 70. The internal database 70 conveys parameters, inserted by the user as well as others (according to PCT Application PCT/IL01/01074), to the complexity engine 40. The complexity engine 40 activates the CDA on the previously indicated fragment frames using parameters received from the internal database 70. The complexity engine 40 computes the complexity value for the photo fragment and produces a complexity metric for each photo (i.e. for every area in the photo there are complexity parameters). The sorting device 50 sorts the areas according to their complexity values and sends the recognized and analyzed relevant areas, within the aerial photo fragment previously indicated by the user, to the user interface 30 for display. Concurrently with receiving the recognized and analyzed relevant areas, the user interface 30 receives the said aerial photo fragment.
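The per-area "complexity metric" described for the ORAS can be sketched, purely illustratively, as tiling a photo into areas and attaching one complexity parameter per area, which the sorting device 50 then orders. Grey-level entropy per tile is a hypothetical stand-in for the CDA of PCT Application PCT/IL01/01074; the block size and function names are assumptions.

```python
# Hypothetical sketch: per-area complexity metric plus sorting device.
# Entropy of grey levels stands in for the patented CDA calculation.
import math
from collections import Counter

def fragment_complexity(pixels) -> float:
    """Stand-in per-area complexity: entropy of grey-level values."""
    n = len(pixels)
    if n == 0:
        return 0.0
    return -sum((c / n) * math.log2(c / n) for c in Counter(pixels).values())

def complexity_metric(photo, block=2):
    """Tile a 2-D photo (list of rows) into block x block areas and return
    {(row, col): complexity} -- one complexity parameter per area."""
    metric = {}
    for r in range(0, len(photo), block):
        for c in range(0, len(photo[0]), block):
            tile = [photo[i][j]
                    for i in range(r, min(r + block, len(photo)))
                    for j in range(c, min(c + block, len(photo[0])))]
            metric[(r // block, c // block)] = fragment_complexity(tile)
    return metric

def sort_areas(metric):
    """Sorting device 50: areas ordered by descending complexity."""
    return sorted(metric, key=metric.get, reverse=True)
```

A uniform tile then scores zero while a varied tile scores high, so the head of the sorted list points at the "interesting" areas of the fragment.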
According to another embodiment, the sorting device 50 can provide particular object recognition, thus presenting the user interface 30 with its analysis of the objects within the fragment. The output device 60 provides the relevant areas and the object recognition and analysis, alongside the previously indicated aerial photo fragment, to the user interface. In the above embodiments, the recognition and analysis results, alongside the relevant aerial photo fragment, are stored within an external database 80 for further evaluation.
In another preferred embodiment, the input device 20, such as a scanner, receives pairs of photos, each aerial photo photographing the same surface location but at a different time. The user interface 30 according to the present embodiment includes the same elements as in the first embodiment above, including a screen (not shown) and an input device (not shown) such as a pointing device. The user views the photo on the user interface 30 and indicates the relevant fragment for analysis and comparison. The user can insert parameters into the internal database 70 through the user interface 30. The parameters can regard basic definitions, such as what percentage of the photo to present, the type of photos, etc. The internal database 70 sends the complexity engine 40 the appropriate calculation parameters for activating a complexity calculation for each photo. The complexity engine 40 computes the complexity value of the photos and produces a complexity metric for each photo (i.e. each area within the photo has a complexity parameter). The complexity engine 40 calculates the difference between the complexity values of each area within the pair of photos, thus producing a difference complexity metric for the pair of photos. The said complexity metric of each photo, along with the photo, is stored within the external database 80 for further evaluation. Additionally, according to the present embodiment, the sorting device 50 sorts the areas within the photos according to their complexity values and sends to the user interface 30 the relevant information, including the relevant fragment and the recognition and analysis of the fragment received from the ORAS. Thus, the user interface 30 displays the relevant areas extracted from both photos and the difference recognized and analyzed by the ORAS.
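The "difference complexity metric" of the pair-comparison embodiment can be sketched as a per-area subtraction of two complexity metrics, each metric being a mapping from area to complexity value. The variance stand-in, the dict representation and the threshold are illustrative assumptions, not the patented CDA.

```python
# Illustrative sketch of the pair-comparison embodiment: per-area
# differences between two photos of the same surface location.

def area_variance(tile):
    """Stand-in per-area complexity: variance of pixel values."""
    n = len(tile)
    mean = sum(tile) / n
    return sum((p - mean) ** 2 for p in tile) / n

def difference_metric(metric_a: dict, metric_b: dict) -> dict:
    """Per-area difference between the complexity metrics of two photos
    photographed at different times; large values flag changed areas."""
    return {area: abs(metric_a[area] - metric_b[area]) for area in metric_a}

def changed_areas(diff: dict, threshold: float) -> list:
    """Areas whose complexity changed by more than a user-set threshold."""
    return [area for area, d in diff.items() if d > threshold]
```

In this sketch an unchanged area yields a near-zero difference, while construction or movement between the two photographing times yields a large one, matching the change-tracing use described above.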
In another embodiment of the present invention there is provided an audio detection and analysis system and method (ADAS) for detecting and analyzing audio recordings received by the ADAS. The present invention provides the use of the complexity data analysis (CDA) method presented within PCT Application PCT/IL01/01074, a related patent application to the present invention, which is incorporated herein by reference. Thus, the present invention detects and analyzes audio recordings by activating the CDA method, providing valuable information prior to performing a time-consuming audio recording processing analysis. The use of the CDA method for detecting and analyzing valuable information in audio recordings is made possible by exploiting a characteristic attribute that each audio sound has, and by the possibility of the ADAS characterizing a normal complexity value for a known speaker within an audio recording. The complexity characteristic attribute value of a known particular speaker enables the ADAS to determine whether a known conversing speaker within any audio recording embraces the normal definition stored within the internal database of the ADAS. Furthermore, the ADAS can perform an on-line detection and analysis of speakers conversing within an audio recording. The ADAS calculates and provides complexity values to voices recorded on the audio recording using the complexity engine as within PCT Application PCT/IL01/01074. The complexity engine, prior to activating its process for providing complexity values, draws relevant parameters from the internal database. A user at the user interface can insert parameters into the internal database. After the calculation of the complexity value of the audio recording, the comparison device compares the complexity value with a parameter that contains the known complexity value of the particular speaker. The ADAS can generate an alarm when the comparison result crosses a threshold provided by the internal database.
The results of the detection and analysis are presented to the user at the user interface and stored within the external database. The output device conveys the audio recording to any predetermined destination, such as an ordinary processing location. One skilled in the art can appreciate that in a similar manner the ADAS can detect and analyze on-line audio recordings as well. The ADAS will be better understood in relation to Fig. 4. Fig. 4 depicts a block diagram illustrating the ADAS, designated 10. The
ADAS 10 includes an input device 20, a user interface 30, an external database 80, an output device 60, an internal database 70, a complexity engine 40 and a comparison device 50. The input device 20 is a device for receiving audio recordings. According to one preferred embodiment, taken from the military intelligence field, the input device 20 can be an audio receiver with a digital converter. According to the preferred embodiment, the recording source is familiar to the user. The user interface 30 according to the present embodiment can include a screen (not shown) and an input device (not shown) such as a keyboard. The user, according to the present embodiment, can indicate the source of the recording and can insert relevant parameters into the internal database 70. The internal database 70 conveys parameters, inserted by the user as well as others (according to PCT Application PCT/IL01/01074), to the complexity engine 40. The complexity engine 40 activates the CDA on the recording and calculates its complexity value using parameters received from the internal database 70. The complexity value of the recording, alongside the recording, is stored within the external database 80. The comparison device 50 compares the complexity value, provided by the internal database 70 as a parameter, with the calculated complexity value received from the complexity engine 40 (e.g. it compares a newly recorded conversation of a person to another known voice sound of the same person, thus alerting if he is excited, calm, etc.). The comparison device 50 further compares the complexity value of the input recording to threshold parameters provided by the internal database 70. The comparison device 50 generates an alert provided to the user interface 30. The comparison device 50 also presents the user interface 30 with statistics of the current recording and other relevant recordings in the external database 80.
One skilled in the art can readily appreciate that the above preferred embodiment can be either on-line or off-line.
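The ADAS comparison step can be sketched, under loudly stated assumptions, as follows: a stand-in complexity value (here, the normalized zero-crossing rate of the waveform, a hypothetical substitute for the CDA of PCT Application PCT/IL01/01074) is compared with the speaker's stored "normal" value, and an alarm condition is raised on a threshold crossing; a two-recording comparison marks the higher-complexity source as more significant.

```python
# Hedged sketch of the ADAS comparison device; the complexity measure
# below is a stand-in, not the patented calculation.

def audio_complexity(samples) -> float:
    """Stand-in complexity: fraction of sign changes in the waveform."""
    if len(samples) < 2:
        return 0.0
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    return crossings / (len(samples) - 1)

def compare_to_speaker(value, known_value, threshold) -> bool:
    """Comparison device 50: alarm when the recording's complexity departs
    from the speaker's stored normal value by more than the threshold."""
    return abs(value - known_value) > threshold

def pick_significant(value_a, value_b) -> str:
    """Two-recording variant: mark the higher-complexity recording as the
    more significant source for on-line listening."""
    return "A" if value_a >= value_b else "B"
```

All names and the threshold semantics are illustrative; in the described system the stored normal value and the threshold both originate in the internal database 70.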
In another embodiment of the present invention, two recordings can be received by the ADAS on-line, having their complexity values calculated in parallel or one after another, as within the previous preferred embodiment. The complexity values provided by the complexity engine 40 for the incoming input recordings are compared to one another within the comparison device 50. The recording having the highest complexity value is marked as a more significant source for listening to on-line, and accordingly an alert is sent to the user interface 30. In accordance with another embodiment of the present invention there is provided a video detection and analysis system and method (VDAS) for detecting and analyzing video recordings received by the VDAS. The present invention provides the use of the complexity data analysis (CDA) method and system presented within PCT Application PCT/IL01/01074, a related patent application to the present invention, which is incorporated herein by reference. Thus, the present invention detects and analyzes video recordings by activating the CDA method and system, providing valuable information prior to performing a time-consuming video recording processing detection and analysis by monitoring persons. The use of the CDA method for detecting and analyzing valuable information in video recordings is made possible by exploiting a characteristic attribute that each video recording has, and by the possibility of the VDAS calculating a complexity value for each video recording. The complexity characteristic attribute value of a known video recording stored within the VDAS enables determining whether the currently calculated complexity of the video recording differs from the known position of objects within the supervised or viewed area. Furthermore, the VDAS can determine the video recording reference for the comparison and can perform an analysis either on-line or off-line.
The method used within the VDAS enables the detection and analysis of changes that cannot be detected by the human eye. Furthermore, the VDAS can accurately estimate the changes between the video recordings compared.
The VDAS calculates and provides complexity values to video recordings using the complexity engine as within PCT Application PCT/IL01/01074. The complexity engine, prior to activating its process for providing complexity values, draws relevant parameters from the internal database. A user at the user interface can insert parameters into the internal database. After the complexity engine calculates the complexity values of the video recording, the comparison device receives the complexity value from the complexity engine. The comparison device compares the complexity value of the video recording input received from the complexity engine with known complexity values relating to video recordings previously inserted into the VDAS. The complexity value to which the current input complexity is to be compared is designated within the internal database. The VDAS can generate an alarm when the comparison result within the comparison device crosses a threshold provided by the internal database. The results of the detection and analysis are presented to the user at the user interface and stored within the external database. The output device conveys the video recording to any predetermined destination, such as an ordinary processing location. One skilled in the art can appreciate that in a similar manner the VDAS can detect and analyze on-line video recordings as well. The VDAS will be better understood in relation to Fig. 5. Fig. 5 depicts a block diagram illustrating the VDAS, designated 10. The
VDAS 10 includes an input device 20, a user interface 30, an external database 80, an output device 60, an internal database 70, a complexity engine 40 and a comparison device 50. The input device 20 is a device for receiving video recordings. According to one preferred embodiment, the input device 20 is a video receiver with a digital converter. According to the preferred embodiment, the recording source is familiar to the user. The user interface 30 according to the present embodiment can include a screen (not shown) and an input device (not shown) such as a keyboard. The user, according to the present embodiment, can indicate the source of the video recording and can insert relevant parameters into the internal database 70. The internal database 70 conveys parameters, inserted by the user as well as others (according to PCT Application PCT/IL01/01074), to the complexity engine 40. The complexity engine 40 activates the CDA on the recording and calculates its complexity value using parameters received from the internal database 70. The complexity engine 40 calculates the complexity of the video recording and sends a complexity metric (i.e. every area within the frame, and along the frames, has a complexity parameter). The complexity value of the video recording, alongside the recording, is stored within the external database 80. The comparison device 50 compares a known complexity value of the area captured by a video recording, determined within the internal database 70 and provided by the external database 80, with the calculated complexity value received from the complexity engine 40 for the same supervised area. The comparison device 50 can provide an alert in case the difference value resulting from the comparison executed within the comparison device 50 crosses threshold data received from the internal database 70. The comparison device 50 generates an alert provided to the user interface 30 with a warning.
The comparison device 50 presents the user interface 30 with the video recording received as input at the input device 20, as well as with statistics relating to the current recording and other relevant recordings stored within the external database 80. One skilled in the art can readily appreciate that the above preferred embodiment can be either on-line or off-line.
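The VDAS comparison described above can be sketched, illustratively, as computing a per-area complexity metric for the current frame, comparing it against the stored reference metric for the same supervised area, and alerting on a threshold breach. The per-area measure here (mean absolute difference of neighbouring pixels) is a hypothetical stand-in for the CDA of PCT Application PCT/IL01/01074.

```python
# Illustrative sketch of the VDAS comparison device; the per-area
# measure is a stand-in for the patented complexity calculation.

def frame_metric(frame):
    """Stand-in per-row complexity: mean absolute neighbour difference."""
    return [sum(abs(a - b) for a, b in zip(row, row[1:])) / max(len(row) - 1, 1)
            for row in frame]

def detect_change(current_metric, reference_metric, threshold) -> bool:
    """Comparison device 50: True (alert) if any area's complexity differs
    from the stored reference by more than the threshold."""
    return any(abs(c - r) > threshold
               for c, r in zip(current_metric, reference_metric))
```

Under this sketch an unchanged supervised area produces no alert, while a new object in the frame raises the local stand-in complexity enough to cross the threshold.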
In an additional embodiment of the present invention, there is provided a method and system for message insertion within video recordings (SMI). The SMI provides the possibility of inserting messages into a video recording in a manner that does not override any important segment of the broadcast video recording. The present invention provides the use of the complexity data analysis (CDA) method and system presented within PCT Application PCT/IL01/01074, a related patent application to the present invention, which is incorporated herein by reference. Thus, the present invention detects and analyzes video recordings by activating the CDA method and system, providing valuable information prior to the insertion of video messages or other types of messages, such as text or pictures. The use of the CDA method and system for detecting and analyzing valuable information in video recordings is made possible by exploiting a characteristic attribute that each video recording has, and by the possibility of the SMI calculating a complexity value for each video recording. The complexity characteristic attribute value of a video recording stored within the SMI, or of an on-line video recording received as input, enables determining the important segments of the video picture and the less important segments of the picture. The CDA method and system used within the SMI provides complexity values to the video recordings and to the messages (i.e. video messages). The first step within the SMI is creating a video messages database (i.e. a commercial advertisement database). The second step within the SMI is determining the unimportant segments within the video recording by calculating the complexity values of the recording. A high complexity value indicates the importance of the segment receiving the complexity value. A low complexity value given to a segment of the video picture indicates that the segment is not important and thus a video message, or any other message, can be inserted.
The CDA calculates the complexity value by calculating a three-dimensional complexity value, including the picture and the time axis. The SMI determines the width and length dimensions, as well as the time dimension, of the unimportant segments within the video recording. Thus, the SMI determines the possible width and length dimensions, as well as the time length, of the inserted message. After performing the insertion by "planting" the video messages within the video recording, the SMI presents the final result at the output device. The preferred embodiment of the present invention relates to the insertion of commercial advertisements within a video recording (i.e. a movie), thus providing at the output device a movie with commercials. The SMI will be better understood in relation to Fig. 6.
Fig. 6 depicts a block diagram illustrating the SMI, designated 10. The SMI 10 includes an input device 20 for receiving commercials, an input device 30 for receiving movies, a user interface 40, an output device 50, a complexity engine 70, a commercials database 60 and a plant device 80. The input device 30 receives digital movies and the input device 20 receives commercials. At the first step, the commercials database 60 is created. The commercials received from the input device 20 have their complexity calculated within the complexity engine 70. The commercials are stored within the commercials database 60, or within a separate database-storing device (not shown), together with the complexity values calculated for the commercials. The commercial is then processed for other relevant parameters, such as colors, within the commercials database 60. The commercials database 60 determines the priority according to data received from an internal database (not shown) within the SMI 10. The next step is the processing of the movie. The movie is received within the SMI 10 by the input device 30. The movie has its complexity values calculated within the complexity engine 70; thus, the unimportant segments within the movie are identified. The movie's complexity values are transferred to the plant device 80. The plant device 80 receives the parameters of the commercials required for insertion within the movie and executes the insertion in the appropriate locations. Thus, the commercials are inserted in accordance with the priority provided by the commercials database 60 and in accordance with thresholds determined by the internal database (not shown) within the SMI 10. The said threshold determines the complexity not to be overridden by a commercial. A user at the user interface 40 can insert values into the internal database (not shown) that will provide threshold values for important complexity values within the movie, the priority of commercials to be inserted, as well as other values.
The user interface 40 can include a pointing device, a keyboard and the like, thus providing a user with the ability to input said values to the SMI 10. The user interface 40 can include a screen for displaying the incoming commercials and movie. The plant device 80 computes all required values received from the commercials database 60 and the complexity engine 70 in order to place the commercials within the movie according to priority, low complexity values (unimportant segments) within the movie, the color contrast of said commercials with unimportant segments of the movie, etc. One skilled in the art can readily appreciate that the above preferred embodiment can be either on-line or off-line.
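The SMI "plant" step can be sketched, again purely illustratively, as marking segments whose (stand-in) complexity falls below the importance threshold as unimportant, and assigning queued commercials to those slots in priority order. The segment names, the variance stand-in, and the threshold are all assumptions for illustration, not the patented CDA or the actual plant device.

```python
# Hedged sketch of the plant device 80: low-complexity segments are
# treated as unimportant and therefore insertable.

def segment_complexity(values):
    """Stand-in complexity of one spatio-temporal segment: variance."""
    n = len(values)
    mean = sum(values) / n
    return sum((v - mean) ** 2 for v in values) / n

def unimportant_segments(segments: dict, threshold: float) -> list:
    """Low complexity marks a segment as unimportant, hence insertable."""
    return [name for name, vals in segments.items()
            if segment_complexity(vals) < threshold]

def plant(movie_segments: dict, commercials: list, threshold: float) -> dict:
    """Plant device 80: assign queued commercials (in priority order) to
    the unimportant segments of the movie."""
    slots = unimportant_segments(movie_segments, threshold)
    return dict(zip(slots, commercials))
```

In the described system the segments would carry width, length and time-length dimensions as well, so that each commercial is matched to a slot it fits; that bookkeeping is omitted from this sketch.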
The person skilled in the art will appreciate that what has been shown is not limited to the description above. Those skilled in the art to which this invention pertains will appreciate many modifications and other embodiments of the invention. It will be apparent that the present invention is not limited to the specific embodiments disclosed and those modifications and other embodiments are intended to be included within the scope of the invention. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.

Claims

CLAIMS:
1. A method for analysis and classification of electronic data, the method comprising: receiving a file from an input device; calculating the complexity of the file received; classifying the complexity of the file; displaying the file on a user interface; and storing the file and its given classification.
2. A system for analysis and classification of files, the system comprising: an input device for capturing files; a computing device for calculating complexities of the captured files; a computing device for classifying the complexities of files, interacting with a storage device, a user interface and input devices; a storage device for providing the computing device, user interface devices and input devices with relevant information, and for storage of captured, analyzed and classified files; and a user interface device for displaying files and their classifications to the user and for interaction of the user with the system.
3. A method for analysis of aerial photos, the method comprising: receiving an aerial photo; calculating complexity values of the aerial photo received; sorting the complexity values of the aerial photo; and displaying the aerial photo and an analysis of the complexity values.
4. The method of claim 3 further comprising comparing complexity values of at least two aerial photos.
5. A system for analysis of aerial photos, the system comprising: an input device for receiving aerial photos; a computing device for calculating complexity values of captured aerial photos; a computing device for sorting complexity values of aerial photos; and a storage device for storing an internal database.
6. The system of claim 5 further comprising a comparator device.
7. A method for detection and analysis of an audio recording, the method comprising: receiving an audio recording; calculating complexity values of the audio recording received; comparing the complexity values within the audio recording; and displaying an analysis of the audio recording on a user interface.
8. A system for detection and analysis of an audio recording, the system comprising: an input device for receiving an audio recording; a computing device for calculating complexity values of the received audio recording; a comparator device for comparing complexity values of the audio recording; and a storage device for storing an internal database.
9. A method for detection and analysis of a video recording, the method comprising: receiving a video recording; calculating complexity values of the video recording received; comparing the calculated complexity values with reference complexity values; and displaying an analysis of the video recording on a user interface.
10. The method of claim 9 wherein the reference complexity values are internal complexity parameters.
11. The method of claim 9 wherein the reference complexity values are complexity values calculated for a video recording recorded at a predetermined time.
12. A system for detection and analysis of a video recording, the system comprising: an input device for receiving a video recording; a computing device for calculating complexity values of the received video recording; a comparator device for comparing complexity values of the video recording; and a storage device for storing an internal database.
13. A method for insertion of messages within a video recording, the method comprising: receiving a message; calculating complexity values of the message received; storing the message within a database; receiving a video recording; calculating complexity values of the received video recording; inserting messages within the video recording; and displaying the video recording with the inserted messages.
14. The method of claim 13 wherein the messages are video messages.
15. A system for insertion of messages within a video recording, the system comprising: an input device for receiving a video recording and messages; a computing device for calculating complexity values of the received video recording and messages; an insertion computing device for inserting messages within the video recording; and a storage device for storing commercials.
PCT/IL2003/000385 2002-05-14 2003-05-13 A system and method for detection and analysis of data WO2003096142A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2003224408A AU2003224408A1 (en) 2002-05-14 2003-05-13 A system and method for detection and analysis of data

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
US10/146,499 US20030105736A1 (en) 2001-11-20 2002-05-14 System and method for analyzing and classification of files
US10/146,499 2002-05-14
US10/152,310 2002-05-20
US10/152,309 US7227975B2 (en) 2001-11-20 2002-05-20 System and method for analyzing aerial photos
US10/152,309 2002-05-20
US10/152,310 US6928228B2 (en) 2001-11-20 2002-05-20 System and method for detection and analysis of video recordings
US10/152,308 US7069218B2 (en) 2001-11-20 2002-05-20 System and method for detection and analysis of audio recordings
US10/152,308 2002-05-20
US10/153,418 US7142774B2 (en) 2001-11-20 2002-05-21 System and method for message insertion within video recordings
US10/153,418 2002-05-21

Publications (2)

Publication Number Publication Date
WO2003096142A2 true WO2003096142A2 (en) 2003-11-20
WO2003096142A3 WO2003096142A3 (en) 2012-09-13

Family

ID=29424837

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2003/000385 WO2003096142A2 (en) 2002-05-14 2003-05-13 A system and method for detection and analysis of data

Country Status (2)

Country Link
AU (1) AU2003224408A1 (en)
WO (1) WO2003096142A2 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5235510A (en) * 1990-11-22 1993-08-10 Kabushiki Kaisha Toshiba Computer-aided diagnosis system for medical use
US5995152A (en) * 1996-01-26 1999-11-30 Dell Usa, L.P., A Texas Limited Partnership Video monitor which superimposes a high frequency periodic wave over signals to vertical deflection plates to increase display quality in low resolution modes
US20020009146A1 (en) * 1998-03-20 2002-01-24 Barbara A. Hall Adaptively encoding a picture of contrasted complexity having normal video and noisy video portions
US6563532B1 (en) * 1999-01-05 2003-05-13 Internal Research Corporation Low attention recording unit for use by vigorously active recorder
US6604126B2 (en) * 2001-04-11 2003-08-05 Richard S. Neiman Structural data presentation method
US6834120B1 (en) * 2000-11-15 2004-12-21 Sri International Method and system for estimating the accuracy of inference algorithms using the self-consistency methodology

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11101876B2 (en) * 2016-02-29 2021-08-24 Urugus S.A. System for planetary-scale analytics
CN110214451A (en) * 2016-12-30 2019-09-06 社交媒体广播公司 Video content distribution platform integrated with advertisement and reward collection mechanism
CN110214451B (en) * 2016-12-30 2022-06-03 社交媒体广播公司 Video content distribution platform integrated with advertisement and reward collection mechanism

Also Published As

Publication number Publication date
WO2003096142A3 (en) 2012-09-13
AU2003224408A1 (en) 2003-11-11
AU2003224408A8 (en) 2012-10-04

Similar Documents

Publication Publication Date Title
US7714878B2 (en) Apparatus and method for multimedia content based manipulation
US20200074156A1 (en) Emotion detection enabled video redaction
US20150070506A1 (en) Event triggered location based participatory surveillance
Slavkovikj et al. Review of wildfire detection using social media
US20150294233A1 (en) Systems and methods for automatic metadata tagging and cataloging of optimal actionable intelligence
US11037604B2 (en) Method for video investigation
CN110263613A (en) Monitor video processing method and processing device
Werbach Sensors and sensibilities
KR20200078155A (en) recommendation method and system based on user reviews
Arikuma et al. Intelligent multimedia surveillance system for safer cities
Núñez et al. Computational collective intelligence
WO2003096142A2 (en) A system and method for detection and analysis of data
Li et al. RIMS: A Real-time and Intelligent Monitoring System for live-broadcasting platforms
Bouma et al. Integrated roadmap for the rapid finding and tracking of people at large airports
Cecil Televised images of jail: Lessons in controlling the unruly
KR102058723B1 (en) System for building a database by extracting and encrypting video objects and its oontrol method
Durova et al. TooManyEyes: Super-recogniser directed identification of target individuals on CCTV
Omezi et al. Proposed forensic guidelines for the investigation of fake news
Gulzar et al. Surveillance privacy protection
CN111160946A (en) Advertisement accurate delivery privacy protection method and system based on video technology
Kaneko et al. AI-driven smart production
KR100642888B1 (en) Narrative structure based video abstraction method for understanding a story and storage medium storing program for realizing the method
Núñez et al. Computational Collective Intelligence: 7th International Conference, ICCCI 2015, Madrid, Spain, September 21-23, 2015, Proceedings, Part II
US20210350138A1 (en) Method to identify affiliates in video data
Bowman et al. Content-Based Multimedia Analytics for Big Data Challenges

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PH PL PT RO RU SC SD SE SG SK SL TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 EP: The EPO has been informed by WIPO that EP was designated in this application
122 EP: PCT application non-entry into the European phase
NENP Non-entry into the national phase

Ref country code: JP

WWW WIPO information: withdrawn in national office

Country of ref document: JP