WO2009066994A2 - Method for detecting unattended object and removal of static object - Google Patents

Method for detecting unattended object and removal of static object

Info

Publication number
WO2009066994A2
WO2009066994A2 (PCT/MY2008/000160)
Authority
WO
WIPO (PCT)
Prior art keywords
unattended
owner
event
image
digital signal
Prior art date
Application number
PCT/MY2008/000160
Other languages
French (fr)
Other versions
WO2009066994A3 (en)
Inventor
Hock Woon Hon
Original Assignee
Mimos Berhad
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mimos Berhad filed Critical Mimos Berhad
Publication of WO2009066994A2 publication Critical patent/WO2009066994A2/en
Publication of WO2009066994A3 publication Critical patent/WO2009066994A3/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects


Abstract

A surveillance method for the automatic detection of an unattended object or the removal of a stationary object via surveillance image analysis. The method operates by analyzing image data received from surveillance cameras and providing, in real time, a set of alarm messages to security personnel. The invention is useful for detecting unattended objects or the removal of stationary objects.

Description

Method For Detecting Unattended Object And Removal Of Static Object
Field of invention
The present invention relates to video surveillance systems in general, and more particularly to a detection system that is fully automated to detect unattended objects or removal of stationary objects.
Background of Invention
One of the most critical surveillance challenges today is the timely and accurate detection of suspicious objects, such as unattended luggage, illegally parked vehicles, suspicious persons, and the like, in or near airports, train stations, government buildings and in general all crowded public places.
Present surveillance systems are designed to assist humans, i.e. security officers. Such a system may consist of several surveillance cameras, such as CCTV cameras, and/or detecting devices such as motion detectors. However, the images provided by the cameras need to be constantly monitored by a human in order to identify any activity that is amiss.
With the proliferation of closed circuit television (CCTV) cameras, it is not uncommon for a security control room to have 50 to 200+ cameras to monitor for a single compound. Research has shown that a human can effectively watch 9-12 cameras for only 15 minutes. Since these surveillance systems are based on human intervention, the problems related to natural human-specific processes, such as fatigue, lack of concentration, and the like, remain a weakness.
Therefore, there arises the need for a surveillance system in which fewer cameras are involved and the system is automatically capable of detecting an unattended object or a static object being removed from its predetermined space.
Summary of Invention
One aspect of the present invention is to apply image processing techniques, sequences and processes to a surveillance camera system to detect unattended objects or the removal of static objects. It is also able to detect objects that have been unintentionally left unattended by the owner, for instance an object that fell off without the owner noticing.
Another aspect of the present invention is to detect the presence of an unattended object when other humans, but not its owner, are in close proximity to the object. This is a different approach from previous arts, where detection of an unattended object is limited to a static object with no human presence in its close proximity.
Yet another aspect of the present invention is to be able to track and detect the owner of the unattended object by employing one or a combination of known detection methods. The tracking of the owner is not limited to the field of view of a single camera, but is expanded to the fields of view of all the cameras present in an area where the detection system as described in the present invention is used.
Brief Description of Drawings
Figure 1 Panoramic field of view
Figure 2 Data flow in the present invention
Figure 3 Process to detect unattended object
Detailed Description of Invention
The present invention discloses a method to detect an unattended object using an image processing technique. The system integrates several image processing techniques, sequences and processes into a surveillance camera system.
Current surveillance systems are hampered for many reasons, chiefly their dependency on humans. Other reasons include the requirement for multiple cameras.
In a preferred embodiment of the present invention, a surveillance system is presented that can automatically detect an unattended object which has been left behind by its owner beyond a predetermined distance and alert the security guard. The system uses image sources such as surveillance cameras in collaboration with the image processing sequence. The image data from the image source is used as input to the unattended object detection process. The type of image depends on the type of image source used: for example, a panoramic image is obtained when an omnidirectional lens camera is used, and visible light video data is received when a visible light camera is used. In an instance where a thermal camera is used, the image data would be a night vision image.
For the purpose of discussing the invention, panoramic image data is used as the input image. Figure 1 shows the relationship between the panoramic object space and the resultant image space mapping. Objects that fall within the panoramic field of view will be in the field of view of the camera. The field of view provided by the panoramic camera is continuous and fully warped. Objects moving around within the field of view of the camera can be viewed from different parts of the image.
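The patent does not give the mapping equations for Figure 1, but the relationship between the warped omnidirectional object space and a rectangular panoramic image space is commonly realised as a polar-to-Cartesian unwarping. The Python/OpenCV sketch below is one illustrative way to build such a lookup; the centre, inner radius, outer radius and output size are hypothetical parameters that would come from calibrating the actual omnidirectional lens camera, not values taken from the patent.

    import cv2
    import numpy as np

    def unwarp_panorama(omni_img, center, r_min, r_max, out_w=1024, out_h=256):
        """Map the annular (donut-shaped) omnidirectional image into a
        rectangular panoramic strip via a polar-to-Cartesian lookup table."""
        xs = np.arange(out_w)
        ys = np.arange(out_h)
        theta = 2.0 * np.pi * xs / out_w                  # column -> viewing angle
        radius = r_min + (r_max - r_min) * ys / out_h     # row -> radial distance
        theta_grid, r_grid = np.meshgrid(theta, radius)   # shape (out_h, out_w)
        map_x = (center[0] + r_grid * np.cos(theta_grid)).astype(np.float32)
        map_y = (center[1] + r_grid * np.sin(theta_grid)).astype(np.float32)
        return cv2.remap(omni_img, map_x, map_y, cv2.INTER_LINEAR)

Objects moving through the camera's field of view then appear in different columns of the unwarped strip, matching the behaviour described for Figure 1.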
Images from the surveillance camera are fed to the detection process flow. The unattended object detection process is implemented using specifically written software. The detection process is incorporated into a surveillance system, which has one or more surveillance cameras. The operation and data flow of the present invention are described herein. Figure 2 describes the data flow in the present invention. Figure 3 shows the process to detect an unattended object. A preferred embodiment of the present invention will be described with reference to Figures 2 and 3.
Video signals from the surveillance camera are the input for the unattended object detection process. The video signal can be an analog video signal from standard cameras or a digital video signal from an IP based camera. When IP based cameras are used, the processing of the system can be applied to a web-based application, in which the images can be viewed from anywhere in the world.
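As an illustration only, the sketch below opens either kind of source with OpenCV's VideoCapture: a device index standing in for an analog camera behind a frame grabber or capture card, and an RTSP URL standing in for an IP based camera. The device index and URL are placeholders, not values from the patent.

    import cv2

    # Hypothetical sources: index 0 stands in for a frame grabber / capture
    # card carrying an analog camera; the RTSP URL stands in for an IP camera.
    ANALOG_SOURCE = 0
    IP_SOURCE = "rtsp://192.168.1.10/stream1"   # placeholder address

    def open_source(use_ip_camera: bool) -> cv2.VideoCapture:
        source = IP_SOURCE if use_ip_camera else ANALOG_SOURCE
        cap = cv2.VideoCapture(source)
        if not cap.isOpened():
            raise RuntimeError(f"Cannot open video source: {source}")
        return cap

    cap = open_source(use_ip_camera=False)
    ok, frame = cap.read()   # the frame is already digital data once read
    cap.release()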
An analog or digital video signal is used as input to the unattended object detection process (A). The signal is captured by a special device, for example a frame grabber or DSP. These devices can be used to capture multiple video inputs from cameras and digitize and store these signals in digital data form (B). However, when an IP based camera is used, the data would already be in digital format; therefore, this step would not be implemented. Multiple video signals can be captured by multiple input capture devices in a simultaneous capture sequence (2). The captured digital image is required to be transformed in the correct manner before it can be registered into a panoramic image (3). The transformed image data needs to be enhanced in terms of noise level and visual quality (4). The transformed and registered panoramic image data (C) is fed both to the frame synchronizer (10) and to the Gaussian mixture based background subtraction method (8) before input to morphological filters that fill up voids in the foreground objects. This method records the activity of each pixel in the image, and background subtraction is achieved by analyzing each pixel (8); this module is assisted by uneven illumination correction (5), trailing reduction (6), and shadow reduction (7) for the purpose of:
• reducing and evening out the illumination from different directions;
• reducing the trailing effect caused by the moving object as a result of averaging and a low learning rate; and
• reducing the shadow effect that might present a problem in background determination and estimation.
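The patent does not name a specific implementation of the Gaussian mixture background model, the morphological void filling, or the shadow reduction. A minimal sketch of that pipeline, assuming OpenCV's MOG2 subtractor as the mixture model and a morphological closing to fill foreground voids, might look like the following; the history, variance threshold, learning rate and kernel size are illustrative values only.

    import cv2

    # MOG2 is one standard Gaussian-mixture background model; detectShadows=True
    # marks shadow pixels (value 127) so they can be dropped, approximating the
    # shadow reduction step; a low learning rate limits the trailing effect.
    bg_model = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                  detectShadows=True)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

    def extract_foreground(frame):
        fg_mask = bg_model.apply(frame, learningRate=0.005)
        fg_mask[fg_mask == 127] = 0        # discard shadow pixels
        # Morphological closing fills small voids inside the foreground blobs.
        fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_CLOSE, kernel, iterations=2)
        return fg_mask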
The moving foreground object information (D) is fed to the tracked region and current region comparison (11) and then to region tracking (12). The currently detected region and the previously recorded track information are compared in order to obtain the different stages of the object. The region tracking is assisted by motion, texture and color cues (13). The characteristics of the object's movement can be determined after the previous processes (11 and 12). The moving object characteristics include a newly emerged object, an exiting object, object splitting, object merging or currently existing objects (14). After obtaining the blob information, the blob is converted into an object for further processing (15).
When the moving object characteristic is an object splitting into two or more, it triggers the unattended object event to start.
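Blocks (11), (12), (14) and (15) are not specified in code. The rough sketch below shows one way such blob-state logic could be approximated: blobs are extracted from the foreground mask, matched against the previous frame's blobs by bounding-box overlap, and labelled as new, exited or split, with a split marking the start of a candidate unattended-object event. The overlap test, minimum area and dictionary layout are assumptions made for illustration.

    import cv2

    def blobs_from_mask(fg_mask, min_area=200):
        """Extract blob bounding boxes and centroids from a foreground mask."""
        contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        blobs = []
        for c in contours:
            if cv2.contourArea(c) < min_area:
                continue
            x, y, w, h = cv2.boundingRect(c)
            blobs.append({"box": (x, y, w, h),
                          "centroid": (x + w / 2.0, y + h / 2.0),
                          "contour": c})
        return blobs

    def overlaps(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return not (ax + aw < bx or bx + bw < ax or ay + ah < by or by + bh < ay)

    def classify_transitions(prev_blobs, curr_blobs):
        """A previous blob matched by two or more current blobs is a 'split'
        (candidate unattended-object event); no match means the object exited;
        an unmatched current blob is newly emerged."""
        events = []
        for p in prev_blobs:
            matches = [c for c in curr_blobs if overlaps(p["box"], c["box"])]
            if len(matches) >= 2:
                events.append(("split", p, matches))   # start unattended-object event
            elif len(matches) == 0:
                events.append(("exit", p, []))
        for c in curr_blobs:
            if not any(overlaps(p["box"], c["box"]) for p in prev_blobs):
                events.append(("new", None, [c]))
        return events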
The object characteristics information (E) is used as an input for feature extraction (16). In this process, the shape and motion information is extracted. The object feature extractor extracts information related to each of the detected objects, including ellipse information and the angle and orientation of the major and minor axes. The object classifier must be trained with a large database (18) before it can be used to detect specific objects (17).
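One concrete realisation of the ellipse-based features named above is OpenCV's fitEllipse applied to a blob contour; the sketch below extracts the centre, major and minor axis lengths, and orientation. The aspect-ratio feature and the returned dictionary layout are additions for illustration, not features claimed in the patent.

    import cv2

    def ellipse_features(contour):
        """Fit an ellipse to a detected blob contour and return the shape
        features named in the description: centre, major/minor axis lengths
        and orientation angle."""
        if len(contour) < 5:          # cv2.fitEllipse needs at least 5 points
            return None
        (cx, cy), (ax1, ax2), angle = cv2.fitEllipse(contour)
        major, minor = max(ax1, ax2), min(ax1, ax2)
        return {
            "centre": (cx, cy),
            "major_axis": major,
            "minor_axis": minor,
            "orientation_deg": angle,
            "aspect_ratio": major / max(minor, 1e-6),
        }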
The object class information (F) is used as an input to the event recognizer (19) to determine if the unattended object event has happened.
The surrounding region of the unattended object is divided into multiple regions, and probability techniques can be used to calculate the probability of the owner leaving the object. Once the object is dropped off and separated from the owner, the probability is used to determine the likelihood of the owner leaving the object by observing the movement; once the predetermined distance is exceeded, the event is detected.
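The patent does not specify the probability model. As a hedged illustration, the sketch below maps the owner-object separation to a score that saturates at 1.0 once a predetermined pixel distance is exceeded; the 150-pixel limit and the logistic ramp are arbitrary stand-ins for whatever calibrated values and probability technique the real system would use.

    import math

    # Illustrative threshold (not from the patent): pixel separation beyond
    # which the object is declared unattended.
    SEPARATION_LIMIT_PX = 150.0

    def owner_leaving_probability(object_xy, owner_xy, limit=SEPARATION_LIMIT_PX):
        """Map the owner-object separation to a score in [0, 1]; 1.0 means the
        predetermined distance has been exceeded."""
        dist = math.hypot(owner_xy[0] - object_xy[0], owner_xy[1] - object_xy[1])
        if dist >= limit:
            return 1.0
        # logistic ramp-up as the owner moves away, centred at half the limit
        return 1.0 / (1.0 + math.exp(-(dist - limit / 2.0) / (limit / 10.0)))

    def unattended_event(object_xy, owner_xy):
        return owner_leaving_probability(object_xy, owner_xy) >= 1.0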
The result (G) will be managed by the message manager (21) and combined with the original panoramic video data by the overlap manager (22). The overlay image (H) is displayed through the image display component (23).
The system can also be calibrated so that a number of cameras are coordinated and their images are fed to the detection system. This enables the surveillance system to track the movement of the owner of the unattended object and determine if the object has been left unattended on purpose. This is vital especially in large areas such as airports, where many cameras are present and a human can disappear from the field of view of one camera and emerge in another.
In order to operate the system in this mode, detection techniques such as face recognition, gait analysis, etc. are added to the system. Further, an image hand-over process is employed.
Described below is the flow of the hand-over process.
The cameras are placed in such a way that at least 10-15% of the field of view of each camera overlaps with that of an adjacent camera. Features of the human are recorded and compared between the adjacent cameras. This limits the search area or the number of cameras before the targeted human is locked down.
Then an image hand-over process is employed. This is done by means of tagging the person who has left the object unattended, i.e. leaving the object in place and moving away from it. The human's body features such as face, clothes or gait are used as the identifier. A known detection method such as face recognition or gait analysis is used to recognize the human by comparing the face and gait information against a known database.
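The description names face, clothes and gait as identifiers but does not fix an algorithm. The sketch below uses a colour-histogram signature of the tagged person's image region, compared across adjacent overlapping cameras with OpenCV's compareHist, as a crude stand-in for the face or gait matcher; the bin counts and similarity threshold are illustrative assumptions.

    import cv2
    import numpy as np

    def appearance_signature(person_roi_bgr):
        """Colour-histogram signature of the tagged person (clothing colour is
        a crude stand-in for the face/gait identifiers named above)."""
        hist = cv2.calcHist([person_roi_bgr], [0, 1, 2], None,
                            [8, 8, 8], [0, 256, 0, 256, 0, 256])
        return cv2.normalize(hist, hist).flatten()

    def find_owner_in_camera(tag_signature, candidate_rois, min_similarity=0.7):
        """Compare the tagged owner's signature against people seen by an
        adjacent camera and return the best match above the threshold, if any."""
        best_idx, best_score = None, min_similarity
        for i, roi in enumerate(candidate_rois):
            score = cv2.compareHist(tag_signature.astype(np.float32),
                                    appearance_signature(roi).astype(np.float32),
                                    cv2.HISTCMP_CORREL)
            if score > best_score:
                best_idx, best_score = i, score
        return best_idx, best_score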
When the system as described in the preferred embodiment identifies the object left by the human as an unattended object, the alarm is triggered and the human i.e. owner of the object is identified using the detection method that is employed.
In another preferred embodiment, the detection system as described above can be employed to detect static objects that are moved from their predetermined locations. The system is able to learn the background of the static objects. Therefore, when an object is moved from its predetermined position, the system recognizes it as a void. The system can also be extended to identify the particular object that has been removed by recognizing the shape and location of the void. This will trigger the alarm system automatically.
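A simple way to approximate the 'void' test described above is to keep a reference image of the learned background containing the static object and measure how much of the object's region has changed in the current frame. The sketch below does this with an absolute grey-level difference; the difference threshold, changed-area fraction and bounding box are hypothetical parameters, not values from the patent.

    import cv2
    import numpy as np

    def removal_void(reference_bgr, current_bgr, object_box,
                     diff_thresh=40, changed_fraction=0.6):
        """Compare the learned reference appearance of a static object with the
        current frame; a large changed fraction inside the object's region
        suggests the object has been removed, leaving a void, and an alarm
        should be raised."""
        x, y, w, h = object_box
        ref_patch = cv2.cvtColor(reference_bgr[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
        cur_patch = cv2.cvtColor(current_bgr[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(ref_patch, cur_patch)
        changed = np.count_nonzero(diff > diff_thresh) / float(diff.size)
        return changed >= changed_fraction   # True -> trigger removal alarm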
Once again, a detection method such as facial recognition is used to determine the human that was within the proximity of the static object prior to its removal. Using the image hand-over technique, the location of the individual can be determined. This will enable the security guards to act immediately.

Claims

Claims
1. A method to detect an unattended object using an image processing technique, comprising the steps of: receiving a video signal from an image source; capturing the video signal, wherein the video signal is converted into a digital signal when an analog video image source is used; transforming and enhancing said digital signal, wherein the digital signal is used to obtain panoramic image data and wherein the digital signal is used to obtain moving foreground object information; obtaining object information by using the image data and the foreground object information; tracking the object to determine if an event of object splitting, object merging, a new object entering the scene or an object disappearing from the scene occurs; starting an unattended object event when splitting of an object into two or more occurs; extracting object features from the object information to obtain object class information; determining if the unattended object event has happened using the object class information, an event recognizer determining if the unattended object event happens by applying a probability calculation to calculate the tendency of the owner leaving the scene, wherein the occurrence of an unattended object event is used to track and zoom on the owner of the object once the event begins; and managing the result by the message manager in combination with the initial video image by means of the overlap manager, wherein the overlay image obtained from the overlap manager is displayed through the image display component.
2. A method according to claim 1, wherein the analog signal is captured by a frame grabber.
3. A method according to claim 1, wherein the analog signal is captured by a DSP, digital signal processor.
4. A method according to claim 1, wherein the digital signal is enhanced in terms of noise level and visual quality.
5. A method according to claim 1, wherein the image source is a surveillance camera.
6. A method according to claim 2, wherein the surveillance camera is a CCTV camera.
7. A method according to claim 2, wherein the camera is a fish eye camera.
8. A method according to claim 1, wherein the number of image source is one or more.
9. A method according to claim 1, wherein an image hand-over process is used when more than one image source is used.
10. A method according to claim 9, wherein the image hand-over process comprises the steps of: identifying and tagging the owner of the unattended object; using the body features of the owner, such as face, cloth or gait, as the identifier of the owner; and recording and comparing the features of the owner between the adjacent cameras to locate the owner.
11. A method according to claim 10, wherein the owner of the object is determined by using a facial recognition, gait analysis or cloth recognition method.
12. A method according to claim 6, wherein the field of view of an image source overlaps the field of view of the adjacent image source.
13. A method according to claim 7, wherein the range of overlap is 10-15 %.
14. A method according to claim 1, wherein an object is identified as an unattended object when it is left unattended by its owner exceeding a predetermined distance.
15. A method according to claim 9, wherein the owner is a human.
16. A method according to claim 9, wherein the object is an inanimate object.
17. A method according to claim 9, wherein the predetermined distance is the distance by which the object and the owner are allowed to be separated.
18. A method according to claim 9, wherein an alarm is triggered when an unattended object event happens.
19. A method according to claim 18, wherein an unattended object event is described as an event in which an object splitting is detected.
20. A method according to claim 1, wherein the unattended object event is determined as an event where a static object is removed from its predetermined position.
PCT/MY2008/000160 2007-11-23 2008-11-24 Method for detecting unattended object and removal of static object WO2009066994A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
MYPI20072086A MY143022A (en) 2007-11-23 2007-11-23 Method for detecting unattended object and removal of static object
MYPI20072086 2007-11-23

Publications (2)

Publication Number Publication Date
WO2009066994A2 true WO2009066994A2 (en) 2009-05-28
WO2009066994A3 WO2009066994A3 (en) 2009-07-16

Family

ID=40668028

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/MY2008/000160 WO2009066994A2 (en) 2007-11-23 2008-11-24 Method for detecting unattended object and removal of static object

Country Status (2)

Country Link
MY (1) MY143022A (en)
WO (1) WO2009066994A2 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6028626A (en) * 1995-01-03 2000-02-22 Arc Incorporated Abnormality detection and surveillance system
US20040240542A1 (en) * 2002-02-06 2004-12-02 Arie Yeredor Method and apparatus for video frame sequence-based object tracking
US20040120581A1 (en) * 2002-08-27 2004-06-24 Ozer I. Burak Method and apparatus for automated video activity analysis
WO2004070649A1 (en) * 2003-01-30 2004-08-19 Objectvideo, Inc. Video scene background maintenance using change detection and classification

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8934670B2 (en) 2008-03-25 2015-01-13 International Business Machines Corporation Real time processing of video frames for triggering an alert
US9123136B2 (en) 2008-03-25 2015-09-01 International Business Machines Corporation Real time processing of video frames
US9129402B2 (en) 2008-03-25 2015-09-08 International Business Machines Corporation Real time processing of video frames
US9142033B2 (en) 2008-03-25 2015-09-22 International Business Machines Corporation Real time processing of video frames
US9418444B2 (en) 2008-03-25 2016-08-16 International Business Machines Corporation Real time processing of video frames
US9418445B2 (en) 2008-03-25 2016-08-16 International Business Machines Corporation Real time processing of video frames
US9424659B2 (en) 2008-03-25 2016-08-23 International Business Machines Corporation Real time processing of video frames
US8483481B2 (en) 2010-07-27 2013-07-09 International Business Machines Corporation Foreground analysis based on tracking information
US8934714B2 (en) 2010-07-27 2015-01-13 International Business Machines Corporation Foreground analysis based on tracking information
US9460361B2 (en) 2010-07-27 2016-10-04 International Business Machines Corporation Foreground analysis based on tracking information
CN112560655A (en) * 2020-12-10 2021-03-26 瓴盛科技有限公司 Method and system for detecting masterless article

Also Published As

Publication number Publication date
WO2009066994A3 (en) 2009-07-16
MY143022A (en) 2011-02-14

Similar Documents

Publication Publication Date Title
US10614311B2 (en) Automatic extraction of secondary video streams
Kang et al. Real-time video tracking using PTZ cameras
US10210397B2 (en) System and method for detecting, tracking, and classifiying objects
Candamo et al. Understanding transit scenes: A survey on human behavior-recognition algorithms
CN111161312B (en) Object trajectory tracking and identifying device and system based on computer vision
Velastin et al. A motion-based image processing system for detecting potentially dangerous situations in underground railway stations
KR101492473B1 (en) Context-aware cctv intergrated managment system with user-based
Davies et al. A progress review of intelligent CCTV surveillance systems
JP2001005974A (en) Method and device for recognizing object
Lalonde et al. A system to automatically track humans and vehicles with a PTZ camera
Din et al. Abandoned object detection using frame differencing and background subtraction
WO2009066994A2 (en) Method for detecting unattended object and removal of static object
US20200394802A1 (en) Real-time object detection method for multiple camera images using frame segmentation and intelligent detection pool
CN111612815A (en) Infrared thermal imaging behavior intention analysis method and system
Islam et al. Correlating belongings with passengers in a simulated airport security checkpoint
JP5752975B2 (en) Image monitoring device
Czyzewski et al. Moving object detection and tracking for the purpose of multimodal surveillance system in urban areas
TWI476735B (en) Abnormal classification detection method for a video camera and a monitering host with video image abnormal detection
Chan A robust target tracking algorithm for FLIR imagery
Velastin CCTV video analytics: Recent advances and limitations
Prabhakar et al. An efficient approach for real time tracking of intruder and abandoned object in video surveillance system
EP1405279A1 (en) Vision based method and apparatus for detecting an event requiring assistance or documentation
KR101926510B1 (en) Wide area surveillance system based on facial recognition using wide angle camera
KR102297575B1 (en) Intelligent video surveillance system and method
Draganjac et al. Dual camera surveillance system for control and alarm generation in security applications

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08851958

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08851958

Country of ref document: EP

Kind code of ref document: A2