US8154590B2 - Method for observation of a person in an industrial environment


Publication number
US8154590B2
Authority
US
United States
Prior art keywords
person
movement
model
data
virtual
Prior art date
Legal status
Active, expires 2029-01-28
Application number
US12/362,745
Other versions
US20090237499A1 (en)
Inventor
Ulrich Kressel
Lars Krueger
Werner Progscha
Christian Woehler
Franz Kummert
Joachim Schmidt
Rainer Ott
Gerhard Sagerer
Current Assignee
Pilz GmbH and Co KG
Original Assignee
Pilz GmbH and Co KG
Priority date
Filing date
Publication date
Application filed by Pilz GmbH and Co KG filed Critical Pilz GmbH and Co KG
Assigned to PILZ GMBH & CO. KG. Assignors: KUMMERT, FRANZ; SCHMIDT, JOACHIM; PROGSCHA, WERNER; KRESSEL, ULRICH; KRUEGER, LARS; WOEHLER, CHRISTIAN; OTT, RAINER
Publication of US20090237499A1
Assigned to PILZ GMBH & CO. KG. Assignors: SCHMIDT, JOACHIM; PROGSCHA, WERNER; OTT, RAINER; SAGERER, GERHARD; KUMMERT, FRANZ; KRESSEL, ULRICH; KRUEGER, LARS; WOEHLER, CHRISTIAN
Application granted
Publication of US8154590B2
Classifications

    • F: MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F16: ENGINEERING ELEMENTS AND UNITS; GENERAL MEASURES FOR PRODUCING AND MAINTAINING EFFECTIVE FUNCTIONING OF MACHINES OR INSTALLATIONS; THERMAL INSULATION IN GENERAL
    • F16P: SAFETY DEVICES IN GENERAL; SAFETY DEVICES FOR PRESSES
    • F16P 3/00: Safety devices acting in conjunction with the control or operation of a machine; Control arrangements requiring the simultaneous use of two or more parts of the body
    • F16P 3/12: Safety devices acting in conjunction with the control or operation of a machine, with means, e.g. feelers, which in case of the presence of a body part of a person in or near the danger zone influence the control or operation of the machine
    • F16P 3/14: Safety devices acting in conjunction with the control or operation of a machine, the means being photocells or other devices sensitive without mechanical contact
    • F16P 3/142: Safety devices acting in conjunction with the control or operation of a machine, using image capturing devices
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/60: Type of objects
    • G06V 20/64: Three-dimensional objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition
    • G06V 40/23: Recognition of whole body movements, e.g. for sport training
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 2219/00: Program-control systems
    • G05B 2219/30: Nc systems
    • G05B 2219/40: Robotics, robotics mapping to robotics vision
    • G05B 2219/40202: Human robot coexistence

Abstract

A method for observing a person in an industrial environment using a multicamera system to acquire image data about the position and alignment of a person, including a person's body parts. These image data are then examined with regard to the imaging of a person so that whenever a person has been detected in the image data, an articulated virtual 3D model of the human body is matched to this person hypothesis. Subsequently, this virtual body model is continuously matched to the movement behavior of the person detected in the image data. A hazard potential is determined using knowledge of the position and the movement behavior of the virtual body model in space. The hazard potential thus determined is subjected to a threshold value comparison in order to act upon the movement control of the machine or the machine part in the event of this threshold value being exceeded.

Description

CROSS REFERENCES TO RELATED APPLICATIONS
This application is a continuation of international patent application PCT/EP2007/003037 filed on Apr. 4, 2007 designating the U.S., which international patent application has been published in German language as WO 2008/014831 A2 and claims priority from German patent applications DE 10 2006 036 400.7 filed on Aug. 2, 2006 and DE 10 2006 048 166.6 filed on Oct. 10, 2006. The entire contents of these prior applications are incorporated herein by reference.
BACKGROUND OF THE INVENTION
The invention relates to a method for observation of a person in an industrial environment.
Present day industrial manufacturing processes in automobile production can generally be divided into fully automatic cycles that are carried out exclusively by machines, and completely manual cycles that are carried out exclusively by individual workers or a number of workers cooperating with one another. To date, close cooperation between persons and machines, in particular industrial robots, has been greatly limited owing to safety aspects. A plurality of complicated and expensive safety systems such as, for example, metal fences, light barriers, laser scanners or combined systems are required in order to keep workers in the production environment away from potentially hazardous machines. These systems are incapable of detecting the exact location, the body posture or the movement behavior of the human. As soon as a worker approaches the robot, the latter is stopped and the production process is interrupted.
The missing "knowledge" of such safety systems about the monitored production environment is particularly disadvantageous because manufacturing processes profit greatly from a close collaboration of human and machine. Whereas the human behaves flexibly and adaptively, but is inclined to make mistakes when carrying out repetitive work operations, machines operate quickly and exactly but are static and not very flexible. For example, in the case of a completely automatic manufacturing unit consisting of a number of cooperating robots, the production process must be stopped when a single one of the cooperating robots is defective. It would be desirable here to replace the defective robot temporarily by a human worker who cooperates with the remaining robots such that production can be continued. The efficiency, flexibility and quality of industrial manufacturing can be raised considerably by close cooperation of humans and machines in semi-automated processes.
Present day safety systems in the field of industrial production consist mostly of metal fences, light barriers and/or laser scanners. First approaches to securing robot protection zones on the basis of image processing are being made; these are described in detail in [1] and [2]. The method described in [1] uses stereo image analysis to detect whether an object is located in the protection zone of the robot, without extracting information about the nature of the object (for example human or object) or its movement behavior. In [2], a person is detected exclusively with the aid of the skin color of the hands, which leads to problems with the reliability of detection under varying lighting conditions (variable color temperature); the method described cannot be employed at all when working gloves are used. Just like the prior art set forth in [1], these methods do not extract any information about the type of the object. Nor, in the case when a person is involved, do they detect the body parts and the movement behavior of said person. Such systems are therefore certainly capable of shutting down a robot when a person intrudes into its protection zone, but are incapable of detecting whether a collision is imminent or whether human and machine are cooperating regularly and without any hazard when a person is located in the immediate vicinity of the robot.
In accordance with the review article [3], in the field of the recognition of persons the appropriate approaches are divided into two-dimensional methods with explicit shape models, or no models, and into three-dimensional models. In [4], windows of different size are pushed over the initial image; the corresponding image regions are subjected to a Haar wavelet transformation. The corresponding wavelet coefficients are obtained by applying differential operators of different scaling and orientation to different positions of the image region. A small subset of the coefficients, based on their absolute value and their local distribution in the image, is selected "by hand" from this set of features, which can be very large in some circumstances. This reduced set of features is fed for classification to a support vector machine (SVM). For detection purposes, windows of different size are pushed over the image, and the corresponding features are extracted from these image regions; the SVM subsequently decides whether the corresponding window contains a person or not. In [5], temporal sequences of two-dimensional Haar wavelet features are combined to form high dimensional feature vectors, and these are classified with the aid of SVMs, thus resulting in a gain in detection performance by comparison with the pure individual image approach. In [6], the method of chamfer matching is applied to the detection of pedestrian contours in the scenario of road traffic using a non-stationary camera. In [7], the technique of chamfer matching is combined with a stereo image processing system and a neural network with local receptive fields in accordance with [8], which is used as a texture classifier in order to attain a reliable and robust classification result.
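For illustration, the sliding-window detection scheme of [4] can be sketched as follows in Python; `classifier` and `extract_features` are hypothetical stand-ins for the trained SVM and the Haar wavelet feature extraction, and the window sizes and stride are arbitrary assumptions, not values from the cited work.

```python
import numpy as np

def sliding_window_detect(image, classifier, extract_features,
                          window_sizes=((64, 128), (96, 192)), stride=8):
    """Push windows of different size over the image, extract features per
    window, and let the SVM decide whether the window contains a person."""
    detections = []
    height, width = image.shape[:2]
    for win_w, win_h in window_sizes:
        for y in range(0, height - win_h + 1, stride):
            for x in range(0, width - win_w + 1, stride):
                window = image[y:y + win_h, x:x + win_w]
                features = np.asarray(extract_features(window)).reshape(1, -1)
                if classifier.predict(features)[0] == 1:   # 1 = person
                    detections.append((x, y, win_w, win_h))
    return detections
```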
Other methods use statistical shape models in order to detect and to track persons. Here, [9] concerns models that are obtained by means of a training phase and in which exemplary contours are described by positions of feature points. The parameter set is reduced by using a principal component analysis (PCA), thus resulting in a certain generalization ability in addition to a reduction in the computational outlay. This is useful when tracking such a deformable contour, for example of a moving pedestrian, over time, since parameter sets inconsistent with the learning set are avoided from the outset. Not only the contours of whole persons can be detected, but also those of a hand, together with the corresponding movements. With this approach, however, all the features must be present at all times, and for this reason no instances of masking are permitted. Furthermore, it is not excluded that the parameterization determined by the training phase permits physically impossible states. The shape representation is given by B-splines in [10]. Assuming a stationary camera, the person is segmented out from the background by difference image analysis; the tracking algorithm operates with Kalman filters.
Elsewhere, the technique of color cluster flow is used [11] in order to detect persons in image sequences recorded with a moving camera. Even in the event of partial masking of the person, it is therefore possible to detect persons and track them over time very reliably. This detection stage is combined with the TDNN classification approach described in detail in [8].
Recent work relating to a complete, real time system for detecting pedestrians in road traffic scenes, consisting of a detection stage, a tracking stage and an object classification stage, is described in [12].
Another group of methods for detecting persons are model based techniques in which explicit prior knowledge about the appearance of persons is used in the form of a model. Since instances of masking of parts of the body are problematic in this case, many systems additionally assume prior knowledge about the type of the movements to be detected and the viewing angle of the camera. The persons are segmented out by subtraction of the background, for example, and this presupposes a stationary camera as well as a background which does not change, or changes only slowly. The models used consist, for example, of straight rods (“stick figures”), with individual body parts being approximated by ellipsoids [13-16].
An example of the simultaneous use of the very different features of intensity, edges, distance and movement for the purpose of a multi-cue approach to the detection of persons standing or moving laterally to the camera is described in [17]. This approach is "object oriented" in the sense that generic objects (for example person, background, floor, light source) are defined for a specific application, and associated methods are made available for detecting these objects in the image. Once a few object properties have been extracted from the image, the objects are instantiated so that further, specialized methods can subsequently be applied.
Commercial systems for three-dimensional determination of the posture (location and arrangement of the body parts) of persons are based on the detection of markers applied to the body. A powerful method for marker-less three-dimensional determination of posture is described in [18].
A large portion of the work on detection of the posture of persons is concentrated on the 3D reconstruction of the hands. In [19], the hand is described by an articulated model with kinematic constraints, in particular with regard to physically possible joint angles. These constraints enable determination of the three-dimensional position, posture and movement of the hand. A method for detecting movement cycles of the hands (and gestures) that is based on a contour analysis, a tracking stage and a classifier, based on hidden Markov models (HMMs), for the movements is described in [20]. The GREFIT system described in [21] is capable of classifying the dynamics of hand postures on the basis of gray scale images with the aid of an articulated model of the hand. In a first stage, a hierarchical system of neural networks localizes the 2D position of the finger tips in the images of the sequence. In the second stage, a further neural network transforms these values into the best fitting 3D configuration of the articulated hand model. In [22], hand postures are detected directly by labeling corresponding images by means of a self-organizing map (SOM) and by subsequent training with the aid of a neural network.
A trajectory analysis that is based on a particle filter and also includes symbolic object knowledge is used in [23] for the detection of "manipulative gestures" (hand movements that serve for gripping or displacing objects). This approach is extended in [24] in the context of human/robot interaction to the effect that the classification of the hand trajectory by a hidden Markov model is performed in combination with a Bayes network and a particle filter. An approach to the classification of building actions (for example assembly of parts) by an analysis of movement patterns with the aid of the particle filter approach is described in [25]. It is described in [26] how the results of the analysis of hand movements are integrated with the aim of more reliable object detection in an approach for detecting components composed of individual elements. In this context, [27] describes a view-based system in which objects are detected by means of neural networks that can be trained online, that is to say during the operating phase.
A method for 3D modeling of a person starting from 2D image data is described in [30]. Here, a multicamera system is used to acquire image data of a person, and body parts of the latter identified in the 2D image data, in particular by means of a template matching. The body parts thus identified are then modeled by dynamic template matching with the aid of 3D templates. The result of this is that the persons can be identified quickly and continuously even if they are partially masked, or temporarily could not be acquired by the multicamera system. The detected persons are then tracked in the image data with the aid of a kinematic movement model and of Kalman filters.
An identification of persons and their body parts within image data transformed into 3D space is described in [31]. 3D voxel data are generated starting from the image data generated by a multicamera system. Proceeding therefrom, corresponding templates are matched to body parts by means of specific matching algorithms. Here, as well, reference is made to a kinematic body model as previously in the case of [30].
In addition to generation of 3D person models from 2D image data and general movement analysis, the contributions described in [32] additionally indicate a first approach to the analysis of the biometric behavior of the observed persons, in particular their gestures (“hand raising for signaling the desire to ask a question”).
The prior art described above shows that a plurality of methods based on image processing are known for detecting persons in different complex environments, for detecting body parts and their movement cycles, and for detecting complex objects composed of individual parts and the corresponding assembly activities. The applicability of these algorithms has, however, frequently been demonstrated only with the aid of purely academic applications.
SUMMARY OF THE INVENTION
The object of the invention is to provide an approach for a camera-based detection and modeling of persons in an industrial environment.
According to one aspect of the invention, there is provided a method for observation of a person in an industrial environment comprising a moveable machine element, the method comprising the steps of: acquiring image data of the person by means of a multicamera system, analyzing the image data in order to produce a person hypothesis representing the person, providing an articulated virtual 3D model of a human body, matching the articulated virtual 3D model of the human body to the person hypothesis in order to generate a movement behavior representation, determining an instantaneous position of the machine element, determining a hazard potential depending on the position of the machine element and the movement behavior representation, and controlling the machine element as a function of the hazard potential.
According to another aspect, there is provided a method for observation of a person in an industrial environment comprising a moveable machine element, the method comprising the steps of: acquiring image data of the person by means of a multicamera system, analyzing the image data in order to produce a person hypothesis representing the person, providing an articulated virtual 3D model of a human body, matching the articulated virtual 3D model to the person hypothesis, providing a database containing a plurality of reference data representing a reference movement cycle of the person, the reference data having been determined from shape and position of the articulated virtual 3D model during a plurality of reference movement phases, generating current data representing a current movement of the person as a function of a current shape and position of the articulated virtual 3D model, correlating the current data with the reference data from the database, wherein a current movement phase is detected whenever the current data exhibits a predefined degree of similarity to the reference data, wherein a movement cycle is assessed as having been completely carried out by the person whenever a specific sequence of current movement phases has been detected, and wherein a signal is produced whenever an incomplete movement cycle is determined.
DESCRIPTION OF PREFERRED EMBODIMENTS
In order to obtain the most detailed information possible about the position and alignment of a person, in particular also with reference to body parts thereof, in an industrial environment, image data of the person are acquired by means of a multicamera system. These image data are then examined with regard to the imaging of a person such that whenever a person has been detected in the image data an articulated, virtual 3D model of the human body is matched to this person hypothesis. Subsequently, this virtual body model is continuously matched to the movement behavior of the person detected in the image data.
In a first refinement of the observation of a person in an industrial environment, the position and/or the movement behavior of a machine or a machine element located in the environment of the person is determined. A hazard potential can be determined starting from knowledge of the position and of the movement behavior of the virtual body model in space. The hazard potential thus determined is subjected to a threshold value comparison in order to act upon the movement control of the machine or the machine part in the event of this threshold value being exceeded.
In a particularly advantageous way, acting upon the movement control of the machine or the machine part effects a shutdown thereof or a slowing down of the movement thereof. If only a slowing down of the movement is effected, the machine or its movable machine element is able to continue the work operation with a simultaneous reduction in the hazard potential.
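A minimal sketch of such a hazard computation and threshold comparison follows, assuming the tracked body model yields 3D points with velocities and that the machine controller exposes hypothetical `slow_down()` and `stop()` operations; the inverse-distance hazard measure and the threshold values are purely illustrative, not the patented computation.

```python
import numpy as np

SLOW_THRESHOLD = 0.5   # illustrative values; real limits follow a safety analysis
STOP_THRESHOLD = 1.0

def hazard_potential(body_points, body_velocities, machine_pos, machine_vel,
                     horizon=0.5, steps=5):
    """Hazard grows as the predicted minimum distance between the tracked
    body points and the machine element shrinks; positions are linearly
    extrapolated over a short horizon (in seconds)."""
    worst = 0.0
    for t in np.linspace(0.0, horizon, steps):
        human = body_points + t * body_velocities        # (N, 3) predicted points
        machine = machine_pos + t * machine_vel          # (3,) predicted position
        min_dist = np.min(np.linalg.norm(human - machine, axis=1))
        worst = max(worst, 1.0 / max(min_dist, 1e-3))    # inverse-distance hazard
    return worst

def act_on_movement_control(hazard, controller):
    """Threshold comparison acting on the movement control of the machine."""
    if hazard >= STOP_THRESHOLD:
        controller.stop()          # shutdown
    elif hazard >= SLOW_THRESHOLD:
        controller.slow_down()     # continue the work operation at reduced speed
```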
This way of observing persons renders it possible in a particularly advantageous way to apply the method for securing a hazard in the context of reliable human/machine interaction. It is thereby possible to allocate a location- and time-dependent variable degree of hazard to the monitored spatial areas as a function of the current position and the current movement state of a machine or a machine element, as well as of the observed person.
In the alternative refinement of the observation of persons in an industrial environment, data are continuously derived as a function of the current shape and position of the virtual body model and are correlated with the data of a database. In this case, the database contains a plurality of data that have been determined in advance from the shape and position of a body model during a plurality of movement phases describing a movement cycle of a person. In the course of the method, a movement phase is regarded as having been adopted by the observed person whenever the data derived from the current body model exhibits a certain degree of similarity to the data stored for this movement phase in the database. If a specific sequence of movement phases stored in the database is detected, the movement cycle is regarded as having been completely carried out by the observed person. If, however, the movement cycle is assessed as having been incompletely carried out, signaling to this effect is performed.
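A minimal sketch of this phase detection and cycle check, assuming the body model is reduced to a feature vector per frame and correlation is used as the similarity measure; phase identifiers and the similarity threshold are illustrative assumptions.

```python
import numpy as np

def detect_movement_phase(current_vec, reference_phases, min_similarity=0.9):
    """Correlate the feature vector derived from the current body model with
    each stored reference phase; a phase counts as adopted when the
    similarity exceeds the predefined degree."""
    best_phase, best_sim = None, min_similarity
    for phase_id, reference_vec in reference_phases.items():
        similarity = np.corrcoef(current_vec, reference_vec)[0, 1]
        if similarity > best_sim:
            best_phase, best_sim = phase_id, similarity
    return best_phase   # None if no phase is similar enough

def cycle_complete(observed_phases, required_sequence):
    """The cycle counts as completely carried out when the required phases
    occur, in order, as a subsequence of the observed phases."""
    remaining = iter(observed_phases)
    return all(phase in remaining for phase in required_sequence)
```

If `cycle_complete()` is false at the end of a work operation, the signaling step described above would be triggered.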
This type of observation of a person advantageously enables the method to be applied to checking movement and work cycles in the industrial production field. By way of example, a complex manual mounting cycle is thereby checked during its execution as to whether the appropriate work is carried out completely and in the correct sequence. Should the observed movement cycle deviate from the expected one, this can be signaled. Particularly in the case of highly complex modules such as, for example, aggregates, this type of quality inspection is substantially more effective than checking the finished assembly with possible subsequent reworking, since in the latter case modules may need to be completely or partially disassembled, which in turn is associated with a high outlay of time and cost.
A further advantageous field of application of this alternative refinement of the invention is the supervision of newly trained operating staff. Many mistakes arise in production when, for example, new staff have to be trained at short notice during a vacation period. The work cycles of newly trained operating staff can be observed by means of the invention. A warning can then be given when it is found that movement phases required within a movement cycle have not been fulfilled, so that it must be assumed that a work operation has not been correctly carried out.
In a particularly advantageous way, the signaling of a movement cycle assessed as incomplete includes a reference to at least one of the movement phases that is viewed as not having been adopted in the course of checking the correct sequence of the movement cycle. In this way, it is particularly easy for the observed person to detect the mistake in his/her movement cycle or in his/her execution of the operation. On the other hand, a trainer can detect which sections of the learned activity the trainee still finds difficult and which possibly require additional explanation or further training.
A statistical acquisition of the movement phases not detected as having been adopted can advantageously also enable ergonomically problematic movement phases within an entire movement cycle to be detected and, if appropriate, optimized by rearranging the sequence of the movement phases, or by adapting the systems or objects to be operated to the observed person.
In order to arrive at the data relating to the individual movement phases and/or the movement cycle for the database, an obvious approach is to obtain said data by means of a plurality of training cycles while images of the space to be observed, the person or the machine are acquired by means of the multicamera system. Alternatively or in addition, it is also very well conceivable to generate the data of the database by simulating, on a computer system, the movement cycle and the image information to be expected in this regard. It thus also becomes advantageously possible to preinitialize the observation system in the factory, in parallel with the system design and its implementation, by means of the data determined by simulation.
In a particularly advantageous way, the data volumes to be managed in the database, and the outlay on processing, can be reduced by subjecting the data stored therein to a transformation, in particular a principal axis transformation. The correlation for determining the similarity of the currently acquired image data with the data of the database is then performed on the basis of the transformed data.
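A sketch of such a principal axis transformation, assuming the reference data are stacked row-wise into a matrix; the number of retained components is an illustrative assumption.

```python
import numpy as np

def fit_principal_axes(reference_matrix, n_components=10):
    """Principal axis transformation of the stored reference data: keep only
    the leading principal axes, reducing both the stored data volume and
    the cost of later correlations."""
    mean = reference_matrix.mean(axis=0)
    centered = reference_matrix - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]        # rows of vt are the principal axes

def project(vec, mean, axes):
    """Map a raw data vector into the reduced principal-axis space."""
    return axes @ (vec - mean)
```

The similarity correlation of the preceding refinement is then computed between such projected vectors instead of the raw data.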
In a particularly advantageous way, the 3D model of the person is created on the basis of 3D point data. These point data can be created by multiocular image analysis, in particular stereo image analysis. For example, information items that go beyond the spatial coordinates (x, y, z) of each 3D point, such as its speed or acceleration, can be obtained by using a stereo method based on space-time features (as described in [28], for example). The segmentation of the 3D point data (the 3D point cloud) is advantageously performed by means of a cluster method, in particular agglomerative clustering. The convex envelope is subsequently determined for each extracted cluster of 3D points. To detect persons, simple features are first determined for each cluster, in particular its height or volume. In this way, invalid, implausible clusters can be rejected, particularly on the basis of a-priori knowledge of the properties of a natural person. Neighboring clusters can thus be combined to form persons when the individual clusters do not overlap and the corresponding constraints with regard to shape and size are observed for the resulting overall object.
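The clustering and plausibility check can be sketched as follows; the linkage distance and the person-plausibility thresholds (`merge_dist`, `min_height`, `max_volume`) are illustrative assumptions, not values from the patent.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial import ConvexHull

def segment_person_hypotheses(points, merge_dist=0.3,
                              min_height=1.0, max_volume=1.5):
    """Agglomerative clustering of the 3D point cloud; each cluster is
    reduced to its convex envelope and checked against simple a-priori
    constraints (height, volume) of a natural person."""
    tree = linkage(points, method='single')              # agglomerative clustering
    labels = fcluster(tree, t=merge_dist, criterion='distance')
    hypotheses = []
    for label in np.unique(labels):
        cluster = points[labels == label]
        if len(cluster) < 5:                             # too small for a 3D hull
            continue
        hull = ConvexHull(cluster)
        height = np.ptp(cluster[:, 2])                   # vertical extent (z axis)
        if height >= min_height and hull.volume <= max_volume:
            hypotheses.append((cluster, hull))           # plausible person cluster
    return hypotheses
```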
3D points that were unnecessarily excluded in the previous step, in particular foot points in the vicinity of the floor or contact points with other objects, are preferably reinstated. The result of this first processing step is the set of persons in the scene, each represented as the convex envelope of the clusters representing it. Over time, a person thus detected can advantageously be tracked by projecting the convex envelope onto the floor, which produces a 2D polygon, and tracking this polygon by means of linear prediction and Kalman filtering.
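A minimal constant-velocity Kalman filter over the centroid of the floor-projected polygon might look as follows; the state layout, time step and noise levels are illustrative assumptions.

```python
import numpy as np

class FloorProjectionTracker:
    """Constant-velocity Kalman filter over the centroid of the convex
    envelope projected onto the floor (a 2D polygon)."""

    def __init__(self, xy0, dt=0.04, q=1e-2, r=1e-2):
        self.x = np.array([xy0[0], xy0[1], 0.0, 0.0])    # state: [x, y, vx, vy]
        self.P = np.eye(4)
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt                 # linear prediction model
        self.H = np.eye(2, 4)                            # we observe position only
        self.Q = q * np.eye(4)
        self.R = r * np.eye(2)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                                # predicted floor position

    def update(self, polygon_xy):
        z = polygon_xy.mean(axis=0)                      # polygon centroid
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
```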
An articulated 3D model of the human body can advantageously be matched to the person hypotheses thus found. It is advantageous in this case to model the body parts by interconnected cylinders. The posture of the person is given in this model as a vector of the joint angles of the model. The evaluation of a posture is preferably performed by determining the deviation between the features derived from the 3D point cloud and the images of the scene, on the one hand, and the appearance of the model for a given posture, on the other; it is thereby possible to determine a probability that the given posture reproduces the measured shape of the person. A kernel-based particle filter [29] is particularly suitable as a probabilistic approach to the exploration of the search space.
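The posture evaluation and the probabilistic search can be sketched as below; `model.sample_surface()` is a hypothetical helper that returns 3D surface points of the cylinder model for a given joint-angle vector, and the Gaussian deviation model stands in for the unspecified error measure. A kernel-based filter as in [29] would additionally smooth the particle set, which is omitted here.

```python
import numpy as np

def posture_probability(joint_angles, point_cloud, model, sigma=0.05):
    """Turn the deviation between the measured 3D points and the model
    surface for a given posture into a probability-like score."""
    surface = model.sample_surface(joint_angles)   # hypothetical model helper
    deviations = np.array([np.min(np.linalg.norm(surface - p, axis=1))
                           for p in point_cloud])
    return float(np.exp(-0.5 * np.mean(deviations ** 2) / sigma ** 2))

def particle_filter_step(particles, weights, point_cloud, model, noise=0.02):
    """One resample-diffuse-reweight iteration over the joint-angle space."""
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx] + noise * np.random.randn(*particles.shape)
    weights = np.array([posture_probability(p, point_cloud, model)
                        for p in particles])
    return particles, weights / weights.sum()
```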
The detected movements of the body parts are advantageously represented by motion templates. Such motion templates are representative movement patterns that are obtained by 3D measurement of typical human movement cycles and delimit the space of possible joint angles and joint angle speeds of the person model. In this way, the movements of the person can be extrapolated in a biologically realistic fashion, particularly with the aim of detecting the risk of a collision between human and machine. In this case, a movement process can be regarded as a combined cycle of movement phases.
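As a sketch, the motion templates can be reduced to per-joint envelopes of angles and angular speeds, within which the posture is extrapolated; the template array layout and the time step are assumptions.

```python
import numpy as np

def extrapolate_posture(angles, rates, templates, dt=0.04, steps=10):
    """Extrapolate the joint angles within the envelope spanned by the motion
    templates (array of shape (n_templates, n_frames, n_joints))."""
    angle_lo = templates.min(axis=(0, 1))                # per-joint angle limits
    angle_hi = templates.max(axis=(0, 1))
    rate_max = np.abs(np.diff(templates, axis=1) / dt).max(axis=(0, 1))
    trajectory = []
    a = angles.copy()
    r = np.clip(rates, -rate_max, rate_max)              # plausible joint speeds
    for _ in range(steps):
        a = np.clip(a + dt * r, angle_lo, angle_hi)
        trajectory.append(a.copy())
    return np.stack(trajectory)   # candidate future postures for collision checks
```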
Literature referred to in the specification is listed below. All of this literature is incorporated herein by reference.
  • [1] D. Döttling, L. Krüger, W. Progscha, M. Wendler, C. Wöhler. Verfahren und Vorrichtung zur Absicherung eines Gefahrenbereichs [Method and device for securing a hazard zone]. German laid-open patent application DE 10245720 A1.
  • [2] D. Ebert and D. Henrich. Simero—sichere Mensch-Roboter-Koexistenz [Simero—safe human/robot coexistence]. In: Fraunhofer IRB, Ed., 2. Workshop für OTS-Systeme in der Robotik—Mensch und Roboter ohne trennende Schutzsysteme [2nd workshop for OTS systems in robotics—human and robot without separating protective systems], pp. 119-134, Stuttgart, 2003
  • [3] D. Gavrila. The Visual Analysis of Human Movement: A Survey. Computer Vision and Image Understanding, 73(1): 82-98, January 1999.
  • [4] M. Oren, C. Papageorgiou, P. Sinha, E. Osuna, T. Poggio. Pedestrian detection using wavelet templates. IEEE Int. Conf. on Computer Vision and Pattern Recognition, pp. 193-199, San Juan, 1997.
  • [5] C. Papageorgiou, T. Poggio. A pattern Classification Approach to Dynamical Object Detection. Int. Conf. on Computer Vision, pp. 1223-1228, Kerkyra, Greece, 1999.
  • [6] D. M. Gavrila, V. Philomin, Real-time object detection for “smart” vehicles. Int. Conf. on Computer Vision, pp. 87-93, Kerkyra, Greece, 1999.
  • [7] D. M. Gavrila, J. Giebel, S. Münder. Vision-Based Pedestrian Detection: The PROTECTOR System. IEEE Intelligent Vehicles Symposium, Parma, Italy, 2004.
  • [8] C. Wöhler. Neuronale Zeitverzögerungsnetzwerke für die Bildsequenzanalyse und ihre Anwendung in fahrzeuggebundenen Bildverarbeitungssystemen [Neural time delay networks for image sequence analysis, and their application in vehicle-bound image processing systems]. Dissertation, Mathematisch-Naturwissenschaftliche Fakultät der Rheinischen Friedrich-Wilhelms-Universität Bonn, 2000. VDI-Fortschritt-Berichte, series 10, No. 645, VDI-Verlag, Düsseldorf, 2000.
  • [9] M. J. S. Day, J. S. Payne. A Projection Filter for Use with Parameterised Learning Models. Int. Conf. on Pattern Recognition, pp. 867-869, Brisbane, 1998.
  • [10] A. Baumberg, D. Hogg. An Efficient Method for Contour Tracking Using Active Shape Models. IEEE Workshop on Motion of Non-Rigid and Articulated Objects, pp. 194-199, Austin, Tex., 1994.
  • [11] B. Heisele, C. Wöhler. Motion-Based Recognition of Pedestrians. Int. Conf. on Pattern Recognition, pp. 1325-1330, Brisbane, 1998.
  • [12] D. M. Gavrila, J. Giebel, S. Münder. Vision-Based Pedestrian Detection: The PROTECTOR System. Proc. of the IEEE Intelligent Vehicles Symposium, Parma, Italy, 2004.
  • [13] Y. Guo, G. Xu, S. Tsuji. Understanding Human Motion Patterns. Int. Conf. on Pattern Recognition, pp. 325-329, 1994.
  • [14] I.-C. Chang, C.-L. Huang. Ribbon-based Motion Analysis of Human Body Movements. Int. Conf. on Pattern Recognition, pp. 436-440, Vienna, 1996.
  • [15] K. Akita. Image Sequence Analysis of Real World Human Motion. Pattern Recognition. Vol. 17, No. 1, pp. 73-83, 1984.
  • [16] W. Long, Y. Yang. Log-Tracker, an Attribute-Based Approach to Tracking Human Body Motion, Int. J. Pattern Recog. Artificial Intell., Vol. 5, No. 3, pp. 439-458, 1991.
  • [17] R. Kahn, M. Swain, P. Prokopowicz, J. Firby. Gesture Recognition Using the Perseus Architecture. IEEE Conference on Computer Vision and Pattern Recognition, pp. 734-741, San Francisco, 1996.
  • [18] B. Rosenhahn, U. G. Kersting, A. W. Smith, J. K. Gurney, T. Brox, R. Klette. A System for Marker-less Human Motion Estimation. In: W. Kropatsch, R. Sablatnig, A. Hanbury (eds.). Pattern Recognition. Proc. 27th DAGM Symposium, Vienna, Austria. Lecture Notes in Computer Science 3663, pp. 176-183; Springer-Verlag Berlin Heidelberg, 2005.
  • [19] S. U. Lee, I. Cohen. 3D Hand Reconstruction from a Monocular View. Int. Conf. on Pattern Recognition, Cambridge, UK, 2004.
  • [20] A. Ramamoorthy, N. Vaswani, S. Chaudhury, S. Banerjee. Recognition of dynamic hand gestures. Pattern Recognition, Vol. 36, pp. 2069-2081, 2003.
  • [21] C. Nölker, H. Ritter. Visual Recognition of Continuous Hand Postures. IEEE Transactions on Neural Networks, Special Issue Multimedia, 2002.
  • [22] G. Heidemann, H. Bekel, I. Bax, A. Saalbach. Hand Gesture Recognition: Self-Organising Maps as a Graphical User Interface for the Partitioning of Large Training Data Sets. Int. Conf. on Pattern Recognition, Cambridge, UK, 2004.
  • [23] J. Fritsch, N. Hofemann, G. Sagerer. Combining Sensory and Symbolic Data for Manipulative Gesture Recognition. Int. Conf. on Pattern Recognition, Cambridge, UK, 2004.
  • [24] Z. Li, N. Hofemann, J. Fritsch, G. Sagerer. Hierarchical Modelling and Recognition of Manipulative Gesture. In Proc. IEEE ICCV, Workshop on Modeling People and Human Interaction, Beijing, China, 2005.
  • [25] J. Fritsch, F. Lömker, M. Wienecke, G. Sagerer. Erkennung von Konstruktionshandlungen aus Bildfolgen [Detection of building actions from image sequences]. In: Mustererkennung 2000, 22. DAGM-Symposium [Pattern recognition 2000, 22nd DAGM symposium], Informatik aktuell. pp. 389-396, Kiel, 2000
  • [26] E. Braun, J. Fritsch, G. Sagerer. Incorporating Process Knowledge into Object Recognition for Assemblies. IEEE Conf. on Computer Vision, pp. 726-732, Vancouver, 2001.
  • [27] H. Bekel, I. Bax, G. Heidemann, H. Ritter. Adaptive Computer Vision: Online Learning for Object Recognition. In: C. E. Rasmussen, H. H. Bülthoff, M. A. Giese, B. Schölkopf (eds.). Pattern Recognition. Proc. 26th DAGM Symposium, Tübingen, Germany. Lecture Notes in Computer Science 3175, pp. 447-454, Springer-Verlag Berlin Heidelberg, 2004.
  • [28] C. Wöhler, L. Krüger. Verfahren und Vorrichtung zur Korrespondenzbestimmung, vorzugsweise zur dreidimensionalen Rekonstruktion einer Szene [Method and device for determining correspondence, preferably for the three-dimensional reconstruction of a scene]; German patent application DE 102006013598.9, published after the priority date.
  • [29] C. Chang, R. Ansari, A. Khokhar. Multiple object tracking with kernel particle filter. IEEE Int. Conf. on Computer Vision and Pattern Recognition, Vol. 1, pp. 566-573, 2005.
  • [30] T. Horprasert, I. Haritaoglu, D. Harwood, et al. Real-time 3D motion capture. In Proc. Perceptual User Interfaces, pp. 87-90, November 1998.
  • [31] I. Mikic, M. Trivedi, E. Hunter, and P. Cosman. Human body model acquisition and tracking using voxel data. International Journal on Computer Vision, 53(3):199-223, July 2003.
  • [32] M. Pardàs et al., Body detection, tracking and analysis, WP5 partners first focus meeting, 1-2 Dec. 2005, Rocquencourt, slides, http://www-rocq.inria.fr/imedia/Muscle/WP5/WP5-FFM-docs/E-TEAM-BODY-3.ppt.

Claims (14)

What is claimed is:
1. A method for observation of a person in an industrial environment comprising a moveable machine element, the method comprising the steps of
acquiring image data of the person by means of a multicamera system,
analyzing the image data in order to produce a person hypothesis representing the person,
providing an articulated virtual 3D model of a human body,
matching the articulated virtual 3D model of the human body to the person hypothesis in order to generate a movement behavior representation,
determining an instantaneous position of the machine element,
determining a hazard potential depending on the position of the machine element and the movement behavior representation, and
controlling the machine element as a function of the hazard potential.
2. The method of claim 1, wherein the machine element is slowed down or shut down whenever the hazard potential exceeds a predefined threshold.
3. A method for observation of a person in an industrial environment comprising a moveable machine element, the method comprising the steps of
acquiring image data of the person by means of a multicamera system,
analyzing the image data in order to produce a person hypothesis representing the person,
providing an articulated virtual 3D model of a human body,
matching the articulated virtual 3D model to the person hypothesis,
providing a database containing a plurality of reference data representing a reference movement cycle of the person, the reference data having been determined from shape and position of the articulated virtual 3D model during a plurality of reference movement phases,
generating current data representing a current movement of the person as a function of a current shape and position of the articulated virtual 3D model,
correlating the current data with the reference data from the database,
wherein a current movement phase is detected whenever the current data exhibits a predefined degree of similarity to the reference data,
wherein a movement cycle is assessed as having been completely carried out by the person whenever a specific sequence of current movement phases has been detected, and
wherein a signal is produced whenever an incomplete movement cycle is determined.
4. The method of claim 3, wherein the signal comprises an indication of a missing part in the incomplete movement cycle.
5. The method of claim 4, further comprising a statistical acquisition of missing parts.
6. The method of claim 3, wherein the reference data are generated by means of a plurality of training cycles.
7. The method of claim 3, wherein the reference data are generated by means of simulation.
8. The method of claim 3, wherein the reference data are subjected to a transformation in order to produce transformed reference data, with the step of correlating being performed on the basis of the transformed reference data.
9. The method of claim 1, wherein the articulated virtual 3D model is created on the basis of 3D point data including information relating to spatial coordinates.
10. The method of claim 9, wherein the 3D point data are combined in a cluster as a function of predefined person-specific limiting values.
11. The method of claim 9, wherein two individual clusters are combined to form a common cluster representing the person, whenever the two individual clusters do not overlap and the resulting common cluster falls within the person-specific limiting values.
12. The method of claim 9, wherein the industrial environment comprises a floor, and a projection of the cluster onto the floor is determined and tracked in order to continuously match the articulated virtual 3D model to the person during movement.
13. The method of claim 12, wherein the projection is tracked by means of linear prediction and Kalman filtering.
14. The method of claim 1, wherein a probability of match is determined, the probability of match representing a probability with which a given posture of the articulated, virtual 3D model represents the current shape of the person.
US12/362,745, filed 2009-01-30 (priority date 2006-08-02): Method for observation of a person in an industrial environment. Status: Active, anticipated expiration 2029-01-28. Granted as US8154590B2 (en).

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
DE102006036400 2006-08-02
DE102006036400.7 2006-08-02
DE102006036400 2006-08-02
DE102006048166.6 2006-10-10
DE102006048166A DE102006048166A1 (en) 2006-08-02 2006-10-10 Method for observing a person in an industrial environment
DE102006048166 2006-10-10
PCT/EP2007/003037 WO2008014831A2 (en) 2006-08-02 2007-04-04 Method for observation of a person in an industrial environment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2007/003037 Continuation WO2008014831A2 (en) 2006-08-02 2007-04-04 Method for observation of a person in an industrial environment

Publications (2)

Publication Number Publication Date
US20090237499A1 US20090237499A1 (en) 2009-09-24
US8154590B2 true US8154590B2 (en) 2012-04-10

Family

ID=38885059

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/362,745 Active 2029-01-28 US8154590B2 (en) 2006-08-02 2009-01-30 Method for observation of a person in an industrial environment

Country Status (6)

Country Link
US (1) US8154590B2 (en)
EP (1) EP2046537A2 (en)
JP (1) JP2009545789A (en)
CN (1) CN101511550B (en)
DE (1) DE102006048166A1 (en)
WO (1) WO2008014831A2 (en)

Families Citing this family (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009514551A * 2005-11-09 2009-04-09 Primera Biosystems, Inc. Multiple quantitative detection method for pathogens
FR2927444B1 (en) * 2008-02-12 2013-06-14 Cliris METHOD FOR GENERATING A DENSITY IMAGE OF AN OBSERVATION AREA
CN102239032B * 2008-12-03 2014-07-02 ABB Research Ltd. A robot safety system and a method
AT508094B1 (en) 2009-03-31 2015-05-15 Fronius Int Gmbh METHOD AND DEVICE FOR OPERATING A POWER SOURCE ASSOCIATED WITH A HAND-HELD WORK EQUIPMENT
EP2430614B1 (en) * 2009-05-11 2013-09-18 Universität zu Lübeck Method for the real-time-capable, computer-assisted analysis of an image sequence containing a variable pose
US20110026770A1 (en) * 2009-07-31 2011-02-03 Jonathan David Brookshire Person Following Using Histograms of Oriented Gradients
DE102009046107A1 (en) * 2009-10-28 2011-05-05 Ifm Electronic Gmbh System and method for interaction between a person and a machine
US9189949B2 (en) 2010-12-09 2015-11-17 Sealed Air Corporation (Us) Automated monitoring and control of contamination in a production area
US9406212B2 (en) 2010-04-01 2016-08-02 Sealed Air Corporation (Us) Automated monitoring and control of contamination activity in a production area
US9143843B2 (en) 2010-12-09 2015-09-22 Sealed Air Corporation Automated monitoring and control of safety in a production area
DE102010017857B4 (en) 2010-04-22 2019-08-08 Sick Ag 3D security device and method for securing and operating at least one machine
US9011607B2 (en) 2010-10-07 2015-04-21 Sealed Air Corporation (Us) Automated monitoring and control of cleaning in a production area
DE102010061382B4 (en) * 2010-12-21 2019-02-14 Sick Ag Opto-electronic sensor and method for detection and distance determination of objects
US20130070056A1 (en) * 2011-09-20 2013-03-21 Nexus Environmental, LLC Method and apparatus to monitor and control workflow
US10095991B2 (en) * 2012-01-13 2018-10-09 Mitsubishi Electric Corporation Risk measurement system
DE102012102236A1 (en) 2012-03-16 2013-09-19 Pilz Gmbh & Co. Kg Method and device for securing a hazardous working area of an automated machine
DE102012103163A1 (en) * 2012-04-12 2013-10-17 Steinel Gmbh Device for controlling a building unit
WO2014008929A1 (en) 2012-07-10 2014-01-16 Siemens Aktiengesellschaft Robot arrangement and method for controlling a robot
WO2014036549A2 (en) * 2012-08-31 2014-03-06 Rethink Robotics, Inc. Systems and methods for safe robot operation
US10776734B2 (en) * 2012-09-10 2020-09-15 The Boeing Company Ergonomic safety evaluation with labor time standard
US9804576B2 (en) * 2013-02-27 2017-10-31 Rockwell Automation Technologies, Inc. Recognition-based industrial automation control with position and derivative decision reference
US9498885B2 (en) 2013-02-27 2016-11-22 Rockwell Automation Technologies, Inc. Recognition-based industrial automation control with confidence-based decision support
US9393695B2 (en) * 2013-02-27 2016-07-19 Rockwell Automation Technologies, Inc. Recognition-based industrial automation control with person and object discrimination
US9798302B2 (en) 2013-02-27 2017-10-24 Rockwell Automation Technologies, Inc. Recognition-based industrial automation control with redundant system input support
DE102014209337A1 (en) 2013-05-17 2014-11-20 Ifm Electronic Gmbh System and method for detecting a hazardous area
US20150092040A1 (en) * 2013-10-01 2015-04-02 Broadcom Corporation Gesture-Based Industrial Monitoring
DE102013110905A1 * 2013-10-01 2015-04-02 Daimler Ag MRK (human-robot collaboration) planning and monitoring technology
US9452531B2 (en) * 2014-02-04 2016-09-27 Microsoft Technology Licensing, Llc Controlling a robot in the presence of a moving object
DE102014202733B4 (en) * 2014-02-14 2022-09-01 Homag Plattenaufteiltechnik Gmbh Method for operating a machine, in particular a panel dividing plant
JP5785284B2 2014-02-17 2015-09-24 Fanuc Corporation Robot system that prevents accidents of transported objects falling
US9921300B2 (en) 2014-05-19 2018-03-20 Rockwell Automation Technologies, Inc. Waveform reconstruction in a time-of-flight sensor
US11243294B2 (en) 2014-05-19 2022-02-08 Rockwell Automation Technologies, Inc. Waveform reconstruction in a time-of-flight sensor
US9696424B2 (en) 2014-05-19 2017-07-04 Rockwell Automation Technologies, Inc. Optical area monitoring with spot matrix illumination
US9256944B2 (en) 2014-05-19 2016-02-09 Rockwell Automation Technologies, Inc. Integration of optical area monitoring with industrial machine control
US9625108B2 (en) 2014-10-08 2017-04-18 Rockwell Automation Technologies, Inc. Auxiliary light source associated with an industrial application
US10198706B2 (en) * 2015-07-31 2019-02-05 Locus Robotics Corp. Operator identification and performance tracking
DE102015225587A1 (en) * 2015-12-17 2017-06-22 Volkswagen Aktiengesellschaft Interaction system and method for interaction between a person and at least one robot unit
DE102016200455A1 (en) * 2016-01-15 2017-07-20 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Safety device and method for safe operation of a robot
WO2017198342A1 (en) * 2016-05-18 2017-11-23 Bobst Grenchen Ag Control system for a functional section of a paper processing device
DE102016212695B4 * 2016-05-31 2019-02-21 Siemens Aktiengesellschaft Industrial robots
JP6703691B2 * 2016-06-02 2020-06-03 Komatsu Industries Corporation Controller, forging machine, and control method
WO2018018574A1 * 2016-07-29 2018-02-01 Robert Bosch GmbH Personnel protection system and operation method therefor
US11000953B2 (en) * 2016-08-17 2021-05-11 Locus Robotics Corp. Robot gamification for improvement of operator performance
EP3583450A1 (en) 2017-02-20 2019-12-25 3M Innovative Properties Company Optical articles and systems interacting with the same
CN111164605A 2017-09-27 2020-05-15 3M Innovative Properties Company Personal protective equipment management system using optical patterns for equipment and security monitoring
DE102017221305A1 (en) * 2017-11-23 2019-05-23 Robert Bosch Gmbh Method for operating a collaborative robot
DE102018109320A1 (en) * 2018-04-19 2019-10-24 Gottfried Wilhelm Leibniz Universität Hannover Method for detecting an intention of a partner in relation to a multi-membered actuated kinematics
JP2019200560A * 2018-05-16 2019-11-21 Panasonic IP Management Co., Ltd. Work analyzing device and work analyzing method
CN108846891B * 2018-05-30 2023-04-28 Guangdong Institute of Intelligent Manufacturing Man-machine safety cooperation method based on three-dimensional skeleton detection
DE102018114156B3 (en) * 2018-06-13 2019-11-14 Volkswagen Aktiengesellschaft Method for controlling a robot, in particular an industrial robot, and device for controlling the robot
JP7160923B2 * 2018-08-09 2022-10-25 Fuji Corporation Simulation method, simulation system
KR102085168B1 * 2018-10-26 2020-03-04 Autoit Co., Ltd. Method and apparatus for managing safety in dangerous zone based on motion tracking
DE102019103349B3 (en) 2019-02-11 2020-06-18 Beckhoff Automation Gmbh Industrial robot system and method for controlling an industrial robot
DE102019207144A1 (en) * 2019-05-16 2020-11-19 Robert Bosch Gmbh Method for recognizing an operator of a work machine
EP3761193A1 (en) * 2019-07-04 2021-01-06 Siemens Aktiengesellschaft Safety analysis of technical systems comprising human objects
DE102019216405A1 (en) * 2019-10-24 2021-04-29 Robert Bosch Gmbh Method for preventing personal injury when operating a mobile work machine
IT201900021108A1 (en) * 2019-11-13 2021-05-13 Gamma System S R L Safety system for an industrial machinery
CN113033242A * 2019-12-09 2021-06-25 Shanghai Hode Information Technology Co., Ltd. Action recognition method and system
CN111275941A * 2020-01-18 2020-06-12 Aotong Global Environmental Control (Shenzhen) Co., Ltd. Construction site safety management system
EP3865257A1 (en) 2020-02-11 2021-08-18 Ingenieurbüro Hannweber GmbH Device and method for monitoring and controlling a technical working system
CN111553264B * 2020-04-27 2023-04-18 Zhongke Yong'an (Anhui) Technology Co., Ltd. Campus non-safety behavior detection and early warning method suitable for primary and secondary school students
CN111726589B * 2020-07-07 2022-01-28 Shandong Tianyuan Pipe Industry Co., Ltd. Production and processing method of valve body
AT17459U1 (en) * 2021-01-21 2022-05-15 Altendorf Gmbh Safety device for machine tools
CN112936267B * 2021-01-29 2022-05-27 Huazhong University of Science and Technology Man-machine cooperation intelligent manufacturing method and system
EP4170438A1 (en) * 2022-06-07 2023-04-26 Pimu Llc Safety control system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4066168B2 * 2003-03-13 2008-03-26 Omron Corporation Intruder monitoring device

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6516099B1 (en) * 1997-08-05 2003-02-04 Canon Kabushiki Kaisha Image processing apparatus
EP1061487A1 (en) 1999-06-17 2000-12-20 Istituto Trentino Di Cultura A method and device for automatically controlling a region in space
US20020165642A1 (en) 1999-08-04 2002-11-07 Masaya Sakaue User-machine interface system for enhanced interaction
US7508977B2 (en) * 2000-01-20 2009-03-24 Canon Kabushiki Kaisha Image processing apparatus
US20050207618A1 (en) 2002-09-24 2005-09-22 Christian Wohler Method and device for safeguarding a hazardous area
WO2004029502A1 (en) 2002-09-24 2004-04-08 Pilz Gmbh & Co. Kg Method and device for making a hazardous area safe
DE10245720A1 (en) 2002-09-24 2004-04-01 Pilz Gmbh & Co. Safety method for protecting automatic machine danger area with scene analysis of different images of danger area via 2 different analysis algorithms for reliable detection of foreign object in danger area
WO2004055732A1 (en) 2002-12-18 2004-07-01 Pilz Gmbh & Co. Kg Method for determining three-dimensional object contours by means of images taken synchronously
EP1482238A2 (en) 2003-05-29 2004-12-01 CASAGRANDE SpA Safety device for operating machines, particularly drilling machines or suchlike, and method to recognize the presence of persons, using such safety device
US6956469B2 (en) * 2003-06-13 2005-10-18 Sarnoff Corporation Method and apparatus for pedestrian detection
US20050033497A1 (en) * 2003-08-06 2005-02-10 Stopczynski Lawrence Gerard Method of controlling an external object sensor for an automotive vehicle
US7353082B2 (en) * 2003-11-24 2008-04-01 Abb Research Ltd. Method and a system for programming an industrial robot
US20060186702A1 (en) * 2005-01-17 2006-08-24 Kabushiki Kaisha Toyota Chuo Kenkyusho Collision behavior control apparatus
DE102006013598A1 (en) 2005-09-01 2007-03-15 Daimlerchrysler Ag Image area`s spatial correspondence determining method for three-dimensional reconstruction of scene, involves using parameters of parameterize functions for formation of similarity measurement between image areas of several image sequences
US7460125B2 (en) * 2005-12-02 2008-12-02 Electronics And Telecommunications Research Institute Apparatus and method for immediately creating and controlling virtual reality interactive human body model for user-centric interface
US20080285807A1 (en) * 2005-12-08 2008-11-20 Lee Jae-Ho Apparatus for Recognizing Three-Dimensional Motion Using Linear Discriminant Analysis
US20080312765A1 (en) * 2006-02-16 2008-12-18 Virtual-Mirrors Limited Design and Production of Garments

Non-Patent Citations (31)

* Cited by examiner, † Cited by third party
Title
A. Baumberg et al.; An Efficient Method for Contour Tracking Using Active Shape Models; Apr. 1994; pp. 1-16.
A. Ramamoorthy et al.; Recognition of dynamic hand gestures; 2003; pp. 2069-2081.
B. Heisele et al.; Motion-Based Recognition of Pedestrians; 1998; pp. 1325-1330.
B. Rosenhahn et al.; A System for Marker-Less Human Motion Estimation; 2005; pp. 230-237.
C. Chang et al.; Multiple Object Tracking with Kernel Particle Filter; 2005; pp. 566-573.
C. Nölker et al.; Visual Recognition of Continuous Hand Postures; 2002; pp. 1-12.
C. Papageorgiou et al.; A Pattern Classification Approach to Dynamical Object Detection; 1999; pp. 1223-1228.
C. Wöhler, Neuronale Zeitverzögerungsnetzwerke für die Bildsequenzanalyse und ihre Anwendung in fahrzeuggebundenen Bildverarbeitungssystemen. [Neural time delay networks for image sequence analysis, and application in vehicle-bound image processing systems]. Dissertation. Mathematisch-Naturwissenschaftliche Fakultät der Rheinischen Friedrich-Wilhelms-Universität Bonn, 2000. VDI-Fortschritt-Berichte, series 10, No. 645, VDI-Verlag, Düsseldorf, 2000; 3 pages.
D. Ebert et al.; Simero-Sichere Mensch-Roboter-Koexistenz [Simero-safe human/robot coexistence]; 2003; pp. 119-134.
D.M. Gavrila et al.; Real-Time Object Detection for "Smart" Vehicles; 1999; pp. 87-93.
D.M. Gavrila et al.; Vision-Based Pedestrian Detection: The PROTECTOR System; 2004; pp. 13-18.
D.M. Gavrila; The Visual Analysis of Human Movement: A Survey; 1999; pp. 1-43.
D.M. Gavrila; The Visual Analysis of Human Movement: A Survey; Jan. 1999; pp. 82-98.
E. Braun et al.; Incorporating Process Knowledge into Object Recognition for Assemblies; 2001; pp. 726-732.
G. Heidemann et al.; Hand Gesture Recognition: Self-Organising Maps as a Graphical User Interface for the Partitioning of Large Training Data Sets; 2004; 4 pages.
Gandhi, T.; Trivedi, M.M.; "Pedestrian collision avoidance systems: a survey of computer vision based recent studies," Intelligent Transportation Systems Conference (ITSC '06), IEEE, pp. 976-981, Sep. 17-20, 2006. *
H. Bekel et al.; Adaptive Computer Vision: Online Learning for Object Recognition; 2004; pp. 1-8.
I. Mikic et al.; Human Body Model Acquisition and Tracking Using Voxel Data; Jul. 2003; pp. 199-223.
I.-Cheng Chang et al.; Ribbon-Based Motion Analysis of Human Body Movements; 1996; pp. 436-440.
J. Fritsch et al.; Combining Sensory and Symbolic Data for Manipulative Gesture Recognition; 2004; pp. 930-933.
J. Fritsch et al.; Erkennung von Konstruktionshandlungen aus Bildfolgen [Detection of building actions from image sequences]; 2000; pp. 389-396.
K. Akita; Image Sequence Analysis of Real World Human Motion; 1984; pp. 73-83.
M. Oren et al.; Pedestrian Detection Using Wavelet Templates; 1997; pp. 193-199.
M. Pardas et al.; Body detection, tracking and analysis; http://www-rocq.inria.fr/imedia/Muscle/WP5-FFM-docs/E-TEAM-BODY-3.ppt; 2005; 40 pages.
M.J.S. Day et al.; A Projection Filter for Use with Parameterised Learning Models; 1998; pp. 867-869.
R. Kahn et al.; Gesture Recognition Using the Perseus Architecture; 1996; pp. 734-741.
S.U. Lee et al.; 3D Hand Reconstruction from a Monocular View; 2004; pp. 310-313.
T. Horprasert et al.; Real-time 3D Motion Capture; Nov. 1998; pp. 87-90.
W. Long et al.; Log-Tracker: An Attribute-Based Approach to Tracking Human Body Motion; 1991; pp. 439-458.
Y. Guo et al.; Understanding Human Motion Patterns; 1994; pp. 325-329.
Z. Li et al.; Hierarchical Modelling and Recognition of Manipulative Gesture; 2005; pp. 1-8.

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10210382B2 (en) 2009-05-01 2019-02-19 Microsoft Technology Licensing, Llc Human body pose estimation
US9659377B2 (en) 2009-10-07 2017-05-23 Microsoft Technology Licensing, Llc Methods and systems for determining and tracking extremities of a target
US20110081044A1 (en) * 2009-10-07 2011-04-07 Microsoft Corporation Systems And Methods For Removing A Background Of An Image
US8970487B2 (en) 2009-10-07 2015-03-03 Microsoft Technology Licensing, Llc Human tracking system
US9522328B2 (en) 2009-10-07 2016-12-20 Microsoft Technology Licensing, Llc Human tracking system
US8564534B2 (en) 2009-10-07 2013-10-22 Microsoft Corporation Human tracking system
US10147194B2 (en) * 2009-10-07 2018-12-04 Microsoft Technology Licensing, Llc Systems and methods for removing a background of an image
US8861839B2 (en) 2009-10-07 2014-10-14 Microsoft Corporation Human tracking system
US8867820B2 (en) * 2009-10-07 2014-10-21 Microsoft Corporation Systems and methods for removing a background of an image
US8891827B2 (en) 2009-10-07 2014-11-18 Microsoft Corporation Systems and methods for tracking a model
US8897495B2 (en) 2009-10-07 2014-11-25 Microsoft Corporation Systems and methods for tracking a model
US9821226B2 (en) 2009-10-07 2017-11-21 Microsoft Technology Licensing, Llc Human tracking system
US8963829B2 (en) 2009-10-07 2015-02-24 Microsoft Corporation Methods and systems for determining and tracking extremities of a target
US20110080336A1 (en) * 2009-10-07 2011-04-07 Microsoft Corporation Human Tracking System
US20170278251A1 (en) * 2009-10-07 2017-09-28 Microsoft Technology Licensing, Llc Systems and methods for removing a background of an image
US8542910B2 (en) 2009-10-07 2013-09-24 Microsoft Corporation Human tracking system
US9582717B2 (en) 2009-10-07 2017-02-28 Microsoft Technology Licensing, Llc Systems and methods for tracking a model
US20110080475A1 (en) * 2009-10-07 2011-04-07 Microsoft Corporation Methods And Systems For Determining And Tracking Extremities Of A Target
US9679390B2 (en) 2009-10-07 2017-06-13 Microsoft Technology Licensing, Llc Systems and methods for removing a background of an image
US9619561B2 (en) 2011-02-14 2017-04-11 Microsoft Technology Licensing, Llc Change invariant scene recognition by an agent
US8942917B2 (en) 2011-02-14 2015-01-27 Microsoft Corporation Change invariant scene recognition by an agent
US20130129230A1 (en) * 2011-11-18 2013-05-23 Microsoft Corporation Computing Pose and/or Shape of Modifiable Entities
US8724906B2 (en) * 2011-11-18 2014-05-13 Microsoft Corporation Computing pose and/or shape of modifiable entities
US11215711B2 (en) 2012-12-28 2022-01-04 Microsoft Technology Licensing, Llc Using photometric stereo for 3D environment modeling
US9940553B2 (en) 2013-02-22 2018-04-10 Microsoft Technology Licensing, Llc Camera/object pose from predicted coordinates
US11710309B2 (en) 2013-02-22 2023-07-25 Microsoft Technology Licensing, Llc Camera/object pose from predicted coordinates
US9427871B2 (en) 2013-05-06 2016-08-30 Abb Technology Ag Human safety provision in mobile automation environments
CN107053165A * 2015-09-28 2017-08-18 Siemens Product Lifecycle Management Software Inc. Method and data handling system for simulating and handling anti-collision management
US10414047B2 (en) 2015-09-28 2019-09-17 Siemens Product Lifecycle Management Software Inc. Method and a data processing system for simulating and handling of anti-collision management for an area of a production plant
CN107053165B (en) * 2015-09-28 2022-04-19 西门子工业软件有限公司 Method and data processing system for simulating and processing anti-collision management
US10924881B2 (en) * 2016-03-03 2021-02-16 Husqvarna Ab Device for determining construction device and worker position
US11040450B2 (en) 2017-02-07 2021-06-22 Veo Robotics, Inc. Dynamically determining and monitoring workspace safe zones using semantic representations of workpieces
US10882185B2 (en) 2017-02-07 2021-01-05 Veo Robotics, Inc. Dynamically determining workspace safe zones with speed and separation monitoring
US11279039B2 (en) 2017-02-07 2022-03-22 Veo Robotics, Inc. Ensuring safe operation of industrial machinery
US10899007B2 (en) 2017-02-07 2021-01-26 Veo Robotics, Inc. Ensuring safe operation of industrial machinery
US11376741B2 (en) 2017-02-07 2022-07-05 Veo Robotics, Inc. Dynamically determining workspace safe zones with speed and separation monitoring
US11518051B2 (en) 2017-02-07 2022-12-06 Veo Robotics, Inc. Dynamic, interactive signaling of safety-related conditions in a monitored environment
US11541543B2 (en) 2017-02-07 2023-01-03 Veo Robotics, Inc. Dynamic, interactive signaling of safety-related conditions in a monitored environment
US11623356B2 (en) 2017-02-07 2023-04-11 Veo Robotics, Inc. Dynamic, interactive signaling of safety-related conditions in a monitored environment
US10099372B2 (en) 2017-02-07 2018-10-16 Veo Robotics, Inc. Detecting and classifying workspace regions for safety monitoring
US11820025B2 (en) 2017-02-07 2023-11-21 Veo Robotics, Inc. Safe motion planning for machinery operation

Also Published As

Publication number Publication date
DE102006048166A1 (en) 2008-02-07
EP2046537A2 (en) 2009-04-15
JP2009545789A (en) 2009-12-24
US20090237499A1 (en) 2009-09-24
CN101511550B (en) 2013-12-18
WO2008014831A3 (en) 2008-04-03
CN101511550A (en) 2009-08-19
WO2008014831A2 (en) 2008-02-07

Similar Documents

Publication Publication Date Title
US8154590B2 (en) Method for observation of a person in an industrial environment
Nickels et al. Model-based tracking of complex articulated objects
Okada et al. Multi-cue 3D object recognition in knowledge-based vision-guided humanoid robot system
Urgo et al. A human modelling and monitoring approach to support the execution of manufacturing operations
Mao et al. Learning hand movements from markerless demonstrations for humanoid tasks
Hak et al. Reverse control for humanoid robot task recognition
Manns et al. Identifying human intention during assembly operations using wearable motion capturing systems including eye focus
JP2008140101A (en) Unconstrained and real-time hand tracking device using no marker
Puls et al. Cognitive robotics in industrial environments
Shariatee et al. Safe collaboration of humans and SCARA robots
Amat et al. Stereoscopic system for human body tracking in natural scenes
Cui et al. Visual hand motion capture for guiding a dexterous hand
Moughlbay et al. Reliable workspace monitoring in safe human-robot environment
Shah et al. Gesture recognition technique: a review
Malassiotis et al. A face and gesture recognition system based on an active stereo sensor
Stefańczyk et al. Localization of essential door features for mobile manipulation
Mackay et al. Time-varying-geometry object surveillance using a multi-camera active-vision system
Funakubo et al. Verification of illumination tolerance for clothes recognition
Amat et al. Virtual exoskeleton for telemanipulation
Knoop et al. Sensor fusion for model based 3d tracking
Wöhler et al. Applications to Safe Human–Robot Interaction
Kojo et al. Gesture recognition for humanoids using proto-symbol space
Kahlouche et al. Human pose recognition and tracking using RGB-D camera
Winiarski et al. Automated inspection of door parts based on fuzzy recognition system
Senior et al. Hybrid machine vision control

Legal Events

Date Code Title Description
AS Assignment

Owner name: PILZ GMBH & CO. KG, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KRESSEL, ULRICH;KRUEGER, LARS;PROGSCHA, WERNER;AND OTHERS;REEL/FRAME:022880/0757;SIGNING DATES FROM 20090407 TO 20090506

Owner name: PILZ GMBH & CO. KG, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KRESSEL, ULRICH;KRUEGER, LARS;PROGSCHA, WERNER;AND OTHERS;SIGNING DATES FROM 20090407 TO 20090506;REEL/FRAME:022880/0757

AS Assignment

Owner name: PILZ GMBH & CO. KG, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KRESSEL, ULRICH;KRUEGER, LARS;PROGSCHA, WERNER;AND OTHERS;SIGNING DATES FROM 20091014 TO 20091108;REEL/FRAME:024961/0148

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY