WO2010035182A1 - Activity assistance apparatus - Google Patents

Activity assistance apparatus

Info

Publication number
WO2010035182A1
Authority
WO
WIPO (PCT)
Prior art keywords
central controller
user
designed
output
images
Prior art date
Application number
PCT/IB2009/054067
Other languages
French (fr)
Inventor
Maria E. Mena Benito
Ralph Braspenning
Richard G. C. Van Der Wolf
Mark T. Johnson
Marieke Van Dooren
Franciscus A. M. Van Der Meijden
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Publication of WO2010035182A1 publication Critical patent/WO2010035182A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63BAPPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B24/00Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
    • A63B24/0003Analysing the course of a movement or motion sequences during an exercise or trainings sequence, e.g. swing for golf or tennis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H2230/00Measuring physical parameters of the user
    • A61H2230/62Posture
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63BAPPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B2220/00Measuring of physical parameters relating to sporting activity
    • A63B2220/80Special sensors, transducers or devices therefor
    • A63B2220/806Video cameras
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63BAPPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B69/00Training appliances or apparatus for special sports
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • G06T2207/10021Stereoscopic video; Stereoscopic image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

An activity assistance apparatus (1; 2) for assisting one or more person(s) (P) in performing an activity comprises: - a memory (22) containing information defining at least one body position; - position monitor means (10) monitoring the current position(s) of the body or bodies of the user(s), and providing real time output signals defining said current position(s); - a central controller (20) receiving the output signal from the position monitor means (10) and comparing said current position(s) with the selected desired position(s); - output means (30; 40). The central controller (20) compares the measured current body position(s) with the target body position(s), and outputs signals through said output means (30; 40) to communicate a result of this comparison to the user(s).

Description

Activity assistance apparatus
FIELD OF THE INVENTION
The present invention relates in general to an apparatus for assisting one or more persons in performing a certain activity, particularly an activity which requires a person to make certain predefined movements with certain parts of his body and/or which requires a person to adopt certain positions with certain parts of his body. As examples of such activity, yoga, massaging, and love-making are mentioned, but the invention is not restricted to these types of activities.
BACKGROUND OF THE INVENTION
It is of course commonly known to give information to persons on how to perform a certain activity. Conventionally, such information is given in the form of spoken information or written information. Spoken information is for example given by an instructor. Written information, in the form of text or illustrations or both, is for example given in books or courses; by way of example, the Kama Sutra is a well-known book giving information on a large number of sexual positions. More recently, such information (spoken or written) has been made available on CD or DVD, for the user to consult via his/her PC.
Generally, persons who are not experienced in the activity are in need of guidance. If they do not know what to do, they may hesitate to start in the first place, they may be afraid to end up in an embarrassing situation, and they may even be afraid of getting injured. Such guidance not only involves giving information on the activity in advance, but also, and perhaps more importantly, giving feedback to the persons on how they are doing, and advising them on what their next movement should be in order to reach the desired position. However, performing such an activity involves intimacy, especially in the case of sexual activities, so guidance by an instructor is not suitable. On the other hand, the process of referring to a book or a manual, or reading instructions from a computer display, while allowing for intimacy, is cumbersome, impractical, annoying, and reduces the spontaneity of the activity.
SUMMARY OF THE INVENTION
A general objective of the present invention is to provide an apparatus overcoming the above problems.
According to an important aspect of the present invention, an interactive activity assistance apparatus comprises means for monitoring the current position(s) of the body or bodies of the user(s), means comparing these positions with the desired positions, and means for issuing advice to the user(s) on the basis of positional differences detected. Further advantageous elaborations are mentioned in the dependent claims.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects, features and advantages of the present invention will be further explained by the following description of one or more preferred embodiments with reference to the drawings, in which same reference numerals indicate same or similar parts, and in which: Figure 1 schematically illustrates an activity assistance apparatus according to the present invention;
Figure 2 schematically illustrates a second embodiment of an activity assistance apparatus according to the present invention.
DETAILED DESCRIPTION OF THE INVENTION
Figure 1 schematically shows a person P lying on a flat surface, for instance a bed B. It is noted that the present invention applies specifically to the situation of two persons engaged in a sexual activity, which persons will be mutually distinguished as P1 and P2, but for the sake of convenience only one person is shown in Figure 1. An activity assistance apparatus according to the present invention, in the following abbreviated as AAA, is generally indicated by the reference numeral 1. AAA 1 includes position monitor means 10 for monitoring the current positions of the bodies of the persons P1 and P2, wherein each body position is defined in any suitable way.
Defining body position involves defining a suitable model for the body. By way of example, such a model may be based on the approximation that the body consists of stiff parts, such as head, torso, upper arm, lower arm, upper leg, lower leg, connected together at joints, such as neck, shoulder, elbow, knee. A joint may have one (elbow, knee) or more (shoulder) degrees of angular freedom. While the body parts typically have relatively large spatial dimensions, the joints may be approximated as points in space. Thus, the body position may be defined as the collection of 3D locations of the joints plus the angular values (angles) of the respective joints. All these values together form coordinates in a multidimensional space.
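The joint-based model described above can be sketched as a simple data structure. This is an illustrative sketch only: the particular joint list, the single angle per joint, and the flat coordinate layout are assumptions, not details from the patent.

```python
from dataclasses import dataclass

# Illustrative body model: each joint has a 3D location and, for simplicity
# here, one angle; everything is flattened into a single coordinate vector
# in the multidimensional pose space described in the text.
JOINTS = ["neck", "l_shoulder", "r_shoulder", "l_elbow", "r_elbow", "l_knee", "r_knee"]

@dataclass
class BodyPose:
    positions: dict  # joint name -> (x, y, z) location in space
    angles: dict     # joint name -> joint angle in radians

    def as_vector(self):
        """Flatten joint locations and angles into one coordinate list."""
        coords = []
        for j in JOINTS:
            coords.extend(self.positions[j])
            coords.append(self.angles[j])
        return coords
```

With this layout, each pose becomes a point in a 28-dimensional space (7 joints × 3 location coordinates + 7 angles), which is what the distance comparison later in the text operates on.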
There are several implementation possibilities for the position monitor means 10, in the following abbreviated as PMM. Since examples of such implementations are commercially available, it suffices here to give only a brief explanation. In one example, the PMM 10 comprises a plurality of reflective markers to be attached to specified locations on the body (for instance the joints), a plurality of light sources for illuminating the scene, and a plurality of cameras associated with the light sources for receiving light reflected by the markers. In another example, the PMM 10 comprises a plurality of cameras arranged around the scene. It goes without saying that such systems are not very practical for use by individuals to assist them in a sexual activity, if only for reasons of complexity and cost. Therefore, the present invention prefers that the PMM 10 comprises a single image capture means 11 arranged above the scene and viewing downwards, for instance suspended from the ceiling C, as shown. Preferably, the image capture means 11, in the following abbreviated as ICM, may comprise a stereo camera (or an equivalent array of two juxtaposed mono cameras) or a 3D camera. 3D cameras are known per se, and commercially available, for instance from Mesa Imaging AG, Zurich, Switzerland. By way of example, reference is made to the article "An all-solid-state optical range camera for 3D real-time imaging with sub-centimeter depth resolution (SwissRanger™)" by Thierry Oggier et al. in SPIE Proc. Vol. 5249-65, St. Etienne (2003). In both cases, the ICM 11 provides a 2D top view image and a 2D depth profile image. More particularly, the ICM 11 will provide a stream of such images at a certain image rate (number of images per unit time).
The PMM 10 further comprises an image processor 12 receiving the image stream from the ICM 11, programmed to process the images so as to derive therefrom information defining the body positions, typically as a set of coordinates in said multidimensional space; an output signal provided by the image processor 12 is indicated as Sp, and the AAA 1 comprises a central controller 20 receiving the output signal Sp from the image processor 12, in the following abbreviated as IP. Since image processors capable of image recognition and/or processing are known per se, an elaborate explanation of the design and functioning of the IP 12 is not needed here. By way of non-limiting example, it is noted that the IP 12 may estimate joint positions and angles as an optimum fit to the image data, i.e. the set of joint positions and angles that has the highest probability of resulting in the observed top view image and depth profile. The central controller 20, in the following abbreviated as CC, has a user input interface 21 and a memory 22. The memory 22 contains data defining target body positions, for instance body positions as specified in the Kama Sutra. With the user input interface 21, in the following abbreviated as UII, the user can input a selection of a desired target body position from the memory 22. The UII 21 may for instance comprise a keyboard and/or a mouse and/or a touch screen.
In practice, it may happen that the image data result in interpretation ambiguities, in that an observed top view image and depth profile can result from different body positions. Such ambiguities may be solved by the IP 12 by keeping track of history, e.g. of kinematic quantities such as position and velocity (the rate of change of position). Further, for the case of only one body, techniques for solving such ambiguities have been proposed in the literature, for instance the article "Recovering 3D Human Pose from Monocular Images" by A. Agarwal and B. Triggs in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 1, pp. 44-58, 2006, or the article "Silhouette Lookup for Monocular 3D Pose Tracking" by N. R. Howe in Image and Vision Computing (in press), 2006. Such known techniques can be used by the IP 12, but an adaptation is needed when two entangled bodies are present. To this end, the present invention proposes that the user-selected target body position is communicated to the IP 12, and that the IP 12 may assume that the persons P1 and P2 have their bodies roughly in the correct positions or in associated intermediate positions. Furthermore, the desired end pose known by the system contains labels of which body parts belong to which person. This prior information can be used to assign a high probability for observed, clearly separate body parts to belong to one of the two persons. Thus, the IP 12 has available information that can be used to initialize the search for the observed joint angles and that will substantially reduce the search space. The search algorithm will then hypothesize different labels and joint angles for the unassigned body parts and return the total labeling and set of joint angles with the highest posterior probability.
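The search-space reduction described above, in which the estimator starts from the user-selected target pose, might be sketched as follows. The idea of sampling hypotheses around the target pose comes from the text; the sampling scheme, spread, and sample count are illustrative assumptions.

```python
import random

# Sketch: because the users are assumed to be roughly in the selected target
# pose (or an associated intermediate pose), candidate joint-angle hypotheses
# are drawn from a small neighbourhood of the target angles rather than from
# the full physically possible range, shrinking the search space considerably.
# The spread (radians) and candidate count are illustrative assumptions.
def init_candidates(target_angles, n=100, spread=0.3, seed=0):
    rng = random.Random(seed)
    return [
        [a + rng.uniform(-spread, spread) for a in target_angles]
        for _ in range(n)
    ]
```

Each candidate would then be scored against the observed top view and depth profile, and the best-scoring labeling and angle set retained, as the text describes.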
The CC 20 is programmed to compare the measured body positions with the target body positions, and to communicate a result of this comparison to the users. In one example, the CC 20 calculates a value indicating how well the current body positions correspond to the target body positions. This value may for instance be calculated as the distance in said multi-dimensional space between the current body positions and the target body positions as follows:

D^2 = Σ_{i=1}^{N} (x_i - a_i)^2

in which D indicates said distance, a_i indicates a target coordinate, x_i indicates a measured coordinate, and N indicates the number of coordinates.
It is also possible that the distance D is calculated as the sum of the absolute values of the coordinate differences |x_i - a_i|.
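Both distance measures described above can be sketched directly, assuming the measured and target poses are available as plain coordinate lists in the same multidimensional space:

```python
import math

# Compares a measured pose x with a target pose a, both given as flat
# coordinate lists. Returns the Euclidean distance D (from
# D^2 = sum_i (x_i - a_i)^2) and the sum-of-absolute-differences
# alternative sum_i |x_i - a_i| mentioned in the text.
def pose_distance(x, a):
    d2 = sum((xi - ai) ** 2 for xi, ai in zip(x, a))
    d_abs = sum(abs(xi - ai) for xi, ai in zip(x, a))
    return math.sqrt(d2), d_abs
```

For example, poses differing in a single coordinate by 2 give D = 2 under both measures; the absolute-difference variant is cheaper and weights large single-joint deviations less heavily than the squared form.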
In a particularly preferred embodiment, the AAA 1 comprises output means 30 for issuing advice to the users on the basis of the positional differences detected. In an embodiment, the CC 20 is designed to communicate said difference value to the user via visual means or audio means. For example, the CC 20 may be provided with a display (not shown for sake of simplicity) having a color indicator ranging from red to green.
In a preferred embodiment, the output means 30 comprise a loudspeaker 31, and the CC 20 is designed to generate spoken instructions to the users. Specifically, once the CC 20 knows the current body positions and thus knows the deviation from the desired body positions, it can issue an advice to the persons as to which body parts should be moved into which direction in order to come closer to the desired position (for instance, a spoken instruction can be "lift right knee"). Such instructions may typically involve pre-defined instructions stored in an instruction memory 23 associated with the CC 20.
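The instruction-selection step above might work by finding the joint with the largest deviation and looking up a stored phrase. This is a hedged sketch: the joint names, the vertical-coordinate simplification, and the instruction strings (beyond the "lift right knee" example in the text) are assumptions.

```python
# Hypothetical pre-defined instruction table, standing in for the
# instruction memory 23; keys pair a joint with a direction of motion.
INSTRUCTIONS = {
    ("r_knee", 1): "lift right knee",
    ("r_knee", -1): "lower right knee",
    ("l_elbow", 1): "raise left elbow",
    ("l_elbow", -1): "lower left elbow",
}

def next_instruction(current, target):
    """current/target map joint name -> vertical coordinate (a simplification).

    Picks the joint deviating most from the target and returns the stored
    phrase for moving it in the right direction.
    """
    joint = max(current, key=lambda j: abs(target[j] - current[j]))
    direction = 1 if target[joint] > current[joint] else -1
    return INSTRUCTIONS.get((joint, direction), "hold position")
```

Addressing the largest deviation first is one plausible policy; the patent only requires that the advice tell the users which body part to move in which direction.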
In a more advanced embodiment, indicated generally by reference numeral 2 in figure 2, the AAA comprises display means 40 for displaying images. The display means 40 may involve a CRT type display, or an LCD type display, or a plasma type display, or the like. It is also possible that the display means involve a projector and a projection screen. Since displays of the above types are commonly known, a more detailed explanation of their design and functioning is not needed here.
In this second embodiment, the CC 20 is designed to control the display means 40 such as to show images representing the selected desired target position(s). This in itself is already helpful to the user(s), since it gives the user(s) visual information on the target position(s) to be achieved. As a further elaboration of the present invention, the CC 20 is designed to control the display means 40 such as to overlap said images with images representing the current position(s) of the user(s) as calculated. This is even more helpful to the user(s), since it gives the user(s) visual information on the difference between his/her/their current position(s) and the target position(s), and thus allows him/her/them to easily determine what should be the next movement, further assisted by audio information (spoken instructions) as mentioned above.
In the case of two or more persons, the different bodies can be distinguished by showing the images in different colors. In uncertain areas, the color could be made a mixture of the two colors depending on the confidence of it belonging to either person. The highest degree of uncertainty will be in the areas where the two bodies touch each other. Further, it is possible that the image parts representing the body parts that should be moved next are highlighted, for instance by a different color, a higher intensity, a higher color saturation, or a combination of these. Further, it is possible that the CC 20 is designed to show said images stationary, but it is even more helpful to the user(s), and therefore preferred, when the CC 20 is designed to show said images moving from the current position(s) to the target position(s) or to a next intermediate position(s).
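The confidence-weighted color mixture described above can be sketched as a per-pixel blend. The linear blend and the specific RGB values are assumptions for illustration.

```python
# Where ownership of a displayed body part is uncertain (e.g. where the two
# bodies touch), the display color is mixed from the two persons' colors
# according to the confidence that the pixel belongs to person 1.
def blend_color(color_p1, color_p2, conf_p1):
    """color_p1/color_p2: RGB tuples; conf_p1 in [0, 1] is the confidence
    that the pixel belongs to person 1. Returns the blended RGB tuple."""
    return tuple(
        round(c1 * conf_p1 + c2 * (1.0 - conf_p1))
        for c1, c2 in zip(color_p1, color_p2)
    )

RED, BLUE = (255, 0, 0), (0, 0, 255)  # illustrative per-person colors
```

At full confidence the pixel takes one person's pure color; at 0.5 it sits midway between the two, visually flagging the ambiguous contact regions the text mentions.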
Further, it is possible that the images are shown as seen from the viewpoint of the camera, but it is also possible that the CC 20 is capable of rotating the scene to be displayed in 3D, so that the images are shown from a different, more suitable angle.
Summarizing, the present invention provides an activity assistance apparatus for assisting one or more person(s) in performing an activity. The apparatus comprises:
- a memory containing information defining at least one body position; - position monitor means monitoring the current position(s) of the body or bodies of the user(s), and providing in real time output signals defining said current position(s);
- a central controller receiving the output signal from the position monitor means and comparing said current position(s) with the selected desired position(s); - output means.
The central controller compares the measured current body position(s) with the target body position(s), and outputs signals through said output means to communicate a result of this comparison to the user(s).
While the invention has been illustrated and described in detail in the drawings and foregoing description, it should be clear to a person skilled in the art that such illustration and description are to be considered illustrative or exemplary and not restrictive. The invention is not limited to the disclosed embodiments; rather, several variations and modifications are possible within the protective scope of the invention as defined in the appended claims. For instance, it is possible that the IP 12 and the CC 20 are integrated. It is possible, for instance, that the CC 20 translates the output signal from the position monitor means to velocities of body parts and compares said current velocities with selected velocities or velocity patterns. It is also possible that the activity assistance apparatus 1 for assisting one or more person(s) (P) in performing an activity comprises a memory 22 containing information defining at least one body position; position monitor means 10 for monitoring the current position(s) of the body or bodies of the user(s), and for providing real time output signals defining said current position(s); a central controller 20 receiving the output signal from the position monitor means 10, the central controller being programmed for comparing said current position(s) with the selected desired position(s); and output means 30; wherein the central controller 20 is programmed to compare the measured current body position(s) with the target body position(s), and to output signals through said output means 30 to communicate a result of this comparison to the user(s). The output means 30 may further comprise a loudspeaker 31. The central controller 20 may be designed to generate spoken instructions to the user(s), wherein the central controller 20 is provided with an instruction memory 23 containing pre-defined instructions stored therein.
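The velocity variant mentioned in the paragraph above could be realized by finite differences over successive real-time position samples; the sampling interval and coordinates below are illustrative assumptions.

```python
# Finite-difference velocity estimate from two successive position samples
# of the position monitor means, taken dt seconds apart. Each sample is a
# flat coordinate list in the same pose space used for position comparison.
def estimate_velocity(prev_coords, curr_coords, dt):
    return [(c - p) / dt for p, c in zip(prev_coords, curr_coords)]
```

The resulting velocity vector could then be compared against selected velocities or velocity patterns in the same way positions are compared against target positions.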
It is also possible that the activity assistance apparatus 2 for assisting one or more person(s) (P) in performing an activity comprises a memory 22 containing information defining at least one body position; position monitor means 10 for monitoring the current position(s) of the body or bodies of the user(s), and for providing real time output signals defining said current position(s); a central controller 20 receiving the output signal from the position monitor means 10, the central controller being programmed for comparing said current position(s) with the selected desired position(s); and output means 40; wherein the central controller 20 is programmed to compare the measured current body position(s) with the target body position(s), and to output signals through said output means 40 to communicate a result of this comparison to the user(s), wherein the output means comprise display means 40 for displaying images. In such an embodiment the central controller 20 may be designed to control the display means 40 such as to show images representing the selected desired target position(s). The central controller 20 may further be designed to control the display means 40 such as to overlap said images with images representing the measured current position(s) of the user(s). The central controller 20 can be designed to distinguish two or more different bodies by showing the respective images in different colors. The central controller 20 can be designed to highlight the image parts representing the body parts that should be moved. The central controller 20 can be designed to show said images representing the measured current position(s) of the user(s) moving from the current position(s) to the target position(s) or to a next intermediate position(s).
Further, it is important that the position monitor means 10 provides its output signals in real time, i.e. corresponding to the actual and current situation, but it is not necessary that these output signals are updated constantly: the output signals may be provided or updated regularly, i.e. with some mutual time distance. This time distance is preferably in the order of a normal TV frame period, i.e. 20 ms or less, but may possibly be longer, for instance in the order of 1 second.
Further, the above explanation is based on the assumption that the activity takes place on a substantially horizontal surface, and the position of the camera 11 is chosen in conjunction with this assumption, i.e. arranged above this surface and looking downwards, preferably suspended from the ceiling. However, it is also possible that the activity takes place against a substantially vertical surface, in which case the camera may be positioned more suitably in front of such surface and looking horizontally towards said surface. Further, it is possible that the camera equipment and the processing elements IP 12 and CC 20 are implemented without non-volatile memory elements (apart from the memories 22 and 23 as mentioned above), so that it is not possible that the system will hold any of the images captured from the user(s), at least not after having been switched off. This will safeguard the privacy of the user(s). Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.
In the above, the present invention has been explained with reference to block diagrams, which illustrate functional blocks of the device according to the present invention. It is to be understood that one or more of these functional blocks may be implemented in hardware, where the function of such functional block is performed by individual hardware components, but it is also possible that one or more of these functional blocks are implemented in software, so that the function of such functional block is performed by one or more program lines of a computer program or a programmable device such as a microprocessor, microcontroller, digital signal processor, etc.

Claims

CLAIMS:
1. Activity assistance apparatus (1; 2) for assisting one or more person(s) (P) in performing an activity, the apparatus comprising:
- a memory (22) containing information defining at least one body position;
- position monitor means (10) for monitoring the current position(s) of the body or bodies of the user(s), and for providing real time output signals defining said current position(s);
- a central controller (20) receiving the output signal from the position monitor means (10), the central controller being programmed for comparing said current position(s) with the selected desired position(s); - output means (30; 40); wherein the central controller (20) is programmed to compare the measured current body position(s) with the target body position(s), and for outputting signals through said output means (30; 40) to communicate a result of this comparison to the user(s).
2. Apparatus according to claim 1, wherein the memory (22) contains information defining at least two body positions.
3. Apparatus according to claim 2, wherein the apparatus further comprises a user interface (21) for inputting a selection of a desired position.
4. Apparatus according to claim 1, wherein the position monitor means (10) comprises a single image capture means (11).
5. Apparatus according to claim 4, wherein the image capture means comprises a stereo camera or a 3D camera.
6. Apparatus according to claim 4, wherein the position monitor means (10) comprises an image processor (12) receiving the output signals from the image capture means (11), the image processor (12) being programmed to process said signals such as to derive therefrom information defining the body position(s), and for providing an output signal (Sp) containing said derived information.
7. Apparatus according to claim 1, wherein the output signals from the central controller (20) contain information indicating to the user(s) the positional differences detected between the measured current body position(s) and the target body position(s).
8. Apparatus according to claim 1, wherein the central controller (20) calculates a distance in a multi-dimensional space between the current body positions and the target body positions.
9. Apparatus according to claim 1, wherein the output signals from the central controller (20) contain advice to the user(s) relating to a next movement for approaching the target body position(s).
10. Apparatus according to claim 1, wherein the output means (30) comprise a loudspeaker (31), and wherein the central controller (20) is designed to generate spoken instructions to the user(s).
11. Apparatus according to claim 10, wherein the central controller (20) is provided with an instruction memory (23) containing pre-defined instructions stored therein.
12. Apparatus according to claim 1, wherein the output means comprise display means (40) for displaying images, wherein the central controller (20) is designed to control the display means (40) such as to show images representing the selected desired target position(s), wherein the central controller (20) is designed to control the display means (40) such as to overlap said images with images representing the measured current position(s) of the user(s), wherein the central controller (20) is designed to show said images representing the measured current position(s) of the user(s) moving from the current position(s) to the target position(s) or to a next intermediate position(s).
13. Apparatus according to claim 1, wherein the output means comprise display means (40) for displaying images, wherein the central controller (20) is designed to distinguish two or more different bodies by showing the respective images in different colors.
14. Apparatus according to claim 1, wherein the output means comprise display means (40) for displaying images, wherein the central controller (20) is designed to highlight the image parts representing the body parts that should be moved.
15. Apparatus according to claim 1, wherein the central controller (20) is designed to distinguish body parts of one user or to distinguish body parts of two users on the basis of knowledge regarding the target position(s).
PCT/IB2009/054067 2008-09-26 2009-09-17 Activity assistance apparatus WO2010035182A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP08165191.1 2008-09-26
EP08165191 2008-09-26

Publications (1)

Publication Number Publication Date
WO2010035182A1 (en) 2010-04-01

Family

ID=41228374

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2009/054067 WO2010035182A1 (en) 2008-09-26 2009-09-17 Activity assistance apparatus

Country Status (1)

Country Link
WO (1) WO2010035182A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4813436A (en) * 1987-07-30 1989-03-21 Human Performance Technologies, Inc. Motion analysis system employing various operating modes
US20050013467A1 (en) * 2003-07-16 2005-01-20 Mcnitt Michael J. Method and system for physical motion analysis and training of a golf club swing motion using image analysis techniques
WO2005088541A1 (en) * 2004-03-08 2005-09-22 Hendrik Fehlis Real-time movement analysis device
WO2007019441A2 (en) * 2005-08-04 2007-02-15 Recognition Insight, Llc Swing position recognition and reinforcement
GB2430830A (en) * 2005-09-28 2007-04-04 Univ Dundee Image sequence movement analysis system using object model, likelihood sampling and scoring
US20070238538A1 (en) * 2006-03-16 2007-10-11 Priester William B Motion training apparatus and method
WO2008023250A1 (en) * 2006-08-25 2008-02-28 The Sports Production Company Limited Motion coaching device, method and system

Similar Documents

Publication Publication Date Title
US9110557B2 (en) System and method for tracking and mapping an object to a target
JP5547968B2 (en) Feedback device for instructing and supervising physical movement and method of operating
CN101379455B (en) Input device and its method
US9495008B2 (en) Detecting a primary user of a device
CN111091732B (en) Cardiopulmonary resuscitation (CPR) instructor based on AR technology and guiding method
JP4278979B2 (en) Single camera system for gesture-based input and target indication
CN107004279A (en) Natural user interface camera calibrated
US20070098250A1 (en) Man-machine interface based on 3-D positions of the human body
US10600253B2 (en) Information processing apparatus, information processing method, and program
JP6062039B2 (en) Image processing system and image processing program
CN109196406B (en) Virtual reality system using mixed reality and implementation method thereof
WO2012081194A1 (en) Medical-treatment assisting apparatus, medical-treatment assisting method, and medical-treatment assisting system
JP2015528359A (en) Method and apparatus for determining a point of interest on a three-dimensional object
JP2013061937A (en) Combined stereo camera and stereo display interaction
CN102981616A (en) Identification method and identification system and computer capable of enhancing reality objects
NL2022371B1 (en) Method and assembly for spatial mapping of a model of a surgical tool onto a spatial location of the surgical tool, as well as a surgical tool
CN109558004B (en) Control method and device for human body auxiliary robot
JP2024054137A (en) Image Display System
Chen et al. Camera networks for healthcare, teleimmersion, and surveillance
JPH1020998A (en) Position indication device
KR101426378B1 (en) System and Method for Processing Presentation Event Using Depth Information
CN111515946B (en) Control method and device for human body auxiliary robot
WO2010035182A1 (en) Activity assistance apparatus
JP2021177580A (en) Information processing apparatus, information processing method, and program
CN113223344B (en) Big data-based professional teaching display system for art design

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09787224

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09787224

Country of ref document: EP

Kind code of ref document: A1