WO2011045422A1 - Method and system for measuring emotional probabilities of a facial image - Google Patents

Method and system for measuring emotional probabilities of a facial image

Info

Publication number
WO2011045422A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
facial
probabilities
features
model
Prior art date
Application number
PCT/EP2010/065544
Other languages
French (fr)
Inventor
Tim Llewellynn
Matteo Sorci
Original Assignee
Nviso Sàrl
Priority date
Filing date
Publication date
Application filed by Nviso Sàrl
Publication of WO2011045422A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G06V40/176 Dynamic expression
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Abstract

Disclosed is a method related to the measurement of the emotional probabilities of a facial image, comprising the steps of: receiving a facial image of a person in an image receiving unit (82); building a model based representation from the image; extracting a feature description of said model based representation; generating a measurable description of the image comprising facial features, based on movements, presence of features, and visual appearance found in the model; and outputting from these computed facial features predicted classification probabilities from said image. The invention relates as well to a system with corresponding features.

Description

Method and System for Measuring Emotional Probabilities of a
Facial Image
Field of the invention
The present invention concerns a method and a system related to the measurement of the emotional probabilities of a facial image according to the independent claims.
Background of the invention
In the United States, and elsewhere throughout the world, advertising is extensively used to promote consumer and commercial products. The intent of advertising is to leave embedded impressions of brands and products, creating brand awareness and influencing decision-making. It is almost universally accepted that, as between or among commodity products, which are generally similar to one another in content, price, or quality, successful advertising can help a particular product achieve much greater market penetration and financial success than an otherwise similar product.
Advertising, and particularly consumer advertising, although a multi-billion dollar industry in the United States alone, is an area wherein workers find it extremely difficult to create and reproduce what prove to be consistently successful advertising campaigns. While it is often easy to predict that the response to a particular proposed advertisement or campaign will be unfavorable, it is not known how to assure success on a consistent basis. Accordingly, it is common to find, long after decisions are made and expenditures incurred, that such efforts have simply not been successful, in that the advertisement or campaign failed to produce sales in amounts proportionate to the expenditure of effort and money.
Key to improving this situation lies in understanding the drivers of consumer behavior and unlocking the buyer decision making process, which today is among the biggest challenges in marketing research. Recent findings in cognitive neuroscience and Neuroeconomics (Loewenstein 2000; Mellers and McGraw 2001) have made it clear that emotions play an even larger role in decision making than so far assumed. The idea of rational decision making, with emotion and feelings as noise, has ultimately been rejected. Decision-making without the influence of emotions is not possible. Sound and rational decision-making depends on prior accurate emotion processing (Bechara and Damasio, 2005). Thus the importance of including emotional aspects in consumer research is even greater than was earlier recognized. Neuroscience findings support the notion that emotions can appear prior to cognition, but also show that the influence goes both ways. Neuroscience has given a foundation for new research on emotions in consumer research, also known as Neuroeconomics or consumer neuroscience. In advertising, neuroscience methods have been applied by e.g. Ambler, Ioannides and Rose (2000). Yoon et al. (2006) test the notion of brand personality, and Erk et al. (2002) made an interesting study of consumer choice between products in the form of different car types, finding differences in activation of reward areas related to different types of cars.
Despite these latest advancements in our understanding of the impact of emotions on consumer decision making, few companies have come close to exploiting emotions in the design of new products, marketing material or advertising campaigns. Perhaps the root of this misplacement can be attributed to the consumer model borrowed from neoclassical economics. From a business perspective, dealing with a rational consumer paradigm is easier. It can be quantified, segmented, and put into a spreadsheet. If emotions cannot be measured and analyzed in a comparable way, they cannot be managed. Thus there is a clear need for new scientific methods for measuring and assessing the emotional impact of marketing stimuli on consumers, methods which are compatible with the processes and tools that businesses use to analyze and predict consumer decisions.
Although the importance of emotions in determining consumer behavior is now understood, there are few objective methods to collect and analyze such emotional responses. The few methods that do exist have been borrowed from medical or physiological fields and are not specifically adapted to meet the needs of today's marketing practitioners. The methods used throughout time to measure emotions in consumer research can be divided into two overall groups: explicit measures such as verbal and visual self-report, and implicit measures such as autonomic measures and brain imaging.
Self-report is the most commonly used explicit method for measuring emotions, especially connected to consumer behavior. It is commonly used in focus group interviews, telephone surveys, paper-and-pencil questionnaires, online surveys, and instrument-mediated
measurement systems using sliders or dials to capture moment-to-moment changes in emotional reactions. Responses measured include stated preferences among alternative products or messages, propensities to buy, likelihood of use, aesthetic judgments of product and packaging designs, moment-to-moment affective responses, and other predictions of likely future behaviors.
Although commonly used due to the low costs of acquiring the response data, self-report is difficult to apply to measuring emotions, since emotions are often unconscious or simply hard to define, causing bias in the reported emotions. They involve a long list of emotion adjectives, and the rating can cause fatigue in the respondents, which can damage reliability. Furthermore, self-report involves cognitive processing, which may distort the original emotional reaction. Recently, researchers have begun measuring naturally occurring biological processes to overcome some of these problems of self-reporting. These measures are often referred to as implicit measures and can be further divided into autonomic measures and brain imaging. These measurements have been used in consumer research since as early as the 1920s, mostly applied to measuring responses to advertising. Autonomic measures rely on bodily reactions that are partially beyond an individual's control. They therefore overcome the cognitive bias linked to self-report. However, most autonomic measures are conducted in a laboratory setting, which is often criticized as being out of social context. The most common autonomic methods include the measurement of facial expressions via facial electromyography (EMG) or the Facial Action Coding System (FACS), and electrodermal reaction (EDR) or skin conductance, which measures activation of the autonomic nervous system.
US7113916 discloses a method to score visible facial muscle movements in videotaped interviews, and U.S. Pat. No. 20030032890 by Genco et al. describes a method using facial electromyography (EMG) to measure facial muscle activity via electrodes placed on various locations on the face. The electrical activity is used to gauge emotional response to advertising. The main limitations of these methods are: 1) they must be conducted in a laboratory setting with specific equipment; 2) they require specialized skills not commonly available to market researchers to interpret the data; 3) respondents are highly affected by the fact that they know they are being measured (physical contact of sensors) and therefore try to control muscle reactions (Bolls, Lang and Potter, 2001); 4) only a single metric is used to assess the impact of emotion on the presented stimulus, thus limiting the usefulness of the metric, and in the case of EMG it is nearly always impossible to reliably aggregate the results when measures are combined or averaged across a sample of consumers, as different individuals have different baseline levels of activity that can bias such aggregation.
Electrodermal reaction (EDR) or skin conductance measures activation of the autonomic nervous system, which indicates 'arousal'. The EDR measure indicates the electrical conductance of the skin related to the level of sweat in the eccrine sweat glands, which is involved in emotion-evoked sweating, and is conducted using electrodes. However, this method requires a lot of experience and sensitive equipment. Furthermore, EDR only measures the occurrence of arousal, not the valence of the arousal, which can be both positive and negative. Another problem with using EDR is individual variation and situational factors such as fatigue, medication, etc., which make it hard to know what is being measured. U.S. Pat. No. 6,453,194 by Hill utilizes synchronized EDR signals to measure reactions to consumer activities, and U.S. Pat. No. 6,584,346 by Flugger describes a multimodal system and process for measuring physiological responses using EDR, EMG, and brainwave measures, but only for the purpose of assessing product-related sounds, such as the sounds of automobile mufflers.
Brain imaging is a new method in consumer research. The method has entered from neuroscience and offers the opportunity for interesting new insights. Emotions are pointed out as an area of specific relevance. However, the method is extremely expensive, requires expert knowledge and has severe technological limitations for experimental designs. Furthermore, knowledge within neuroscience is still relatively young and therefore the complexity of the problems investigated must be relatively simple. The use in consumer research is so far relatively limited, and so are the examples of its use for the measurement of emotions in consumer research. The most commonly applied methods from neuroscience are Electroencephalography (EEG), Magnetoencephalography (MEG), Positron Emission Tomography (PET) and Functional Magnetic Resonance Imaging (fMRI). U.S. Pat. Nos. 6,099,319 by Zaltman and 6,292,688 by Patton focus on the use of neuroimaging (positron emission tomography, functional magnetic resonance imaging, magnetoencephalography and single photon emission computed tomography) to collect brain functioning data while subjects are exposed to marketing stimuli and perform experimental tasks (e.g., metaphor elicitation).
Other prior art related to the current invention uses dedicated equipment and techniques to estimate the presence of emotions.
US2008043025 describes the use of Digital Image Speckle Correlation (DISC) to map facial deformations due to voluntary and/or involuntary, often subtle, facial expressions. The facial expressions may be the result of a response to internal or external stimuli. It is possible to develop
quantitative and qualitative characterizations of the individual's response. The techniques of the present invention can be used to extend and improve the FACS, which includes assigning human emotions to facial expressions. It is also possible to use test subject feedback to correlate facial expressions to human emotion. US5676138 discloses a multimedia computerized system for detecting emotional responses of human beings and the changes therein over time. The system includes a stimulus presentation device for
presenting a stimulus, such as a television commercial, occurring over a predetermined period of time to each of one or more individuals forming a population sample; measuring devices for measuring and recording a plurality of values associated with each of a plurality of physiological variables, such as heart rate, electromyography and electrodermal activity, each associated with one or more individuals; software programmed for receiving and translating each value measured by statistical calculation into a unitless statistical measure known as a z-score, then graphing the z-score on an interaction index, and then associating automatically the z-score with a semantic descriptor for an associated emotion or state of feeling; and, an interactive multimedia computer for electronically storing, displaying and retrieving information and capable of at least visually displaying one or more stimuli presented at a given time interval, at least one semantic descriptor associated with the stimulus presented, and an interaction index associated with the semantic descriptor associated with the stimulus presented.
US6292688 discloses a method of determining the extent of the emotional response of a test subject to stimuli having a time-varying visual content, for example, an advertising presentation. The test subject is positioned to observe the presentation for a given duration, and a path of communication is established between the subject and a brain wave detector/analyzer. The intensity component of each of at least two different brain wave frequencies is measured during the exposure, and each frequency is associated with a particular emotion. While the subject views the presentation, periodic variations in the intensity component of the brain waves at each of the particular frequencies selected are measured. The change rates in the intensity at regular periods during the duration are also measured. The intensity change rates are then used to construct a graph of plural coordinate points, and these coordinate points graphically establish the composite emotional reaction of the subject as the
presentation continues.
While prior art shows precise tools for measuring physiological activities using methods such as facial electromyography (EMG), digital image speckle correlation (DISC), galvanic skin response (EDR) or neurological activity like fMRI scanning used in consumer research, these tools nevertheless have significant limitations. They are impractical and very expensive if adapted to studies that demand large samples. The cost is high and the time required to carry out these types of experiments is quite long. They also demand that respondents meet in specially adapted facilities or laboratories. They also limit the ability to generalize conclusions from a statistical viewpoint, as they are in most cases only applied to small samples.
Accordingly, there is a need for systems and methods of measuring emotions of respondents in consumer research that avoid, or at least alleviate, these limitations and provide accurate measures of their emotional (non-verbal) responses. Furthermore, much of the prior art addresses methods for processing emotional consumer research data only in laboratory settings; none specifically describes a complete system and method for capturing and measuring emotional responses deployable outside the laboratory, such as that provided by aspects of the current invention.
Brief summary of the invention
It is one aim of the present invention to offer a method and a system related to the measurement of emotional probabilities of a facial image typically obtained from a consumer's non-verbal response to a marketing stimulus, which is more practical and less expensive if adapted to studies that demand large samples. It is another aim of the present invention to provide a method and a system related to the measurement of emotional probabilities of a facial image typically obtained from a consumer's non-verbal response to a marketing stimulus, which is less time consuming than the known methods. It is another aim of the present invention to provide a method and a system related to the measurement of emotional probabilities of a facial image typically obtained from a consumer's non-verbal response to a marketing stimulus, which can be easily carried out over a communication network such as the internet without the need for any special equipment other than a standard home computer.
It is another aim of the present invention to provide a method and a system related to the measurement of emotional probabilities of a facial image typically obtained from a consumer's non-verbal response to a marketing stimulus, which allows generalizing conclusions from a statistical viewpoint as they are applied to large samples.
According to the invention, these aims are achieved by means of a method related to the measurement of the emotional probabilities of a facial image comprising the steps of:
• receiving a facial image of a person in an image receiving unit;
• building at least one model based representation from the image;
• extracting a feature description of said model based representation;
• generating a measurable description of the image comprising facial features, based on movements, presence of features, and visual appearance found in the model;
• outputting from these computed facial features predicted classification probabilities from said image.
According to the invention, these aims are achieved as well by means of a system related to the measurement of the emotional
probabilities of a facial image with an image processing unit, said image processing unit comprising
• an image processing receiving unit;
• a face extraction unit for building a model based representation from the image and for extracting a feature description of said model based representation;
• a features computation unit for computing facial features which are a combination of spatial, appearance, and temporal features; and
• a classification unit which outputs from these computed facial features predicted classification probabilities from said image.
Brief Description of the Drawings
The invention will be better understood with the aid of the description of an embodiment given by way of example and illustrated by the figures, in which:
Fig. 1 represents one embodiment of a system in which the method steps can be carried out across a communications network in an online survey.
Fig. 2 is an overall flow chart of one embodiment showing the major steps to conduct an online survey, applying the method to measure emotional probabilities in order to assess the emotional impact of a marketing stimulus.
Fig. 3 is a detailed flow chart showing the step-by-step actions used to determine emotion probabilities as illustrated in Fig. 2.
Fig. 4 is the mask of the 55 facial landmarks representing the shape description of the face used to compute the spatial features reported in the tables in Fig. 7 and Fig. 8.
Fig. 5 is the mask of the featural descriptors used in the definition of the Expression Descriptive Units, listed in the table in Fig. 7.
Fig. 6 is the mask of the geometrical relationships of facial feature points, where the rectangles represent the regions of furrows and wrinkles. These measures represent the geometrical description of some of the main Action Units from the FACS. The linguistic description of these measures is reported in the table in Fig. 8.
Fig. 7 is the table of Expression Descriptive Units computed from geometric relationships of facial feature points.
Fig. 8 is the table of the geometrical measures describing some of the Action Units from the FACS.
Fig. 9 is the facial appearance associated with an appearance vector.
Fig. 10 is a system flow diagram of one embodiment describing how the method can be executed as part of an online web survey and across a communications network.
Fig. 11 illustrates how emotional probabilities of a facial image can be presented: a) top left shows a facial image and the corresponding spatial features that have been located; b) top right shows the emotional probabilities of a facial image displayed; c) bottom shows a history of emotion probabilities across a series of facial images.
Detailed Description of possible embodiments of the Invention
Fig. 1 represents one embodiment of a system in which the method steps can be carried out in an online survey. A stimulus 10 is presented to respondent 20, generally recruited to participate in the survey as belonging to a particular target market population. The stimulus is displayed on a display unit 30 while an image capture device 40, such as a webcam, captures non-verbal responses of the respondent, such as facial expressions or head and eye movements, while he is exposed to the stimulus 10 and answers questions of the survey. The survey respondent needs no special instructions while performing the survey in relation to his non-verbal response being imaged, i.e. he does not need to look into the camera, is free to move his body or head, can touch his face, etc.
After being exposed to the stimulus 10, the verbal responses to questions of the survey can be recorded using an input device 50 such as a keyboard or mouse. The recorded non-verbal and verbal responses can be stored directly on a local storage device 60, such as a memory of the computer, or directly and immediately transmitted across a communications network 70, such as the Internet, to servers for further processing (step a). The facial image of the non-verbal response is sent to an image processing server unit 80 (step b), while the verbal response data is sent to a data processing server unit 90 (step c). Directly after said images have been received at said image processing receiving unit 82, a face model is extracted from the image by the face extraction unit 84.
Next, the features computation unit 86 computes a facial description which is a combination of spatial, appearance, and temporal features. The feature description is given to a classification unit 88, which outputs predicted classification probabilities from said images. When the images are received from the image capturing device by the image processing receiving unit 82, the automatic calculations in units 84-88 are done continuously with the received images. Finally, the predicted classification probabilities of the facial image are sent from the image processing unit to the data processing unit for further analysis (step d).
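By way of a non-limiting illustration, the following minimal sketch shows how the processing of units 82-88 could be chained so that every received frame yields a vector of predicted class probabilities. The callables fit_face_model, compute_features and classify are hypothetical placeholders standing in for the face extraction, feature computation and classification units; the patent does not prescribe this code structure.

    # Minimal sketch (an assumption, not the patented implementation) of chaining
    # units 82-88 so that every received frame yields class probabilities.
    from typing import Callable, Dict, Iterable, List, Optional

    def process_frames(
        frames: Iterable,                                        # unit 82: images from the capture device
        fit_face_model: Callable[[object], Optional[object]],    # unit 84: face extraction, e.g. an AAM fit
        compute_features: Callable[[object], object],            # unit 86: spatial, appearance, temporal features
        classify: Callable[[object], Dict[str, float]],          # unit 88: predicted class probabilities
    ) -> List[Dict[str, float]]:
        results: List[Dict[str, float]] = []
        for frame in frames:
            model = fit_face_model(frame)
            if model is None:
                # Skip frames in which no face could be modelled.
                continue
            results.append(classify(compute_features(model)))
        return results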
The preferred embodiment of this invention is an online survey intended to test any marketing element, such as a concept, print advertising, or video advertising. However, alternative embodiments and applications are envisaged, such as measuring emotions:
in video games to enhance game play and communication in multiplayer games;
in digital animation production to digitize human actor performances;
in digital signage and out-of-home advertising networks for audience measurement;
in digital advertising to optimize real-time media buying;
in photo management to search and classify photos;
in home entertainment for targeted advertising;
in automotive to monitor drivers to improve driver safety;
in medical devices to monitor and diagnose patients for better standards and levels of care;
in retail kiosks to improve user experiences;
in interactive entertainment and marketing to provide more engaging user experiences;
in health-care services to monitor and intervene during patient care activities;
in customer service situations to improve the understanding of customer satisfaction or behavior;
in corporate training to train people in sales and client orientated job functions to better serve customers;
in passport photos to determine compliance with passport standards;
in human resources in the selection and hiring of personnel;
in human-computer interfaces to improve user interfaces;
in security situations to determine specific human behavior patterns and aid in interviewing.
In the case of the preferred embodiment of an online survey, Fig. 2 shows the overall steps, where the inventive method is described in 300. A survey can be generated in step 100 and conducted in step 200. The non-verbal responses of the survey are predicted in step 300 via images taken of the respondent during the survey. The predicted non-verbal responses can be merged with the verbal responses of the survey in step 400 to permit data analysis in step 500 of the survey response data. Fig. 3 illustrates how an automated non-verbal classification system generates a set of predicted probabilities for a provided image. The aim of 300 is to classify facial behaviors by class. The behaviors can be any responses that are communicated through facial expressions, head or eye movements, body language or pose and which can be observed through an image or series of facial images. In addition, this step can also comprise classifying demographic characteristics of the respondent such as gender, age, or race. The process starts in 310 with the system receiving an image across a communications system, from sources such as storage devices, a web camera or an IP camera; however, it is not limited to these means of transmission.
In 320 the image is processed to build a model based
representation. The aim of this process is to map features of the
respondent such as the face present in the image to a model based representation, which allows further descriptive processing in 330. Faces are highly variable, deformable objects, and manifest very different
appearances in images depending on pose, lighting, expression, and the identity of the person, and the interpretation of such images requires the ability to understand this variability in order to extract useful information. There are numerous methods to convert a deformable object, such as the face, into a model based representation; however, in 320 we prefer the use of Active Appearance Models (AAMs), although other model based
representations are possible. AAM is a technique that elegantly combines shape and texture models, in a statistical framework, and provides as output a mask of face landmarks as shown in Fig. 4. Faces are defined by marking up each example with points of correspondence (i.e. landmarks) over the set either by hand, or by semi- to completely automated methods. From these landmarks a shape model is built. Further, given a suitable warp function, a per-pixel correspondence is established between training objects, thereby enabling a proper modelling of texture variability.
Variability is modelled by means of a Principal Component Analysis (PCA), i.e. an eigenanalysis of the dispersions of shape and texture. By exploiting prior knowledge of the nature of the optimisation space, these models of shape and texture can be rapidly fitted to unseen images, thus providing image interpretation through synthesis.
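As an illustrative, non-limiting sketch of the shape-model part of this step, the following Python fragment builds a PCA model from aligned landmark annotations and synthesizes new shapes from model parameters. The function names, the fraction of variance kept and the data layout are assumptions made for illustration only; they are not the patented implementation.

    # Illustrative sketch of a PCA shape model built from aligned landmark sets.
    import numpy as np

    def build_shape_model(shapes: np.ndarray, variance_kept: float = 0.95):
        """shapes: (n_examples, 2 * n_landmarks) array of aligned (x, y) coordinates."""
        mean_shape = shapes.mean(axis=0)
        centred = shapes - mean_shape
        # Eigen-analysis of the landmark dispersion (the PCA step described above).
        eigvals, eigvecs = np.linalg.eigh(np.cov(centred, rowvar=False))
        order = np.argsort(eigvals)[::-1]
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]
        # Keep enough modes to explain the requested fraction of shape variance.
        cumulative = np.cumsum(eigvals) / eigvals.sum()
        n_modes = int(np.searchsorted(cumulative, variance_kept)) + 1
        return mean_shape, eigvecs[:, :n_modes], eigvals[:n_modes]

    def synthesize_shape(mean_shape, modes, params):
        """Generate a shape from model parameters: x = mean + P b."""
        return mean_shape + modes @ params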
The AAM model in 320 can be applied to images captured in uncontrolled environments. In this case, where pose, lighting, and occlusions can occur in unpredictable ways, special steps need to be taken to ensure the model built of the face is robust enough to handle these situations. Step 320 can include steps to deal with these situations (an illustrative sketch follows the list below). These steps can include:
Building multiple model representations of the face, where each model varies parameters related to pose, illumination, and occlusion.
Choosing the best model based on an error criterion related to some measure of the difference between a synthesized facial image from the model, and the original facial image.
Building multiple models in parallel on a parallel processing unit such as a Graphics Processing Unit (GPU) and calculating the error criterion in parallel.
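An illustrative sketch of the "best model by an error criterion" step listed above follows. Each candidate model is assumed to produce a synthesized face patch, and the model whose synthesis is closest to the observed face is retained; the sum-of-squared-differences error measure and the function names are assumptions made for illustration.

    # Illustrative sketch of retaining the candidate model whose synthesized face
    # best matches the observed face; the error measure is an assumption.
    import numpy as np

    def reconstruction_error(synthesized: np.ndarray, observed: np.ndarray) -> float:
        """Sum of squared pixel differences between the model synthesis and the image."""
        return float(np.sum((synthesized.astype(float) - observed.astype(float)) ** 2))

    def choose_best_model(candidate_syntheses: dict, observed_face: np.ndarray):
        """candidate_syntheses maps a model name (e.g. a pose/lighting variant) to its synthesis."""
        errors = {name: reconstruction_error(img, observed_face)
                  for name, img in candidate_syntheses.items()}
        best = min(errors, key=errors.get)
        return best, errors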
In 330 and 340 the model based representation built in 320 can be processed to extract a feature description. The aim of this processing is to generate a measurable description, based on movements, presence of features, and visual appearance found in the model, which can provide relevant visual cues to the classification step of 350. Numerous techniques can be employed to extract the feature description; however, in the case of building a feature description for emotion classification we prefer the use of a combination of the Facial Action Coding System (FACS), Expression Descriptive Units (EDU), and AAM Appearance Vectors. However, the steps 320-340 are not limited in any way to these preferred feature descriptions. The processing of 330 can be a single processing stage or divided into multiple stages. In the case where the non-verbal behavior is an emotional response, two stages are preferred, where the first stage involves
computing the measures coming from the FACS and the second stage computes a set of configurable measures such as the EDU. The FACS represents the leading standard for measuring facial expressions in behavioral science. The main measures suggested by this human observer system represent a valid base in the quest for variables characterizing the different expressions. Based on the FACS, the 6 basic emotions can be described linguistically using Ekman's AUs. In order to transform the AUs into a set of quantitative measures, we transform these appearance-change descriptors into a set of geometrical relationships of some featural points, shown in Fig. 6 and linguistically reported in Fig. 8. We use the shape mask, provided by the AAM (see Fig. 4), to measure the set of angles and distances detailed in Fig. 8. The second set of facial features, the EDU, describes the spatial relationships between facial components that have not been taken into account by the FACS. The EDU are computed based on the featural descriptors in Fig. 5 provided by the extracted shape of Fig. 4. The complete set of EDU is reported in the table in Fig. 7. The EDU descriptors can be static or contain dynamic temporal information such as movements and velocities in a 3D space. In order to consider measures capable of describing a face as a global entity, the third set of measures in 340 is introduced. This information is obtained by considering the appearance vector provided by the AAM fitting and matching the face in the processed image. An example of the appearance provided by this vector is shown in Fig. 9. The feature description is then passed to 340 for classification.
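As a toy, non-limiting illustration of turning the fitted shape mask into quantitative measures of the kind used for the FACS-derived measures and the EDU, the fragment below computes a few distances and angles from landmark coordinates. The specific landmark names, coordinates and measures are invented for illustration and do not correspond to the actual tables of Fig. 7 and Fig. 8.

    # Toy illustration of geometric measures (distances, angles) from landmarks.
    import numpy as np

    def distance(p, q) -> float:
        return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

    def angle_at(vertex, p, q) -> float:
        """Angle in degrees at `vertex` formed by the segments to p and q."""
        v1 = np.asarray(p, float) - np.asarray(vertex, float)
        v2 = np.asarray(q, float) - np.asarray(vertex, float)
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

    # Hypothetical (x, y) landmark positions for one fitted face mask.
    landmarks = {"inner_brow_left": (110, 80), "inner_eye_left": (112, 100),
                 "mouth_left": (95, 160), "mouth_right": (145, 160), "mouth_top": (120, 150)}

    measures = {
        "brow_to_eye_distance": distance(landmarks["inner_brow_left"], landmarks["inner_eye_left"]),
        "mouth_width": distance(landmarks["mouth_left"], landmarks["mouth_right"]),
        "mouth_corner_angle": angle_at(landmarks["mouth_left"], landmarks["mouth_right"], landmarks["mouth_top"]),
    }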
The aim of 340 is to classify the feature description computed in 330. The classification can be performed with numerous multi-class methods such as discrete choice models, support vector machines, neural networks, decision trees, random forests or Bayesian networks. In this case, discrete choice models are preferred for expression classification, as they have been shown to give superior accuracy performance. In the presented method, a distribution, rather than a unique categorization, of the perceived emotional responses to a visual stimulus is automatically calculated. Thereby a probability of emotion per image is used, employing statistical techniques to associate the emotion probability with its impact on the response to a presented stimulus. The inventive method does not rely on empirical methods (such as lookup tables or similar), but uses only statistical inferences on estimated emotional probabilities of the received images instead of scores based on the presence of emotional cues. The present approach is therefore not only different from state of the art approaches, but superior, as it is more objective, precise, and benefits from large sample sizes by using statistical inference on estimated emotional probabilities instead of scores based on the presence of emotional cues.
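As a minimal, non-limiting sketch of how a discrete choice (multinomial logit) classifier yields a probability distribution over emotion classes rather than a single label, consider the fragment below. The emotion list, feature dimensionality and weights are random placeholders rather than trained model parameters.

    # Minimal multinomial logit (discrete choice) sketch: one probability per emotion class.
    import numpy as np

    EMOTIONS = ["happiness", "surprise", "fear", "disgust", "sadness", "anger", "neutral"]

    def emotion_probabilities(features: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> dict:
        """P(class k) = exp(u_k) / sum_j exp(u_j), with utilities u = W x + b."""
        utilities = weights @ features + bias
        utilities -= utilities.max()                  # numerical stability
        exp_u = np.exp(utilities)
        probs = exp_u / exp_u.sum()
        return dict(zip(EMOTIONS, probs.round(4)))

    rng = np.random.default_rng(0)
    x = rng.normal(size=10)                           # one feature description (e.g. FACS + EDU measures)
    W = rng.normal(size=(len(EMOTIONS), 10))          # placeholder, untrained weights
    print(emotion_probabilities(x, W, np.zeros(len(EMOTIONS))))   # probabilities sum to 1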
The output of 340 is the predicted probabilities of the intended variables to be classified. These probabilities can then be stored in 350 by any appropriate means, such as in a spreadsheet on the local file system or in a database. Once stored, they can then be downloaded and merged with the verbal data from the survey to be further processed.
Fig. 10 is a system flow diagram of one embodiment describing how the method can be incorporated into an online web survey and across a communications network:
a1. Design; programming of the questionnaire: A questionnaire can be programmed for the online survey. Depending on the type of survey, a variety of different programming languages can be used, such as html, flash, php, asp, jsp, javascript, or java, although the choice of programming language is not limited in any way to these examples.
a2. Deployment of survey: In the case of an online survey, i.e. where the respondent answers the survey on the internet, the survey can be uploaded to a server. In the case of an offline survey the survey can be deployed directly on the computer of the respondent.
a3. Invitation; respondents can be invited to answer the online or offline survey in which the stimuli material is presented: Respondents can be contacted via a variety of methods such as email, telephone or letter to take part in the survey. For online panels this mostly happens via email. However, other means can be used. As the survey can be carried out offline and can be a face-to-face interview, the step functions in both situations.
a4. Non-verbal response prediction calibration: An optional step can be used where respondents answer a set of questions to improve and update the algorithms that can be utilized to predict the probabilities of non-verbal responses.
a5. The respondent answers the questionnaire: The respondent's non-verbal response can be recorded as a sequence of images captured using an imaging device such as a web camera. The respondent's verbal responses can be recorded using a mouse, keyboard, or microphone, or directly recorded by an interviewer in the case of a face-to-face interview. The verbal answers to the questionnaire (a5a) can be stored on server 90. Images of non-verbal responses can be stored on server 80 (a5b). Server 80 and server 90 can be the same or different servers, or different software modules on the same server.
a6. An automated system is used to compute predicted probabilities of non-verbal responses. In the case that the non-verbal response is an emotional response, the predicted probabilities can represent basic emotions such as happiness, surprise, fear, disgust, sadness or any other emotional state. Other non-verbal responses can also include visual attention and posture, but are not limited in any way to these examples.
a7. A data file is automatically produced with the vector of predicted probabilities per respondent per stimulus presented. The data file is now ready for analysis. It can contain all variables from the questions used in the survey together with the vector of predicted probabilities for the non-verbal responses for the questions or stimuli where the non-verbal responses have been captured.
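By way of a non-limiting illustration of step a7, the fragment below writes one data-file row per respondent and stimulus, combining a verbal answer with the vector of predicted emotion probabilities. The field names and values are invented for illustration.

    # Illustrative sketch of the data file of step a7 (invented field names and values).
    import csv

    rows = [
        {"respondent": "R001", "stimulus": "ad_concept_A", "q1_rating": 4,
         "p_happiness": 0.62, "p_surprise": 0.11, "p_fear": 0.02, "p_disgust": 0.03,
         "p_sadness": 0.05, "p_anger": 0.04, "p_neutral": 0.13},
    ]

    with open("survey_results.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)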
Alternative Embodiments: Although this invention has been described with particular reference to its preferred embodiment in consumer research across the internet, the inventors envisage it in many other forms, such as:
• Games consoles, TV's, set-top boxes
• Automotive systems
• Retail kiosks
• Medical devices
• Fixed and mobile telecommunication devices
• Human computer interfaces
• Interactive entertainment systems
• Training guides or systems
• Security and interview systems

The method described in this invention provides an objective measure of the emotional probabilities of a facial image. It is a scientific method, enabling marketers to effectively track consumers' conscious and unconscious feelings and reactions to brands, advertising, and marketing material. It has numerous advantages for businesses: it is fast and inexpensive and, given its simplicity, is applicable to large samples, which are a necessary condition for valid statistical inference. This approach significantly reduces the cost of making more accurate decisions and is accessible to a much larger audience of practitioners than previous methods. It is objective and commercially practical. Major advantages over current methods include:

• Suitable for large-scale survey sampling without the need for expensive equipment.
• Deployable outside of the laboratory environment.
• Applicable cross-culturally and language independent.
• Measurement of emotional responses free from cognitive or researcher bias.
• Gives objective measurements and analysis without the need for highly trained personnel or expert domain knowledge in emotion measurement.

Claims
1. A method related to the measurement of the emotional probabilities of a facial image comprising the steps of:
• receiving a facial image of a person in an image receiving unit (82);
• building at least one model based representation from the image;
• extracting a feature description of said model based representation;
• generating a measurable description of the image comprising facial features, based on movements, presence of features, and visual appearance found in the model;
• outputting from these computed facial features predicted classification probabilities from said image.
2. The method according to claim 1, wherein the facial image is received from an image capturing device continuously and said steps of extracting a facial representation, computing facial features and outputting predicted probabilities are done continuously with the received images.
3. The method according to claim 1 or 2, wherein the predicted
classification probabilities are sent from an image processing unit to a data processing unit.
4. The method according to any of claims 1 to 3, comprising the step of using Active Appearance Models (AAMs) for building a model based representation from the image.
5. The method according to any of claims 1 to 4, wherein three steps are performed for extracting a feature description of said model:
• computing a set of measures coming from the Facial Action Coding System (FACS),
• computing a set of configurable measures called Expression Descriptive Units (EDU), and
• setting measures representing the appearance of the face.
6. The method according to any of claims 1 to 5, wherein the facial image is taken by a webcam and sent over the internet to said image receiving unit (82).
7. The method according to any of claims 1 to 6, wherein multiple model representations of the face are built, where each model varies parameters related to pose, illumination, and occlusion.
8. The method according to claim 7, wherein the best model is chosen based on an error criterion related to the difference between a synthesized face image from the model and the original facial image.
9. The method according to claim 8, wherein all models are built in parallel on a parallel processing unit such as a Graphics Processing Unit (GPU) and the error criteria are calculated in parallel.
10. The method according to any of claims 1 to 9, wherein the predicted classification probabilities represent basic emotions such as happiness, surprise, fear, disgust and sadness, or any other emotional state.
11. The method according to any of claims 1 to 10, wherein said predicted classification probabilities are calculated by one or a combination of discrete choice models, support vector machines, neural networks, decision trees, random forests or Bayesian networks.
12. A system related to the measurement of the emotional probabilities of a facial image with an image processing unit (80), said image processing unit comprising
• an image receiving unit (82);
• a face extraction unit (84) for building a model based representation from the image; and for extracting a feature description of said model based representation;
• a features computation unit (86) for computing facial feature descriptions which are a combination of spatial, appearance, and temporal features; and
• a classification unit (88) which outputs from these computed facial feature descriptions predicted classification probabilities from said image.
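For illustration only and not as part of the claims: the units recited in claim 12 can be sketched as cooperating components, for example as the following Python classes. All method names and the placeholder internals are assumptions; the actual model fitting, feature computation and classification are not reproduced here.

# Structural sketch of claim 12's units (82, 84, 86, 88); internals are placeholders.
import numpy as np

class ImageReceivingUnit:                          # unit 82
    def receive(self, raw_bytes):
        # Placeholder decode of an uploaded facial image into a pixel array.
        return np.frombuffer(raw_bytes, dtype=np.uint8)

class FaceExtractionUnit:                          # unit 84
    def extract(self, image):
        # Placeholder for building a model based representation (e.g. an AAM)
        # and returning its feature description.
        return image[:32].astype(float)

class FeaturesComputationUnit:                     # unit 86
    def compute(self, model_features):
        # Placeholder combination of spatial, appearance and temporal measures.
        return np.concatenate([model_features, np.diff(model_features)])

class ClassificationUnit:                          # unit 88
    EMOTIONS = ["happiness", "surprise", "fear", "disgust", "sadness"]

    def predict(self, feature_vector):
        # Placeholder classifier: normalised magnitudes as pseudo-probabilities.
        scores = np.abs(feature_vector[:len(self.EMOTIONS)]) + 1e-9
        return dict(zip(self.EMOTIONS, scores / scores.sum()))

if __name__ == "__main__":
    image = ImageReceivingUnit().receive(bytes(range(64)))
    features = FeaturesComputationUnit().compute(FaceExtractionUnit().extract(image))
    print(ClassificationUnit().predict(features))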
PCT/EP2010/065544 2009-10-16 2010-10-15 Method and system for measuring emotional probabilities of a facial image WO2011045422A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP09173316.2 2009-10-16
EP09173316 2009-10-16

Publications (1)

Publication Number Publication Date
WO2011045422A1 true WO2011045422A1 (en) 2011-04-21

Family

ID=43385129

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2010/065544 WO2011045422A1 (en) 2009-10-16 2010-10-15 Method and system for measuring emotional probabilities of a facial image

Country Status (1)

Country Link
WO (1) WO2011045422A1 (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6292688B1 (en) 1996-02-28 2001-09-18 Advanced Neurotechnologies, Inc. Method and apparatus for analyzing neurological response to emotion-inducing stimuli
US5676138A (en) 1996-03-15 1997-10-14 Zawilinski; Kenneth Michael Emotional response analyzer system with multimedia display
US6099319A (en) 1998-02-24 2000-08-08 Zaltman; Gerald Neuroimaging as a marketing tool
US6453194B1 (en) 2000-03-29 2002-09-17 Daniel A. Hill Method of measuring consumer reaction while participating in a consumer activity
US6584346B2 (en) 2001-01-22 2003-06-24 Flowmaster, Inc. Process and apparatus for selecting or designing products having sound outputs
US20030032890A1 (en) 2001-07-12 2003-02-13 Hazlett Richard L. Continuous emotional response analysis with facial EMG
US7113916B1 (en) 2001-09-07 2006-09-26 Hill Daniel A Method of facial coding monitoring for the purpose of gauging the impact and appeal of commercially-related stimuli
US20080043025A1 (en) 2006-08-21 2008-02-21 Afriat Isabelle Using DISC to Evaluate The Emotional Response Of An Individual

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
COOTES T F ET AL: "ACTIVE APPEARANCE MODELS", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, IEEE SERVICE CENTER, LOS ALAMITOS, CA, US, vol. 23, no. 6, 1 June 2001 (2001-06-01), pages 681 - 685, XP001110809, ISSN: 0162-8828, DOI: 10.1109/34.927467 *
KRAISS K-F ET AL: "Advanced Man-Machine Interaction: Fundamentals and Implementation", 2006, SPRINGER VERLAG BERLIN-HEIDELBERG, ISBN: 3-540-30618-8, XP008131270 *
LI S Z, JAIN A K (EDS.): "Handbook of Face Recognition", 2005, SPRINGER SCIENCE+BUSINESS MEDIA INC., ISBN: 0-387-40595-X, XP002616043 *
SORCI M ET AL: "Modelling human perception of static facial expressions", 8TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE & GESTURE RECOGNITION, FG '08, 17-19 SEPT. 2008, IEEE, PISCATAWAY, NJ, USA, 17 September 2008 (2008-09-17), pages 1 - 8, XP031448464, ISBN: 978-1-4244-2153-4 *
SORCI M ET AL: "Modelling human perception of static facial expressions", IMAGE AND VISION COMPUTING, 14 October 2009 (2009-10-14), pages 1 - 20, XP002616042, Retrieved from the Internet <URL:http://dx.doi.org/10.1016/j.imavis.2009.10.003> [retrieved on 20110110], DOI: 10.1016/j.imavis.2009.10.003 *

Cited By (71)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10796176B2 (en) 2010-06-07 2020-10-06 Affectiva, Inc. Personal emotional profile generation for vehicle manipulation
US9204836B2 (en) 2010-06-07 2015-12-08 Affectiva, Inc. Sporadic collection of mobile affect data
US9247903B2 (en) 2010-06-07 2016-02-02 Affectiva, Inc. Using affect within a gaming context
US10779761B2 (en) 2010-06-07 2020-09-22 Affectiva, Inc. Sporadic collection of affect data within a vehicle
US11935281B2 (en) 2010-06-07 2024-03-19 Affectiva, Inc. Vehicular in-cabin facial tracking using machine learning
US11887352B2 (en) 2010-06-07 2024-01-30 Affectiva, Inc. Live streaming analytics within a shared digital environment
US9642536B2 (en) 2010-06-07 2017-05-09 Affectiva, Inc. Mental state analysis using heart rate collection based on video imagery
US9646046B2 (en) 2010-06-07 2017-05-09 Affectiva, Inc. Mental state data tagging for data collected from multiple sources
US9723992B2 (en) 2010-06-07 2017-08-08 Affectiva, Inc. Mental state analysis using blink rate
US11704574B2 (en) 2010-06-07 2023-07-18 Affectiva, Inc. Multimodal machine learning for vehicle manipulation
US9934425B2 (en) 2010-06-07 2018-04-03 Affectiva, Inc. Collection of affect data from multiple mobile devices
US9959549B2 (en) 2010-06-07 2018-05-01 Affectiva, Inc. Mental state analysis for norm generation
US11700420B2 (en) 2010-06-07 2023-07-11 Affectiva, Inc. Media manipulation using cognitive state metric analysis
US11657288B2 (en) 2010-06-07 2023-05-23 Affectiva, Inc. Convolutional computing using multilayered analysis engine
US10074024B2 (en) 2010-06-07 2018-09-11 Affectiva, Inc. Mental state analysis using blink rate for vehicles
US11587357B2 (en) 2010-06-07 2023-02-21 Affectiva, Inc. Vehicular cognitive data collection with multiple devices
US10108852B2 (en) 2010-06-07 2018-10-23 Affectiva, Inc. Facial analysis to detect asymmetric expressions
US10111611B2 (en) 2010-06-07 2018-10-30 Affectiva, Inc. Personal emotional profile generation
US10143414B2 (en) 2010-06-07 2018-12-04 Affectiva, Inc. Sporadic collection with mobile affect data
US10204625B2 (en) 2010-06-07 2019-02-12 Affectiva, Inc. Audio analysis learning using video data
US10289898B2 (en) 2010-06-07 2019-05-14 Affectiva, Inc. Video recommendation via affect
US11511757B2 (en) 2010-06-07 2022-11-29 Affectiva, Inc. Vehicle manipulation with crowdsourcing
US10401860B2 (en) 2010-06-07 2019-09-03 Affectiva, Inc. Image analysis for two-sided data hub
US11484685B2 (en) 2010-06-07 2022-11-01 Affectiva, Inc. Robotic control using profiles
US10474875B2 (en) 2010-06-07 2019-11-12 Affectiva, Inc. Image analysis using a semiconductor processor for facial evaluation
US11465640B2 (en) 2010-06-07 2022-10-11 Affectiva, Inc. Directed control transfer for autonomous vehicles
US10517521B2 (en) 2010-06-07 2019-12-31 Affectiva, Inc. Mental state mood analysis using heart rate collection based on video imagery
US10573313B2 (en) 2010-06-07 2020-02-25 Affectiva, Inc. Audio analysis learning with video data
US10592757B2 (en) 2010-06-07 2020-03-17 Affectiva, Inc. Vehicular cognitive data collection using multiple devices
US10614289B2 (en) 2010-06-07 2020-04-07 Affectiva, Inc. Facial tracking with classifiers
US10627817B2 (en) 2010-06-07 2020-04-21 Affectiva, Inc. Vehicle manipulation using occupant image analysis
US10628741B2 (en) 2010-06-07 2020-04-21 Affectiva, Inc. Multimodal machine learning for emotion metrics
US11430561B2 (en) 2010-06-07 2022-08-30 Affectiva, Inc. Remote computing analysis for cognitive state data metrics
US11430260B2 (en) 2010-06-07 2022-08-30 Affectiva, Inc. Electronic display viewing verification
US11410438B2 (en) 2010-06-07 2022-08-09 Affectiva, Inc. Image analysis using a semiconductor processor for facial evaluation in vehicles
US9503786B2 (en) 2010-06-07 2016-11-22 Affectiva, Inc. Video recommendation using affect
US10799168B2 (en) 2010-06-07 2020-10-13 Affectiva, Inc. Individual data sharing across a social network
US10843078B2 (en) 2010-06-07 2020-11-24 Affectiva, Inc. Affect usage within a gaming context
US10867197B2 (en) 2010-06-07 2020-12-15 Affectiva, Inc. Drowsiness mental state analysis using blink rate
US10869626B2 (en) 2010-06-07 2020-12-22 Affectiva, Inc. Image analysis for emotional metric evaluation
US10897650B2 (en) 2010-06-07 2021-01-19 Affectiva, Inc. Vehicle content recommendation using cognitive states
US10911829B2 (en) 2010-06-07 2021-02-02 Affectiva, Inc. Vehicle video recommendation via affect
US10922567B2 (en) 2010-06-07 2021-02-16 Affectiva, Inc. Cognitive state based vehicle manipulation using near-infrared image processing
US11393133B2 (en) 2010-06-07 2022-07-19 Affectiva, Inc. Emoji manipulation using machine learning
US11318949B2 (en) 2010-06-07 2022-05-03 Affectiva, Inc. In-vehicle drowsiness analysis using blink rate
US11017250B2 (en) 2010-06-07 2021-05-25 Affectiva, Inc. Vehicle manipulation using convolutional image processing
US11056225B2 (en) 2010-06-07 2021-07-06 Affectiva, Inc. Analytics for livestreaming based on image analysis within a shared digital environment
US11067405B2 (en) 2010-06-07 2021-07-20 Affectiva, Inc. Cognitive state vehicle navigation based on image processing
US11073899B2 (en) 2010-06-07 2021-07-27 Affectiva, Inc. Multidevice multimodal emotion services monitoring
US11292477B2 (en) 2010-06-07 2022-04-05 Affectiva, Inc. Vehicle manipulation using cognitive state engineering
US11151610B2 (en) 2010-06-07 2021-10-19 Affectiva, Inc. Autonomous vehicle control using heart rate collection based on video imagery
US11232290B2 (en) 2010-06-07 2022-01-25 Affectiva, Inc. Image analysis using sub-sectional component evaluation to augment classifier usage
US9106958B2 (en) 2011-02-27 2015-08-11 Affectiva, Inc. Video recommendation based on affect
US10013892B2 (en) 2013-10-07 2018-07-03 Intel Corporation Adaptive learning environment driven by real-time identification of engagement level
WO2016048502A1 (en) * 2014-09-24 2016-03-31 Intel Corporation Facilitating dynamic affect-based adaptive representation and reasoning of user behavior on computing devices
US20160180722A1 (en) * 2014-12-22 2016-06-23 Intel Corporation Systems and methods for self-learning, content-aware affect recognition
US11288685B2 (en) 2015-09-22 2022-03-29 Health Care Direct, Inc. Systems and methods for assessing the marketability of a product
US10430810B2 (en) 2015-09-22 2019-10-01 Health Care Direct, Inc. Systems and methods for assessing the marketability of a product
US10949461B2 (en) 2016-04-18 2021-03-16 International Business Machines Corporation Composable templates for managing disturbing image and sounds
US11086928B2 (en) 2016-04-18 2021-08-10 International Business Machines Corporation Composable templates for managing disturbing image and sounds
US10019489B1 (en) 2016-04-27 2018-07-10 Amazon Technologies, Inc. Indirect feedback systems and methods
CN107491716B (en) * 2016-06-13 2018-10-19 腾讯科技(深圳)有限公司 A kind of face authentication method and device
CN107491716A (en) * 2016-06-13 2017-12-19 腾讯科技(深圳)有限公司 A kind of face authentication method and device
US10321050B2 (en) 2016-11-29 2019-06-11 International Business Machines Corporation Determining optimal photograph capture timing based on data from wearable computer eyewear devices
US10742873B2 (en) 2016-11-29 2020-08-11 International Business Machines Corporation Determining optimal photograph capture timing based on data from wearable computer eyewear devices
US10482333B1 (en) 2017-01-04 2019-11-19 Affectiva, Inc. Mental state analysis using blink rate within vehicles
US10922566B2 (en) 2017-05-09 2021-02-16 Affectiva, Inc. Cognitive state evaluation for vehicle navigation
US10628985B2 (en) 2017-12-01 2020-04-21 Affectiva, Inc. Avatar image animation using translation vectors
US11887383B2 (en) 2019-03-31 2024-01-30 Affectiva, Inc. Vehicle interior object management
US11823055B2 (en) 2019-03-31 2023-11-21 Affectiva, Inc. Vehicular in-cabin sensing using machine learning
US11769056B2 (en) 2019-12-30 2023-09-26 Affectiva, Inc. Synthetic data for neural network training using vectors

Similar Documents

Publication Publication Date Title
WO2011045422A1 (en) Method and system for measuring emotional probabilities of a facial image
US20120259240A1 (en) Method and System for Assessing and Measuring Emotional Intensity to a Stimulus
Aldayel et al. Deep learning for EEG-based preference classification in neuromarketing
US11200964B2 (en) Short imagery task (SIT) research method
Hakim et al. A gateway to consumers' minds: Achievements, caveats, and prospects of electroencephalography‐based prediction in neuromarketing
Marín-Morales et al. Real vs. immersive-virtual emotional experience: Analysis of psycho-physiological patterns in a free exploration of an art museum
Sweeny et al. Perceiving crowd attention: Ensemble perception of a crowd’s gaze
Giakoumis et al. Using activity-related behavioural features towards more effective automatic stress detection
Grill-Spector et al. Visual recognition: As soon as you know it is there, you know what it is
Generosi et al. A deep learning-based system to track and analyze customer behavior in retail store
CN110036402A (en) The data processing method of prediction for media content performance
US11700420B2 (en) Media manipulation using cognitive state metric analysis
Ahn et al. Using automated facial expression analysis for emotion and behavior prediction
Wang et al. Human perception of animacy in light of the uncanny valley phenomenon
US11430561B2 (en) Remote computing analysis for cognitive state data metrics
McDuff Crowdsourcing affective responses for predicting media effectiveness
Kalaganis et al. Unlocking the subconscious consumer bias: a survey on the past, present, and future of hybrid EEG schemes in neuromarketing
Danner et al. Automatic facial expressions analysis in consumer science
McDuff New methods for measuring advertising efficacy
Masui et al. Measurement of advertisement effect based on multimodal emotional responses considering personality
Meschtscherjakov et al. Utilizing emoticons on mobile devices within ESM studies to measure emotions in the field
Ivonin et al. Beyond cognition and affect: sensing the unconscious
Watier The saliency of angular shapes in threatening and nonthreatening faces
Hadar Implicit Media Tagging and Affect Prediction from video of spontaneous facial expressions, recorded with depth camera
Berry et al. The dynamic mask: Facial correlates of character portrayal in professional actors

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10765461

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10765461

Country of ref document: EP

Kind code of ref document: A1