WO2014026240A1 - Communication analysis and interpretation - Google Patents

Communication analysis and interpretation

Info

Publication number
WO2014026240A1
Authority
WO
WIPO (PCT)
Prior art keywords
communication
semantic
communications
alert
parameters
Application number
PCT/AU2013/000909
Other languages
French (fr)
Inventor
Robert Fong
Lukie Ali
Original Assignee
Morf Dynamics
Priority claimed from AU2012903533A external-priority patent/AU2012903533A0/en
Application filed by Morf Dynamics filed Critical Morf Dynamics
Publication of WO2014026240A1 publication Critical patent/WO2014026240A1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 Advertisements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis

Definitions

  • the present invention generally relates to a method, system, computer readable medium of instructions and/or computer program product for detecting the nature of a communication between one or more parties.
  • the relative anonymity afforded by the Internet provides an opportunity for predators and bullies to anonymously target users, particularly children.
  • the present invention provides a method for automatically analysing a communication to provide an alert or indication in relation to content, behaviour or goals, including at least the steps of:
  • the present invention provides a system for automatically analysing a communication to provide an alert or indication in relation to content, behaviour or goals, the system including at least a processor with associated memory, and a communication interface for receiving the communications to be analysed and for sending alerts and/or indications, the processor including software so that it is operatively adapted to analyse the language of the communication to determine one or more semantic representations of the communication, said semantic representations being constrained by predetermined contextual rules, and to further analyse the semantic representations relative to predetermined domain models of behaviour or goals, and, if the semantic representation, in the context, meets predetermined parameters, to transmit an alert or indication.
  • the present invention also encompasses software to implement the method.
  • predetermined is used to indicate that the parameters, domain models and other features are determined at the time that a particular analysis is undertaken.
  • the term predetermined encompasses both specific items or selections made in advance by an administrator or user, as well as the domain models, parameters and other features being modified dynamically, through the application of self-learning or correlating algorithms, and/or through the exchange of communications with other users in the system. The latter may be communications with human users initiated by the system. The detail of how this is implemented is necessarily dependent upon the particular application of the present invention, as would be understood by those skilled in the art.
  • Figure 1 shows the relationship between the principal subsystems of a language
  • Figure 2 shows a functional block diagram of an example of a processing system that can be utilised to embody or give effect to a particular embodiment
  • Figure 3 shows a basic flow diagram illustrating the dataflow of one implementation of the present invention
  • Figure 4 shows a flow diagram illustrating the dataflow of the communication manager component
  • Figure 5 shows a functional block diagram of a communication between two parties
  • Figure 6 shows an overall system architecture of an implementation of the present invention.
  • Figure 7 illustrates a flow diagram of a method of operation of an implementation of the present invention.
  • the present invention will be described with reference to particular illustrative examples. It will be appreciated that, because of the nature of the invention, there are many possible different modes in which it can be implemented, using many different software methodologies, schemes, and coding languages. Those skilled in the art will understand that the examples are merely illustrative of how the invention may be implemented, and are not limitative of the scope and possible ways of implementation of the present invention.
  • the present invention may be implemented as a stand alone system, within a particular website or server. However, it is envisaged that it may equally be implemented as an additional feature for another site or facility. For example, in the context of preventing inappropriate communications with children, or bullying, the present invention may be implemented as an additional feature on a social media or gaming site.
  • the implementation of the present invention may be operated as a standalone module or service operating in the background, and activated as a security preference from the social media site. The module would send a message when the required parameters and threshold had been met, for action by the social media site.
  • the present invention could be applied to a training situation, for example for sales staff.
  • the analysis and interpretation may be looking to detect problematic behaviours in interactions with real or simulated customers, for example arguing with customers, to facilitate their correction.
  • the system may also detect positive behaviours, for example selling up, and allow the system to reinforce them.
  • the present invention may be implemented as a module or service operating in the background of a training or monitoring system.
  • the present invention utilises a structured, analytical approach based on linguistics to detect the nature of a communication between one or more parties.
  • Figure 1 depicts the conceptual relationship between the principal subsystems of a language, particularly as it will be dissected for the purposes of the present implementation.
  • context 106 presides over the formulation of meaning from phonology and graphology 100, through to the lexicogrammar 102 and semantics 104.
  • Communication takes many forms and modes.
  • Phonology 100 is the base unit of speech and graphology 100 is the base unit of writing.
  • the lexicogrammar 102 is the system of words and writing whilst semantics 104 relates to the system of meaning.
  • the interaction between two or more parties and, for example, their cultural context dictate how they interact, the tone of their speech, the selection of their words, and so on.
  • the influence of context 106 is clearly demonstrable at the phonology and graphology 100 level. It is within the layers of lexicogrammar 102 and semantics 104 that much of the influence of context 106 is determined and applied by implementations of the present invention.
  • Implementations of the present invention subject the communication to a formal analysis, extracting possible words from the phonology and graphology 100, applying a lexicogrammar 102 to identify the words, or at least the possible words, and using semantics 104 to impute a meaning to the identified language.
  • the context 106 is used to constrain the set of alternatives, so that the nature and intentions of a communication can be more accurately determined.
  • a particular embodiment of the present invention can be realised using a processing system, an example of which is shown in Figure 2.
  • the processing system 200 can be used as a client processing system and/or a server processing system.
  • the present invention is operable using conventional general purpose processors and computer systems.
  • the capabilities of the system will of course need to be adequate for the expected load and number of users, as will be well understood in the art.
  • performing aspects of the present invention in real time may require significant resources.
  • the term processor may mean a set of multiple processors, or may mean use of only part of a processor which is also used for other tasks.
  • Processing system 200 includes at least one processor 202, or processing unit or plurality of processors, memory 204, at least one input device 206 and at least one output device 208, coupled together via a bus or group of buses 210.
  • input device 206 and output device 208 could be the same device.
  • An interface 212 can also be provided for coupling the processing system 200 to one or more peripheral devices, for example interface 212 could be a PCI card or PC card.
  • At least one storage device 214 which houses at least one database 216 can also be provided.
  • the memory 204 can be any form of memory device, for example, volatile or non-volatile memory, solid state storage devices such as flash memory, hard drives, etc.
  • the processor 202 could include more than one distinct processing device, for example to handle different functions within the processing system 200.
  • the memory 204 typically stores an operating system to provide functionality to the processing system 200.
  • a file system and files are also typically stored on the storage device 214 and/or the memory 204.
  • Input device 206 receives input data 218 and can include, for example, a keyboard, a pointer device such as a pen-like device or a mouse, audio receiving device for voice controlled activation such as a microphone, data receiver or antenna such as a modem or wireless data adaptor, data acquisition card, etc.
  • Input data 218 could come from different sources, for example keyboard instructions in conjunction with data received via a network.
  • Output device 208 produces or generates output data 220 and can include, for example, a display device or monitor in which case output data 220 is visual, a printer in which case output data 220 is printed, a port for example a USB port, a peripheral component adaptor, a data transmitter or antenna such as a modem or wireless network adaptor, etc.
  • Output data 220 could be distinct and derived from different output devices, for example a visual display on a monitor in conjunction with data transmitted to a network.
  • a user could view data output, or an interpretation of the data output, on, for example, a monitor or using a printer, or via audio transmitting device such as a speaker.
  • the storage device 214 can be any form of data or information storage means, for example, volatile or non-volatile memory, solid state storage devices, magnetic devices, etc.
  • the processing system 200 can be adapted to allow data or information to be stored in and/or retrieved from, via wired or wireless communication means, the at least one database 216.
  • the interface 212 may allow wired and/or wireless communication between the processing unit 202 and peripheral components that may serve a specialized purpose.
  • the processor 202 receives instructions as input data 218 via input device 206 and can display processed results or other output to a user by utilising output device 208. More than one input device 206 and/or output device 208 can be provided.
  • processing system 200 may be any form of terminal, server processing system, specialised hardware, computer, computer system or computerised device, personal computer (PC), mobile or cellular telephone, mobile data terminal, portable computer, Personal Digital Assistant (PDA), pager or any other similar type of device.
  • PC personal computer
  • PDA Personal Digital Assistant
  • the processing system 200 may be a part of a networked communications system.
  • the processing system 200 could connect to a network, for example the Internet or a WAN.
  • the network can include one or more client processing systems and one or more server processing systems, wherein the one or more client processing systems and the one or more server processing systems are forms of processing system 200.
  • Input data 218 and output data 220 could be communicated to other devices via the network.
  • the transfer of information and/or data over the network can be achieved using wired communications means or wireless communications means.
  • the server processing system can facilitate the transfer of data between the network and one or more databases. It will be understood that what is described to this stage is a generally conventional computing arrangement.
  • the processing system 200 may include an adult supervised profiler 221, which is a component that allows adults or parents to flag items that are inappropriate for their children.
  • the adult supervised profiler 221 allows the processing system 200 to be customised based on the needs of the parents or guardians.
  • the adult supervised profiler 221 may store items manually flagged by the adults or parents, and/or may learn from the settings of other users of the processing system 200, so that users with the same profiles or activities as other users may inherit security settings from one another if appropriate.
  • the items may be flagged based on specific semantics, for example 'material related to drug manufacture', or on a more specific basis.
  • the processing system 200 may include a threat modeller 222.
  • the threat modeller 222 is a learning component which analyses the behaviour of specific threat types, i.e. paedophiles and rapists, to build a model or profile that can be compared to profiles of the users in a particular system.
  • Figure 3 illustrates the overall data flow of the method of determining the nature of a communication.
  • Each of the components of language described with reference to Figure 1 contributes to detecting the nature of the communication and the generation of appropriate action in response.
  • contextual space 300 which is stored in a storage device 214.
  • the contextual space 300 represents the configuration of the parties in the communication which is to be analysed and the relationships between the parties within the current context.
  • the database 216 may contain a collection of configurations or situations describing threatening and safe conversations used as boundary models for comparison with a conversation between the two users.
  • the contextual space 300 is only affected by either of the parties in the communication to the extent that each of the parties is responsible for their own utterances in the overall communication. Therefore, the parties do not have direct control over the contextual space 300.
  • An example contextual space 300 may be the relationship between an attorney and a client. For example, assume that John Smith spoke to his attorney Jill Black about the class action law suit.
  • the following table defines a contextual space for this conversation, with columns Participant 0, Participant 1, Relationship 0-1 and Relationship 1-0
  • An utterance in the form of text is entered into text input component 302.
  • the utterance may be received via an input device 206 or may be received as input data 218 from a network such as the Internet and stored in a storage device 214.
  • the utterance is part of a communication between one or more parties.
  • the utterance may be speech which has been synthesised into text.
  • the utterance is then passed to the processor 202 where the semantic analysis component 304 interprets the meaning of the utterance in light of the contextual space 300.
  • the semantic analysis component 304 includes a set of language processors 310 preferably arranged within a pipeline architecture.
  • the language processors used may be those known in the art such as tokenisation, lemmatisation, part-of-speech tagging, named entity recognition, syntactic parsing, and word sense disambiguation modules.
  • the semantic analysis component 304 initially represents each clause within a sentence in the communication in terms of its predicate-argument structure.
  • an example is the predicate-argument structure for the 'slight movement' sense of the predicate "edge" in the sentence "Tokyo stocks edged up Wednesday in relatively active but unfocused trading".
  • Predicate-argument structure refers to the arguments of a verb (in this case "edge") and their respective roles.
  • the verb "edge" has many senses and the sense referred to in the above sentence is that of "slight movement”.
  • the roles depicted above are those that have been identified for the example sentence. These roles are listed along with their corresponding argument values from the example sentence.
  • the predicate-argument structure is firstly interpreted within the context of its respective roleset and further with respect to the contextual space 300.
  • a roleset defines the possible arguments and their types (e.g. modifier-Temporal, modifier- Location) for any given sense of a predicate verb.
  • the roleset for 'edge' includes a total of six possible argument types, with each argument type having a respective role or function.
  • the full roleset for this type of predicate, together with each argument's default definition, is presented below:
  • ArgM-LOC medium : location
  • once the utterance in the communication has been interpreted by the semantic analysis component 304, it is passed to the behaviour analysis component 306 in the processor 202.
  • the analysis of the behaviour of the communication will be described throughout this example with reference to "threatening behaviour". It will be appreciated that any type of behaviour may be analysed and detected such as unsolicited offers for sale of products etc.
  • the analysis of the behaviour of the communication is based upon the semantic interpretation of utterances (which makes up one or more sentences within a communication) within a given contextual space 300 and domain model 312 in combination with the users' current security settings 314.
  • the security settings 314 are stored in a storage device 214.
  • the domain model 312 is a learned mapping between context sensitive semantic structures and semantic parameters.
  • a system consists of a set of entry conditions and a set of output features.
  • the domain model 312 is "threatening behaviour".
  • the behaviour analysis component 306 may also pass information to and/or from the threat modeller 222 and the adult supervised profiler 221 of the processing system 200.
  • the behaviour analysis component 306 uses a number of semantic parameters 316 in order to infer specific levels of behaviour.
  • the semantic parameters 316 are stored in the storage device 214.
  • the semantic parameters 316 may include Co-incidenceLevel, PersonalPrivacy, SuspiciousDialogue, AntagonisticBehaviour and PhysicalSafety.
  • each of the semantic parameters 316 is populated when an instance of the semantic parameter 316 occurs.
  • the interpretation of behaviours, and in this case, threats, by the semantic analysis component 304 via the semantic parameters 316 relies upon the analysis of goals derived from contextual space 300 of the communication.
  • the options that are chosen from within a network determine the degree to which threats are assessed. Each choice that is made is contrasted within a range of possibilities for a given context.
  • World knowledge in the form of axioms such as "Virgin Islands is a member of Island" is used to build and update situational contexts.
  • World knowledge may be stored in a database 216 in a storage device 214 or may be accessed from an online library such as the British National Corpus.
  • the world knowledge can be used to interpret the semantic content of an utterance in the context, so that an intention can be assessed and determined.
  • the reason as to why a particular utterance in a communication was made in the contextual space 300 is an integral step in determining the goals underlying the current communication.
  • the communication manager component 308 is responsible for the facilitation of communication.
  • the principal goals of the communication manager component 308 are the interpretation, incorporation, and generation of utterances which make up a sentence and, in turn, a communication.
  • the communication manager component 308 receives and considers the output of the semantic analysis component 304 and integrates this into a model of discourse.
  • the communication manager component 308 considers factors such as relevance, factuality, and conciseness and interprets these factors in light of the overall contextual space 300.
  • the communication manager component 308 may simulate communication with the single user by generating appropriate utterances in response to the user. In this way, the communication manager component 308 acts as a chatbot.
  • Utterances are incorporated into the communication in light of the contextual space 300 and also subject to constraints supplied by a domain model 312.
  • the domain model 312 is that of threat analysis and output is generated given the mapping constraints associated with the domain model 312.
  • the mapping constraints may be learnable rules that are parameterised by the user's security settings 314. These rules influence, for example, the number of semantic parameters 316 that must be encountered (and hence the semantic parameter being populated) before a different semantic parameter is populated or the communication is terminated.
  • a rule may include identifying an instance of the semantic parameter SuspiciousDialogue when a third instance of the semantic parameter Co-incidenceLevel is detected in a communication between one or more parties (a runnable sketch of this kind of rule is given after this list).
  • Figure 4 illustrates the dataflow of the communication manager component 308 in further detail which may be carried out by software in a processing system 200.
  • the recognition module 400 receives the utterance from the outputs of the semantic analysis component 304 and behaviour analysis component 306 as shown in Figure 3.
  • the recognition module 400 analyses the utterance for one or more possible goals in the utterance. If a potential goal in the utterance is identified, the recognition module 400 passes the utterance containing the potential goal to the inference and planning module 402 which analyses the utterance to determine what the goal of the communication might be. Goals are inferred by ranking the most probable outcome from all possible outcomes or 'configurations'. As more and more utterances in a communication are analysed, the inference and planning module 402 is able to better identify the potential goal of the communication.
  • the inference and planning module 402 passes the utterance to the dialogue management module 404.
  • the dialogue management module 404 manages the communication. In the case of a single user communicating with the communication manager component 308, the dialogue management module 404 manages the nature of the communication between the communication manager component 308 (acting as a chatbot) and the single user.
  • the dialogue management module 404 also feeds utterances and goals back to the behaviour analysis component 306 which determines whether or not the goal of the communication falls within the semantic parameters 316.
  • the dialogue management module 404 further analyses the utterance to determine whether any inference can be made from any implicit language in the utterance or for language that is idiosyncratic to that particular user. This information may also be fed back to the behaviour analysis component 306. As mentioned, the behaviour analysis component 306 may pass information to and/or from the threat modeller 222 and the adult supervised profiler 221 of the processing system 200.
  • the suspected user profile may be targeted and the appropriate action is taken.
  • the threat modeller 222 may learn both from interactions with other users and from specific training data given to the modeller 222.
  • the adult supervised profiler 221 may also provide additional information to the behaviour analysis component 306 such that the dialogue management module 404 may manage the communication according to the additional information.
  • the additional information provided by the adult supervised profiler 221 may include knowledge, such as a particular religion or belief, that parents deem inappropriate for their own children.
  • dialogue management module 404 passes the utterance to the language generation module 406 and the synthesis module 408.
  • the language generation module 406 generates an utterance in response to the utterance that the user inputted into the text input component 302 of Figure 3.
  • the dialogue management module 404 also attempts to generate a concise, factual, and relevant utterance that will advance the likelihood of achieving the current goal under consideration.
  • the utterance must also be appropriate given the overall contextual space 300.
  • the utterance is then passed to the synthesis module 408 which may output the utterance to the user as speech or may output the utterance to the user as text in a form which mirrors their writing style.
  • the communication manager component 308 may also store information about communications related to products or services in a database 216 on a storage device 214 as shown in Figure 2. This information can be provided to market research companies on a subscription basis, for example the number of positive discussions of a particular brand of product in communications between teenagers. Further, the use of keywords in a communication may also trigger suitable advertisements to be displayed to the parties.
  • Figure 5 shows a functional block diagram of a communication between two parties including a computer 500 belonging to a first party, a network 502 and a computer 504 belonging to a second party.
  • the computers 500, 504 are connected to the network 502.
  • the network 502 may be a local area network or a wide area network such as the Internet.
  • a communication is initiated by either the first party computer 500 or second party computer 504.
  • the communication may be in the form of text or voice data.
  • the method of detecting the nature of a communication between the first party computer 500 and the second party computer 504 may be implemented by software resident on either the first party computer 500, the second party computer 504 or both computers. Alternatively, the software may reside on a server 506 associated with the network 502.
  • Figure 6 is a flow diagram illustrating the overall system architecture of the method of detecting the nature of a communication.
  • Process monitor 600 oversees the overall system and has the function of managing accounts that are employing the software associated with the method of detecting the nature of a communication.
  • the process monitor 600 also functions to associate advertising and product placement with the communication, and to provide feedback to a website.
  • a central dialogue controller hub 602 which co-ordinates the input and output from the text I/O NLP server 604, recognition server 606, syntactic parsing component 610, confidence component 612, utterance analyser 616, security domain component 620, NLP language generation component 624 and synthesis component 628.
  • the Text I/O NLP (Natural Language Processing) server 604 is responsible for managing network communications, accepting and disseminating utterances from one or more parties in a communication.
  • the input/output utterances may be in the form of speech.
  • a speech recognition component 608 is used and this is associated with a speech recognition server 606 which encodes/decodes the speech into text.
  • the speech recognition component 608 may decode the speech in parallel as shown in Figure 6.
  • the syntactic parsing component 610 interprets the meaning of the utterance in light of the context as described in Figure 3. Whereas semantic parsing refers to the overall process of extracting meaning from dialogue (text or voice), syntactic parsing is a subset of semantic extraction and analysis. In the context of the process monitor 600 shown in Figure 6, the syntactic parsing component 610 accurately extracts the syntax reference in a particular sentence string that may allude to a potential threat in a conversation. Combined with contextual learning and referencing abilities as well as rhetorical structures, the process monitor 600 is able to define better semantics of a particular goal within a conversation session.
  • the confidence component 612 determines the confidence with which the meaning of the utterance has been determined at the syntactic parsing component 610.
  • an utterance may be received via component 614, with the utterance not in the form of a text or speech but another modality such as touch, taste or smell. These modalities may be converted into a suitable form such as text by the component 614.
  • the confidence with which the meaning of the utterance has been determined may be determined at the confidence component 612.
  • the utterance analyser 616 is responsible for the facilitation of communication.
  • the utterance analyser 616 has an analogous function to the communication manager 308 shown in Figure 3; however, the functionality of the utterance analyser 616 relates to natural language feedback or generation (NLG).
  • the communication manager 308 manages this as well as general communications between pertinent servers that communicate with the web server, the NLG server and the application server.
  • the principal goals of the utterance analyser 616 are the interpretation, incorporation, and generation of utterances which make up a sentence and, in turn, a communication.
  • the goal detection component 618 constructs representations of 'possible world' configurations. This is achieved by projecting the current context in directions that are analogous to the way in which corpus analysis suggests discourse typically unfolds. By ranking these possible configurations in context the most likely goal is detected.
  • the goal detection component 618 may compare the text or utterances entered by a user to a stored set of learned situations in the database 216.
  • the information in the database 216 may be entered manually or obtained through downloaded libraries such as the British National Corpus.
  • the threat modeller 222 may assist in goal detection by providing to the goal detection component 618 models or profiles that are known to fit specific threat types, i.e. paedophiles and rapists.
  • the security domain component 620 defines the conceptual semantic predicates and the constraints under which they operate.
  • the adult supervised profiler 221 may provide some of these constraints. These constraints may be further moderated by the client security settings component 622.
  • the NLP language generation component 624 and synthesis component 628 are used in the case of a single user communicating with the system acting as a chatbot.
  • Utterance analyser 616 passes the utterance to the NLP language generation component 624 and in turn to the synthesis component 628.
  • the NLP language generation component 624 generates an utterance in response to a user's utterance.
  • the generated utterance is then passed to the synthesis component 628 which may output the utterance to the user as speech or may output the utterance to the user as text in a form which mirrors their writing style via text I/O NLP server 604.
  • a user language generation component 626 facilitates interaction between a user and the NLP language generation component 624. This "natural" interaction between the user and a machine interface enables the user to be notified of undesirable topics or threats in a conversation session.
  • when an utterance is received as part of a communication between one or more parties, it is forwarded to the central dialogue controller hub 602 together with its associated metadata.
  • An utterance in the form of text is ordinarily passed through the language processing pipeline (e.g. syntactic parsing component 610, confidence component 612, and utterance analyser 616) in order to arrive at a semantic and updated contextual representation of the utterance.
  • the utterance is analysed and compared against the values of the current context, the semantic role output, the domain models (in this case, threatening behaviour), and the inferences made from the utterance.
  • potential goals of the utterance are analysed against the security settings in the security domain component 620 and against individual users' security settings in the client security settings component 622 so that appropriate alerts are triggered where necessary.
  • suitable utterances are generated via the language generation component 624, bearing in mind the context of the communication and the potential goal of the communication.
  • Possible generated utterances are evaluated in terms of the effects they have on the goal of the communication.
  • the generated utterances are optionally synthesised to speech via the synthesis component 628.
  • referring to Figure 7, there is shown an example of a method 700 of detecting the nature of a communication between one or more parties over a network.
  • the context of the communication is determined.
  • the context may be inferred through the semantic analysis of predicate argument structure as described in step 715.
  • the context may also be inferred by semantic analysis in conjunction with lexical resources which may be stored on a local database, together with world knowledge accessed from an online library such as the British National Corpus or via corpus analysis.
  • An example context setting may be the relationship between a client and attorney as previously described or the relationship between two children of primary school age.
  • security settings and goals are determined.
  • the security settings are determined by the user.
  • Typical security settings may include the setting of semantic parameters. For example, the maximum number of instances of Co-incidenceLevel occurring during a communication may be limited to three times before another semantic parameter is incremented or action is taken.
  • the goal settings are computed by taking into consideration the contextual space or context of the communication and based on the configuration of predicates in the communication and the content of their argument structures in the communication.
  • the models from which the classification of goals and, in this case, threats are derived may be based on the analysis of a corpus. In the case of "threatening behaviour" the corpus may contain a collection of Internet chat scripts annotated with semantic parameters relating to child safety.
  • a typical goal may include not giving out personal information unless the parties in the communication are known to each other.
  • the communication between one or more parties is analysed at step 715.
  • this step involves using language processors known in the art such as tokenisation, lemmatisation, part-of-speech tagging, named entity recognition, syntactic parsing, and word sense disambiguation modules.
  • the analysed communication is compared with the context which was determined at step 705.
  • the communication is analysed and a possible goal or outcome of the communication is predicted based on the content and context of the communication.
  • the potential behaviour (in this case "threatening behaviour") is assessed against one or more semantic parameters.
  • the semantic parameters may include Co-incidenceLevel, PersonalPrivacy, SuspiciousDialogue, AntagonisticBehaviour and PhysicalSafety.
  • at step 735 the nature of the communication is determined. If at step 735 the communication does not fall into one or more of the semantic parameters, then control moves back to step 705 where the context of the communication is checked or reassessed against the current communication. Control then moves to step 710 where security settings and goals are checked or reassessed against the current communication. Control then moves to step 715 where the next portion of the communication is analysed. In the case of a two-way communication, the next portion of the communication may be the response to the previously analysed communication or a one-way follow-up communication from the same party. The two-way communication may be between a first party who is human and a second party which is a chatbot.
  • if at step 735 the communication falls into one or more semantic parameters, then the communication is deemed to be suspicious and each instance of the semantic parameter detected is incremented.
  • at step 745 the value of the one or more semantic parameters is checked against a threshold value which is determined in the security settings at step 710. If at step 745 the semantic parameter value is greater than the threshold, then control moves to step 750 where action is taken. The action taken may be to terminate the communication, to notify a third party such as parents or authorities, or simply to issue a warning.
  • if at step 745 the semantic parameter value is less than the threshold, then the communication is allowed to continue and control moves to step 705 where the context of the communication is checked or reassessed against the current communication. Control then moves to step 710 where security settings and goals are checked or reassessed against the current communication. Control then moves to step 715 where the next portion of the communication is analysed.
  • An example of the method of Figure 7 is outlined below in a sample communication between a predatory attacker (A) and a child victim (C).
  • NP Noun Phrase
  • WHADVP Wh-Adverb Phrase, such as how/why
  • VP Verb Phrase
  • ADVP Adverb Phrase.
  • the sentence is also analysed in terms of its predicate-argument structure and this is compared against the context of the communication. In this case, the predicate is the word "going" and the argument is "today".
  • the semantic parameters of Co-incidenceLevel, PersonalPrivacy, SuspiciousDialogue, and PhysicalSafety are not triggered at this stage.
  • NP (NP (NP I'm) (NP (NP head) (PP of (NP my class,))) (VP it's (NP my favourite subject.))
  • SuspiciousDialogue is incremented as a third instance of Co-incidenceLevel has been triggered.
  • the user security settings have determined that SuspiciousDialogue should be incremented if there are 3 instances of Co-incidenceLevel.
  • the example above is a sample communication between two parties, the predatory attacker (A) and the child victim (C) over a real time instant messaging system.
  • the two parties need not be communicating in real time.
  • the method may be utilised over a blog (weblog) or a web site that allows a user to edit content on the page (a Wiki). Further, the method may be utilised over a real time instant messaging system where the communication is between two parties with one party being a chatbot.
  • the chatbot may, for example, be as described in Australian Provisional Application No 2006902803.
  • Optional embodiments of the present invention may also be said to broadly consist in the parts, elements and features referred to or indicated herein, individually or collectively, in any or all combinations of two or more of the parts, elements or features, and wherein specific integers are mentioned herein which have known equivalents in the art to which the invention relates, such known equivalents are deemed to be incorporated herein as if individually set forth.
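
To make the parameter counting and thresholding described in the items above concrete, the following is a minimal sketch in Python. The parameter names (Co-incidenceLevel, SuspiciousDialogue) are taken from the description; the class name, the threshold values and the action labels are assumptions made purely for illustration and are not part of the specification.

```python
# Hedged sketch of the counting and threshold behaviour described above: a third
# instance of Co-incidenceLevel populates SuspiciousDialogue, and a security-setting
# threshold triggers an action (steps 735-750). Parameter names come from the text;
# the class name, threshold values and action labels are assumptions.
from collections import Counter


class BehaviourAnalyser:
    def __init__(self, thresholds):
        self.thresholds = thresholds      # e.g. {"SuspiciousDialogue": 1}
        self.counts = Counter()

    def record(self, parameter: str) -> str:
        """Increment a detected semantic parameter and decide what to do next."""
        self.counts[parameter] += 1
        # learnable rule from the description: a third co-incidence is itself suspicious
        if parameter == "Co-incidenceLevel" and self.counts[parameter] == 3:
            self.counts["SuspiciousDialogue"] += 1
        for name, limit in self.thresholds.items():
            if self.counts[name] >= limit:
                return "warn, notify a third party, or terminate"   # step 750
        return "continue analysing"                                 # back to step 705


analyser = BehaviourAnalyser({"SuspiciousDialogue": 1})
for detected in ["Co-incidenceLevel", "Co-incidenceLevel", "Co-incidenceLevel"]:
    action = analyser.record(detected)
print(action)   # the third co-incidence populates SuspiciousDialogue and triggers action
```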

Abstract

Disclosed is a method for automatically analysing a communication to provide an alert or indication in relation to content, behaviour or goals. This may relate to training, child protection or other applications. The method includes the steps of: (a) analysing the language of the communication to determine one or more semantic representations of the communication, said semantic representations being constrained by predetermined contextual rules; (b) analysing the semantic representation relative to predetermined domain models of behaviour or goals; and (c) providing an alert or indication if the semantic content, in the context, meets predetermined parameters.

Description

COMMUNICATION ANALYSIS AND INTERPRETATION
Technical Field
[0001] The present invention generally relates to a method, system, computer readable medium of instructions and/or computer program product for detecting the nature of a communication between one or more parties.
Background of the Invention
[0002] Electronic communications have become a routine way of interacting. Instant messaging and email have been in use for some years. Social networking web sites, such as FaceBook ®, Twitter ®, Snapchat ® and Linkedin ®, have become a popular method of communication amongst children, teenagers and adults. Other related forms include chat, both text and verbal, associated with online gaming, both console based, PC based and multiplayer online systems.
[0003] These communications mechanisms are used for personal communications, for interaction with friends and family, as well as for business and professional advertising, marketing and communications. Indeed, the line between different types of communications and their purpose has blurred over time, producing ambiguity in the purpose and content of communications.
[0004] For example, the relative anonymity afforded by the Internet provides an opportunity for predators and bullies to anonymously target users, particularly children.
[0005] In order to protect users, and in particular child users, many social networking web sites and instant messaging software implement verification systems such as credit card verification and age verification. Others employ content control software such as CyberNanny™ in order to ensure that users are legitimate or within a defined age group before allowing them to access their service or applications. Such URL, identity and content filters have significant limitations, particularly when communications are via a game or social media site which has been approved by a parent or guardian.
[0006] These systems provide an initial barrier to ensure the validity of users within a chat room or social network web site, however if the initial barrier is circumvented these systems provide little or no protection at all.
[0007] The increasing complexity and ambiguity of communications means that there are a variety of other situations where it would be useful to have improved electronic management and understanding of communications.
[0008] It is an object of the present invention to provide automated methods and systems to assist in analysing communications and thereby determining the nature of those communications.
[0009] The reference in this specification to any prior publication (or information derived from it), or to any matter which is known, is not, and should not be taken as an acknowledgment or admission or any form of suggestion that that prior publication (or information derived from it) or known matter forms part of the common general knowledge in the field of endeavour to which this specification relates.
Summary of the Invention
[0010] According to one aspect, the present invention provides a method for automatically analysing a communication to provide an alert or indication in relation to content, behaviour or goals, including at least the steps of:
(a) analysing the language of the communication to determine one or more semantic representations of the communication, said semantic representations being constrained by predetermined contextual rules;
(b) analysing the semantic representation relative to predetermined domain models of behaviour or goals; and
(c) providing an alert or indication if the semantic content, in the context, meets predetermined parameters.
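
As a purely illustrative aid, steps (a) to (c) can be pictured as a small analysis loop. Every name, score and threshold in the sketch below is an assumption introduced for illustration and is not part of the claimed method.

```python
# Illustrative only: the names below (SemanticRep, Alert, analyse_communication)
# and the scoring scheme are assumptions, not terms defined by the specification.
from dataclasses import dataclass
from typing import Optional


@dataclass
class SemanticRep:
    predicate: str      # e.g. the contextually selected sense of the main verb
    arguments: dict     # role -> value, as in a predicate-argument structure


@dataclass
class Alert:
    reason: str


def analyse_communication(utterance: str, context: dict,
                          domain_model: dict, parameters: dict) -> Optional[Alert]:
    # (a) a stand-in semantic analysis, constrained by contextual rules:
    #     here the context simply fixes which sense of a predicate is read.
    words = utterance.lower().split()
    verb = words[0] if words else ""
    sense = context.get("preferred_sense", {}).get(verb, verb)
    rep = SemanticRep(predicate=sense, arguments={"text": utterance})

    # (b) compare the representation against a predetermined domain model
    score = domain_model.get(rep.predicate, 0.0)

    # (c) provide an alert if the behaviour, in this context, meets the parameters
    if score >= parameters.get("alert_threshold", 1.0):
        return Alert(reason=f"'{rep.predicate}' matched the domain model")
    return None


print(analyse_communication("meet me after school",
                            {"preferred_sense": {"meet": "meet.in-person"}},
                            {"meet.in-person": 1.0},
                            {"alert_threshold": 1.0}))
```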
[0011] According to another aspect, the present invention provides a system for automatically analysing a communication to provide an alert or indication in relation to content, behaviour or goals, the system including at least a processor with associated memory, and a communication interface for receiving the communications to be analysed and for sending alerts and/or indications, the processor including software so that it is operatively adapted to analyse the language of the communication to determine one or more semantic representations of the communication, said semantic representations being constrained by predetermined contextual rules, and to further analyse the semantic representations relative to predetermined domain models of behaviour or goals, and, if the semantic representation, in the context, meets predetermined parameters, to transmit an alert or indication.
[0012] The present invention also encompasses software to implement the method.
[0013] The term predetermined is used to indicate that the parameters, domain models and other features are determined at the time that a particular analysis is undertaken. However, the term predetermined encompasses both specific items or selections made in advance by an administrator or user, as well as the domain models, parameters and other features being modified dynamically, through the application of self-learning or correlating algorithms, and/or through the exchange of communications with other users in the system. The latter may be communications with human users initiated by the system. The detail of how this is implemented is necessarily dependent upon the particular application of the present invention, as would be understood by those skilled in the art.
Brief Description of the Drawings
[0014] Illustrative embodiments of the present invention will now be described with reference to the accompanying figures, in which:
Figure 1 shows the relationship between the principal subsystems of a language;
Figure 2 shows a functional block diagram of an example of a processing system that can be utilised to embody or give effect to a particular embodiment;
Figure 3 shows a basic flow diagram illustrating the dataflow of one implementation of the present invention;
Figure 4 shows a flow diagram illustrating the dataflow of the communication manager component;
Figure 5 shows a functional block diagram of a communication between two parties;
Figure 6 shows an overall system architecture of an implementation of the present invention; and
Figure 7 illustrates a flow diagram of a method of operation of an implementation of the present invention.
Detailed Description of the Invention
[0015] The present invention will be described with reference to particular illustrative examples. It will be appreciated that, because of the nature of the invention, there are many possible different modes in which it can be implemented, using many different software methodologies, schemes, and coding languages. Those skilled in the art will understand that the examples are merely illustrative of how the invention may be implemented, and are not limitative of the scope and possible ways of implementation of the present invention.
[0016] The present invention may be implemented as a stand-alone system, within a particular website or server. However, it is envisaged that it may equally be implemented as an additional feature for another site or facility. For example, in the context of preventing inappropriate communications with children, or bullying, the present invention may be implemented as an additional feature on a social media or gaming site. The implementation of the present invention may be operated as a standalone module or service operating in the background, and activated as a security preference from the social media site. The module would send a message when the required parameters and threshold had been met, for action by the social media site.
[0017] In another potential application, the present invention could be applied to a training situation, for example for sales staff. In this instance, the analysis and interpretation may be looking to detect problematic behaviours in interactions with real or simulated customers, for example arguing with customers, to facilitate their correction. The system may also detect positive behaviours, for example selling up, and allow the system to reinforce them. In this case, the present invention may be implemented as a module or service operating in the background of a training or monitoring system.
[0018] Hence, while for the purposes of explanation and accuracy the present invention is primarily discussed in the context of child protection, it will be appreciated that it is broadly applicable where automated processes to identify an (at least postulated) intention can be applied.
[0019] It will also be appreciated that while the examples primarily deal with text communications, the principles of the present invention are equally applicable to spoken communications. On the one hand, automated speech-to-text systems, for example Dragon Dictate and other commercially available systems, are readily available to perform relatively high quality conversions. Further, in a suitable system, the tone and inflections which in many cases provide an emotional and intentional overtone to a spoken conversation can be analysed, and taken into account in the analysis processes.
[0020] In the figures, incorporated to illustrate features of an example embodiment, like reference numerals are used to identify like parts throughout the figures.
[0021] The present invention utilises a structured, analytical approach based on linguistics to detect the nature of a communication between one or more parties. Figure 1 depicts the conceptual relationship between the principal subsystems of a language, particularly as it will be dissected for the purposes of the present implementation.
[0022] In particular, context 106 presides over the formulation of meaning from phonology and graphology 100, through to the lexicogrammar 102 and semantics 104. Communication takes many forms and modes. Phonology 100 is the base unit of speech and graphology 100 is the base unit of writing. The lexicogrammar 102 is the system of words and writing whilst semantics 104 relates to the system of meaning. The interaction between two or more parties and, for example, their cultural context, dictate how they interact, the tone of their speech, the selection of their words, and so on. The influence of context 106 is clearly demonstrable at the phonology and graphology 100 level. It is within the layers of lexicogrammar 102 and semantics 104 that much of the influence of context 106 is determined and applied by implementations of the present invention.
[0023] Implementations of the present invention subject the communication to a formal analysis, extracting possible words from the phonology and graphology 100, applying a lexicogrammar 102 to identify the words, or at least the possible words, and using semantics 104 to impute a meaning to the identified language. In all cases, the context 106 is used to constrain the set of alternatives, so that the nature and intentions of a communication can be more accurately determined.
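
As a hedged illustration of this layered, context-constrained analysis, the following sketch passes a short utterance through three placeholder stages. The stage functions, word lists and sense labels are invented for the example and stand in for the far richer processors described later in the specification.

```python
# A sketch of the layered analysis of Figure 1: each stage narrows the candidate
# readings, and the context constrains the alternatives. The stage functions are
# crude placeholders for illustration, not the patent's actual processors.
def graphology(raw: str) -> list:
    # recover candidate word forms from the written signal
    return raw.lower().replace("?", " ").replace(".", " ").split()


def lexicogrammar(words: list) -> list:
    # attach coarse word-class guesses (a stand-in for real tagging and parsing)
    verbs = {"meet", "go", "send", "tell"}
    return [{"form": w, "class": "verb" if w in verbs else "other"} for w in words]


def semantics(tagged: list, context: dict) -> dict:
    # impute a clause-level meaning, letting the context choose among senses
    main = next((t["form"] for t in tagged if t["class"] == "verb"), None)
    sense = context.get("sense_preferences", {}).get(main, main)
    return {"predicate": sense, "participants": context.get("participants", [])}


context_106 = {"participants": ["adult stranger", "child"],
               "sense_preferences": {"meet": "meet.in-person"}}
print(semantics(lexicogrammar(graphology("Can we meet today?")), context_106))
```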
Example of a Processing System
[0024] A particular embodiment of the present invention can be realised using a processing system, an example of which is shown in Figure 2. The processing system 200 can be used as a client processing system and/or a server processing system. It will be appreciated that the present invention is operable using conventional general purpose processors and computer systems. The capabilities of the system will of course need to be adequate for the expected load and number of users, as will be well understood in the art. In particular, performing aspects of the present invention in real time may require significant resources. The term processor may mean a set of multiple processors, or may mean use of only part of a processor which is also used for other tasks.
[0025] Processing system 200 includes at least one processor 202, or processing unit or plurality of processors, memory 204, at least one input device 206 and at least one output device 208, coupled together via a bus or group of buses 210. In certain embodiments, input device 206 and output device 208 could be the same device. An interface 212 can also be provided for coupling the processing system 200 to one or more peripheral devices, for example interface 212 could be a PCI card or PC card. At least one storage device 214 which houses at least one database 216 can also be provided. The memory 204 can be any form of memory device, for example, volatile or non-volatile memory, solid state storage devices such as flash memory, hard drives, etc. The processor 202 could include more than one distinct processing device, for example to handle different functions within the processing system 200. The memory 204 typically stores an operating system to provide functionality to the processing system 200. A file system and files are also typically stored on the storage device 214 and/or the memory 204. [0026] Input device 206 receives input data 218 and can include, for example, a keyboard, a pointer device such as a pen-like device or a mouse, audio receiving device for voice controlled activation such as a microphone, data receiver or antenna such as a modem or wireless data adaptor, data acquisition card, etc. Input data 218 could come from different sources, for example keyboard instructions in conjunction with data received via a network. Output device 208 produces or generates output data 220 and can include, for example, a display device or monitor in which case output data 220 is visual, a printer in which case output data 220 is printed, a port for example a USB port, a peripheral component adaptor, a data transmitter or antenna such as a modem or wireless network adaptor, etc. Output data 220 could be distinct and derived from different output devices, for example a visual display on a monitor in conjunction with data transmitted to a network. A user could view data output, or an interpretation of the data output, on, for example, a monitor or using a printer, or via audio transmitting device such as a speaker. The storage device 214 can be any form of data or information storage means, for example, volatile or non-volatile memory, solid state storage devices, magnetic devices, etc.
[0027] In use, the processing system 200 can be adapted to allow data or information to be stored in and/or retrieved from, via wired or wireless communication means, the at least one database 216. The interface 212 may allow wired and/or wireless communication between the processing unit 202 and peripheral components that may serve a specialized purpose. The processor 202 receives instructions as input data 218 via input device 206 and can display processed results or other output to a user by utilising output device 208. More than one input device 206 and/or output device 208 can be provided. It should be appreciated that the processing system 200 may be any form of terminal, server processing system, specialised hardware, computer, computer system or computerised device, personal computer (PC), mobile or cellular telephone, mobile data terminal, portable computer, Personal Digital Assistant (PDA), pager or any other similar type of device.
[0028] The processing system 200 may be a part of a networked communications system. The processing system 200 could connect to a network, for example the Internet or a WAN. The network can include one or more client processing systems and one or more server processing systems, wherein the one or more client processing systems and the one or more server processing systems are forms of processing system 200. Input data 218 and output data 220 could be communicated to other devices via the network. The transfer of information and/or data over the network can be achieved using wired communications means or wireless communications means. The server processing system can facilitate the transfer of data between the network and one or more databases. It will be understood that what is described to this stage is a generally conventional computing arrangement.
[0029] In some embodiments, the processing system 200 may include an adult supervised profiler 221, which is a component that allows adults or parents to flag items that are inappropriate for their children. The adult supervised profiler 221 allows the processing system 200 to be customised based on the needs of the parents or guardians. The adult supervised profiler 221 may store items manually flagged by the adults or parents, and/or may learn from the settings of other users of the processing system 200, so that users with the same profiles or activities as other users may inherit security settings from one another if appropriate. The items may be flagged based on specific semantics, for example 'material related to drug manufacture', or on a more specific basis.
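
A minimal sketch of how such a profiler might store flagged items and inherit settings between users with overlapping activity profiles is given below. The data layout, the similarity rule and the example values are assumptions; the specification describes only the behaviour.

```python
# Sketch of how the adult supervised profiler 221 might store flagged items and
# inherit settings between users with similar profiles. The similarity rule and
# data layout are assumptions; the patent only describes the behaviour.
class AdultSupervisedProfiler:
    def __init__(self):
        self.flags = {}        # user_id -> set of flagged semantic topics
        self.profiles = {}     # user_id -> set of interests/activities

    def flag(self, user_id: str, topic: str) -> None:
        self.flags.setdefault(user_id, set()).add(topic)

    def register_profile(self, user_id: str, activities: set) -> None:
        self.profiles[user_id] = activities

    def inherited_flags(self, user_id: str, min_overlap: int = 2) -> set:
        """Flags learned from users whose activity profiles overlap sufficiently."""
        own = self.profiles.get(user_id, set())
        learned = set(self.flags.get(user_id, set()))
        for other, acts in self.profiles.items():
            if other != user_id and len(own & acts) >= min_overlap:
                learned |= self.flags.get(other, set())
        return learned


profiler = AdultSupervisedProfiler()
profiler.register_profile("child_a", {"minecraft", "chat", "age_10"})
profiler.register_profile("child_b", {"minecraft", "chat", "age_11"})
profiler.flag("child_b", "material related to drug manufacture")
print(profiler.inherited_flags("child_a"))   # inherits child_b's flag (overlap >= 2)
```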
[0030] In some embodiments, the processing system 200 may include a threat modeller 222. The threat modeller 222 is a learning component which analyses the behaviour of specific threat types, for example paedophiles and rapists, to build a model or profile that can be compared to the profiles of users in a particular system.
Detecting the nature of a communication
[0031] Figure 3 illustrates the overall data flow of the method of determining the nature of a communication. Each of the components of language described with reference to Figure 1 contributes to detecting the nature of the communication and to the generation of appropriate action in response.
[0032] There are four components in the overall data flow of the present implementation as illustrated in Figure 3, namely text input component 302, semantic analysis component 304, behaviour analysis component 306 and the communication manager component 308. These components are executed on a processing system 200 such as that shown in Figure 2.
[0033] All of these components operate within a predefined context denoted by contextual space 300, which is stored in a storage device 214. The contextual space 300 represents the configuration of the parties in the communication which is to be analysed and the relationships between the parties within the current context. The database 216 may contain a collection of configurations or situations describing threatening and safe conversations, used as boundary models for comparison with a conversation between the two users. The contextual space 300 is only affected by either of the parties in the communication to the extent that each of the parties is responsible for their own utterances in the overall communication. Therefore, the parties do not have direct control over the contextual space 300. An example contextual space 300 may be the relationship between an attorney and a client. For example, assume that John Smith spoke to his attorney Jill Black about the class action law suit. The following table defines a contextual space for this conversation:
Participant 0    Participant 1    Relationship 0-1    Relationship 1-0
John Smith       Jill Black       client              attorney
Speaker          Hearer           speak               listen
Client           Attorney         retain              advise
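By way of illustration only, the table above could be held in a simple data structure of the following kind. This Python sketch is not part of the disclosed implementation; the class and field names are assumptions chosen for clarity.

    # Illustrative sketch only: one way to hold the participants and directed
    # relationships of a contextual space such as the attorney/client example above.
    # Class and field names are assumptions, not the disclosed implementation.
    from dataclasses import dataclass, field

    @dataclass
    class ContextualSpace:
        participants: list                                   # e.g. ["John Smith", "Jill Black"]
        relationships: dict = field(default_factory=dict)    # (i, j) -> list of role labels

        def add_relationship(self, source, target, role):
            self.relationships.setdefault((source, target), []).append(role)

    ctx = ContextualSpace(participants=["John Smith", "Jill Black"])
    ctx.add_relationship(0, 1, "client")     # Participant 0 is the client of Participant 1
    ctx.add_relationship(1, 0, "attorney")   # Participant 1 is the attorney of Participant 0
    ctx.add_relationship(0, 1, "speak")
    ctx.add_relationship(1, 0, "listen")
    ctx.add_relationship(0, 1, "retain")
    ctx.add_relationship(1, 0, "advise")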
[0034] An utterance in the form of text is entered into the text input component 302. The utterance may be received via an input device 206 or may be received as input data 218 from a network such as the Internet and stored in a storage device 214. The utterance is part of a communication between one or more parties. The utterance may be speech which has been converted into text. The utterance is then passed to the processor 202, where the semantic analysis component 304 interprets the meaning of the utterance in light of the contextual space 300. The semantic analysis component 304 includes a set of language processors 310, preferably arranged within a pipeline architecture. The language processors used may be those known in the art, such as tokenisation, lemmatisation, part-of-speech tagging, named entity recognition, syntactic parsing, and word sense disambiguation modules. The semantic analysis component 304 initially represents each clause within a sentence in the communication in terms of its predicate-argument structure. Below is an example of the predicate-argument structure for the 'slight movement' sense of the predicate "edge" in the sentence "Tokyo stocks edged up Wednesday in relatively active but unfocused trading". Predicate-argument structure refers to the arguments of a verb (in this case "edge") and their respective roles. The verb "edge" has many senses, and the sense referred to in the above sentence is that of "slight movement".
Argument 1 (Arg1): Tokyo stocks
Relationship (REL): edged
Argument 5 (Arg5): up
Argument Modifier - Temporal (ArgM-TMP): Wednesday
Argument Modifier - Location (ArgM-LOC): in relatively active but unfocused trading
[0035] The roles depicted above are those that have been identified for the example sentence. These roles are listed along with their corresponding argument values from the example sentence. The predicate-argument structure is firstly interpreted within the context of its respective roleset and further with respect to the contextual space 300. A roleset defines the possible arguments and their types (e.g. Modifier-Temporal, Modifier-Location) for any given sense of a predicate verb. In the example sentence above, the roleset for 'edge' includes a total of six possible argument types, with each argument type having a respective role or function. The full roleset for this type of predicate, together with each argument's default definition, is presented below:
Arg1: logical subject : patient : thing moving
Arg2: amount moved
Arg3: start point
Arg4: end point
Arg5: direction - REQUIRED
ArgM-LOC: medium : location
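By way of illustration only, a predicate-argument structure and its roleset could be represented and cross-checked as in the following Python sketch. The data layout and function name are assumptions made for exposition, not the implementation disclosed in the specification.

    # Illustrative sketch only: a predicate-argument structure for the example
    # sentence and a check against the 'edge' roleset above. The data layout and
    # function name are assumptions made for exposition, not disclosed code.
    EDGE_ROLESET = {
        "Arg1": "logical subject / thing moving",
        "Arg2": "amount moved",
        "Arg3": "start point",
        "Arg4": "end point",
        "Arg5": "direction (required)",
        "ArgM-LOC": "medium / location",
    }
    REQUIRED = {"Arg5"}

    structure = {
        "predicate": "edge",
        "sense": "slight movement",
        "arguments": {
            "Arg1": "Tokyo stocks",
            "Arg5": "up",
            "ArgM-TMP": "Wednesday",
            "ArgM-LOC": "in relatively active but unfocused trading",
        },
    }

    def check_against_roleset(structure, roleset, required):
        """Return labels outside the roleset (ArgM-* modifiers are always allowed)
        and any required labels that are missing."""
        args = structure["arguments"]
        unknown = [a for a in args if a not in roleset and not a.startswith("ArgM-")]
        missing = [r for r in required if r not in args]
        return unknown, missing

    print(check_against_roleset(structure, EDGE_ROLESET, REQUIRED))   # ([], [])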
[0036] Once the utterance in the communication has been interpreted by the semantic analysis component 304, the utterance is passed to the behaviour analysis component 306 in the processor 202. The analysis of the behaviour of the communication will be described throughout this example with reference to "threatening behaviour". It will be appreciated that any type of behaviour may be analysed and detected, such as unsolicited offers for the sale of products. The analysis of the behaviour of the communication is based upon the semantic interpretation of utterances (which make up one or more sentences within a communication) within a given contextual space 300 and domain model 312, in combination with the users' current security settings 314. The security settings 314 are stored in a storage device 214. The domain model 312 is a learned mapping between context sensitive semantic structures and semantic parameters. A domain model consists of a set of entry conditions and a set of output features. In this case, the domain model 312 is "threatening behaviour". The behaviour analysis component 306 may also pass information to and/or from the threat modeller 222 and the adult supervised profiler 221 of the processing system 200.
[0037] The behaviour analysis component 306 uses a number of semantic parameters 316 in order to infer specific levels of behaviour. The semantic parameters 316 are stored in the storage device 214. In the case of the domain model of "threatening behaviour", the semantic parameters 316 may include Co-incidenceLevel, PersonalPrivacy, SuspiciousDialogue, AntagonisticBehaviour and PhysicalSafety. As more utterances are analysed as part of the overall communication, each of the semantic parameters 316 is populated when an instance of the semantic parameter 316 occurs.
[0038] The interpretation of behaviours, and in this case threats, by the semantic analysis component 304 via the semantic parameters 316 relies upon the analysis of goals derived from the contextual space 300 of the communication. The options that are chosen from within a network determine the degree to which threats are assessed. Each choice that is made is contrasted within a range of possibilities for a given context.
[0039] It will be appreciated that in other applications of this invention, the same approach is used, but the behaviours and intentions which are used, and indeed the context, will be different. In some cases, it may be important to see whether behaviours are moving towards a particular intention, as well as to see whether they are moving away from that intention.
[0040] In order to ascertain the set of outcomes that are possible as a result of an utterance in a communication, a combination of world knowledge and local knowledge (i.e. a user's typical linguistic usage patterns) is employed. World knowledge in the form of axioms such as "Virgin Islands is a member of Island" is used to build and update situational contexts. World knowledge may be stored in a database 216 in a storage device 214 or may be accessed from an online library such as the British National Corpus. The world knowledge can be used to interpret the semantic content of an utterance in the context, so that an intention can be assessed and determined. Determining why a particular utterance in a communication was made in the contextual space 300 is an integral step in determining the goals underlying the current communication.
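By way of illustration only, the following Python sketch shows a minimal store of world-knowledge axioms of the kind mentioned above and a lookup used when widening the interpretation of an utterance. The triple representation and function names are assumptions; the specification only requires that such axioms be available locally or from a corpus.

    # Illustrative sketch only: world-knowledge axioms held as subject-relation-object
    # triples and a category lookup used when interpreting an utterance in context.
    # The triple store and relation names are assumptions introduced for this example.
    AXIOMS = {
        ("Virgin Islands", "member_of", "Island"),
        ("Glebe", "suburb_of", "Sydney"),
    }

    def holds(subject, relation, obj, axioms=AXIOMS):
        """True if the axiom is known, e.g. holds("Virgin Islands", "member_of", "Island")."""
        return (subject, relation, obj) in axioms

    def categories_of(entity, axioms=AXIOMS):
        """All categories or containers the entity is known to belong to."""
        return {o for s, r, o in axioms if s == entity}

    print(holds("Virgin Islands", "member_of", "Island"))   # True
    print(categories_of("Glebe"))                           # {'Sydney'}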
[0041 ] The communication manager component 308 is responsible for the facilitation of communication. The principal goals of the communication manager component 308 are the interpretation, incorporation, and generation of utterances which make up a sentence and, in turn, a communication. The communication manager component 308 receives and considers the output of the semantic analysis component 304 and integrates this into a model of discourse. The communication manager component 308 considers factors such as relevance, factuality, and conciseness and interprets these factors in light of the overall contextual space 300.
[0042] In the case of a single user communicating, the communication manager component 308 may simulate communication with the single user by generating appropriate utterances in response to the user. In this way, the communication manager component 308 acts as a chatbot. Utterances are incorporated into the communication in light of the contextual space 300 and also subject to constraints supplied by a domain model 312. In this case the domain model 312 is that of threat analysis and output is generated given the mapping constraints associated with the domain model 312. The mapping constraints may be learnable rules that are parameterised by the user's security settings 314. These rules influence, for example, the number of semantic parameters 316 that must be encountered (and hence the semantic parameter being populated) before a different semantic parameter is populated or the communication is terminated. For example, a rule may include identifying an instance of the semantic parameter SuspiciousDialogue when a third instance of the semantic parameter Co-incidenceLevel is detected in a communication between one or more parties.
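By way of illustration only, the cascading rule described above (a third instance of Co-incidenceLevel populating SuspiciousDialogue) could be expressed as in the following Python sketch, with the thresholds drawn from the security settings 314. The function and setting names are assumptions introduced for this example.

    # Illustrative sketch only: counting semantic parameters 316 and applying a rule
    # parameterised by the security settings 314, of the kind described above.
    # The function and setting names are assumptions, not disclosed code.
    from collections import Counter

    SECURITY_SETTINGS = {
        "coincidence_to_suspicious": 3,         # instances before SuspiciousDialogue is populated
        "physical_safety_action_threshold": 5,  # instances before action is taken
    }

    def record(parameter, counts, settings=SECURITY_SETTINGS):
        """Increment a semantic parameter and apply the cascading rule."""
        counts[parameter] += 1
        if (parameter == "Co-incidenceLevel"
                and counts[parameter] == settings["coincidence_to_suspicious"]):
            counts["SuspiciousDialogue"] += 1
        if counts["PhysicalSafety"] >= settings["physical_safety_action_threshold"]:
            return "take_action"   # e.g. terminate the session or notify a parent
        return "continue"

    counts = Counter()
    for _ in range(3):
        record("Co-incidenceLevel", counts)
    print(counts["SuspiciousDialogue"])   # 1

Because the thresholds sit in the security settings, the same rule can be tightened or relaxed per user without changing the analysis components.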
[0043] Figure 4 illustrates in further detail the data flow of the communication manager component 308, which may be carried out by software in a processing system 200. There are five modules in the overall data flow of the communication manager component 308, namely the recognition module 400, inference and planning module 402, dialogue management module 404, language generation module 406 and synthesis module 408. The recognition module 400 receives the utterance from the outputs of the semantic analysis component 304 and behaviour analysis component 306 as shown in Figure 3. The recognition module 400 analyses the utterance for one or more possible goals in the utterance. If a potential goal in the utterance is identified, the recognition module 400 passes the utterance containing the potential goal to the inference and planning module 402, which analyses the utterance to determine what the goal of the communication might be. Goals are inferred by ranking the most probable outcome from all possible outcomes or 'configurations'. As more and more utterances in a communication are analysed, the inference and planning module 402 is able to better identify the potential goal of the communication.
[0044] Once a potential goal has been identified, the inference and planning module 402 passes the utterance to the dialogue management module 404. The dialogue management module 404 manages the communication. In the case of a single user communicating with the communication manager component 308, the dialogue management module 404 manages the nature of the communication between the communication manager component 308 (acting as a chatbot) and the single user. The dialogue management module 404 also feeds utterances and goals back to the behaviour analysis component 306 which determines whether or not the goal of the communication falls within the semantic parameters 316.
[0045] The dialogue management module 404 further analyses the utterance to determine whether any inference can be made from any implicit language in the utterance or for language that is idiosyncratic to that particular user. This information may also be fed back to the behaviour analysis component 306. As mentioned, the behaviour analysis component 306 may pass information to and/or from the threat modeller 222 and the adult supervised profiler 221 of the processing system 200.
[0046] For example, if at any time during the interaction between two users a user is deemed to match a known threat model, the suspected user profile may be targeted and the appropriate action is taken. The threat modeller 222 may learn both from interactions between other users and from specific training data given to the modeller 222. The adult supervised profiler 221 may also provide additional information to the behaviour analysis component 306 such that the dialogue management module 404 may manage the communication according to the additional information. The additional information provided by the adult supervised profiler 221 may include knowledge, such as a particular religion or belief, that parents deem inappropriate for their own children.
[0047] In the case of a single user communicating with the communication manager component 308 (acting as a chatbot), the dialogue management module 404 passes the utterance to the language generation module 406 and the synthesis module 408. The language generation module 406 generates an utterance in response to the utterance that the user entered into the text input component 302 of Figure 3. The dialogue management module 404 also attempts to generate a concise, factual, and relevant utterance that will advance the likelihood of achieving the current goal under consideration. The utterance must also be appropriate given the overall contextual space 300. The utterance is then passed to the synthesis module 408, which may output the utterance to the user as speech or may output the utterance to the user as text in a form which mirrors their writing style.
[0048] The communication manager component 308 may also store information about communications related to products or services in a database 216 on a storage device 214, as shown in Figure 2. This information can be provided to market research companies on a subscription basis, for example the number of positive discussions of a particular brand of product in communications between teenagers. Further, the use of keywords in a communication may also trigger suitable advertisements to be displayed to the parties.
[0049] Figure 5 shows a functional block diagram of a communication between two parties including a computer 500 belonging to a first party, a network 502 and a computer 504 belonging to a second party. The computers 500, 504 are connected to the network 502. The network 502 may be a local area network or a wide area network such as the Internet. A communication is initiated by either the first party computer 500 or second party computer 504. The communication may be in the form of text or voice data. The method of detecting the nature of a communication between the first party computer 500 and the second party computer 504 may be implemented by software resident on either the first party computer 500, the second party computer 504 or both computers. Alternatively, the software may reside on a server 506 associated with the network 502.
[0050] Figure 6 is a flow diagram illustrating overall system architecture of the method of detecting the nature of a communication. Process monitor 600 oversees the overall system and has the function of managing accounts that are employing the software associated with the method of detecting the nature of a communication. The process monitor 600 also functions to associate advertising and product placement with the communication, and to provide feedback to a website.
[0051 ] Within the system is a central dialogue controller hub 602 which co-ordinates the input and output from the text I/O NLP server 604, recognition server 606, syntactic parsing component 610, confidence component 612, utterance analyser 616, security domain component 620, NLP language generation component 624 and synthesis component 628.
[0052] The Text I/O NLP (Natural Language Processing) server 604 is responsible for managing network communications, accepting and disseminating utterances from one or more parties in a communication.
[0053] The input/output utterances may be in the form of speech. In this case, a speech recognition component 608 is used and this is associated with a speech recognition server 606 which encodes/decodes the speech into text. The speech recognition component 608 may decode the speech in parallel as shown in Figure 6.
[0054] The syntactic parsing component 610 interprets the meaning of the utterance in light of the context as described in Figure 3. Whereas semantic parsing refers to the overall process of extracting meaning from dialogue (text or voice), syntactic parsing is a subset of semantic extraction and analysis. In the context of the process monitor 600 shown in Figure 6, the syntactic parsing component 610 extracts the syntactic references in a particular sentence string that may allude to a potential threat in a conversation. Combined with contextual learning and referencing abilities, as well as rhetorical structures, the process monitor 600 is better able to determine the semantics of a particular goal within a conversation session.
[0055] The confidence component 612 determines the confidence with which the meaning of the utterance has been determined at the syntactic parsing component 610.
[0056] Optionally, an utterance may be received via component 614, with the utterance not in the form of text or speech but in another modality such as touch, taste or smell. These modalities may be converted into a suitable form, such as text, by the component 614. The confidence with which the meaning of the utterance has been determined may be determined at the confidence component 612.
[0057] The utterance analyser 616 is responsible for the facilitation of communication. The utterance analyser 616 has an analogous function to the communication manager 308 shown in Figure 3; however, the functionality of the utterance analyser 616 relates to natural language feedback or generation (NLG). The communication manager 308 manages this as well as general communications between pertinent servers that communicate with the web server, the NLG server and the application server. The principal goals of the utterance analyser 616 are the interpretation, incorporation, and generation of utterances which make up a sentence and, in turn, a communication.
[0058] The goal detection component 618 constructs representations of 'possible world' configurations. This is achieved by projecting the current context in directions that are analogous to the way in which corpus analysis suggests discourse typically unfolds. By ranking these possible configurations in context the most likely goal is detected. The goal detection component 618 may compare the text or utterances entered by a user to a stored set of learned situations in the database 216. The information in the database 216 may be entered manually or obtained through downloaded libraries such as the British National Corpus.
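By way of illustration only, ranking candidate 'possible world' configurations against observed features of the dialogue could proceed as in the following Python sketch. The overlap score used here is an assumption chosen for brevity; the specification does not commit to any particular scoring function.

    # Illustrative sketch only: ranking candidate goal configurations against the
    # features observed so far in the dialogue. The overlap score is an assumption
    # chosen for brevity; any learned ranking model could take its place.
    def rank_goals(observed_features, candidate_configurations):
        """candidate_configurations maps a goal name to its expected features;
        returns goal names ordered from most to least likely."""
        scored = []
        for goal, expected in candidate_configurations.items():
            overlap = len(observed_features & expected) / max(len(expected), 1)
            scored.append((overlap, goal))
        return [goal for _, goal in sorted(scored, reverse=True)]

    candidates = {
        "arrange_offline_meeting": {"asks_location", "proposes_meeting", "asks_schedule"},
        "homework_help": {"mentions_study", "offers_help"},
    }
    observed = {"asks_location", "proposes_meeting", "mentions_study"}
    print(rank_goals(observed, candidates))   # ['arrange_offline_meeting', 'homework_help']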
[0059] The threat modeller 222 may assist in goal detection by providing to the goal detection component 618 models or profiles that are known to fit specific threat types, for example paedophiles and rapists.
[0060] The security domain component 620 defines the conceptual semantic predicates and the constraints under which they operate. The adult supervised profiler 221 may provide some of these constraints. These constraints may be further moderated by the client security settings component 622.
[0061 ] The NLP language generation component 624 and synthesis component 628 are used in the case of a single user communicating with the system acting as a chatbot. Utterance analyser 616 passes the utterance to the NLP language generation component 624 and in turn to the synthesis component 628. The NLP language generation component 624 generates an utterance in response to a user's utterance. The generated utterance is then passed to the synthesis component 628 which may output the utterance to the user as speech or may output the utterance to the user as text in a form which mirrors their writing style via text I/O NLP server 604.
[0062] A user language generation component 626 facilitates interaction between a user and the NLP language generation component 624. This "natural" interaction between the user and a machine interface enables the user to be notified of undesirable topics or threats in a conversation session.
[0063] Once an utterance is received as part of a communication between one or more parties, it is forwarded to the central dialogue controller hub 602 together with its associated metadata. An utterance in the form of text is ordinarily passed through the language processing pipeline (e.g. syntactic parsing component 610, confidence component 612, and utterance analyser 616) in order to arrive at a semantic and updated contextual representation of the utterance. The utterance is analysed and compared against the values of the current context, the semantic role output, the domain models (in this case, threatening behaviour), and the inferences made from the utterance. Further, potential goals of the utterance are analysed against the security settings in the security domain component 620 and against individual users' security settings in the client security settings component 622 so that appropriate alerts are triggered where necessary. In the case of a single user communicating with the system (i.e. the system acting as a chatbot) suitable utterances are generated via the language generation component 624, bearing in mind the context of the communication and the potential goal of the communication. Possible generated utterances are evaluated in terms of the effects they have on the goal of the communication. The generated utterances are optionally synthesised to speech via the synthesis component 628.
[0064] Referring to Figure 7, there is shown an example of a method 700 of detecting the nature of a communication between one or more parties over a network. At step 705, the context of the communication is determined. The context may be inferred through the semantic analysis of predicate argument structure as described in step 715. The context may also be inferred by semantic analysis in conjunction with lexical resources which may be stored on a local database, together with world knowledge accessed from an online library such as the British National Corpus or via corpus analysis. An example context setting may be the relationship between a client and attorney as previously described or the relationship between two children of primary school age.
[0065] At step 710 security settings and goals are determined. The security settings are determined by the user. Typical security settings may include the setting of semantic parameters. For example, the maximum number of instances of Co-incidenceLevel occurring during a communication may be limited to three times before another semantic parameter is incremented or action is taken. The goal settings are computed by taking into consideration the contextual space or context of the communication and based on the configuration of predicates in the communication and the content of their argument structures in the communication. The models from which the classification of goals and, in this case, threats are derived, may be based on the analysis of a corpus. In the case of "threatening behaviour" the corpus may contain a collection of Internet chat scripts annotated with semantic parameters relating to child safety. A typical goal may include not giving out personal information unless the parties in the communication are known to each other.
[0066] Once the context and security settings at steps 705 and 710 are determined, the communication between one or more parties is analysed at step 715. As described with reference to Figure 3, this step involves using language processors known in the art such as tokenisation, lemmatisation, part-of-speech tagging, named entity recognition, syntactic parsing, and word sense disambiguation modules. At step 720 the analysed communication is compared with the context which was determined at step 705. At step 725 the communication is analysed and a possible goal or outcome of the communication is predicted based on the content and context of the communication. At step 730, based on the context as determined at step 705 and the predicted goal of the communication at step 725, the potential behaviour (in this case "threatening behaviour") is assessed against one or more semantic parameters. In the case of "threatening behaviour" the semantic parameters may include Co-incidenceLevel, PersonalPrivacy, SuspiciousDialogue, AntagonisticBehaviour and PhysicalSafety.
[0067] At step 735, the nature of the communication is determined. If at step 735 the communication does not fall into one or more of the semantic parameters, then control moves back to step 705 where the context of the communication is checked or reassessed against the current communication. Control then moves to step 710 where security settings and goals are checked or reassessed against the current communication. Control then moves to step 715 where the next portion of the communication is analysed. In the case of a two way communication, the next portion of the communication may be the response to the previously analysed communication or a one-way follow-up communication from the same party. The two way communication may be between a first party who is human and a second party which is a chatbot.
[0068] If at step 735 the communication falls into one or more semantic parameters, then the communication is deemed to be suspicious and each instance of the semantic parameter detected is incremented. At step 745 the value of the one or more semantic parameters is checked against a threshold value which is determined in the security settings at step 710. If at step 745 the semantic parameter value is greater than the threshold, then control moves to step 750 where action is taken. The action taken may be to terminate the communication, to notify a third party such as parents or authorities, or simply to issue a warning.
[0069] If at step 745, the semantic parameter value is less than the threshold then the communication is allowed to continue and control moves to step 705 where the context of the communication is checked or reassessed against the current communication. Control then moves to step 710 where security settings and goals are checked or reassessed against the current communication. Control then moves to step 715 where the next portion of the communication is analysed.
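By way of illustration only, the control flow of method 700 can be summarised by the following Python sketch. The analyse and assess callables stand in for the analysis stages of steps 715 to 735 and are assumptions introduced for this example, not disclosed code.

    # Illustrative sketch only of the control flow of method 700 (steps 705-750).
    def monitor_communication(utterances, threshold, analyse, assess):
        """analyse(utterance) -> set of semantic parameter names triggered;
        assess(counts) -> the parameter count compared against the threshold."""
        counts = {}
        for utterance in utterances:                          # step 715
            for parameter in analyse(utterance):              # steps 720-735
                counts[parameter] = counts.get(parameter, 0) + 1
            if assess(counts) > threshold:                    # step 745
                return "action_taken", counts                 # step 750
        return "communication_allowed", counts

    # Minimal usage with stand-in analysis functions.
    demo = ["i have exams soon too", "what grade are you in?"]
    analyse = lambda u: {"PersonalPrivacy"} if "grade" in u else {"Co-incidenceLevel"}
    assess = lambda c: c.get("PersonalPrivacy", 0)
    print(monitor_communication(demo, threshold=3, analyse=analyse, assess=assess))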
Example of a Communication
[0070] An example of the method of Figure 7 is outlined below in a sample communication between a predatory attacker (A) and a child victim (C).
A: Hi, how are you going today?
(FRAG : (NP Hi,) (WHADVP how) (VP are (S (NP you) (VP going (ADVP today?)))))
Predicate : going
: Hi, how are (A1 you) going (AM-DIR today?)
Threat Levels:
Normal
[0071] Each of the words in the sentence is analysed, e.g. NP = Noun Phrase, WHADVP = Wh-Adverb Phrase (such as how/why), VP = Verb Phrase, ADVP = Adverb Phrase. The sentence is also analysed in terms of its predicate-argument structure and this is compared against the context of the communication. In this case, the predicate is the word "going", with "you" and "today" as its arguments. The semantic parameters of Co-incidenceLevel, PersonalPrivacy, SuspiciousDialogue, and PhysicalSafety are not triggered at this stage.
C: very well thanks.
(FRAG : (ADVP very well) (NP thanks.))
Threat Levels:
Normal
A: I'm so glad you came back online.
(S : (NP I'm) (ADJP so glad) (S (NP you) (VP came (ADVP back) (NP online.))))
Predicate : came
: I'm so glad (A1 you) came (AM-DIR back) (AM-TMP online.)
Threat Levels:
Normal
A: I thought you may never come back.
(S : (NP I) (VP thought (SBAR (S (NP you) (VP may (ADVP never) (VP come (ADJP back.)))))))
Predicate : thought
: (A0 I) thought (A1 you may never come back.)
Predicate : come
: I thought (A1 you) may (AM-ADV never) come (A2 back.)
Threat Levels:
Normal
C: I've been studying, I have exams coming up.
(S : (NP I've) (VP been (VP studying, (SBAR (S (NP I) (VP have (S (NP exams) (VP coming (ADVP up-)))))))))
Predicate : have
: I've been studying, (A0 I) have (A1 exams coming up.)
Predicate : coming
: I've been studying, I have (A1 exams) coming (AM-DIR up.)
Threat Levels:
Normal
A: i know how it is, i have exams soon too.
(S : (NP i) (VP know (SBAR (WHADVP how) (S (NP it) (VP is, (SBAR (S (NP i) (VP have (S (NP exams) (VP (ADVP soon) too.))))))))))
Predicate : know
: (A0 i) know (A1 how it is, i have exams soon too.)
Predicate : have
: i know how it is, (A0 i) have (A1 exams soon too.)
Threat Levels:
Co-incidenceLevel 1
[0072] The semantic parameter of Co-incidenceLevel is incremented given that the child C has previously indicated that they have exams coming up.
A: what grade are you in?
(S : (SBAR (WHNP what grade) (S (VP are (NP you)))) (VP in?))
Threat Levels:
Co-incidenceLevel 1
PersonalPrivacy 1
[0073] Given the context and the possible goal of this communication (i.e. to obtain personal details as to the age of the child) the semantic parameter of PersonalPrivacy is incremented.
C: I am in year 6.
(S : (NP I) (VP am (PP in (NP year 6.))))
Threat Levels:
Co-incidenceLevel 1
PersonalPrivacy 1
A: So am I, we should study together.
(FRAG : (ADVP So am) (NP I,) (S (NP we) (VP should (VP study (NP together.)))))
Predicate : study
: So am I, we should study (A1 together.)
Threat Levels:
Co-incidenceLevel 2
PersonalPrivacy 1
[0074] The semantic parameter of Co-incidenceLevel is incremented given that previously in the communication, the child C has indicated that they are in year 6.
C: Are you any good at maths?
(S : (VP Are (NP you) (NP any good) (PP at (NP maths?))))
Threat Levels:
Co-incidenceLevel 2
PersonalPrivacy 1
A: I'm head of my class, it's my favourite subject.
(NP : (NP (NP I'm) (NP (NP head) (PP of (NP my class,)))) (VP it's (NP my favourite subject.)))
Threat Levels:
Co-incidenceLevel 3
PersonalPrivacy 1
SuspiciousDialogue 1
[0075] The semantic parameter of SuspiciousDialogue is incremented as a third instance of Co-incidenceLevel has been triggered. In this case, the user security settings have determined that SuspiciousDialogue should be incremented if there are 3 instances of Co-incidenceLevel.
C: I have a lot of trouble with it.
(S : (NP I) (VP have (NP (NP a lot) (PP of (NP (NP trouble) (PP with (NP it.)))))))
Predicate : have
: (A0 I) have (A1 a lot of trouble with it.)
Threat Levels:
Co-incidenceLevel 3
PersonalPrivacy 1
SuspiciousDialogue 1
A: I know I can help, where are you from?
(S : (NP I) (VP know (SBAR (S (NP I) (VP can (VP help, (SBAR (WHADVP where) (S (VP are (ADJP (NP you) from?))))))))))
Predicate : know
: (A0 I) know (A1 I can help, where are you from?)
Threat Levels:
Co-incidenceLevel 3
PersonalPrivacy 2
SuspiciousDialogue 1
[0076] Given the context and the possible goal of this communication (i.e. to obtain personal details as to the location of the child) the semantic parameter of PersonalPrivacy is incremented.
C: Sydney.
(NP : Sydney.)
Threat Levels:
Co-incidenceLevel 3
PersonalPrivacy 2
SuspiciousDialogue 1
A: me too, where abouts?
(S : (NP me) (VP too, (SBAR (WHADVP where) (FRAG (ADJP abouts?)))))
Threat Levels:
Co-incidenceLevel 4
PersonalPrivacy 3
SuspiciousDialogue 1
C: Glebe.
(NP : Glebe.)
Threat Levels:
Co-incidenceLevel 4
PersonalPrivacy 3
SuspiciousDialogue 1
A: I'm not far from there actually, you should come over.
(S : (NP I'm) (VP (ADVP not far (PP from (NP there))) actually, (SBAR (S (NP you) (VP should (VP come (ADVP over.)))))))
Predicate : come
: I'm not far from there actually, (A1 you) should come (AM-DIR over.)
Threat Levels:
Co-incidenceLevel 4
PersonalPrivacy 3
SuspiciousDialogue 1
PhysicalSafety 1
[0077] Given the context and the possible goal of this communication (i.e. for the child to leave their physical location) the semantic parameter of PhysicalSafety is incremented.
C: when's a good time?
(FRAG : (VP when's (NP a good time?)))
Threat Levels:
Co-incidenceLevel 4
PersonalPrivacy 3
SuspiciousDialogue 1
PhysicalSafety 1
A: anytime tonight is good?
(S : (NP anytime tonight) (VP is (ADJP good?)))
Threat Levels:
Co-incidenceLevel 4
PersonalPrivacy 3
SuspiciousDialogue 1
PhysicalSafety 2
[0078] Given the context and the possible goal of this communication and in light of the previous communications (i.e. for the child to leave their physical location at a certain time) the semantic parameter of PhysicalSafety is incremented.
C: ok, send me your address and i'll get my parents to drop me off.
(FRAG : (VP ok, (S (VP (VP send (NP me) (NP your address)) and (VP i'll (VP get (S (NP my parents) (VP to (VP drop (NP me) (ADVP off.))))))))))
Predicate : send
: ok, send (A1 me) your address and i'll get my parents to drop me off.
Predicate : get
: ok, send me your address and i'll get (A1 my parents) (A2 to drop me off.)
Predicate : drop
: ok, send me your address and i'll get (A0 my parents) to drop (A1 me) (AM-TMP off.)
Threat Levels:
Co-incidenceLevel 4
PersonalPrivacy 3
SuspiciousDialogue 1
PhysicalSafety 2
A: Do you like soccer?
(S : (VP Do (NP you) (PP like (NP soccer?))))
Predicate : do
: Do (A1 you) (A2 like soccer?)
Threat Levels:
Co-incidenceLevel 4
PersonalPrivacy 3
SuspiciousDialogue 2
PhysicalSafety 2
[0079] The semantic parameter of SuspiciousDialogue is incremented due to a break in flow of the communication (i.e. the change in topic).
C: Yeh, why?
(NP : Yeh, why?)
Threat Levels:
Co-incidenceLevel 4
PersonalPrivacy 3
SuspiciousDialogue 2
PhysicalSafety 2
A: How about we have a kick before studying at your local park?
(FRAG : (SBAR (WHADJP How about) (S (NP we) (VP have (NP a kick) (PP before (S (VP studying (PP at (NP your local park?)))))))))
Predicate : have
: How about (A0 we) have (A1 a kick) (AM-LOC before studying at your local park?)
Predicate : studying
: How about we have a kick before studying (AM-TMP at your local park?)
Threat Levels:
Co-incidenceLevel 4
PersonalPrivacy 3
SuspiciousDialogue 2
PhysicalSafety 3
[0080] Given the context and the possible goal of this communication and in light of the previous communications (i.e. for the child to leave their physical location at a certain time and to meet in a particular location) the semantic parameter of PhysicalSafety is incremented.
C: I really do need to study lots ...
(S : (NP I) (ADVP really) (VP do (VP need (S (VP to (VP study (NP lots)))))) ...)
Predicate : do
: (A0 I) (AM-TMP really) do need to study lots ...
Predicate : need
: (A0 I) (AM-TMP really) do need (A1 to study lots) ...
Predicate : study
: I really do need to study (A1 lots) ...
Threat Levels:
Co-incidenceLevel 4
PersonalPrivacy 3
SuspiciousDialogue 2
PhysicalSafety 3
A: just a quick kick, i'll bring my books and we can go to your place afterwards?
(S : (ADVP just) (S (NP a quick kick, i'll) (VP bring (NP my books))) and (S (NP we) (VP can (VP go (PP to (NP your place)) (ADVP afterwards?)))))
Predicate : bring
: just (A0 a quick kick, i'll) bring (A1 my books) and (AM-ADV we can go to your place afterwards?)
Predicate : go
: just a quick kick, i'll bring my books and (A1 we) can go (A2 to your place) (AM-DIR afterwards?)
Threat Levels:
Co-incidenceLevel 4
PersonalPrivacy 3
SuspiciousDialogue 2
PhysicalSafety 4
C: ok, let's meet at 5 o'clock then.
(S : (NP ok, let's) (VP meet (PP at (NP 5 o'clock)) (ADVP then.)))
Predicate : meet
: (A0 ok, let's) meet (A1 at 5 o'clock) then.
Threat Levels:
Co-incidenceLevel 4
PersonalPrivacy 3
SuspiciousDialogue 2
PhysicalSafety 5
[0081] Given that the child has agreed to the meeting, the semantic parameter of PhysicalSafety is incremented. Depending on the security settings and goal settings, action may be taken at this point (or earlier, depending on those settings), with the communication being terminated or a third party notified (such as a parent).
[0082] The example above is a sample communication between two parties, the predatory attacker (A) and the child victim (C) over a real time instant messaging system. The two parties need not be communicating in real time. The method may be utilised over a blog (weblog) or a web site that allows a user to edit content on the page (a Wiki). Further, the method may be utilised over a real time instant messaging system where the communication is between two parties with one party being a chatbot. The chatbot may, for example, be as described in Australian Provisional Application No 2006902803.
[0083] It will be appreciated that the general approach to analysis and subsequent action outlined may be applied to a wide range of situations, for example training of various types, or monitoring of telephone conversations (for example for sales and similar purposes).
[0084] Optional embodiments of the present invention may also be said to broadly consist in the parts, elements and features referred to or indicated herein, individually or collectively, in any or all combinations of two or more of the parts, elements or features, and wherein specific integers are mentioned herein which have known equivalents in the art to which the invention relates, such known equivalents are deemed to be incorporated herein as if individually set forth.
[0085] Although a preferred embodiment has been described in detail, it should be understood that various changes, substitutions, and alterations can be made by one of ordinary skill in the art without departing from the scope of the present invention. For example, to avoid misclassification, a minimum number of activities and attributes of unknown processes may be detected before these behaviours are compared with attributes and activity associated with known malicious and non-malicious processes to determine the likelihood of that process being malicious.

Claims
1. A method for automatically analysing a communication to provide an alert or indication in relation to content, behaviour or goals, including at least the steps of:
(a) analysing the language of the communication to determine one or more semantic representations of the communication, said semantic representations being constrained by predetermined contextual rules;
(b) analysing the semantic representation relative to predetermined domain models of behaviour or goals; and
(c) providing an alert or indication if the semantic content, in the context, meets predetermined parameters.
2. A method according to claim 1, wherein step (a) includes extracting possible words from the communication, applying a lexicogrammar to identify the possible words, and using semantic rules to determine a semantic representation and thereby impute a meaning to the identified language.
3. A method according to claim 1 or claim 2, wherein a minimum number of semantic parameters are detected in a communication before providing an alert or indication.
4. A method according to any one of the preceding claims, wherein the domain model includes user defined semantic parameters, optionally including key words and phrases.
5. A method according to any one of the preceding claims, wherein the communication is a series of related communications, involving one or more parties.
6. A method according to any one of the preceding claims, wherein the method is carried out in the background of one or more of a communication medium, a social networking service, a networked game, a telephone call, an email chain, or a messaging service.
7. A method according to any one of the preceding claims, wherein the method may be carried out in real time or based on stored communications.
8. A method according to any one of the preceding claims, wherein at least some of the predetermined parameters, domain models and semantic rules are dynamically changeable in response to one or more of self learning algorithms, communications with other parties, and analysis of communications.
9. Software for use with a computer including a processor and associated memory device for storing the software, the software including a series of instructions to cause the processor to carry out a method according to any one of the preceding claims.
10. A system for automatically analysing a communication to provide an alert or indication in relation to content, behaviour or goals, the system including at least a processor with associated memory, and a communication interface for receiving the communications to be analysed and for sending alerts and/or indications, the processor including software so that it is operatively adapted to analyse the language of the communication to determine one or more semantic representations of the communication, said semantic representations being constrained by predetermined contextual rules, and to further analyse the semantic representations relative to predetermined domain models of behaviour or goals, and if the semantic representation, in the context, meets predetermined parameters, transmitting an alert or indication.
11. A system according to claim 10, wherein analysis of the language includes extracting possible words from the communication, applying a lexicogrammar to identify the possible words, and using semantic rules to determine a semantic representation and thereby impute a meaning to the identified language.
12. A system according to claim 10 or claim 11, wherein a minimum number of semantic parameters are detected in a communication before providing an alert or indication.
13. A system according to any one of claims 10 to 12, wherein the domain model includes user defined semantic parameters, optionally including key words and phrases.
14. A system according to any one of claims 10 to 13, wherein the communication is a series of related communications, involving one or more parties.
15. A method according to any one of claims 10 to 14, wherein the method is carried out in the background of one or more of a communication medium, a social networking service, a networked game, a telephone call, an email chain, or a messaging service.
16. A method according to any one of claims 10 to 15, wherein the method may be carried out in real time or based on stored communications.
17. A method according to any one of the preceding claims, wherein at least some of the predetermined parameters, domain models and semantic rules are dynamically changeable in response to one or more of self learning algorithms, communications with other parties, and analysis of communications.
PCT/AU2013/000909 2012-08-16 2013-08-16 Communication analysis and interpretation WO2014026240A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2012903533A AU2012903533A0 (en) 2012-08-16 Detecting the Nature of a Communication
AU2012903533 2012-08-16

Publications (1)

Publication Number Publication Date
WO2014026240A1 true WO2014026240A1 (en) 2014-02-20

Family

ID=50101123

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2013/000909 WO2014026240A1 (en) 2012-08-16 2013-08-16 Communication analysis and interpretation

Country Status (1)

Country Link
WO (1) WO2014026240A1 (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040249650A1 (en) * 2001-07-19 2004-12-09 Ilan Freedman Method apparatus and system for capturing and analyzing interaction based content
US20040111479A1 (en) * 2002-06-25 2004-06-10 Borden Walter W. System and method for online monitoring of and interaction with chat and instant messaging participants

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2016200701A2 (en) * 2015-02-04 2021-08-19 Jonathan Bishop Limited Monitoring on-line activity
US20180084073A1 (en) * 2015-03-27 2018-03-22 Globallogic, Inc. Method and system for sensing information, imputing meaning to the information, and determining actions based on that meaning, in a distributed computing environment
US11258874B2 (en) * 2015-03-27 2022-02-22 Globallogic, Inc. Method and system for sensing information, imputing meaning to the information, and determining actions based on that meaning, in a distributed computing environment


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13829932

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13829932

Country of ref document: EP

Kind code of ref document: A1