WO2008141168A1 - Systems and methods for fraud detection via interactive link analysis - Google Patents

Systems and methods for fraud detection via interactive link analysis

Info

Publication number
WO2008141168A1
Authority
WO
WIPO (PCT)
Prior art keywords
database
rules
fraudulent
individuals
generated
Prior art date
Application number
PCT/US2008/063229
Other languages
French (fr)
Inventor
Stuart L. Crawford
Chris Erickson
Victor Miagkikh
Michael Steele
Megan Thorsen
Sergei Tolmanov
Original Assignee
Fair Isaac Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2007-05-11
Filing date
2008-05-09
Publication date
2008-11-20
Application filed by Fair Isaac Corporation
Publication of WO2008141168A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/38Payment protocols; Details thereof
    • G06Q20/40Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401Transaction verification
    • G06Q20/4016Transaction verification involving fraud or risk level assessment in transaction processing

Abstract

Fraud detection is facilitated by developing account cluster membership rules and converting them to database queries via an examination of linkage clusters abstracted from the customer database. The account membership rules are based upon certain observed data patterns associated with potentially fraudulent activity. In one embodiment, account clusters are grouped around behavior patterns exhibited by imposters. The system then identifies those clusters exhibiting a high probability of fraud and builds cluster membership rules for identifying subsequent accounts that match those rules. The rules are designed to define the parameters of the identified clusters. When the rules are deployed in a transaction blocking system and a rule pertaining to an identified fraudulent cluster is triggered, the transaction blocking system blocks the transaction with respect to new users who enter the website.

Description

SYSTEMS AND METHODS FOR FRAUD DETECTION VIA INTERACTIVE LINK ANALYSIS
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to U.S. Provisional Application No. 60/917,518 filed May 11, 2007 titled "SYSTEMS AND METHODS FOR E-COMMERCE FRAUD DETECTION," and to U.S. Utility Application No. 12/117,441 filed May 8, 2008 titled "SYSTEMS AND METHODS FOR FRAUD DETECTION VIA INTERACTIVE LINK ANALYSIS," the disclosures of which are incorporated herein by reference.
TECHNICAL FIELD
[0002] This disclosure relates to fraud detection and more specifically to systems and methods for fraud detection in accessing of an informational database.
BACKGROUND OF THE INVENTION
[0003] E-commerce systems exist where members of the general public, using an Internet accessible website, can obtain sensitive information pertaining to individuals. Such information, by way of example, takes the form of credit histories and other credit sensitive data. These types of websites are prone to users trying to obtain private information about others. Often, such attempts are made by imposters who have some, but not all, of the identification needed to identify a target. In some situations these imposters have stolen, or are trying to steal, the target's identity.
[0004] In a typical scenario, the fraudster has obtained some piece of the target's personal information. Typically, this would be the target's name and perhaps his/her address. The fraudster then obtains a (typically stolen) credit card belonging to someone other than the target. The object then for the fraudster is to steal the full identity of the target. To do this the fraudster will make use of a website, such as the myfico.com website, that contains a full range of data pertaining to individuals. The fraudster will issue a query in the form of a credit report request.
[0005] Using this scenario, the fraudster creates an account on the website and then attempts to purchase a credit report of the target using the stolen credit card number. In this scenario the fraudster is trying to pass him/her self off as the target. In order to obtain the report, the fraudster must go through an identity authentication process administered by one of the credit bureaus. In this process the fraudster engages in a computer-generated interview where a small number of questions are posed about items on the credit report that the real target would know about. Since the fraudster usually does not yet have access to sufficient information about the target and the target's past credit transactions, the fraudster often fails the interview. Fraudsters being what they are, they don't give up at this point.
[0006] The foiled fraudster then creates another account and tries again. Often the fraudster will use similar information to create each new account. This similar information can be, for example, password, security answer, e-mail address, credit card number, and the like. Once in a while, the imposter will succeed and obtain a target's credit report containing sensitive data that then facilitates the imposter's desire to trade off of the credit of the target.
[0007] The occurrence of many accounts that are similar enough to have possibly been created by the same individuals is a strong indicator of potential fraud. Currently, trying to identify collections of similar accounts is a laborious and time consuming process which involves repeatedly querying the database for information and patterns.
BRIEF SUMMARY OF THE INVENTION
[0008] Fraud detection is facilitated by developing account cluster membership rules and converting them to database queries via an examination of linkage clusters abstracted from the customer database. The account membership rules are based upon certain observed data patterns associated with potentially fraudulent activity. In one embodiment, account clusters are grouped around behavior patterns exhibited by imposters. The system then identifies those clusters exhibiting a high probability of fraud and builds cluster membership rules for identifying subsequent accounts that match those rules. The rules are designed to define the parameters of the identified clusters. When the rules are deployed in a transaction blocking system and a rule pertaining to an identified fraudulent cluster is triggered, the transaction blocking system blocks the transaction with respect to new users who enter the website.
[0009] The foregoing has outlined rather broadly the features and technical advantages of the present invention in order that the detailed description of the invention that follows may be better understood. Additional features and advantages of the invention will be described hereinafter which form the subject of the claims of the invention. It should be appreciated by those skilled in the art that the conception and specific embodiment disclosed may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present invention. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the spirit and scope of the invention as set forth in the appended claims. The novel features which are believed to be characteristic of the invention, both as to its organization and method of operation, together with further objects and advantages will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] For a more complete understanding of the present invention, reference is now made to the following descriptions taken in conjunction with the accompanying drawing, in which:
[0011] FIGURE 1 shows one embodiment of a system for establishing rules for the detection of possible fraudulent transactions in accordance with concepts of this invention;
[0012] FIGURES 2 through 12 show typical screen shots as a user works through the various aspects of the invention;
[0013] FIGURE 13 shows one embodiment of the operation of a rule generator; and
[0014] FIGURE 14 shows one embodiment of the use of a fraud rule to block, in real-time, fraudulent activity with respect to an imposter attempting to obtain credit history data from a database of credit information.
DETAILED DESCRIPTION OF THE INVENTION
[0015] Turning now to FIGURE 1, there is shown one embodiment 10 for practicing the invention. In operation, the user (system administrator) formulates a query and issues an SQL query against database 108, which is the database in which the customer inquiries are maintained. In this context, a query would, for example, be "select all of the accounts and all of the transactions created within the past 30 days, and extract all of the following fields, A through G, but exclude fields C and F." Basically, the query brings in the data for all of the accounts and all of the transactions created within the past 30 days, but excludes some of this information because it is not important at that point in time.
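By way of illustration only, such a query might be expressed as in the following sketch. The table and column names (accounts, transactions, field_a through field_g) and the use of SQLite are assumptions made for the example and are not taken from the embodiment itself.

    # Illustrative sketch only; schema names are assumed, not part of the embodiment.
    import sqlite3

    QUERY = """
    SELECT a.account_id,
           a.field_a, a.field_b, a.field_d, a.field_e, a.field_g,
           t.transaction_id, t.created_at
    FROM accounts AS a
    JOIN transactions AS t ON t.account_id = a.account_id
    WHERE t.created_at >= DATE('now', '-30 days')  -- last 30 days only
    """

    def load_recent_activity(db_path):
        # Returns the rows that are subsequently fed into pattern matcher 101.
        with sqlite3.connect(db_path) as conn:
            return conn.execute(QUERY).fetchall()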
[0016] The results of the query, namely the accounts and the transactions, are then loaded into pattern matcher 101. The pattern matcher takes as its input previously established pattern matching rules. For example, a very simple pattern matching rule would say "in order for two credit card numbers to match, they must be identical." Another rule might say "in order for two e-mail addresses to match, at least four letters or numbers of the information before the @ must match." A more sophisticated rule might reflect that "for two passwords to match, they must have a pair of identical substrings." That means that if one password said "dogdog" and another password said "catcat," those two passwords would match even though they contain different words.
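A minimal sketch of these three example matching rules, assuming plain string attributes, is given below; it is offered only to make the rules concrete and is not the pattern matcher's actual implementation. The reading of the e-mail rule as "the first four characters before the @ must agree" is an assumption.

    def cards_match(card_a, card_b):
        # "In order for two credit card numbers to match, they must be identical."
        return card_a == card_b

    def emails_match(email_a, email_b):
        # Assumed reading: at least the first four characters of the part
        # before the "@" must agree.
        local_a = email_a.split("@")[0]
        local_b = email_b.split("@")[0]
        return len(local_a) >= 4 and local_a[:4] == local_b[:4]

    def _is_doubled(password):
        # True when the password is some substring repeated twice, e.g. "dogdog".
        half = len(password) // 2
        return len(password) >= 2 and len(password) % 2 == 0 and password[:half] == password[half:]

    def passwords_match(pw_a, pw_b):
        # "dogdog" and "catcat" match: each exhibits the doubled-substring pattern.
        return _is_doubled(pw_a) and _is_doubled(pw_b)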
[0017] One can imagine many types of sophisticated matches, such as, for example, two passwords can match if they start with the first initial of the first name, followed by a two digit number, followed by the last name. Thus, "J12smith" would match with "d15jones." A pattern match generator, such as will be discussed with respect to FIGURE 13, can be used when a user identifies what the user considers to be an anomaly situation where certain patterns seem to be repeating.
[0018] Rules such as those discussed above are applied to the pattern matcher, and all of the account activity that matches the rules is collected and linked together by matching rules. Thus, all of the accounts found by a rule that defines matching e-mail addresses are linked. Also, all of the accounts that are found by a rule that defines matching passwords are linked, as are all of the accounts that are found based on a rule that specifies matching credit card numbers. The link dataset basically lists those accounts that are connected to other accounts, by which types of links, and at what strength.
[0019] The links have certain types. The types are, for example, credit cards, passwords, and e-mail addresses, and they have certain strengths indicating the degree to which the pattern is matched. The link dataset is loaded into layer builder 103, which creates an internal data structure representing the way that those accounts are connected on each layer. Here a layer means a type of link. For example, an e-mail address is a layer, a security answer is a layer, a password is a layer, and a credit card number is a layer. Layer builder 103 builds the layers and describes the way in which the accounts are connected within each layer.
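The internal data structure built by layer builder 103 is not prescribed; the following is one hedged sketch of a link record and of grouping the link dataset by layer, with the field names invented for the example.

    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Link:
        account_a: str
        account_b: str
        layer: str       # link type: "email", "password", "credit_card", "security_answer", ...
        strength: float  # degree to which the layer's matching rule was satisfied

    def build_layers(links):
        # Group the link dataset by layer, then by account, so that each layer
        # can be rendered, hidden, or expanded independently.
        layers = defaultdict(lambda: defaultdict(list))
        for link in links:
            layers[link.layer][link.account_a].append((link.account_b, link.strength))
            layers[link.layer][link.account_b].append((link.account_a, link.strength))
        return {layer: dict(adjacency) for layer, adjacency in layers.items()}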
[0020] The layer information is then run through graph renderer 104 which generates a visual display so that the user, as will be discussed, can visualize the various links. Different colors assist in this visualization. The links are also shown with different width connectors representing the relative strength of the association. The user then can expand out on a layer-by-layer basis as will be discussed.
[0021] At a certain point, the user begins to identify what might be a cluster and then the user can add or remove accounts from the cluster as desired using cluster editor 105. When the user is satisfied with a cluster, the cluster can be automatically characterized by cluster explainer 106, with that characterization being represented by a decision tree. That decision tree can then be transformed to a corresponding SQL expression which can be applied to the database for later retrieval of additional matching accounts.
[0022] Cluster explainer 106 is used to automatically induce a set of cluster membership rules that identify the parameters that caused an account to be part of the identified cluster. For example, the rules might indicate that "to be a member of the cluster, the e-mail address must follow a certain pattern and the security answer must follow another pattern, and the account holder must be a resident of Bakersfield, and so on and so forth." These membership rules can be modified, if desired, by the user via rule editor 107.
[0023] The user can then transform a set of cluster membership rules into a SQL query and apply that query against customer database 108, effectively asking "see whether any accounts in the entire history of the database match the particular cluster membership rule set corresponding to the current cluster." What the user is effectively saying is "in this last month of data, a cluster of accounts has been identified that is suspicious; the suspicious account activity is defined by a set of rules that describe the attributes of accounts that are members of the cluster." Every account in the database is searched (via the cluster membership rule set expressed as a SQL query) in order to identify any other accounts that match the pattern described by the rule set. If found, those accounts are loaded, run through the pattern matcher and then displayed on the screen as were the previously loaded accounts. Then the user can once again enter into the exploratory state and perhaps further refine the cluster. This iteration can go on as long as the user desires.
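The embodiment does not prescribe how the decision tree is induced. As one hedged approximation, a standard decision-tree learner could be trained on "member of the cluster" versus "not a member" labels over hand-crafted features; the feature names and training data below are invented for the example.

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical boolean features per account; 1 = the account has the property.
    feature_names = ["email_matches_pattern", "answer_matches_pattern", "city_is_bakersfield"]
    X = [[1, 1, 1], [1, 1, 1], [1, 0, 1],   # accounts the user placed in the cluster
         [0, 0, 0], [0, 1, 0], [0, 0, 1]]   # accounts left outside the cluster
    y = [1, 1, 1, 0, 0, 0]                  # 1 = member of the identified cluster

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=feature_names))
    # Branches that predict membership can then be rewritten, by hand or
    # mechanically, as the WHERE clause of a SQL query against customer database 108.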
[0024] Returning now to cluster editor 105, in addition to simply generating rules (as discussed above), the system operates to use pattern editor 109 to create pattern matching rule(s) based on patterns of data that have been hitherto unseen. For example, the user may notice a password that is characterized by a pattern of: the first letter of the first name, followed by the number 99, followed by the last letter of the first name, followed by 99, followed by the remainder of the surname. The user determines that this is an "interesting" password pattern. The user might then want to find out if there are any other accounts in the entire database that have a password pattern that matches that one.
[0025] FIGURES 2 through 12 show one embodiment of typical screen shots encountered as a user works through the various aspects of the inventive concepts as taught herein.
[0026] FIGURE 2 shows a common usage scenario which, in this view 20, is a charge-back screen shot. A charge-back occurs when a person calls the customer service system of the eCommerce website from which credit reports are purchased. That person is typically directed to the system by a credit card company when the person calls to complain that a charge on their credit card does not belong to them. For example, the charge might be for a credit report that the caller did not purchase. This is typically (but not exclusively) how a search for a fraudster begins.
[0027] The search begins in this scenario with the system user knowing the credit card number used to fraudulently purchase the credit report (since that number was obtained from the caller). The user also knows the true identity of the person whose credit report was purchased, since the purchased credit report information is stored in association with the credit card number used to commit fraud.
[0028] In our example, the fraudulently purchased credit report belongs to a person named Jones as shown in line 201 of screen section 21. Screen section 21 contains the true names and credit card numbers (as well as other information) of a large number of persons. The system user then types "jones" in jump-to field 202, which then brings up an e-mail address 203 of, for example, stilgoing13@domain.name5. The user then can right click on screen 20 to show expand-on box 204. The user then selects "credit card" for further expansion. In this context, the process of expansion corresponds to displaying additional accounts connected to the currently displayed account.
[0029] FIGURE 3 shows the results of the expansion. In this case, there are shown three nodes 301, 302 and 303 having similar e-mail addresses, each of which is associated with the use of a "matching" credit card number. Note that while "matching" in the context of credit cards means "exact match," "matching" is generally determined by matching rules specific to the layer being considered. They need not be exact matches. As shown in FIGURE 2, the nodes are interconnected by a line which is color coded according to the layer being matched.
[0030] The user searches for similarities and notices that the e-mail addresses for all of these nodes are similar. The user brings up expand-on box 304 and checks the "e-mail" box. This instructs the system to expand to "same" e-mails following the rule for "same" matches as established from the patterns already known.
[0031] FIGURE 4 shows several nodes interconnected by different colors (shown in the drawing as different line types), corresponding to the different match types. In particular, we see accounts linked by credit card and email matches. The user then can inspect the details of each account by, for example, rolling the mouse over the "bubble." The results from placing the mouse pointer over bubble 404 are shown in FIGURE 5 by box 510. This then shows the credit card holder's name, address, e-mail address, login, password, security answer, credit card number (which is encrypted in the drawing) and a variety of other data.
[0032] The lines of section 512 indicate how this particular account is connected to other accounts. In this example, this node is connected to groups (unlabeled clusters), of matching credit cards, groups of matching email addresses, groups of matching IP addresses, etc.
[0033] The user can select all of the accounts displayed, and request that the characteristics of those accounts alone be displayed in a table below the graph display. By looking at this table, it can be observed that the selected accounts have similar passwords. By right clicking "similar password" in expand-on box 503, the user can then expand the graph to show those accounts with similar passwords.
[0034] FIGURE 6 shows a total of 14 accounts that are connected via similar passwords, credit card numbers and email addresses. By further investigation (via the table mechanism described above) it can be observed that they also have similarities in terms of their respective security answers. The user then uses expand-on box 603 to enable the display (as shown in FIGURE 7) of accounts linked on the basis of security answers.
[0035] As shown in FIGURE 7, the interconnecting links have now expanded to a point where it is difficult to focus on anything of value, since it is mostly hidden from view by the clutter. However, there are a number of different links that have some things in common. Because the links are colored, the overlapping colors intensify where many links of the same color intersect. Thus, the links that have the most in common have the most intense color, and the links with the weakest interconnections have much less color intensity.
[0036] Saying this another way, when the color is intense, there are a number of common attributes, such as common passwords, common e-mails, etc. Where the color intensity is less, the number of common attributes is smaller. Accordingly, it is possible to selectively remove links with less intense color from the screen by drawing a box around the undesired (for now) links, right clicking, and responding to a prompt to remove the links within the box.
[0037] FIGURE 8 is a screen of what remains after removing the loosely connected (less intense colored) sub-clusters. This screen shows e-mail addresses for the remaining accounts with a high number of interconnected links in the background.
[0038] FIGURE 9 then shows what remains after temporarily hiding the security answer connections for these accounts. There is presented a set of nodes 901 that are not connected at all, or do not appear to be connected. The set of nodes 901 are actually connected based upon the security answer, but the display of those links has been temporarily disabled. There is another group of connected accounts 902 that are nicely connected. By placing the cursor on each of them, the attributes of each of those accounts can be determined.
[0039] It is then determined that every one of the accounts in list 902 has Bakersfield as the home address. By then observing the accounts in list 901, it can be observed that they are from cities all over the country. The only common connection is that one account exhibits a Bakersfield address. Then, by removing all of the accounts that do not list Bakersfield as a home address, the display can be reconfigured as shown in FIGURE 10.
[0040] FIGURE 10 now shows all of the accounts belonging to the potentially fraudulent cluster. By re-enabling the security answer layer, the display reveals that they are all connected. This display is then labeled as cluster 1010. Cluster 1010 can then be expanded to show all the interconnections. This expanded cluster then can be given a name by cluster creator 1101. In the example shown, box 1102 is labeled "My Potentially Fraudulent Cluster." Once created, this cluster is then run through cluster explainer 106 (FIGURE 1) to generate a decision tree.
[0041] FIGURE 12 shows one portion of a decision tree that says "if the security answer is 'barkyt' or 'barky,' and the AVS check is failed or not performed, then the transaction is deemed to be fraudulent." Otherwise, if the AVS check is okay or not required, then the rest of the decision tree would indicate "if the transaction is in the following set of zip codes, then it is deemed to be fraudulent." At this point, the decision tree can be translated into a simple SQL expression that can be applied to the entire database of known accounts, in order to identify accounts that have a high probability of fraud, both in the past (as contained in the database of past transactions) and in real-time (as new attempts are being made).
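Under one plausible reading of that tree fragment, the corresponding SQL might resemble the sketch below. The table and column names (accounts, security_answer, avs_result, zip_code) and the zip-code values are placeholders assumed for the illustration.

    # Illustrative translation of the decision-tree fragment into a SQL rule.
    FRAUD_RULE_SQL = """
    SELECT account_id
    FROM accounts
    WHERE security_answer IN ('barkyt', 'barky')
      AND (avs_result IN ('FAILED', 'NOT_PERFORMED')
           OR (avs_result IN ('OK', 'NOT_REQUIRED')
               AND zip_code IN ('99991', '99992')))  -- placeholder zip codes
    """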
[0042] Note that the database that the fraud rule is run against can be the same database, for example database 108 (FIGURE 1), that was used to begin the drill-down process, as discussed above, and/or the fraud rule can be sent to one or more databases (not shown) remote from the originating database via communication device 110 (FIGURE 1). This then allows for fraud detection rules to be circulated among different databases, perhaps at different credit monitoring facilities.
[0043] FIGURE 13 shows one embodiment of a method for creating a pattern for use in pattern matcher 101, as shown in FIGURE 1. Assume that the user, who has been studying the screen and looking at various items such as passwords, notices a pattern. For example, the user notices that there are several passwords that have one character from the target's first name, then two digits, which could be two random digits, then the target's last name. Another pattern that the user has noticed, for example, is that the password could have one character from the target's first name followed by a specific string of digits followed by the target's last name.
[0044] The user then brings up pattern match generator 1300 as shown in FIGURE 13 and begins to create a pattern. In this example, the user prepares an expression consisting of two compound phrases connected by an OR condition. The user then uses box 1301 and selects what the first part of the pattern will be; in this case the user selects the word "first." Then using box 1302, the user selects N (which would mean the first N characters) and another box pops up to allow the user to select the specific value for N. In our case, the user selects "1." The user would then go to box 1303 and select where those characters are from. In this case, the user would select "First Name Field" and then using box 1304 would select the "followed by" notation. The user would then press the "Next Phrase" button and then would repeat back at box 1301 to select the word "exactly," followed by the "2" from box 1302, followed by "the integers" from box 1303. Then the user would select "followed by" from box 1304, press the "Next Phrase" button again, then repeat back at box 1301 and select the word "all," and then "Last Name Field" from box 1303.
[0045] The user would then press "OR" and then "(" and then repeat the process described in paragraph [0044] to prepare the second compound phrase, as shown in 1312. The two compound phrases are then shown in screen 1310 as the user is creating them; for example, the phrase that was just created is shown as field 1312. Assuming that the user wants to save the phrase, then box 1306 is enabled. If the user desires to generate random strings that match the current expression, the user can use box 1330, which generates sample matches by randomly filling in all the blocks, and the user can see on the screen if, after a number of samples have been created, the pattern is being generated properly.
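One way such compound phrases could be compiled into an executable pattern is sketched below; the phrase representation, function name, and the second (placeholder) phrase are assumptions for the illustration rather than the generator's actual output format.

    import re

    def compile_expression(first_name, last_name):
        # First compound phrase: first 1 character of the First Name Field,
        # followed by exactly 2 of the integers, followed by all of the Last Name Field.
        phrase_1 = re.escape(first_name[:1]) + r"\d{2}" + re.escape(last_name)
        # Placeholder second compound phrase (e.g., first initial, the digits 99,
        # then the last name), OR'd with the first as described above.
        phrase_2 = re.escape(first_name[:1]) + r"99" + re.escape(last_name)
        return re.compile(f"(?:{phrase_1})|(?:{phrase_2})", re.IGNORECASE)

    # Example: the compiled expression matches "J12smith" for a holder named John Smith.
    assert compile_expression("John", "Smith").fullmatch("J12smith")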
[0046] The user can create example matches using box 1330, and if the user desires to edit the phrase, that can be done via screen 1320, where the syntax for controlling the machine process is shown. If the user wants to edit the phrase, then the user can do so at this point, or if the user wants to check the syntax to be sure that the syntax is in the correct format, then box 1322 can perform that function. When the user is finished creating a match or a set of matches, then the user can create the pattern match using 1331. Sometimes the user may want to create a phrase and then reuse the phrase in another pattern or in another portion of the same pattern. This action is accomplished by creating the pattern, such as pattern 1311, and then enabling save-phrase box 1306. Save-phrase box 1306 then allows the user to name that phrase and then, if desired, to create a set of new phrases using some portion of the newly saved phrase from the save-phrase file.
[0047] FIGURE 14 shows one embodiment 1400 of the use of a fraud rule to block, in real-time, fraudulent activity with respect to an imposter attempting to obtain credit history data from a database of credit information. Process 1401 controls the logon access to a credit database. This access can be, for example, so that the individual can access his/her credit history. As is well-known, before such access will be granted, a process such as process 1402 queries the accessing user for some combination of attributes uniquely pertaining to that user's data file. Some of these possible attributes are shown in process 1402, but any number and any combination can be required, and the combination can change depending upon security levels, or depending upon previous query answers.
[0048] Process 1403 reviews the answers, either one at a time or in bulk, and process 1404 compares the answers against one or more fraud rules that have been generated, as discussed above. If one or more answers, such as the answer to the password or the answer to the e-mail address, etc., match a fraud rule, then process 1405 acts to take whatever action is required by the system administrator, such as recording the machine identity of the user, blocking further access for this user, or taking any other action defined by the system.
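A minimal sketch of this comparison step, assuming the generated rules are supplied as simple predicates over the submitted attributes, is shown below; the example rule and the attribute names are illustrative only.

    def barky_rule(answers):
        # Example rule of the kind generated above; attribute names are assumed.
        return (answers.get("security_answer") in ("barkyt", "barky")
                and answers.get("avs_result") in ("FAILED", "NOT_PERFORMED"))

    def screen_access(answers, fraud_rules):
        # Process 1404: compare the submitted attributes against each generated rule.
        # Returning False corresponds to process 1405 blocking the transaction; a
        # deployment might also record the machine identity or alert an administrator.
        return not any(rule(answers) for rule in fraud_rules)

    allowed = screen_access(
        {"security_answer": "barky", "avs_result": "FAILED"},
        [barky_rule],
    )
    print("access granted" if allowed else "transaction blocked")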
[0049] Process 1406, either acting concurrently with process 1404 or serially thereto, will either grant access to the credit information if all the queries are answered correctly or deny access in problem situations, as is well-known. Note that the operation of process 1400 can be within the same processor (not shown) that controls the operation of the processes described for FIGURES 1 through 13 or can be in a processor remote from the processor that generated the fraud query rule.
[0050] Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.

Claims

CLAIMS
What is claimed is:
1. A method of determining fraudulent use of a database containing credit data on individuals, said method comprising:
examining a database of information pertaining to a credit history of individuals, said database containing, for each individual, at least one attribute selected from the list of: credit card identification, home address, phone number, password, e-mail address, answers to security questions, or a portion of a social security number, said examination calculated to find linkages between similar attributes used for different individuals in said database; and
generating at least one rule pertaining to a potentially fraudulent transaction, said generated rule based upon certain combinations of letters and numbers constituting at least one of said attributes.
2. The method of claim 1 further comprising:
running one or more of said generated rules against one or more databases of individuals to identify which individuals in said database have a high probability of being fraudulent and wherein said last-mentioned databases contain, for each individual, at least one attribute selected from the list of: credit card identification, home address, phone number, password, e-mail address, answers to security questions, or a portion of a social security number.
3. The method of claim 1 further comprising:
sending one or more of said rules to a database manager so as to allow said database manager to run sent ones of said rules against one or more databases under control of said manager to identify which individuals in said databases have a high probability of being fraudulent.
4. The method of claim 1 further comprising:
using at least one of said generated rules in real-time to detect credit history transactions that have a high probability of being fraudulent.
5. The method of claim 4 wherein said examining comprises:
selecting a starting point based on a known anomaly for a particular individual, said anomaly arising with respect to at least one particular attribute of said individual;
searching said database for linkages to other individuals in said database, said search based on said particular attributes;
determining linkages between attributes of said particular individual and attributes of other individuals based upon said database search; and
drilling down on said displayed linkages to generate a rule pertaining to a potentially fraudulent past transaction.
6. A method of tracking fraudulent transactions, said method comprising:
establishing rules defining parameters for various match operations;
selecting a starting point based on a known anomaly for a particular user, said starting point having at least one of the following attributes, credit card identification, home address, phone number, password, e-mail address, answers to security questions, or a portion of a social security number;
searching a first database of a plurality of users for linkages to other users, said database having all of said attributes for said users, and said search based on a selected one of said attributes;
displaying a linkage between said particular user and other users based upon said database search; and
drilling down on said displayed linkage to generate a fraud rule pertaining to a potentially fraudulent transaction.
7. The method of claim 6 further comprising:
running one or more of said generated fraud rules against one or more databases of information sets pertaining to individuals to identify which data sets in said database have a high probability of being fraudulent.
8. The method of claim 6 further comprising:
sending one or more of said generated fraud rules to a second database remote from said first database so as to allow said generated fraud rule to be run with respect to said second database so as to identify information sets in said second database that have a high probability of being fraudulent.
9. The method of claim 6 further comprising:
using at least one of said generated fraud rules in real-time to detect first database related transactions having a high probability of being fraudulent.
10. The method of claim 6 further comprising:
using at least one of said generated fraud rules in real-time to detect second database related transactions having a high probability of being fraudulent, said second database being at a location remote from said first database.
11. A system for fraud detection, said system comprising:
a database containing, for each individual, at least one attribute selected from the list of: credit card identification, home address, phone number, password, e-mail address, answers to security questions, or a portion of a social security number;
means for examining said database to find linkages between similar attributes used for different transactions; and
means for generating at least one rule pertaining to a potentially fraudulent transaction, said generated rule based upon certain combinations of letters and numbers constituting at least one of said attributes.
12. The system of claim 11 wherein said linkages are based, at least in part, upon certain information pertaining to a credit history of individuals.
13. The system of claim 11 further comprising:
means for running one or more of said generated rules against at least one database containing credit information pertaining to individuals to identify which individuals in said database have a high probability of being fraudulent.
14. The system of claim 13 wherein said last-mentioned database contains, for each individual, at least one attribute selected from the list of: credit card identification, home address, phone number, password, e-mail address, answers to security questions, or a portion of a social security number.
15. The system of claim 11 further comprising:
means for sending one or more of said rules to a second database remote from said database so as to identify which individuals in said second database have a high probability of being fraudulent.
16. The system of claim 12 further comprising:
means for using at least one of said generated rules in real-time to detect credit history transactions that have a high probability of being fraudulent.
17. The system of claim 12 wherein said examining means comprises:
means for selecting a starting point based on a known anomaly for a particular individual, said anomaly arising with respect to at least one particular attribute of said individual;
means for searching said database for linkages to other individuals in said database, said search based on said particular attributes;
means for determining linkages between attributes of said particular individual and attributes of other individuals based upon said database search; and
means for drilling down on said displayed linkages to generate a rule pertaining to a potentially fraudulent past transaction.
18. A system for detecting fraudulent credit transactions, said system comprising:
a database of credit information pertaining to individuals, said database accepting both user generated and system generated queries;
a pattern matcher for accepting rules for system matching operations;
a link generator for determining linkages between similar data entries in said database, said similar entries based, at least in part, on data from said pattern matcher;
a cluster editor for allowing a user to drill down on selected aspects of generated ones of said links; and
a rule editor for establishing at least one fraud detection rule based, at least in part, on information determined from said user drilling down on said links, said rule editor producing said system generated queries.
19. The system of claim 18 wherein said credit information comprises a plurality of items selected from the list of: credit card identification, home address, phone number, password, e-mail address, answers to security questions, or a portion of a social security number.
20. The system of claim 18 further comprising:
means for communicating generated ones of said fraud detection rules to at least one database containing credit information of a plurality of individuals, said credit information comprising a plurality of items selected from the list of: credit card identification, home address, phone number, password, e-mail address, answers to security questions, or a portion of a social security number.
PCT/US2008/063229 2007-05-11 2008-05-09 Systems and methods for fraud detection via interactive link analysis WO2008141168A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US91751807P 2007-05-11 2007-05-11
US60/917,518 2007-05-11
US11744108A 2008-05-08 2008-05-08
US12/117,441 2008-05-08

Publications (1)

Publication Number Publication Date
WO2008141168A1 true WO2008141168A1 (en) 2008-11-20

Family

ID=40002609

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2008/063229 WO2008141168A1 (en) 2007-05-11 2008-05-09 Systems and methods for fraud detection via interactive link analysis

Country Status (1)

Country Link
WO (1) WO2008141168A1 (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5790645A (en) * 1996-08-01 1998-08-04 Nynex Science & Technology, Inc. Automatic design of fraud detection systems
US6163604A (en) * 1998-04-03 2000-12-19 Lucent Technologies Automated fraud management in transaction-based networks
US6157864A (en) * 1998-05-08 2000-12-05 Rockwell Technologies, Llc System, method and article of manufacture for displaying an animated, realtime updated control sequence chart
US6714918B2 (en) * 2000-03-24 2004-03-30 Access Business Group International Llc System and method for detecting fraudulent transactions
US6597775B2 (en) * 2000-09-29 2003-07-22 Fair Isaac Corporation Self-learning real-time prioritization of telecommunication fraud control actions
US20060149674A1 (en) * 2004-12-30 2006-07-06 Mike Cook System and method for identity-based fraud detection for transactions using a plurality of historical identity records
US20070039049A1 (en) * 2005-08-11 2007-02-15 Netmanage, Inc. Real-time activity monitoring and reporting

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013062713A1 (en) * 2011-10-28 2013-05-02 Visa International Service Association System and method for identity chaining
US11676146B2 (en) 2011-10-28 2023-06-13 Visa International Service Association System and method for identity chaining
US9311351B2 (en) 2013-03-11 2016-04-12 Sas Institute Inc. Techniques to block records for matching
US11354583B2 (en) 2020-10-15 2022-06-07 Sas Institute Inc. Automatically generating rules for event detection systems
US11954218B2 (en) 2021-02-08 2024-04-09 Visa International Service Association Real-time access rules using aggregation of periodic historical outcomes

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08755228

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08755228

Country of ref document: EP

Kind code of ref document: A1