CA2655547C - Method and system for determining parameter distribution, variance, outliers and trends in systems - Google Patents

Method and system for determining parameter distribution, variance, outliers and trends in systems

Info

Publication number
CA2655547C
Authority
CA
Canada
Prior art keywords
parameters
systems
conformity
data
computer systems
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CA2655547A
Other languages
French (fr)
Other versions
CA2655547A1 (en)
Inventor
Andrew D. Hillier
Tom Yuyitung
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cirba Inc
Original Assignee
Cirba Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cirba Inc filed Critical Cirba Inc
Publication of CA2655547A1
Application granted
Publication of CA2655547C
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management

Abstract

A system and method for generating statistical reports showing distribution, variance, outliers and trends for parameters across a set of systems is provided. The reports are generated based on audited data for each system that pertains to the parameters. A distribution report assesses the uniformity of the parameters of a population of systems and provides frequency distributions and statistics describing the data values from the analyzed systems. A variance report assesses the conformity of one or more target systems against a reference data set comprised of a set of baseline systems. The report compares each target system individually against the reference data set to measure the consistency of the target's parameters. A trend report shows trends in the uniformity and conformity measures of the parameters by comparing the statistical analysis results of sets of systems at two or more points in time, or of different systems at the same time.

Description

METHOD AND SYSTEM FOR DETERMINING PARAMETER DISTRIBUTION, VARIANCE, OUTLIERS AND TRENDS IN SYSTEMS
FIELD OF THE INVENTION:
[0001] The present invention relates to the empirical analysis of systems and has particular utility in determining and visualizing the distribution, variance, outliers and trends of parameters and characteristics across a set of systems.
DESCRIPTION OF THE PRIOR ART
[0002] The operation and behaviour of devices that utilize computing power such as servers, personal computers, laptops, personal digital assistants (PDA) etc., depend on thousands of parameters related to the operating system, hardware devices, software applications, patches, etc. Such devices often require configuration updates, hardware upgrades, patches and security features that can change on a periodic basis.
[0003] For computing devices to function effectively and communicate with each other and the supporting infrastructure, they should be compatible and up to date.
As organizations become more reliant on computing devices of all types to perform day-to-day activities, so too does the need to periodically update and repair devices to minimize downtime and inefficiencies. Such a need extends beyond central and/or distributed computing environments to mobile devices, virtual networks etc.
[0004] As organizations grow and their IT infrastructures grow with them, evaluating the parameters of computer systems becomes increasingly difficult to manage.
Often, the parameters of one computer system become very different from those of other computer systems, resulting in problems ranging from downtime to poor performance.
These inconsistencies in system parameters would be of interest to the organizations. Similar problems and inconsistencies can be experienced in evaluating other entities such as datacenters, clusters, database instances, application instances, etc.
[0005] It is therefore an object of the following to obviate or mitigate the above-described disadvantages.

SUMMARY OF THE INVENTION
[0006] In one aspect, a method for determining parameter distribution for one or more systems is provided comprising obtaining data pertaining to the one or more systems, the data comprising information pertaining to one or more parameters; generating a statistical model for the one or more systems using the data; and analyzing each of the one or more parameters for each of the one or more systems to determine the uniformity of respective ones of the parameters among the one or more systems.
[0007] In another aspect, a method for determining parameter variance for a target system in relation to one or more baseline systems is provided comprising obtaining a statistical model for the baseline systems using data pertaining to the baseline systems, the data comprising one or more parameters; obtaining data pertaining to the target system comprising at least one of the one or more parameters; and analyzing the target system with respect to the baseline systems using the statistical model and the data pertaining to the target system to determine the conformity of the parameters in the target system when compared to the parameters in the baseline systems.
[0008] In yet another aspect, a method for analyzing trends between a first data set and a second data set pertaining to one or more parameters for one or more systems is provided comprising generating a first statistical model using the first data set;
generating a second statistical model using the second data set; and analyzing the first and second statistical models to determine one or more trends according to differences in values of the one or more parameters in the first and second data sets.
[0009] In yet another aspect, a computer implemented analysis program for determining at least one of uniformity and conformity of parameters for one or more systems is provided comprising an audit engine for obtaining audit data pertaining to the one or more systems, the data comprising one or more parameters; and an analysis engine for determining the at least one of the uniformity and conformity for values of the one or more parameters using the audit data.
[0010] In yet another aspect, a graphical interface for displaying scores pertaining to the conformity of one or more parameters for a plurality of systems is provided comprising a matrix of cells, each row of the matrix indicating one of the plurality of systems and each column of the matrix indicating a metadata category pertaining to one of the plurality of parameters, each cell displaying a score indicating the conformity of the respective one of the plurality of systems for a corresponding one of the one or more parameters, the score being computed according to predefined criteria.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] An embodiment of the invention will now be described by way of example only with reference to the appended drawings wherein:
[0012] Figure 1 is a schematic representation of a system for analyzing systems.
[0013] Figure 2 is a schematic block diagram of an underlying architecture for implementing the analysis program of Figure 1.
[0014] Figure 3 is a schematic diagram of the analysis program showing data flow pertinent to generating the statistical reports.
[0015] Figure 4 is a schematic representation of an exemplary network of systems analyzed by the analysis program.
[0016] Figure 5 is a flow chart illustrating the generation of a distribution report.
[0017] Figure 6 is a flow chart illustrating a data aggregation procedure.
[0018] Figure 7 is a flow chart illustrating the generation of a variance report.
[0019] Figure 8 is a flow chart illustrating the generation of a trend report.
[0020] Figure 9 is a graphical outlier matrix.
[0021] Figure 10 is a summary page for sample system module data.
[0022] Figure 11 is a summary page for sample metadata category definitions and outlier weights.
[0023] Figure 12 is a metadata category selection page.
[0024] Figure 13 is a summary page for a sample audit data set.
[0025] Figure 14 is a summary page showing frequency distributions for the sample data set of Figure 13.
[0026] Figure 15 is a summary page showing statistics for the sample data set of Figure 13.
[0027] Figure 16 is a graphical distribution matrix.
[0028] Figure 17 is a summary page listing outliers for the systems listed in the matrix of Figure 16.
[0029] Figure 18 is a summary page showing module details.
[0030] Figure 19 is a summary page showing selected property details.
[0031] Figure 20 is a summary page listing network interface instance details.
[0032] Figure 21 is a summary page showing the list of systems corresponding to the OS name property.
[0033] Figure 22 is a summary page listing module details for the target system to be compared against the baseline systems.
[0034] Figure 23 is a graphical variance matrix for a target system in Figure 22.
[0035] Figure 24 is a summary page for a statistical variance report for a target system.
[0036] Figure 25 lists a summary of outliers for the target system listed in Figure 22.
[0037] Figure 26 is a summary page showing another sample data set at a different time.
[0038] Figure 27 is a summary page showing distribution frequencies for the data sets shown in Figures 13 and 26.
[0039] Figure 28 shows a statistical trend report.
DETAILED DESCRIPTION OF THE INVENTION
[0040] Referring to Figure 1, an analysis program 10 collects data from a set of systems 28 (3 are shown in Figure 1 as an example). The systems 28 may be physical or virtual computer systems or may be other entities such as datacenters, clusters, database instances, application instances, etc. It will therefore be appreciated that a "system"
as hereinafter referred to, can encompass any entity which is capable of being modeled and/or analysed, and should not be considered limited to physical or virtual computer systems. It should also be noted that systems 28 can be related to one another and may have parent-child relationships.
[0041] The analysis program 10 builds a statistical model from the collected data and generates reports showing the distribution, variance, outliers, and trends of the parameters across the analyzed systems 28. A distinct data set is preferably obtained for each system 28.
[0042] Each data set comprises one or more parameters that relate to characteristics or features of the respective system 28. The parameters can be evaluated by scrutinizing program definitions, properties, objects, instances and any other representation or manifestation of a component, feature or characteristic of the system 28. In general, a parameter is anything related to the system 28 that can be evaluated, quantified, measured, compared etc.
[0043] For the following description, a general evaluation of differences between systems uses the following nomenclature: A target system refers to a system being evaluated, and a baseline system is a system to which the target system is being compared. The baseline and target systems may be the same system at different instances in time (baseline = prior, target = now) or may be different systems being compared to each other. As such, a single system can be evaluated against itself to indicate changes with respect to a datum as well as how it compares to its peers.
Architecture Overview
[0044] An example block diagram of the analysis program 10 is shown in Figure 2.
Typically, the flow of data through the program 10 begins when the user initiates an audit through the web client 54. This causes the audit engine 34 to pull audit data from audited environments 36 comprised of the systems 38 being analyzed (e.g. servers, desktop computers, etc.).
[0045] The audit engine 34 collects data from audited systems 38 through a variety of data acquisition (DAQ) adapters. DAQ adapters are typically classified as agent-based, agentless or ESM (Enterprise Systems Management) framework-based. Agent-based DAQ
adapters such as SNMP request data directly from agents running on the audited systems.
Agentless DAQ adapters such as Secure Shell (SSH) and Windows™ Management Instrumentation (WMI) minimize the need to install additional software on the audited systems by communicating through native system services. ESM framework-based DAQ
adapters (such as Tivoli Management Framework) leverage third-party ESM
frameworks that manage the systems of interest 38. The DAQ adapters used depend on the available system instrumentation and the desired audit data.
[0046] The data collected by the audit engine 34 is stored in the audit data repository 40 for subsequent analysis and reporting. As shown in Figure 3, audit data 42 of selected systems can be retrieved from the repository 40 and evaluated by the analysis engine 44 to perform distribution, variance, and trend analyses. The analysis engine 44 uses metadata 46 to categorize the data and filter extraneous data. The analysis engine also uses outlier rules and thresholds 48 to evaluate and detect outlier values.
[0047] The distribution analysis builds a statistical model comprised of frequency distributions, numerical statistics and uniformity measures of the audit data from a set of selected systems 48. This model is used to detect outlier data values inconsistent with the remainder of the data set. Outlier values are rolled up for each meta category and at the system level to provide an overall outlier measure for each system.
[0048] Given sets of target and baseline system audit data 48, the variance analysis builds a statistical model from the baseline system, and compares each target against the model.
This analysis detects outlier values associated with the target systems relative to the baseline.
As with distribution reports, outlier values are combined at the meta category and system levels to provide overall assessments of the outliers.
[0049] Given sets of target and baseline system audit data 48 from different points in time, the trend analysis compares the target against the baseline analysis results to assess the trends of the uniformity and conformity measures of the parameters over time.
[0050] A report generator 50 utilizes a set of report templates 52 to generate reports 60 presenting the analysis results of the selected systems. Typically, the program 10 supports a web client 54 to allow a user to enter settings, initiate an audit or analysis, view reports, etc.

Example Analysis Program Deployment and Audit Environment
[0051] Referring to Figure 4, the distribution and variance analysis program, also generally referred to by numeral 10, is deployed to gather data from the exemplary computing environment 12 and uses this data to evaluate each system 28 with respect to the other systems 28.
[0052] The analysis program 10 is preferably part of a client-server application comprising a master server 14 accessed via a web browser client 54 running on a remote computer station 16. The audited systems 28 are exemplified as UNIX™, Linux™
and Windows™ servers running in a local network 18 and a pair of remote networks 20, 22.
Some servers have local agents and others are agentless.
[0053] In the example, the master server 14 collects system configuration settings, workload data, etc. from the audited servers in the local network 18 using protocols such as SNMP, WMI, SSH, etc. With the help of a slave collector 30 and proxies 32, the master server also audits servers in a pair of remote networks 20, 22 through firewalls 24.
[0054] The proxy 32 is used to audit agentless Windows™-based servers via WMI. It converts firewall-friendly TCP connection-based audit requests into the less firewall-friendly Windows™ WMI protocol requests. A proxy 32 is deployed in the remote network 20 to avoid the need to open a port range in the firewall 24.
[0055] A slave collector 30 is deployed in the remote network 22 to audit the servers locally. In turn, the master server 14 collects the audited data from slave collector 30 through the SSH protocol. This configuration simplifies the communication through the firewall 24.
The proxy 32 may also be required to audit agentless Windows™-based servers via WMI if the slave collector 30 is running on a non-Windows™ operating system such as UNIX™ or Linux™.
[0056] As shown, the web client running on the computer station 16 interacts with the master server 14 to operate the analysis program 10. The web client gathers user input for executing an audit or analysis and displays reports. The analysis program 10 can gather data directly from servers, or load audit data collected by other master server instances. As such, the analysis program 10 can operate in the environment 12 or independently (and remote thereto) so long as it can obtain audited data from the environment 12 for analyzing the parameters of audited systems 28.
Overview of Reports
[0057] Figure 3 shows three types of statistical reports 60 generated by the report generator 50, namely a distribution report, a variance report and a trend report.
[0058] The Distribution Report assesses the uniformity of the parameters of a population of systems. The report provides frequency distributions and statistics describing the data values from the analyzed systems 38. It measures the uniformity of data values across the systems 38 and identifies outlier values that may indicate incorrect or out of date values on the analyzed systems. Outliers are organized by meta categories and summarized for each system 38 to produce overall outlier scores.
Figure 9 shows an example graphical matrix 64 showing outlier scores for an arbitrary list of systems in the rows and the corresponding metadata categories in the columns.
[0059] The Variance Report assesses the conformity of one or more target systems against a reference data set comprised of a set of baseline systems. The report compares each target system individually against the reference data set to measure the consistency of the target's parameters. Similar to the distribution report, this analysis identifies outlier property values and summarizes the outlier values by meta category and for the overall system. The outlier values may be indicative of incorrect or out of date values on the target systems.
[0060] The Trend Report shows trends in the uniformity and conformity measures of the parameters by comparing the statistical analysis results of a set of systems at two or more points in time. Uniformity trends indicate whether data values for a specific property are converging or diverging. In general, a convergent trend in the data values is preferred to promote consistency among the systems. Conformity trends can imply whether specific data values are "leaders" (value is becoming more common or popular) or laggards (value is becoming less common). Empirically, leaders may indicate an improved parameter setting that is becoming more widely adopted. Conversely, laggards can indicate inferior or obsolete settings.
Distribution Report Generation
[0061] A flowchart illustrating the generation of a distribution report is shown in Figure 5. The audited system data 62 refers to detailed per-system data models that are typically collected through the above-described auditing process from the systems 28 of interest.
[0062] For each system 28, the data model can be organized into a hierarchy of modules, tables, objects, properties and instances. In general, each system data model can contain one or more modules, each module can contain one or more tables and/or objects, each table typically comprises one or more column properties and can contain zero or more row instances, and each object contains one or more scalar properties. System data may include OS settings, hardware configuration, installed software, patches, application settings, performance and workload data, etc.
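By way of illustration only, such a per-system data model might be represented as nested structures along the following lines; this sketch is not part of the patent, and the module, object, table and property names merely mirror the sample data of Figure 10:

```python
# Hypothetical data model for one system, following the
# module -> tables/objects -> properties/instances hierarchy.
server01 = {
    "Generic System Information": {                 # module
        "objects": {
            "Hardware Details": {                   # object: scalar properties
                "Total CPUs": 2,
                "CPU Architecture": "Intel Xeon",
            },
            "Operating System Details": {
                "Hostname": "server01",
                "OS Name": "Windows",
            },
        },
        "tables": {
            "Network Interfaces": [                 # table: rows are instances
                {"Interface": "LAN", "IP Address": "10.0.0.101",
                 "Domain Name": "abc.com"},
            ],
        },
    },
}
```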
[0063] Figure 10 provides a sample system data set 62 acquired and/or obtained and/or stored by the analysis program 10 at a particular time, for an arbitrary system (herein named server01). The sample data set 62 includes a hardware details object, operating system details object and a network interfaces table. The hardware details object lists hardware-related properties such as total number of CPUs, CPU architecture etc. and the corresponding value. Similarly, the operating system details object lists various OS
properties such as the hostname, OS name etc., and the corresponding value (e.g. hostname =
server01). The network interfaces table lists network interface properties such as the IP
address for the system, the type of interface, domain name etc.
[0064] Figure 5 also shows metadata categories 65 as an input for the generation of the distribution reports 60a. Meta category definitions 65 are pre-defined specifications that classify system data properties into logical categories. Specific meta categories can be selected to identify the categories of data to include in the analysis and report.
[0065] Figure 12 illustrates an example meta category selection page 64.
Meta categories are typically defined at multiple levels; in this example, category 66 and subcategory 67.
Preferably, system data 62 is broadly categorized as configuration data and run-time data.
Configuration subcategories 67 may include, e.g., hardware, OS, application, patch etc. Run-time subcategories 67 may include, e.g., application, management, etc.
[0066] Figure 11 illustrates a sample metadata category definition pertaining to the properties shown in Figure 10. For example, in the hardware details object, the CPU architecture property (e.g. Intel™ Xeon™) is categorized under hardware configuration data; in the operating system details object, the hostname (e.g. server01) is categorized under OS configuration data; and in the network interfaces table, the IP address (e.g. 10.0.0.101) is also categorized under OS configuration data.
[0067] Turning back to Figure 5, a metadata filtering step classifies and filters the input system data 62 using the selected metadata categories 65, to create filtered per-system data 70. The system data 62 typically contains configuration and run-time data in a variety of areas that may or may not be of interest, including hardware, OS, application, patch, performance, environment etc. The system data 62 can be filtered with the user-selected meta categories 65 to confine the analysis and report to areas that are of interest. For example, selecting the Configuration/Hardware and Configuration/OS meta categories 65 focuses the report to cover hardware configuration and operating system (OS) settings only (e.g. Total CPUs, CPU architecture, memory, OS name, OS version, patch level, network settings, etc.).
[0068] The metadata filtering step is preferably performed early in the overall report generation process to reduce the working data set 62, making downstream processes such as data analysis and reporting more computationally efficient.
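As a rough sketch of this filtering step (assuming, purely for illustration, that properties are tagged with (category, subcategory) pairs as in Figure 11; all names and structures below are hypothetical):

```python
def filter_system_data(system_data, meta_categories, selected):
    """Keep only the properties whose meta category was selected by the user."""
    return {prop: value
            for prop, value in system_data.items()
            if meta_categories.get(prop) in selected}

# Hypothetical property-to-category tags and one system's flattened data.
meta = {"CPU Architecture": ("Configuration", "Hardware"),
        "OS Name": ("Configuration", "OS"),
        "Running Processes": ("Run-time", "Management")}
server01 = {"CPU Architecture": "Intel Xeon", "OS Name": "Windows",
            "Running Processes": "..."}
selected = {("Configuration", "Hardware"), ("Configuration", "OS")}
print(filter_system_data(server01, meta, selected))
# {'CPU Architecture': 'Intel Xeon', 'OS Name': 'Windows'}
```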
[0069] Using the filtered system data 70, a data aggregation step may then be performed.
The data aggregation sub-steps are visualized in Figure 6. Data aggregation compiles multiple filtered per-system data into a statistical data model, referred to herein as the aggregated system data 72. The aggregated system data 72 is a statistical data model of the filtered data of multiple systems. The statistical data model's structure is closely aligned with the system data model organized as a hierarchy of modules, tables, objects, properties and instances.
[0070] Referring to Figure 6, for each filtered baseline system, the data model hierarchy is first traversed (module/table/object/property/instance). Figure 13 illustrates a selected sample set of audit data for a set of fifteen (15) arbitrary server systems, server01 through server15. It will be appreciated that the sample data set is limited to four properties, namely OS name, total CPUs, domain name and IP address for illustrative purposes only. Typical statistical analyses can involve thousands of properties.

[0071] The traversal preferably accumulates module, table, object, and instance frequency distributions, compiling occurrences of each module (e.g.
Generic System Information), table (e.g. Network Interfaces), object (e.g. Hardware Details), and table row instance (e.g. LAN in the Network Interfaces table). For each unique group in the frequency distributions, the list of corresponding systems (e.g. server01, server02 etc.) is also tracked.
[0072] The traversal also preferably accumulates the frequency distribution for every data property by treating the values as categorical data. Categorical data are types of data that can be divided into groups. For example, the OS Name property in the Operating System Details object can have values like Windows, Linux, Solaris, AIX, etc. For every unique data property value, the frequency count and the list of corresponding systems (e.g. server01, server02 etc.) are maintained. Figure 14 illustrates example frequency distributions of the selected properties listed in Figure 13. In the example shown, the Windows™ OS name was detected on 10 of 15 systems, whereas the Linux OS name was detected on 3 of 15.
It can also be seen that every system included the same domain name property "abc.com", however, as expected, each system has a different IP address.
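A minimal sketch of this categorical aggregation, tracking both the frequency and the contributing systems for each distinct value (the flat per-system layout is an assumption for brevity):

```python
from collections import defaultdict

def aggregate_property(per_system_data, prop):
    """Frequency distribution of one property across systems, with the
    list of systems that reported each distinct value."""
    groups = defaultdict(list)
    for system, data in per_system_data.items():
        if prop in data:
            groups[data[prop]].append(system)
    return {value: (len(systems), systems) for value, systems in groups.items()}

data = {f"server{i:02d}": {"OS Name": "Windows"} for i in range(1, 11)}
data.update({f"server{i}": {"OS Name": "Linux"} for i in (11, 12, 13)})
print(aggregate_property(data, "OS Name"))
# {'Windows': (10, ['server01', ...]), 'Linux': (3, ['server11', ...])}
```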
[0073] For each property, a uniformity index (UI) can be computed to measure the homogeneity of the property values across the population of systems 38 being evaluated. The uniformity index ranges from 0 to 1. UI approaches 0 when the data set is comprised of singletons, and UI approaches 1 when all the values in the data set are the same. In general, a higher uniformity index represents greater consistency in the property values.
[0074] The UI can be calculated for a property as follows:
[0075] UI = √( Σᵢ [Nᵢ × (Nᵢ − 1)] / (T × (T − 1)) )
[0076] where T is the total number of values, B is the total number of distinct values, and Nᵢ is the number of occurrences of the value Vᵢ. To handle the special case where T = 1 (single sample value), UI is automatically set to 1.
[0077] For example, the uniformity index of the OS name property from the example data set in Figures 13 and 14 can be computed as follows:
[0078] UI = √( (10×9 + 3×2 + 1×0 + 1×0) / (15×14) ) = √(96/210) ≈ 0.68
[0079] A summary of statistics for the sample data properties listed in Figure 13 is shown in Figure 15. The domain names, which are all the same in the sample data set, have 100% uniformity. Conversely, the IP addresses, which are all unique in the sample data set, have zero uniformity.
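The uniformity index is straightforward to reproduce in code. The following sketch is not part of the patent (the two singleton OS names are assumed for illustration, and the square root is inferred from the worked examples 0.68 and 0.93 in the text), but it recovers the reported values for the sample data set:

```python
from collections import Counter

def uniformity_index(values):
    """UI = sqrt( sum(N_i * (N_i - 1)) / (T * (T - 1)) )."""
    T = len(values)
    if T <= 1:
        return 1.0  # special case: a single sample value
    counts = Counter(values)  # N_i per distinct value
    return (sum(n * (n - 1) for n in counts.values()) / (T * (T - 1))) ** 0.5

os_names = ["Windows"] * 10 + ["Linux"] * 3 + ["Solaris", "AIX"]  # 10/3/1/1
print(round(uniformity_index(os_names), 2))                 # 0.68, as reported
print(uniformity_index(["abc.com"] * 15))                   # 1.0: identical domain names
print(uniformity_index([f"10.0.0.{i}" for i in range(15)])) # 0.0: all-unique IPs
```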
[0080] The traversal also preferably accumulates data property statistics: if the data property is numeric (e.g. total CPUs), numerical statistics are computed, e.g. mean, minimum, maximum and standard deviation. As shown in Figure 15, the total CPUs property for this particular data set includes 2 unique values, which results in a high UI of 0.93, a mean of 2.1, a standard deviation of 0.52 and min and max values of 2 and 4 respectively.
[0081] A list of relevant systems 28 may also be maintained for the frequency distributions. Figure 21 shows the different OS names found in the example data set, and the corresponding systems associated with the distinct property values. It will be appreciated that the details listed in Figure 21 may be included in a summary report page. The overall statistics may then be computed, which calculates total systems, modules, tables, objects, properties, etc. that make up the statistical model.
[0082] Turning back to Figure 5, the outlier detection step is then performed on the aggregated system data 72 prior to generating the statistical distribution report 60a. This step involves computing the conformity indices and conformity scores, and detecting outliers.
[0083] For a specific property value, the conformity index (CI) measures the degree to which the value is consistent with the remainder of the data set. The conformity index ranges from 0 to 1. The higher the CI, the more consistent the value is with its peers. The conformity index for a specific property value (Vᵢ) can be computed as follows:
[0084] CIᵢ = 2 / (1 + e^(γ·Rᵢ))
[0085] where gamma (γ) is the shape factor, set to 0.9 in this example to yield an appropriate sigmoidal function for the conformity index as it ranges from 0 to 1, and Rᵢ is computed as follows:
[0086] Rᵢ = √( T × (T − 1) × (B − 1) ) / (Nᵢ × B²)
[0087] where T is the total number of values, B is the total number of distinct values, and Nᵢ is the number of occurrences of value i.
[0088] For example, the total CPUs property from the sample data set contains 15 values comprised of 2 distinct values with a frequency distribution of 14 and 1. The CI for the least frequent total CPUs value (the number of 4-CPU systems is 1) can be computed as follows:
[0089] Rᵢ = √( 15 × 14 × (2 − 1) ) / (1 × 2²) = 3.37
[0090] CIᵢ = 2 / (1 + e^(0.9 × 3.37)) = 0.09
[0091] The low conformity index of 0.09 indicates that 4-CPU systems are not common in the sample data set.
[0092] Conversely, the CI for the most frequent total CPUs value (the number of 2-CPU systems is 14) is 0.89. The high conformity index implies that 2-CPU systems are significantly more common in the sample data.
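The conformity index can be sketched the same way. Note a small internal discrepancy in the source: evaluating the formula exactly as printed gives R ≈ 3.62 and CI ≈ 0.07 for the 4-CPU case (and CI ≈ 0.88 for the 2-CPU case), whereas the reported 3.37, 0.09 and 0.89 are instead consistent with (T − 1) × (T − 2) in the numerator, so the exact numerator is uncertain. The sketch below transcribes the formula as printed:

```python
import math

GAMMA = 0.9  # shape factor given in the text

def conformity_index(n_i, T, B, gamma=GAMMA):
    """CI_i = 2 / (1 + exp(gamma * R_i)) with
    R_i = sqrt(T * (T - 1) * (B - 1)) / (N_i * B**2), as printed."""
    if B <= 1:
        return 1.0  # one distinct value: fully consistent (assumed edge case)
    r = math.sqrt(T * (T - 1) * (B - 1)) / (n_i * B ** 2)
    return 2.0 / (1.0 + math.exp(gamma * r))

# Total CPUs sample: 15 values, 2 distinct, frequencies 14 and 1.
print(round(conformity_index(14, 15, 2), 2))  # ~0.88 (text reports 0.89)
print(round(conformity_index(1, 15, 2), 2))   # ~0.07 (text reports 0.09)
```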
[0093] Outlier rules 74 identify the properties of interest and specify associated weights that signify the property's relative importance in the outlier analysis. Rule weights range from 0 to 1 with a higher weight indicating a greater relative importance of the property.
[0094] Further details pertaining to rules and their use in analyzing and evaluating system parameters can be found in co-pending U.S. Patent Application No. 11/535,308 filed on September 26, 2006. It shall be noted that the conformity and uniformity analyses described herein may be used to create new rule definitions and/or rule sets, e.g. for targeting new parameters in a compatibility analysis. Details pertaining to the usage of rules in conducting compatibility analyses can be found in co-pending U.S. Patent Application No.
11/535,355 filed on September 26, 2006.

[0095] The conformity score (CS) combines the conformity index with the property's corresponding rule weight as follows:
[0096] CS = 1 − weight × (1 − CI)
[0097] Conformity scores can rank outlier values as a function of the property's relative importance and its degree of non-conformance. Conformity scores range from 0 to 1, with low conformity scores indicating severe outliers. Conversely, high conformity scores indicate that the value is consistent with its peers. A weight of 0 results in a conformity score of 1, while a weight of 1 produces a conformity score equal to the corresponding conformity index.
[0098] Figure 11 shows an example set of outlier rule weights pertaining to the sample data properties. In this example, the OS name and domain name properties are assigned weights of 1, the IP address is assigned a weight of 0, and total CPUs is assigned a weight of 0.5.
[0099] For example, given a CI of 0.09 and a weight of 0.5, the CS is:
[00100] CS = 1 − 0.5 × (1 − 0.09) = 0.54
[00101] A set of threshold ranges for the conformity score 74 is an input to the outlier detection process. The ranges define varying levels of severity of non-conformity, and the matrix 64 shown in Figure 9 conveys such information visually in graphical form. An example set of outlier threshold ranges, as percentages, is:
[00102] 0 to 1 - Severe outlier
[00103] 2 to 25 - Outlier
[00104] 26 to 60 - Mild outlier
[00105] 61 to 99 - Not significant
[00106] 100 - Value is consistent
[00107] For each system, the conformity scores can also be rolled up for each meta category, combining the scores of all the property values classified under the category. The conformity score can be further rolled up to the system level by combining the scores for all the meta categories. The overall conformity scores can be computed as follows:
[00108] CS_overall = CS₁ × CS₂ × CS₃ × ...
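As an illustrative aside (not part of the patent), the weighted score, the example threshold ranges and the multiplicative roll-up can be sketched together as follows; the function names and the percentage convention are assumptions:

```python
from math import prod

def conformity_score(ci, weight):
    """CS = 1 - weight * (1 - CI); a weight of 0 forces CS = 1."""
    return 1.0 - weight * (1.0 - ci)

def classify(cs_percent):
    """Map a CS percentage onto the example outlier threshold ranges."""
    if cs_percent <= 1:
        return "Severe outlier"
    if cs_percent <= 25:
        return "Outlier"
    if cs_percent <= 60:
        return "Mild outlier"
    if cs_percent <= 99:
        return "Not significant"
    return "Value is consistent"

cs = conformity_score(0.09, 0.5)   # 0.545, i.e. ~0.54 as in the text
print(classify(cs * 100))          # Mild outlier

# Roll-up: per-category and overall scores multiply the member scores.
print(prod([cs, 1.0, 0.9]))        # e.g. one system's overall score
```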
[00109] Turning back to Figure 5, a report generation step is then performed.
This step generates the distribution report 60a, preferably organized into multi-level hyperlinked HTML pages. The report highlights the detected outliers, and provides views for summary information and statistical details, which can be viewed by navigating within the data model hierarchy. The distribution report is organized as follows, making reference to Figures 16 through 21, which provide example distribution report pages for an arbitrary set of systems named server01 through server15.
[00110] The top page of the report is the system conformity scorecard shown in Figure 16.
This page presents the top overall outliers in a color coded matrix 66 similar to the matrix 64 shown in Figure 9. In Figure 16, the conformity scores (CS %) are displayed as percentages in the matrix cells for each system. The conformity scores are shown for the selected metadata categories (e.g. hardware and OS) as well as an overall system score.
Typically, the systems are sorted by the lowest overall score to highlight the top outliers.
Preferably, the scorecard supports the option to hide the non-outlier systems.
[00111] As noted above, the cells are color coded based on the outlier's threshold ranges.
An example color coding scheme is as follows:
[00112] Red - Severe outlier
[00113] Orange - Outlier
[00114] Yellow - Mild outlier
[00115] Green - Not significant
[00116] Dark green - Value is consistent
[00117] The conformity scorecard in Figure 16 highlights two outlier systems:
server15 and server14. Details of the system property values that contributed to the conformity scores can be viewed by selecting the corresponding cells.
[00118] From the conformity scorecard page, selecting the Outlier Summary hyperlink accesses a Summary of Outliers page as shown in Figure 17. This page lists the top outlier property values that apply to one or more of the analyzed systems. In this example, the top six (6) outliers from the sample data set are listed.
[00119] Alternatively, selecting the Full Statistics hyperlink from the conformity scorecard page accesses the statistical data model details. The data is organized according to the system data model hierarchy comprised of module tables and objects, property details, instance details and system lists. These pages are depicted in Figures 18 to 21.
[00120] The top statistical data model page presents the composite data values arranged by the module tables and objects. Property data is summarized by showing the most common (or average for numerical properties) as well as the uniformity index of the property. Figure 18 depicts the composite data for the generic system information module objects and table (hardware, operating system, network interfaces). In this example, the total CPUs property value is reported by its average value, 2.1, while the OS name property is reported by its top value, Windows. Singleton properties like IP addresses are denoted as being all unique, whereas properties whose values were all the same are reported accordingly.
[00121] From the top statistical data model page, a specific table or object can be selected to access detailed property statistics associated with the selected item. For each related property, the detailed statistics reported include the number of unique values, the uniformity index, and the top 3 values with their corresponding conformity indices. If the property value is numeric, the mean, range, and standard deviation are also reported.
[00122] Figure 19 provides details for a selected list of properties, in this example, OS
name, Total CPUs, Domain Name and IP address. It will be appreciated that the properties listed typically belong to a specific table or object, and the selected sample shown in Figure 19 is for illustrative purposes only.

[00123] The instance details page is shown in Figure 20, which shows detailed statistics from the table row instance perspective, presenting the top 3 property instance values and, if applicable, numerical statistics. This page also reports the number of occurrences of each row instance in the table across the sample data set.
[00124] The corresponding list of systems associated with each property value is reported in the system list page. The page shown in Figure 21 provides a list of the systems and the corresponding OS name values. The system count row summarizes the distribution of the OS
name among the systems being analyzed.
[00125] It will be appreciated that the report pages shown in Figures 16 - 21 are for illustrative purposes only and can be presented in any number of variations as required. For example, the summary tables may be presented in hyperlinked HTML pages or other graphical outputs that can be displayed, stored and analyzed by a user. Also, the top N values (not just the top 3) may be displayed in the property and instance details.
Additional statistics such as medians, quartiles, inter-quartile ranges, etc. may be compiled and reported for numeric property values.
Variance Report Generation
[00126] The generation of a system variance report 60b is shown in Figure 7.
For the variance report 60b, one or more target systems 76 are individually compared to a set of baseline systems 62. A statistical model 72 is constructed from all of the baseline systems, which, as explained above, includes the frequency distribution of settings across the sample set and numerical statistics (when applicable). Each target system is compared to the baseline model (aggregated system data 72) in order to determine, on a setting-by-setting basis, whether the target system is above/below average, using common/uncommon settings etc. Alternatively stated, the variance report 60b indicates whether the target system is an outlier with respect to a given setting.
[00127] As seen in Figure 7, the target system data 76 are inputs to the variance report generation process where the data 76 is compared to the aggregated system data 72 in a comparison analysis. The filtered target system data 78 is obtained in a manner similar to the filtered baseline data 70, as explained above.

[00128] In the comparison analysis, each property in the filtered target system data 78 is analyzed against the aggregated system data 72 for the baseline systems. To assess whether the target value is an outlier, the conformity index and weighted score is computed and evaluated against the set of outlier threshold ranges 80. The conformity indices, scores and outlier threshold ranges 80 are in general, analogous to the outlier measures discussed in detail above. The conformity scores provide a relative measure of how each target system's property values compare against their peers (i.e. the particular collection of baseline systems).
Low conformity scores indicate outliers. Depending on which threshold range 80 the conformity score falls in, the target's property value may be considered to be somewhere between a severe outlier and a consistent value.
[00129] The comparison analysis, for each data property in the aggregated system data 72, comprises computing the target property's rank, percentile and standard deviation relative to the statistical model, and computing the target's conformity score and comparing this to the set of outlier threshold ranges 80. A report generation step is then performed to produce the variance report 60b.
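A simplified sketch of that per-property comparison follows; all names are illustrative, and pooling the target value into the population before scoring is an assumption rather than something the text specifies:

```python
import math
from collections import Counter

def compare_to_baseline(target_value, baseline_values, weight, gamma=0.9):
    """Score one target property value against the baseline distribution,
    returning a conformity score plus numeric context where applicable."""
    counts = Counter(baseline_values)
    T = len(baseline_values) + 1                  # pool the target in (assumption)
    B = len(counts) + (0 if target_value in counts else 1)
    n_i = counts.get(target_value, 0) + 1
    if B <= 1:
        ci = 1.0
    else:
        r = math.sqrt(T * (T - 1) * (B - 1)) / (n_i * B ** 2)
        ci = 2.0 / (1.0 + math.exp(gamma * r))
    result = {"conformity_score": 1.0 - weight * (1.0 - ci)}
    if all(isinstance(v, (int, float)) for v in baseline_values):
        n = len(baseline_values)
        mean = sum(baseline_values) / n
        var = sum((v - mean) ** 2 for v in baseline_values) / (n - 1)
        result["std_devs"] = (target_value - mean) / math.sqrt(var) if var else 0.0
        result["percentile"] = 100 * sum(v <= target_value for v in baseline_values) / n
    return result

# e.g. a hypothetical target total CPUs value of 4 vs. the 15 baseline systems
print(compare_to_baseline(4, [2] * 14 + [4], weight=0.5))
```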
[00130] The statistical variance report 60b shows how the target systems compare against the statistical model derived from the baseline systems. Preferably, the variance report is comprised of a multi-level hyperlinked HTML report, similar to that produced for the distribution report 60a. The target systems are listed as the rows and the meta categories and overall scores represent the columns. Details of the outlier values that comprise the conformity scores are accessed by selecting the appropriate cell. Selecting the target system in the matrix accesses the comparison analysis details for the selected system. Like the distribution report, full statistics and an outlier summary can also be accessed through hyperlinks at the top of the scorecard page of the variance report.
[00131] An example of a variance analysis and report is illustrated through Figures 22 to 25. Figure 22 depicts the sample data of a target system, server99, which will be compared against the sample data set comprised of the 15 systems shown in Figure 13. As such, for this example, the baseline systems are server01 to server15.
[00132] Performing the variance analysis based on the sample data sets for these target and baseline systems generates the variance report conformity scorecard shown in Figure 23.

The example matrix 68 in Figure 23 visually shows the variance of the target system server99 against the baseline systems with respect to the Hardware and OS meta categories as well as the overall system. The overall conformity score of 4 denotes that server99 is an outlier system.
[00133] Selecting the overall score for server99 in the scorecard matrix accesses the outlier summary page for the system, shown in Figure 25. This page lists the outlier property values of server99 that contributed to its poor score. The primary outlier property is the domain name.
[00134] Alternatively, selecting the server99 label in the system column of the scorecard matrix accesses the comparison details for server99, shown in Figure 24. This page, organized by the system data model hierarchy (modules, tables, objects, etc.), shows how each property compared against the baseline systems.
Trend Report Generation
[00135] The statistical trend report 60c compares data sets of target and baseline systems from two instances in time. Figure 8 shows a procedure for performing the analysis and generating the trend report 60c. This report tracks the trends in the uniformity and conformity of property values over time. This is done by creating separate statistical data models 72 for the target and baseline systems. Outliers are then found separately for the target and baseline systems. The analysis results, specifically the uniformity indices and conformity scores, are then compared between the target and baseline systems in a trend analysis.
[00136] The absolute value of the change in index or score is then evaluated against a set of change threshold ranges. Separate threshold ranges can be defined for the uniformity index and conformity scores. An example set of change threshold ranges for the uniformity indices can be as follows:
[00137] 0 to 0.05 - Not significant
[00138] 0.06 to 0.10 - Notable trend
[00139] 0.11 or greater - Significant trend
[00140] Uniformity indices increasing over time indicate that the data property values are converging to more consistent values. Uniformity indices decreasing over time indicate that the data values are diverging into a less consistent data set.
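A sketch of the uniformity trend classification under these example ranges (function name and direction labels are assumptions; the sample delta assumes the five added systems in the later example are all Linux):

```python
def classify_ui_trend(ui_baseline, ui_target):
    """Classify the change in a property's uniformity index over time."""
    delta = ui_target - ui_baseline
    size = abs(delta)
    if size <= 0.05:
        label = "Not significant"
    elif size <= 0.10:
        label = "Notable trend"
    else:
        label = "Significant trend"
    direction = "converging" if delta > 0 else "diverging" if delta < 0 else "flat"
    return label, direction

# OS name UI is 0.68 for the 15 baseline systems; adding five Linux systems
# gives sqrt((10*9 + 8*7) / (20*19)) ~ 0.62 for the 20-system target set.
print(classify_ui_trend(0.68, 0.62))  # ('Notable trend', 'diverging')
```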
[00141] The change in the conformity score of a specific data property value, together with its actual score, can indicate the adoption stage of the property value. At a general level, leaders can be considered to be any property value whose conformity scores are increasing, whereas laggards have scores that are decreasing.
[00142] At a more granular level, the adoption stages can be categorized as innovators, early adopters, early majority, late majority and laggards. An innovator is defined by a data property value with a very small conformity score (severe outlier) that is increasing over time. A mild outlier whose conformity score is increasing may be an early adopter. Early majority is typified by non-outlier values whose conformity scores are increasing. Late majority is defined by highly conformant values whose conformity score is decreasing.
Finally, laggards are defined by less conformant values whose conformity scores are decreasing.
[00143] The trend report is preferably presented as a multi-level hyperlinked HTML
report. The top page lists the property values with the largest uniformity and conformity score changes. These indices and scores are color coded, depending on the size of the change and whether they are increasing or decreasing. For each property value, detailed statistics about the property value are available in a hyperlinked page.
[00144] Figure 26 shows sample audit data for a set of systems, which includes the systems listed in Figure 13 and additional systems, where the data is obtained at a later time. It can be seen that five additional servers, namely server16 through server20, appear at the later point in time shown in Figure 26 but did not exist at the earlier point in time.
[00145] For example, the sample data for the fifteen (15) systems shown in Figure 13 and the sample data for twenty (20) systems shown in Figure 26 can be considered as baseline and target data sets, respectively. A comparison of the frequency distributions of a sub-set of the properties from the data sets is summarized in Figure 27. In the target sample, five (5) 4-CPU Linux OS-based systems have been added. New IP address values have also been added by the new target systems.

[00146] Figure 28 shows the resulting trend report. The report indicates the relevant dates for the baseline and target data as well as the number of systems in each data set. A uniformity trend table is also provided, which shows how the uniformity index values have changed between the baseline and target data, which in turn indicates the trend. It can be seen that where the UI score has decreased, the statistics are less uniform and thus the values for that particular property are diverging, as a whole. Also shown in Figure 28 is a conformity trend table, which indicates how the conformity scores for certain properties have changed between the baseline and target data. It can be seen that where the conformity score has increased, the property is becoming more common, which in turn indicates that the property value may be a leader. Conversely, where the CS score decreases, the property value is becoming less common and thus that property value may be a laggard (e.g. OS Name/AIX).
[00147] It will be appreciated that the adopter stage identification and reporting for data property values can be based on arbitrarily advanced adopter classification schemes (e.g.
leader/laggard, innovator/early adopter/early majority/late majority/laggard, etc.).
[00148] In addition, the target-baseline trend analysis can be applied to cases involving more than two (2) data sets obtained at different points in time. These scenarios can be addressed by analysing the data sets as a time-series, and calculating uniformity and conformity trends using standard mathematical procedures such as least squares fit.
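For instance, one minimal least-squares sketch over a time series of uniformity indices might look as follows; this is entirely illustrative, as the patent does not prescribe a specific implementation:

```python
def least_squares_slope(times, indices):
    """Slope of a uniformity or conformity index over audit times.
    Positive -> converging / leader; negative -> diverging / laggard."""
    n = len(times)
    mt = sum(times) / n
    mi = sum(indices) / n
    num = sum((t - mt) * (i - mi) for t, i in zip(times, indices))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

# Three audit snapshots of one property's UI (values illustrative).
print(least_squares_slope([0, 1, 2], [0.68, 0.62, 0.58]))  # ~ -0.05: diverging
```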
Summary and Commentary
[00149] Therefore, the program 10 can perform an audit of data pertaining to parameters from a plurality of systems in an audited environment 36 to generate statistical reports that show distribution, variance, outliers and trends across the systems.
[00150] The Distribution Report assesses the uniformity of the parameters of a population of systems and provides frequency distributions and statistics describing the data values from the analyzed systems 38. It measures the uniformity of data values across the systems 38 and identifies outlier values. This report also summarizes the outlier values by the metadata categories and for the overall system.
[00151] The Variance Report assesses the conformity of one or more target systems against a reference data set comprised of a set of baseline systems. The report compares each target system individually against the reference data set to measure the consistency of the target's parameters. Similar to the distribution report, this analysis identifies outlier property values and summarizes the outlier values by meta category and for the overall system. The outlier values may be indicative of incorrect, emerging, or out of date values on the target systems.
[00152] The Trend Report shows trends in the uniformity and conformity measures of the parameters by comparing the statistical analysis results of a set of systems at two or more points in time. Uniformity trends indicate whether data values are converging or diverging.
In general, a convergent trend in the data values is preferred to promote consistency among the systems. Conformity trends can imply whether specific data values are "leaders" (value is becoming more common or popular) or laggards (value is becoming less common).
Empirically, leaders may indicate an improved parameter setting that is becoming more widely adopted. Conversely, laggards can indicate inferior or obsolete settings.
[00153] It will be appreciated that the above principles and analyses can be performed on any type of system and should not be limited in applicability to servers as exemplified above.
It will also be appreciated that any number of meta categories can be defined to accommodate varying data sets. Similarly, all summary reports and graphical outputs can be modified and/or adapted to any such data set and can be web accessible or localized to a particular environment 12. The analysis program 10 may be located in the particular environment 12 or may alternatively obtain audited data from a remote location.
[00154] It will also be appreciated that the trend reports 60c can be generated not only for data sets obtained at different times (same or different systems), but can also be generated based on data sets for different systems at the same time or any variation thereof and as such, should not be limited to those examples provided above.
[00155] As such, variations in physical implementations and outputs may be accommodated whilst providing similar results and, although the invention has been described with reference to certain specific embodiments, various modifications thereof will be apparent to those skilled in the art without departing from the scope of the invention as outlined in the claims appended hereto.


Claims (39)

What is claimed is:
1. A method for determining parameter distribution for one or more computer systems comprising:
obtaining a plurality of data sets, each data set pertaining to one of said one or more computer systems at a point in time, said data sets comprising information pertaining to one or more parameters of said one of said one or more computer systems;
generating a statistical model comprising said plurality of data sets; and comparing said data sets in said statistical model to determine the uniformity of respective ones of said parameters among said data sets and to identify the existence of outlier values associated with respective data sets indicating non-uniformity of corresponding parameters.
2. The method according to claim 1 further comprising generating a statistical distribution report providing summary information and statistical details for said parameter distribution.
3. The method according to claim 2 wherein said distribution report is organized into multi-level hyperlinked pages.
4. The method according to claim 1 wherein said statistical model comprises a uniformity index for each said one or more parameters, said uniformity index being indicative of the homogeneity of each said one or more parameters among said one or more systems.
5. The method according to claim 1 further comprising:
generating a conformity score pertaining to the conformity of each of said one or more parameters;
displaying a graphical interface comprising a matrix of cells, each row of said matrix indicating one of said plurality of computer systems and each column of said matrix indicating a metadata category pertaining to one of said plurality of parameters; and displaying in each cell a respective conformity score indicating the conformity of the respective one of said plurality of systems to others of said plurality of systems for a corresponding one of said one or more parameters.
6. The method according to claim 5 further comprising displaying a column in said matrix comprising overall system scores.
7. The method according to claim 5 further comprising enabling further details pertaining to said scores to be accessed by selecting a respective cell.
8. A method for determining consistency of parameters of one or more target computer systems in relation to a reference comprising one or more baseline computer systems comprising:
obtaining a statistical model comprising a plurality of data sets each comprising information pertaining to one or more parameters of a corresponding baseline computer system;
obtaining one or more data sets pertaining to said one or more target computer systems comprising information pertaining to at least one of said one or more parameters; and comparing said one or more data sets pertaining to said one or more target computer systems against said statistical model to determine the conformity of parameters in said one or more target systems to corresponding ones of said parameters in said statistical model and to identify the existence of outlier values in said one or more target systems indicating non-conformity of corresponding parameters.
9. The method according to claim 8 repeated for a plurality of target computer systems.
10. The method according to claim 8 further comprising generating a statistical variance report providing summary information and statistical details for said parameter variance.
11. The method according to claim 8 wherein said step of analyzing comprises computing a conformity index for a value for each of said one or more parameters, said conformity index being indicative of the degree to which said value for each system is consistent with the others of said one or more computer systems.
12. The method according to claim 11 further comprising computing a conformity score for said values for each of said one or more parameters according to a corresponding conformity index and a corresponding rule weight.
13. The method according to claim 12 further comprising combining said conformity scores to obtain overall metadata category conformity scores.
14. The method according to claim 13 further comprising combining said overall metadata category conformity scores to obtain an overall system conformity score.
15. A method for analyzing trends pertaining to one or more parameters for a plurality of computer systems over time, said method comprising:
generating a first statistical model comprising a plurality of data sets each comprising information pertaining to said one or more parameters of a corresponding one of said plurality of computer systems at a first point in time;
generating a second statistical model comprising a plurality of data sets each comprising information pertaining to said one or more parameters of a corresponding one of said plurality of computer systems at a second point in time; and comparing said first and second statistical models to determine one or more trends for said one or more parameters according to at least one of uniformity and conformity of said one or more parameters over time.
16. The method according to claim 15 wherein said first and second data sets are indicative of values at different times.
17. The method according to claim 15 wherein said first data set pertains to at least one of said plurality of computer systems and said second data set pertains to at least one other of said plurality of computer systems.
18. The method according to claim 15 further comprising generating a trend report indicating at least one of: the convergence or divergence of values in said statistical models based on said uniformity, and where said values are leading and/or lagging based on said conformity.
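As a sketch of the comparison in claim 15, two snapshots of one parameter across the population can be compared on a spread measure. Using the standard deviation as the uniformity measure, and labelling a shrinking spread "converging" in the sense of claim 18, are assumptions made purely for illustration:

```python
# Hedged sketch of the claim-15/claim-18 trend comparison; the standard
# deviation as uniformity measure is an assumption, not claim language.
from statistics import stdev

def trend(values_t1, values_t2):
    s1, s2 = stdev(values_t1), stdev(values_t2)
    if s2 < s1:
        return "converging"    # population grew more uniform over time
    if s2 > s1:
        return "diverging"     # population grew less uniform over time
    return "stable"

# e.g. hypothetical patch levels drifting apart across four systems
print(trend([5, 5, 6, 5], [5, 7, 9, 4]))   # -> "diverging"
```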
19. A computer readable medium comprising computer executable instructions for determining at least one of uniformity and conformity of parameters associated with one or more computer systems comprising:
instructions for obtaining a plurality of data sets, each data set pertaining to one of said one or more computer systems at a point in time, said data set comprising information pertaining to one or more parameters of said one of said one or more computer systems;
instructions for obtaining a statistical model comprising said plurality of data sets;
instructions for comparing said data sets in said statistical model to determine at least one of said uniformity and conformity of respective ones of said parameters among said data sets; and
instructions for identifying the existence of outlier values associated with respective data sets indicating at least one of non-uniformity and non-conformity of corresponding parameters.
20. The computer readable medium according to claim 19 further comprising instructions for generating reports comprising statistics related to one or more of said uniformity and conformity.
21. The computer readable medium according to claim 19 further comprising instructions for storing audit data, from which said data sets are obtained, in an audit data repository.
22. The computer readable medium according to claim 19 further comprising instructions for supporting a web client to enable a user to enter settings and initiate an audit to obtain said audit data.
23. A computer readable medium comprising computer executable instructions that when executed perform the method according to any one of claims 1 to 7.
24. A computer readable medium comprising computer executable instructions that when executed perform the method according to any one of claims 8 to 14.
25. A computer readable medium comprising computer executable instructions that when executed perform the method according to any one of claims 15 to 18.
26. A method for determining parameter variance for a target computer system in relation to one or more baseline computer systems, the method comprising:
obtaining a statistical model for said one or more baseline computer systems using data pertaining to said one or more baseline systems, said data comprising one or more parameters relating to at least one of: hardware configuration, software configuration, performance and workload of said systems;
obtaining data pertaining to said target computer system comprising at least one of said one or more parameters;
analyzing said target computer system with respect to said one or more baseline computer systems using said statistical model and said data pertaining to said target system to determine the conformity of said parameters in said target computer system when compared to said parameters in said one or more baseline computer systems, wherein said analyzing comprises computing a conformity index for a value for each of said one or more parameters, said conformity index being indicative of the degree to which said value for said target system is consistent with said one or more baseline computer systems; and
generating an output identifying the existence of outlier values associated with said data being analyzed.
27. The method according to claim 26, further comprising:
analyzing each said one or more parameters for one or more computer systems associated with said data, to determine the uniformity of respective ones of said parameters among said one or more computer systems.
28. The method according to claim 26 or claim 27 further comprising generating a statistical distribution report providing summary information and statistical details for said parameter distribution, said statistical distribution report indicating at least one of: uniformity of parameters across said one or more computer systems; and outlier values in said one or more systems.
29. The method according to any one of claims 26 to 28, further comprising generating a uniformity index for each said one or more parameters, said uniformity index being indicative of the homogeneity of each said one or more parameters among said one or more systems.
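The claims do not define a formula for the uniformity index of claim 29, so the following is only one possible construction: the share of systems reporting the modal value, which equals 1.0 for a perfectly homogeneous parameter and falls as values diverge:

```python
# One possible uniformity index for claim 29, hedged: the modal-share formula
# below is an assumption used solely to illustrate an index "indicative of the
# homogeneity" of a parameter across systems.
from collections import Counter

def uniformity_index(values):
    """1.0 when every system reports the same value; lower as values diverge."""
    counts = Counter(values)
    return counts.most_common(1)[0][1] / len(values)

print(uniformity_index(["RHEL 7", "RHEL 7", "RHEL 7"]))             # 1.0
print(uniformity_index(["RHEL 7", "RHEL 6", "Win2019", "RHEL 7"]))  # 0.5
```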
30. The method according to any one of claims 26 to 29, the method being repeated for a plurality of target systems.
31. The method according to any one of claims 26 to 30 further comprising generating a statistical variance report providing summary information and statistical details for said parameter variance, said variance report indicating consistency of parameters of said target system against said one or more baseline systems to identify said outlier values.
32. The method according to any one of claims 26 to 31, further comprising computing a conformity score for said values for each of said one or more parameters according to a corresponding conformity index and a corresponding rule weight.
33. The method according to claim 32 further comprising combining said conformity scores to obtain overall conformity scores for each computer system.
34. The method according to any one of claims 26 to 33, further comprising generating a trend report indicating the convergence or divergence of values in said statistical models.
35. The method according to claim 34, further comprising indicating where said values in said statistical model are at least one of leading and lagging.
36. The method according to any one of claims 26 to 35, further comprising displaying a graphical interface for displaying scores pertaining to the conformity of one or more parameters for a plurality of systems comprising a matrix of cells, each row of said matrix indicating one of said plurality of systems and each column of said matrix indicating a metadata category pertaining to one of said plurality of parameters, each cell displaying a score indicating the conformity of the respective one of said plurality of systems for a corresponding one of said one or more parameters, said score being computed according to predefined criteria.
37. The method according to claim 36, the graphical interface further comprising a column comprising overall system scores.
38. The method according to claim 36 or claim 37, further comprising providing a capability of accessing further details pertaining to said scores.
39. A computer readable medium comprising computer executable instructions that when executed perform the method according to any one of claims 26 to 38.
CA2655547A 2006-06-23 2007-06-22 Method and system for determining parameter distribution, variance, outliers and trends in systems Active CA2655547C (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US80570106P 2006-06-23 2006-06-23
US60/805,701 2006-06-23
US11/548,938 2006-10-12
US11/548,938 US7502713B2 (en) 2006-06-23 2006-10-12 Method and system for determining parameter distribution, variance, outliers and trends in computer systems
PCT/CA2007/001122 WO2007147258A1 (en) 2006-06-23 2007-06-22 Method and system for determining parameter distribution, variance, outliers and trends in systems

Publications (2)

Publication Number Publication Date
CA2655547A1 CA2655547A1 (en) 2007-12-27
CA2655547C 2016-01-19

Family

ID=38833036

Family Applications (1)

Application Number Title Priority Date Filing Date
CA2655547A Active CA2655547C (en) 2006-06-23 2007-06-22 Method and system for determining parameter distribution, variance, outliers and trends in systems

Country Status (4)

Country Link
US (1) US7502713B2 (en)
EP (1) EP2033149A4 (en)
CA (1) CA2655547C (en)
WO (1) WO2007147258A1 (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8521752B2 (en) * 2005-06-03 2013-08-27 Osr Open Systems Resources, Inc. Systems and methods for arbitrary data transformations
US8539228B1 (en) 2006-08-24 2013-09-17 Osr Open Systems Resources, Inc. Managing access to a resource
US8024433B2 (en) * 2007-04-24 2011-09-20 Osr Open Systems Resources, Inc. Managing application resources
CA2697965C (en) * 2007-08-31 2018-06-12 Cirba Inc. Method and system for evaluating virtualized environments
KR101029243B1 * 2009-02-03 2011-04-18 Miller Dowel Company Beveled block palette
US9514024B2 (en) * 2009-09-29 2016-12-06 Oracle International Corporation Agentless data collection
EP2745248A4 (en) 2011-08-16 2015-06-17 Cirba Inc System and method for determining and visualizing efficiencies and risks in computing environments
US8903874B2 (en) 2011-11-03 2014-12-02 Osr Open Systems Resources, Inc. File system directory attribute correction
US9064097B2 (en) 2012-06-06 2015-06-23 Oracle International Corporation System and method of automatically detecting outliers in usage patterns
US10531251B2 (en) 2012-10-22 2020-01-07 United States Cellular Corporation Detecting and processing anomalous parameter data points by a mobile wireless data network forecasting system
US9830329B2 (en) 2014-01-15 2017-11-28 W. Anthony Mason Methods and systems for data storage
US10374924B1 (en) * 2014-12-05 2019-08-06 Amazon Technologies, Inc. Virtualized network device failure detection
US20160188818A1 (en) * 2014-12-31 2016-06-30 Accenture Global Services Limited Identifying claim anomalies for analysis using dimensional analysis
US10909177B1 (en) 2017-01-17 2021-02-02 Workday, Inc. Percentile determination system
US10552485B1 (en) * 2017-01-17 2020-02-04 Workday, Inc. Performance percentile determination and display
CN110569912B * 2019-09-09 2022-02-01 First Institute of Oceanography, Ministry of Natural Resources Method for removing singular values of observation data of sea water profile
CN113076525A * 2021-03-15 2021-07-06 Beijing Mininglamp Software System Co., Ltd. Population attribute value calculation method and device, storage medium and electronic equipment

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6138078A (en) 1996-08-22 2000-10-24 Csi Technology, Inc. Machine monitor with tethered sensors
US6148335A (en) * 1997-11-25 2000-11-14 International Business Machines Corporation Performance/capacity management framework over many servers
US6564174B1 (en) * 1999-09-29 2003-05-13 Bmc Software, Inc. Enterprise management system and method which indicates chaotic behavior in system resource usage for more accurate modeling and prediction
CA2420076C (en) 2000-08-25 2010-09-28 Shikoku Electric Power Co., Inc. Remote control server, center server, and system constructed of them
WO2003009140A2 (en) 2001-07-20 2003-01-30 Altaworks Corporation System and method for adaptive threshold determination for performance metrics
US7437446B2 (en) 2002-09-30 2008-10-14 Electronic Data Systems Corporation Reporting of abnormal computer resource utilization data
US20050027466A1 (en) 2003-07-29 2005-02-03 Jay Steinmetz Wireless collection of battery performance metrics system, method, and computer program product
US20060020866A1 (en) 2004-06-15 2006-01-26 K5 Systems Inc. System and method for monitoring performance of network infrastructure and applications by automatically identifying system variables or components constructed from such variables that dominate variance of performance

Also Published As

Publication number Publication date
WO2007147258A1 (en) 2007-12-27
EP2033149A1 (en) 2009-03-11
US20080011569A1 (en) 2008-01-17
CA2655547A1 (en) 2007-12-27
EP2033149A4 (en) 2011-08-03
US7502713B2 (en) 2009-03-10

Similar Documents

Publication Publication Date Title
CA2655547C (en) Method and system for determining parameter distribution, variance, outliers and trends in systems
US10862928B1 (en) System and method for role validation in identity management artificial intelligence systems using analysis of network identity graphs
US7680754B2 (en) System and method for evaluating differences in parameters for computer systems using differential rule definitions
US11888602B2 (en) System and method for predictive platforms in identity management artificial intelligence systems using analysis of network identity graphs
US11818196B2 (en) Method and apparatus for predicting experience degradation events in microservice-based applications
US8028061B2 (en) Methods, systems, and computer program products extracting network behavioral metrics and tracking network behavioral changes
US7809817B2 (en) Method and system for determining compatibility of computer systems
EP1058886B1 (en) System and method for optimizing performance monitoring of complex information technology systems
US11748227B2 (en) Proactive information technology infrastructure management
US20080148242A1 (en) Optimizing an interaction model for an application
US8619084B2 (en) Dynamic adaptive process discovery and compliance
US20150205691A1 (en) Event prediction using historical time series observations of a computer application
US20070300103A1 (en) Method and system for troubleshooting a misconfiguration of a computer system based on configurations of other computer systems
US20050278786A1 (en) System and method for assessing risk to a collection of information resources
US9870294B2 (en) Visualization of behavior clustering of computer applications
Nguyen et al. Vasabi: Hierarchical user profiles for interactive visual user behaviour analytics
CN1734427A (en) Automatic configuration of transaction-based performance models
Papenbrock Asset clusters and asset networks in financial risk management and portfolio optimization
Chen et al. Same stats, different graphs: Exploring the space of graphs in terms of graph properties
GB2473117A (en) Risk and reward assessment mechanism
Montes et al. Finding order in chaos: a behavior model of the whole grid
Nicholson et al. Traceability network analysis: A case study of links in issue tracking systems
Skopik et al. Establishing national cyber situational awareness through incident information clustering
CN112882935A (en) Method and device for diagnosing running state of distributed environment
Silva et al. A business intelligence approach to support decision making in service evolution management

Legal Events

Date Code Title Description
EEER Examination request