US20040199481A1 - Bayesian neural networks for optimization and control - Google Patents
- Publication number
- US20040199481A1 (application Ser. No. 10/827,977)
- Authority
- US
- United States
- Prior art keywords
- models
- determining
- model
- derivative
- output
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B13/00—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
- G05B13/02—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
- G05B13/0265—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion
- G05B13/027—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion using neural networks only
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S706/00—Data processing: artificial intelligence
- Y10S706/902—Application using ai with detail of the ai system
- Y10S706/903—Control
- Y10S706/906—Process plant
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S706/00—Data processing: artificial intelligence
- Y10S706/902—Application using ai with detail of the ai system
- Y10S706/911—Nonmedical diagnostics
- Y10S706/914—Process plant
Definitions
- the present invention pertains in general to neural networks for use with optimization of plants and, more particularly, to the use of Bayesian-trained neural networks for optimization and control.
- modeling techniques for a plant involve the generation of some type of model. This is typically done utilizing a single model, either a linear model or a non-linear model.
- another technique of generating a model is to utilize a plurality of models that can be utilized to define a predicted vector output y(t) of values y 1 (t), y 2 (t), . . . , y q (t) as a function of an input vector x(t) of values x 1 (t), x 2 (t), . . . , x p (t).
- a vector in the text shall be defined in bold and in equation form shall be defined with an overstrike arrow.
- a stochastical method for generating y(t) with respect to x(t) can be defined by averaging over many (non-linear) regression models F (w) . Given x(t), Fx(t) is approximated via a stochastic neural network training algorithm (non-linear regression) to the set of functions F (w) x(t), with “w” being the index for the number of models, by fitting to the dataset (x(t), y(t)) in the dataset D.
- this only provides a forward predictive model and does not facilitate the use thereof for optimization or control purposes.
- the present invention disclosed and claimed herein comprises a method for optimizing a system in which a plant is provided for optimization.
- a training network having an input layer for receiving inputs to the plant, an output layer for outputting predicted outputs, and a hidden layer for storing a learned representation of the plant for mapping the input layer to the output layer is also provided.
- a method for training the neural network utilizing a stochastical method of the Bayesian type is provided.
- a method utilizing the network in an optimization mode in feedback from the output of the plant to the input of the plant to optimize the output with respect to the input via the stochastical Bayesian method is provided.
- FIG. 1 illustrates a block diagram of the present invention utilizing the optimizer of the disclosed embodiment
- FIG. 2 illustrates a diagrammatic view of the optimizer of FIG. 1
- FIG. 3 illustrates a block diagram of the combination of the models utilizing a weighted average
- FIG. 4 illustrates a diagram depicting the training operation for the network
- FIG. 5 illustrates a process flow for the training operation of the multiple models
- FIG. 6 illustrates a block diagram of the optimizer wherein a single optimized value is determined averaged over all of the models
- FIG. 7 illustrates a block diagram depicting the optimizer wherein each model is optimized and then the optimized values averaged
- FIG. 8 illustrates a diagram for projecting a prediction over a horizon for a dynamic model in accordance with the disclosed embodiment
- FIG. 8 a illustrates a block diagram of a simplified embodiment of FIG. 8
- FIG. 8 b illustrates a distribution plot of the estimated x(t) values
- FIG. 9 illustrates a diagrammatic view of the optimization process for control
- FIG. 10 illustrates a block diagram of the plant utilizing a multiple model feedback control network for predicting a trajectory over the control horizon
- FIGS. 11 and 12 illustrate block diagrams of the implementation of the network for a dynamic model
- FIG. 13 illustrates a block diagram of the dynamic model corresponding to FIG. 8;
- FIG. 14 illustrates a block diagram of the dynamic model utilizing a steady state model to fix the gain
- FIG. 15 illustrates an alternate embodiment of the dynamic model of FIG. 14.
- FIG. 1 there is illustrated a block diagram of the system of the disclosed embodiment for optimizing/controlling the operation of a plant.
- a plant 10 is illustrated having a vector input x(t) and a vector output y(t).
- the plant 10 is operable to receive the inputs x(t), perform the process and generate the outputs y(t).
- These inputs x(t) are variables that can be manipulated.
- there may also be some inputs which cannot be measured and some inputs that cannot be manipulated.
- the output y(t) illustrated on an output 12 comprises all of the outputs, both measurable and immeasurable.
- the values for operating the plant in the form of the x(t) variables are generated by a controller 14 .
- the controller 14 generates these values in response to information received from an optimizer 16 .
- the optimizer 16 is operable to receive all or select ones of the outputs from the plant 10 and is also operable to receive the current inputs x(t). Although not shown, the optimizer 16 can also receive such things as immeasurable inputs and inputs that cannot be manipulated.
- the optimizer 16 is operable to provide an optimized set of input variables ⁇ circumflex over (x) ⁇ (t) input to the controller 14 .
- This optimizer 16 operates to generate optimized values by utilizing a set of constraints and also some type of optimization objective.
- the optimization objective defines the desired result of the optimization operation. For example, it may be that certain economic costs need to be considered when optimizing the operation of the plant. Constraints are basically the restrictions placed upon the optimization process, such as gain, maximum range of values for the inputs x(t), etc., which will be described in more detail hereinbelow.
- the optimizer 16 includes a plurality of models 200 each operable to map the values x(t) through a representation of the plant 10 to provide a predicted value y (w) (t) on the output where “w” is the index of the model.
- the x(t) variables or values are mapped through the representation as follows:
- each of these models is trained to provide a stochastical method for optimization of the output with respect to the input by averaging over many (non-linear or linear) regression models 200 .
- Each of the models 200 is related to the other of the models 200 by a stochastical relationship.
- the models 200 are related to each other such that each of the models have parameters that are stochastically related to each other, the models defined by the relationship:
- the prediction operation will be described for the stochastic method in a more detailed manner in the following.
- the data is contained in a dataset D with an index n representing the portion thereof that is associated with training.
- Indices exceeding n (n+1, n+2, . . . ) refers to data not included in the training process, this being the testing data, and no index refers to an arbitrary data point.
- Subscripted values x p and y q refer to an arbitrary component of an x(t) or y(t) vector, respectively.
- the model outputs y(t) are subject to Gaussian noise. This choice is made for pedagogical reasons; the method is valid for any noise distribution.
- in the first step it is necessary to predict y (n+k) (t) (k≧1), given the measured data D and a set of model functions F (w) x(t) each described by a set of parameters ω (e.g. neural network weights), with ω being a vector value of weights with values ω 1 , ω 2 , . . . , ω r , “r” being the number of weights in the model.
- P(ω|D) is the conditional probability (called the posterior) for the model F (w) x(t) with weights ω given the dataset D.
- P(D) is constant.
- the third term, also part of the prior, is written as a generic constraint that could include, for instance, fitting on different levels to first principles knowledge.
- the value of i ranges over the dataset D, with n being the number of patterns.
- FIG. 3 there is illustrated a block diagram of the average predicted output y (n+k) (t).
- Each of the models 200 has the output thereof input to a weighting block 302 which applies the weighting function e ⁇ E( ⁇ , ⁇ , ⁇ , D) to the output of each of the models 200 .
- the output of each of the weighting blocks 302 is then input to a summing block 304 to provide the weighted average y (n+k) .
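The weighted combination of FIG. 3 can be sketched in a few lines. This is a minimal illustration only: the normalization of the e −E factors and the array shapes are assumptions for the sketch, not details taken from the patent.

```python
import numpy as np

def posterior_weighted_average(outputs, errors):
    """Combine an ensemble of model outputs into one prediction,
    weighting each model by exp(-E), a stand-in for the unnormalized
    posterior weight applied in the weighting blocks 302.

    outputs: (n_models, n_outputs) predicted vectors y^(w)(t)
    errors:  (n_models,) error functionals E for each model
    """
    outputs = np.asarray(outputs, dtype=float)
    errors = np.asarray(errors, dtype=float)
    # Subtract the minimum error before exponentiating for numerical
    # stability; the normalized weights are unchanged by this shift.
    w = np.exp(-(errors - errors.min()))
    w /= w.sum()
    return w @ outputs

# Three hypothetical models predicting a two-component output vector;
# with equal errors the result is a plain average.
y_bar = posterior_weighted_average(
    [[1.0, 0.0], [3.0, 2.0], [2.0, 1.0]],
    [0.5, 0.5, 0.5],
)
```

With unequal errors the lowest-error model dominates, since the weights fall off exponentially in E.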
- f( ) measures whether the argument satisfies the known constraint
- the index i in the sum indicates the x i (t) vector point at which the derivative is evaluated.
- the models F (w) x(t) are generated for estimating ⁇ y (n+k) (t)> (in Equation (7)) by making moves in the high dimensional parameter space ( ⁇ , ⁇ , ⁇ ). Since the distribution over F (w) x (n+k) (t) is averaged over a strongly peaked distribution, this is typically done using importance sampling Monte Carlo algorithms, such as the Langevin method, or type Metropolis with Hybrid Monte Carlo, and/or tempering extensions to ensure that the entire search space is properly probed. (A wide variety of other methods are of course possible and these noted methods are not considered exhaustive.) The error E for this model is monitored as the parameter updates proceed.
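As a sketch of this sampling step, the following implements plain random-walk Metropolis over the parameters of a toy one-weight linear model. The patent names Langevin, Hybrid Monte Carlo, and tempering variants; the simpler walk below, the quadratic energy functional, and all numeric values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(w, X, y):
    """Negative log posterior up to a constant: squared prediction
    error of a linear model y ~ X @ w plus a small weight-decay prior
    (a stand-in for the E(omega, alpha, beta, D) of the text)."""
    resid = y - X @ w
    return 0.5 * resid @ resid + 0.01 * w @ w

def metropolis(X, y, n_samples=2000, step=0.1):
    """Random-walk Metropolis over the model parameters; each recorded
    state is one ensemble member F^(w), and the sampler concentrates
    near the peak of the strongly peaked posterior."""
    w = np.zeros(X.shape[1])
    e = energy(w, X, y)
    samples = []
    for _ in range(n_samples):
        w_new = w + step * rng.normal(size=w.shape)
        e_new = energy(w_new, X, y)
        # Accept with probability min(1, exp(-(E_new - E_old))).
        if rng.random() < np.exp(min(0.0, e - e_new)):
            w, e = w_new, e_new
        samples.append(w.copy())
    return np.array(samples)

# Toy dataset: y = 2*x plus noise; the sampled slope settles near 2.
X = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
y = 2.0 * X[:, 0] + 0.01 * rng.normal(size=50)
ensemble = metropolis(X, y)
```

The errors E monitored as the parameter updates proceed correspond to the energy values computed along the walk.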
- FIG. 4 there is illustrated a diagrammatic view for the “thermalized” static behavior utilizing the training operation and, also referring to FIG. 5, there is illustrated a diagrammatic flow for the training operation.
- the training operation is initiated by a set of weights ⁇ 1 , represented by a block 502 . These weights are utilized to generate a first model F (1) x(t), represented by a block 504 .
- the model is then tested in a test block 506 by utilizing the test data in the portion of the dataset n+k, the data above the training data n. This generates a first error value E 1 .
- This data is then utilized to generate the next set of weights ⁇ 2 .
- This is represented in a block 508 .
- These weights are utilized to train a second model F (2) x(t), as represented in block 510 , this representing a progression model. This is a model that has a stochastic relationship to the first model in block 504 . This model is subsequently tested in a block 512 to determine the new error E 2 for that model.
- a new set of weights ⁇ 3 is generated by the stochastic method, as represented by a block 514 . This will be utilized to generate a third model F (3) x(t), as represented by a block 516 . This will also be tested in a block 518 to generate an error value E 3 .
- this represents the first model for the value N min , such that for the next N w −1 models, up to the value of N max , there will be N w models.
- the model associated with N max is represented in block 526 , which model is then tested in block 528 to generate the error value E max .
- this number can be any range. In the disclosed embodiment, as an example only, five hundred models are utilized. This results in the earliest model over five hundred being discarded such that only five hundred models are maintained.
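Maintaining only the most recent models is a simple bounded-queue discipline; a sketch follows, in which the dict placeholder for a stored model is an assumption.

```python
from collections import deque

N_W = 500  # number of ensemble members retained (the text's example)

# A bounded queue drops the oldest entry automatically once N_w models
# are held, matching "the earliest model over five hundred is
# discarded such that only five hundred models are maintained".
ensemble = deque(maxlen=N_W)

for step in range(750):         # e.g. 750 training iterations
    model = {"step": step}      # placeholder for (weights, error E)
    ensemble.append(model)

# Only the most recent 500 models remain: steps 250 through 749.
```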
- the training continues until the model in block 522 and the model in block 526 are both within the region represented by reference numeral 404 in FIG. 4. Therefore, all of the models will be in the “thermalized” region.
- each of the N w ensembles of F (w) models is a continuous function; therefore each derivative ⁇ y q (w) (t)/ ⁇ x p (t) may be evaluated at any vector point x(t) by the elementary rules of calculating derivatives.
- in Equation (10), the values for the derivatives are averaged over the models.
- it may be desirable to estimate Equation (10) instead of computing it fully.
- the best single-term estimate would be the one with the largest posterior (or probability weighting factor) for weighting the gains:
- any such estimate is called the MAP (maximum a posteriori) estimate.
- a sorted index to the models at the completion of the training procedure could quickly and easily be created. Since this would be done only once, the required computing time would be insignificant.
- FIG. 6 there is illustrated a block diagram depicting the operation illustrated in Equation (10), the models 200 each having outputs thereof input to a weighted average block to provide the weighted average output ⁇ y(t)>.
- the values on the output of each of the models 200 must be back-propagated through the model to determine the derivative thereof, or any other technique that will provide that derivative.
- the backpropagated method is basically a recursive approach.
- This is represented by a derivative block 600 for each of the models 200 .
- the derivative block 600 is operable to receive the x(t) value and the output of the associated model 200 y (w) (t).
- the output of the derivative block 600 is ⁇ y (w) (t)/ ⁇ x(t).
- Each of the derivatives output by each of the blocks 600 is input to a weighting block 604 which is operable to provide a weight to each of the derivatives, which are then summed in a summing block 606 . This provides the weighted average of the derivative ∂⟨y(t)⟩/∂x(t) over all of the models.
- This basic idea of estimating using the single MAP model can be iterated to improve the estimation to the desired level of accuracy.
- the second level estimate would consist of taking the model with the next highest posterior (next model in the indexed list) and averaging it with the first (the MAP) model, to yield a two-model average.
- This process could be iterated, incrementally improving the estimate, with some stopping criterion defined to halt the procedure.
- a stopping criterion might be to halt when the change in the estimate due to adding the next model is less than some threshold. The extreme of this process is of course the full sum of Equation (10).
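The iterated MAP estimate with a stopping threshold can be sketched as follows; the array shapes, the use of unnormalized posterior weights, and the tolerance value are assumptions for illustration.

```python
import numpy as np

def incremental_map_estimate(outputs, posteriors, tol=1e-3):
    """Estimate the full posterior-weighted sum by adding models in
    decreasing order of posterior weight, stopping when the next model
    changes the running estimate by less than `tol`.

    outputs:    (n_models, n_outputs) model predictions at a point x(t)
    posteriors: (n_models,) unnormalized posterior weights
    """
    outputs = np.asarray(outputs, dtype=float)
    posteriors = np.asarray(posteriors, dtype=float)
    order = np.argsort(posteriors)[::-1]   # MAP model first
    num = np.zeros(outputs.shape[1])
    den = 0.0
    estimate = outputs[order[0]]           # first-level (MAP) estimate
    for idx in order:
        num += posteriors[idx] * outputs[idx]
        den += posteriors[idx]
        new_estimate = num / den
        # Halt once at least two models are in and the change is small.
        if (den > posteriors[order[0]]
                and np.max(np.abs(new_estimate - estimate)) < tol):
            return new_estimate
        estimate = new_estimate
    return estimate
```

Running the loop to exhaustion reproduces the full sum of Equation (10); halting early gives the cheaper estimate.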
- Equation (10), or its estimation, involves taking the derivatives ∂y q (w) (t)/∂x p (t) evaluated at a given vector point x(t) and computing their (posterior-weighted) average over the ensemble of models.
- Sensitivity analysis examines statistics over the dataset of these ensemble-averaged derivatives. Consider, for instance, taking the absolute value of the ensemble-averaged derivatives, and averaging them over the dataset: this information would indicate the overall relative strengths, over the historical operating conditions represented by the dataset, of the effect of each input variable x p (t) on a given output variable y q (t).
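This sensitivity computation can be sketched directly from its description: ensemble-average the derivatives at each data point, take absolute values, and average over the dataset. The 4-axis array layout below is an assumption.

```python
import numpy as np

def sensitivity(derivatives, weights):
    """Dataset-wide sensitivity of each output to each input.

    derivatives: (n_models, n_points, n_outputs, n_inputs) array of
                 dy_q^(w)/dx_p evaluated at each dataset point
    weights:     (n_models,) normalized posterior weights

    Returns (n_outputs, n_inputs): the absolute value of the
    ensemble-averaged derivative, averaged over the dataset points.
    """
    derivatives = np.asarray(derivatives, dtype=float)
    weights = np.asarray(weights, dtype=float)
    # Posterior-weighted ensemble average at each data point ...
    ens_avg = np.einsum("w,wnqp->nqp", weights, derivatives)
    # ... then absolute value and a plain average over the dataset.
    return np.abs(ens_avg).mean(axis=0)
```

Large entries flag the input variables x p (t) with the strongest overall effect on a given output y q (t) over the historical operating conditions.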
- Process optimization ordinarily refers to determining the optimal input vector ⁇ circumflex over (x) ⁇ (t) that will minimize a defined objective function J while satisfying any defined constraint functions C m .
- J is ordinarily a function of the process model and its variables, which expresses the desired characteristics of process operation, output product characteristics, and so forth.
- a valid solution of a constrained optimization problem always satisfies the C m relationship, and minimizes J as well as possible within those constraints.
- Optimization typically means “steady-state optimization” (finding an optimal point in operating space using a steady-state model), while “control” typically means “dynamic control” (finding an optimal trajectory in operating space using a dynamic model). Both are “optimization problems.”
- an optimization algorithm uses the process model to find the optimal ⁇ circumflex over (x) ⁇ (t), given the objective J and constraint C m functions.
- Neural network models are in general nonlinear, so nonlinear optimization algorithms are used. Unconstrained or constrained optimization is performed depending upon whether or not any constraint functions are defined. Mathematically, unconstrained and constrained nonlinear optimizations are very different, and different optimization algorithms are used. Henceforth the general (and most typical in industrial processes) case of constrained nonlinear optimization will be assumed.
- Nonlinear constrained optimization algorithms that make use of derivatives generally execute much faster than those that do not.
- a variety of such nonlinear constrained optimization programs are commercially available. The most popular codes are based on the Sequential Quadratic Programming (SQP) or the Generalized Reduced Gradient (GRG) methods.
- One example of an objective function is the negative profit of the process (negative because the objective function is to be minimized), obtained by associating prices and costs with the input and output variables. One possibility would then be to use the resulting output variable values as setpoints for those output variables.
- the first factor ⁇ J/ ⁇ y q (t)> is computable by the elementary rules of differentiation for each input x p (t).
- the second factor involves the Bayesian ensemble of models, which relate ⁇ y q (t)> to x p (t), thus representing the sum over all models.
- any nonlinear constrained optimization code such as an SQP or GRG code, may be used to perform the optimization. Any such code searches the x-space for the x-value that will minimize the given J while satisfying the given C m by iteratively passing to user-supplied subroutines an x(t) value (in general, different at each iteration) and receiving back the objective and constraint functions, and the derivatives, all evaluated at x(t).
- the derivatives may of course be computed in full or may be estimated to any degree of accuracy, as described hereinabove.
- the optimization routine performs a single optimization over all of the models in the ensemble and returns a single optimal value for ⁇ circumflex over (x) ⁇ (t).
- when the optimizer requests the values of the functions and derivatives evaluated at a point x(t), a user-supplied subroutine must compute the derivative values ∂⟨y q (t)⟩/∂x p (t) by applying the chain rule of Equation (13) to the entire ensemble of models. After some number of iterations of such subroutine calls, the single optimization procedure terminates with an optimal {circumflex over (x)}(t).
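A toy version of this single-optimization procedure follows, with trivial linear ensemble members and projected gradient descent standing in for a commercial SQP/GRG code. Every model, weight, and numeric value here is hypothetical, chosen only to make the chain-rule structure concrete.

```python
import numpy as np

# A toy Bayesian ensemble: each member is linear, y^(w)(x) = a_w * x,
# with slopes scattered around 2 and normalized posterior weights.
slopes = np.array([1.9, 2.0, 2.1])
weights = np.array([0.25, 0.5, 0.25])

def y_avg(x):
    """Posterior-weighted prediction <y(x)> over the ensemble."""
    return float(weights @ (slopes * x))

def dy_dx_avg(x):
    """Ensemble-averaged derivative d<y>/dx (Equation (10));
    constant here because every member model is linear."""
    return float(weights @ slopes)

SETPOINT = 4.0

def J(x):
    """Objective: squared deviation of <y(x)> from a desired setpoint."""
    return (y_avg(x) - SETPOINT) ** 2

def dJ_dx(x):
    """Chain rule of the text: dJ/dx = (dJ/d<y>) * (d<y>/dx)."""
    return 2.0 * (y_avg(x) - SETPOINT) * dy_dx_avg(x)

def optimize(x0=0.0, lo=0.0, hi=10.0, lr=0.05, iters=500):
    """Projected gradient descent standing in for an SQP/GRG code;
    the box [lo, hi] plays the role of the constraints C_m."""
    x = x0
    for _ in range(iters):
        x = float(np.clip(x - lr * dJ_dx(x), lo, hi))
    return x

x_hat = optimize()
```

At each iteration the "optimizer" calls back into the user-supplied objective and derivative routines evaluated at the current x, mirroring the subroutine-call structure described above.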
- FIG. 7 there is illustrated a block diagram of the first optimization method.
- the models 200 are provided, each for receiving the input value x(t) and outputting the output value y(t).
- the output of each of the models 200 is input to the block 600 which is operable to determine the derivative on the output thereof.
- Each of the derivatives from each of the blocks 600 for each of the models 200 are input to a weighted average block 702 which is operable to provide the weighted average of the derivative as set forth in Equation (10). This is then subjected to the chain rule of Equation (13) via a block 706 to provide the single optimal value for ⁇ circumflex over (x) ⁇ (t).
- each model in the Bayesian ensemble is optimized separately, yielding an optimal ⁇ circumflex over (x) ⁇ (w) (t) for each model F (w) x(t).
- an optimization process for a model F (w) x(t) requests function and derivative values at a point x p (t).
- the functions and derivative values returned are for that single model only, that is, ∂y q (w) (t)/∂x p (t).
- the optimization of each model 200 terminates with an optimal value ⁇ circumflex over (x) ⁇ (w) (t) for that model, such that there are N w such optimized values.
- FIG. 8 there is illustrated a block diagram depicting the second method of optimization.
- the models 200 and derivative blocks 600 are provided for generating the derivatives for each of the associated models 200 .
- the output of each of the derivative blocks 600 is, as distinguished from the embodiment of FIG. 4, input to a block 800 , wherein that value is optimized over the output such that the optimized value ⁇ J/ ⁇ y (w) (t) provides the value ⁇ circumflex over (x) ⁇ (w) (t) for each model.
- a weighted average block 804 which is operable to implement Equation (11) to provide a single optimal ⁇ circumflex over (x) ⁇ (t).
- P({circumflex over (x)}(t)|D, J, C) provides the probability distribution over the {circumflex over (x)}(t), given D, J and possibly C.
- FIG. 8 a there is illustrated a simplified block diagram of the embodiment of FIG. 8, wherein a single block 810 represents the combination of the models 200 , derivative block 600 and multiplication blocks 800 .
- This is operable to provide a plurality of estimated values, one for each model 200 , in the form of {circumflex over (x)} (w) (t), wherein there are N w estimated values for the input, one for each model.
- a block 812 which block is operable to provide some type of selection or averaging operation, this being similar to the block 804 which performs specifically a weighted average.
- the selection operation in block 812 can utilize any criteria.
- FIG. 8 b illustrates a plot of the estimated input values as a function of the index value, where it can be seen that there is a Gaussian distribution of such values.
- the important aspect of this block 804 is that any type of selection criteria can be utilized to provide some value that is a function of all of the estimated values or some selection criteria that eliminates certain ones of the values and selects other values for an averaging type operation. In such a manner, all of the estimated values need not be utilized for the selection criteria.
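One possible selection criterion of the kind described is sketched below for a toy ensemble of linear models: each member is optimized separately, the estimates farthest from the median are discarded, and the survivors are averaged. The trimming rule, the models, and all numbers are illustrative assumptions, not the patent's method.

```python
import numpy as np

# Hypothetical ensemble of linear models y^(w)(x) = a_w * x, with one
# outlier member, and normalized posterior weights.
slopes = np.array([1.9, 2.0, 2.1, 8.0])   # last model is an outlier
weights = np.array([0.3, 0.35, 0.3, 0.05])
SETPOINT = 4.0

# Optimizing each model separately: for y = a*x driven to the
# setpoint, the per-model optimum is simply x_hat^(w) = setpoint / a.
x_hats = SETPOINT / slopes

def select(x_hats, weights, k=1):
    """One selection criterion of the many the text allows: drop the k
    estimates farthest from the median, then take the posterior-
    weighted average of the surviving estimates."""
    med = np.median(x_hats)
    keep = np.argsort(np.abs(x_hats - med))[: len(x_hats) - k]
    w = weights[keep] / weights[keep].sum()
    return float(w @ x_hats[keep])

x_hat = select(x_hats, weights)
```

Here the outlier model's estimate is excluded before averaging, so not all of the estimated values need be utilized, as stated above.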
- indices k described hereinabove describe new data (n+k), whereas a dynamic system utilizes the index k to represent time intervals, which need not represent equally spaced time intervals.
- a trajectory of output values {y(t+1) . . . y(t+k max )} (y k (t)) is predicted from a current y(t) for each new “setpoint” y(t), along with a corresponding trajectory of control input values u(t) (corresponding to the x(t) inputs for the steady-state process) necessary to provide such y k (t); this trajectory takes a predetermined number of the time intervals to achieve.
- the system will then make a move along this trajectory for a first interval of time and even additional intervals of times by predicting the new u(t) values necessary to make such a move before the next dynamic prediction is made.
- This dynamic prediction could be made at each move interval such that a new trajectory for the setpoint is predicted from a new control input value u(t).
- the new “setpoint” optimization could be performed at each time interval.
- FIG. 9 there is illustrated a diagrammatic view of the trajectory of y k (t) determined in the optimization process for control.
- the time horizon is divided up into m increments or intervals, these being equal time segments or unequal time segments.
- the dynamic prediction for the value of y k (t) is made along the increments from the current position in time (t+1) and extending out to the horizon at the value of (t+k).
- This final y (n+k) (t) value is often equal (within some tolerances) to the output setpoint, provided the optimal change to u(t) was made.
- FIG. 10 there is illustrated a block diagram for the plant 10 utilizing a controller network 1002 in feedback.
- This is very similar to the optimizer 16 of FIG. 1, with the exception that it predicts a trajectory utilizing cost function J, constraints C m , and various setpoints.
- the setpoints are typically in the form of a desired move in the output vector y(t).
- the controller network 1002 will receive as inputs select ones of the output y(t), and project the trajectory out over the control horizon in time and the u(t) values necessary to achieve such trajectory. This will typically be predicted in time intervals such that a dynamic plant controller 1004 will be able to generate new input values u(t).
- the models can be of differing types.
- the dynamic model is a linear model which is defined by the following relationship:
- y k ⁇ 1 , y k ⁇ 2 are outputs at different points in time;
- u k ⁇ 1 ⁇ d , u k ⁇ 2 ⁇ d are input values at different points in time;
- d is a delay value
- a, b are the parameters of the linear model.
- {right arrow over (y)} k =−a 1 {right arrow over (y)} k−1 −a 2 {right arrow over (y)} k−2 +b 1 {right arrow over (u)} k−1−d +b 2 {right arrow over (u)} k−2−d   (17)
- Equation (17) is set forth as a linear equation with a linear model
- additional non-linear terms can be attached to this equation to result in a non-linear model.
- the parameters of this model are set by the a's and b's, i.e., the parameter values of the model. This also pertains to the gain, this being described in detail in U.S. patent application Ser. No. 08/643,464, which was incorporated by reference hereinabove.
- {right arrow over (y)} k (w) =−a 1 (w) {right arrow over (y)} k−1 (w) −a 2 (w) {right arrow over (y)} k−2 (w) +b 1 (w) {right arrow over (u)} k−1−d +b 2 (w) {right arrow over (u)} k−2−d   (18)
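The second-order dynamic model can be simulated directly from its recursion. The sign convention below is taken from the usual ARX form, y(k) = −a1·y(k−1) − a2·y(k−2) + b1·u(k−1−d) + b2·u(k−2−d), since the patent's rendering of the equation is ambiguous; the parameter values are hypothetical.

```python
import numpy as np

def simulate_arx(a, b, d, u, y_init=(0.0, 0.0)):
    """Simulate the second-order linear dynamic model
        y(k) = -a1*y(k-1) - a2*y(k-2) + b1*u(k-1-d) + b2*u(k-2-d).

    a, b:   (a1, a2) and (b1, b2) model parameters
    d:      dead-time delay in samples
    u:      input sequence u(0), u(1), ...
    y_init: the two initial output values y(0), y(1)
    """
    a1, a2 = a
    b1, b2 = b
    y = list(y_init)
    for k in range(2, len(u)):
        u1 = u[k - 1 - d] if k - 1 - d >= 0 else 0.0
        u2 = u[k - 2 - d] if k - 2 - d >= 0 else 0.0
        y.append(-a1 * y[k - 1] - a2 * y[k - 2] + b1 * u1 + b2 * u2)
    return np.array(y)

# Step response with stable hypothetical parameters and no delay.
# Steady-state gain K = (b1 + b2) / (1 + a1 + a2) = 0.5 / 0.5 = 1.0,
# so the output settles toward 1.0 for a unit step input.
y = simulate_arx(a=(-0.5, 0.0), b=(0.25, 0.25), d=0, u=np.ones(100))
```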
- the first step is to predict the output value from each of the models given the measured dataset D and the set of model functions G (w) , each described by a set of parameters a, b.
- P(a, b|D) is a conditional probability for the model G (w) with parameters a and b, given the dataset D.
- P(D|a, b) is the likelihood.
- P(a, b) is the prior distribution of the parameters (a, b) of the model, and their product is the posterior distribution, as was described hereinabove with respect to the steady-state case. All of the above-noted equations apply to the dynamic case. The only difference is that the input is now u(t) and the parameters of the model are (a, b), as compared to ⁇ .
- each of the N w ensembles of G (w) models must have the derivative thereof determined by the following relationship: ∂y q,k (w) (t)/∂u p (t)   (21)
- FIG. 11 there is illustrated a block diagram depicting the operation illustrated in Equation (22) for a dynamic model to determine the average derivative.
- This basically parallels the operation of the embodiment of FIG. 6, described hereinabove with respect to steady-state models.
- there are provided a plurality of dynamic models 1100 , corresponding to the static models 200 described hereinabove, which are operable to provide a predicted value y (w) (t).
- the output of each of these models 1100 can be input to a weighted average block 1102 to provide the value ⁇ y(t)>.
- to determine ∂⟨y(t)⟩/∂u(t), it is necessary to compute the derivative as described hereinabove in the optimization method (1).
- These derivatives for each of the models are then input to an averaging block 1106 to take the weighted average thereof, and then to a summing block 1108 to provide the average derivative ∂⟨y(t)⟩/∂u(t).
- the average derivative is determined for the dynamic model, then this can be optimized, utilizing the optimization method (1) or the optimization method (2) described hereinabove, except that a dynamic model is used. This is illustrated in FIG. 12 which parallels FIG. 7 for the static model.
- the derivatives of each of the models output from the derivative block 1104 are first subjected to the weighted average in a block 1200 , representing block 1106 and 1108 , and then the average of the derivatives is then multiplied by the optimization objective J D for the dynamic model condition.
- each of the models 1100 has the derivative thereof determined by the derivative block 1104 .
- each of the models has the derivative thereof optimized to provide an estimated control trajectory û (w) (t)={û (w) (t+k min ), . . . , û (w) (t+k max )} (û k (w) (t)) for each value of w.
- a weighted average block 1304 which, as described hereinabove, is operable to perform some type of algorithm or selection process thereon. This can be any type of selection process for the model. It could be an averaging operation, or it could be a selection process. This provides a single control trajectory û(t) value. Given the control trajectory û(t), the first value thereof is input to the plant.
- the dynamic gain k d of the dynamic model can then be forced to match the gain of the steady-state model by scaling the b i parameters.
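Forcing the dynamic gain to the steady-state gain is a one-line computation once the gains are written out. The sketch below assumes the second-order model form y(k) = −a1·y(k−1) − a2·y(k−2) + b1·u(k−1−d) + b2·u(k−2−d), for which the dynamic gain is K d = (b1 + b2)/(1 + a1 + a2); the numbers are hypothetical.

```python
import numpy as np

def scale_b_to_gain(a, b, K_ss):
    """Force the dynamic model's steady-state gain to match the
    steady-state model's gain K_ss by uniformly scaling the b_i.

    Multiplying every b_i by K_ss / K_d leaves the dynamics (the
    poles, set by the a_i) untouched while moving the gain to K_ss.
    """
    a1, a2 = a
    b = np.asarray(b, dtype=float)
    K_d = b.sum() / (1.0 + a1 + a2)   # current dynamic gain
    return b * (K_ss / K_d)

# Original gain: (0.25 + 0.25) / (1 - 0.5 + 0) = 1.0; after scaling,
# the new gain (b1' + b2') / (1 + a1 + a2) equals K_ss = 3.0.
b_new = scale_b_to_gain(a=(-0.5, 0.0), b=(0.25, 0.25), K_ss=3.0)
```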
- FIG. 14 there is illustrated a block diagram of the optimizer 16 utilizing a steady-state model to fix the gain of the dynamic model 1100 .
- the optimizer, as described hereinabove, is divided into a plurality of blocks for determining the derivative ∂y (w) (t)/∂x(t). These blocks are referred to with the reference numeral 1400 . There are provided “w” of these blocks 1400 , the output of each of these blocks input to a product block 1405 , the outputs thereof summed in a summing block 1406 to provide on the output thereof the average or weighted derivative ∂⟨y(t)⟩/∂x(t).
- Each of the blocks 1400 has associated therewith the dynamic model 1100 with the input u(t) and the output y(t) input to the derivative block 1104 .
- a steady-state model 200 is provided for each of the dynamic models 1100 in each of the blocks 1400 . Therefore, each of the models 200 is a stochastically related model F (w) x(t) which has an associated steady-state gain K (w) ss . This is a known gain for that model.
- a gain modulator 1410 is provided for determining the dynamic gain k (1) d .
- the gain of a single steady-state model 1500 is utilized to provide a single steady-state gain K ss for all of the gain modulation modules 1410 , such that only one steady-state model is required.
- This is not a progressive model.
- the dynamic model for each of the blocks 1400 could be the same with the steady-state models 200 being progressive stochastically-related models. Therefore, there would be a plurality of blocks 1502 which contained only the dynamic model 1100 , the derivative block 1104 and the gain modulator 1410 .
- the steady-state gain K ss of the model 1500 would be input to each of the blocks 1502 .
Abstract
An optimization system is provided utilizing a Bayesian neural network calculation of a derivative wherein an output is optimized with respect to an input utilizing a stochastical method that averages over many regression models. This is done such that constraints from first principles models are incorporated in terms of prior distributions.
Description
- The present application is a Continuation Application of application Ser. No. 09/290,791, filed Oct. 6, 1998, entitled: BAYESIAN NEURAL NETWORK FOR OPTIMIZATION, which is a Continuation-in-Part of, and claims priority in, U.S. Provisional Patent Application Serial No. 60/103,269, entitled Bayesian Neural Networks For Optimization and Control, and filed Oct. 6, 1998 (Attorney Docket No. PAVI-24,473).
- The present invention pertains in general to neural networks for use with optimization of plants and, more particularly, to the use of Bayesian-trained neural networks for optimization and control.
- In general, modeling techniques for a plant involve the generation of some type of model. This is typically done utilizing a single model, either a linear model or a non-linear model. However, another technique of generating a model is to utilize a plurality of models that can be utilized to define a predicted vector output y(t) of values y1(t), y2(t), . . . , yq(t) as a function of an input vector x(t) of values x1(t), x2(t), . . . , xp(t). For the purposes of this application, a vector in the text shall be defined in bold and in equation form shall be defined with an overstrike arrow.
- Given a set n of measured process data points:
- D=({right arrow over (x)} i , {right arrow over (y)} i)=({right arrow over (x)}(1), {right arrow over (y)}(1)), ({right arrow over (x)}(2), {right arrow over (y)}(2)), . . . , ({right arrow over (x)}(n), {right arrow over (y)}(n)) (0001)
- and assuming that an underlying mapping with the following relationship:
- {right arrow over (y)}=F({right arrow over (x)}) (0002)
- exists, a stochastical method for generating y(t) with respect to x(t) can be defined by averaging over many (non-linear) regression models F(w). Given x(t), Fx(t) is approximated via a stochastic neural network training algorithm (non-linear regression) to the set of functions F(w)x(t), with “w” being the index for the number of models, by fitting to the dataset (x(t), y(t)) in the dataset D. However, this only provides a forward predictive model and does not facilitate the use thereof for optimization or control purposes.
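The forward averaging described above can be sketched as follows, where small tanh networks with stochastically perturbed weights stand in for the regression models F(w) (the network sizes, perturbation scale, and number of models are illustrative assumptions only):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_model(w1, w2):
    # a tiny one-hidden-layer regression model y = tanh(x W1) W2
    return lambda x: np.tanh(x @ w1) @ w2

# hypothetical ensemble of N_w = 50 stochastically related models F^(w)
base_w1 = rng.normal(size=(3, 5))
base_w2 = rng.normal(size=(5, 2))
models = [make_model(base_w1 + 0.01 * rng.normal(size=base_w1.shape),
                     base_w2 + 0.01 * rng.normal(size=base_w2.shape))
          for _ in range(50)]

x = np.array([0.5, -1.0, 0.25])           # one input vector x(t)
# stochastic prediction: average y(t) over the ensemble of models
y_avg = np.mean([F(x) for F in models], axis=0)
```

As the text notes, this alone is only a forward predictive model; the optimization and control machinery built on it is described in the sections that follow.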
- The present invention disclosed and claimed herein comprises a method for optimizing a system in which a plant is provided for optimization. A neural network is also provided, having an input layer for receiving inputs to the plant, an output layer for outputting predicted outputs, and a hidden layer for storing a learned representation of the plant for mapping the input layer to the output layer. A method for training the neural network utilizing a stochastical method of the Bayesian type is provided.
- In another aspect of the present invention, a method utilizing the network in an optimization mode in feedback from the output of the plant to the input of the plant to optimize the output with respect to the input via the stochastical Bayesian method is provided.
- For a more complete understanding of the present invention and the advantages thereof, reference is now made to the following description taken in conjunction with the accompanying Drawings in which:
- FIG. 1 illustrates a block diagram of the present invention utilizing the optimizer of the disclosed embodiment;
- FIG. 2 illustrates a diagrammatic view of the optimizer of FIG. 1;
- FIG. 3 illustrates a block diagram of the combination of the models utilizing a weighted average;
- FIG. 4 illustrates a diagram depicting the training operation for the network;
- FIG. 5 illustrates a process flow for the training operation of the multiple models;
- FIG. 6 illustrates a block diagram of the optimizer wherein a single optimized value is determined averaged over all of the models;
- FIG. 7 illustrates a block diagram depicting the optimizer wherein each model is optimized and then the optimized values averaged;
- FIG. 8 illustrates a diagram for projecting a prediction over a horizon for a dynamic model in accordance with the disclosed embodiment;
- FIG. 8a illustrates a block diagram of a simplified embodiment of FIG. 8;
- FIG. 8b illustrates a distribution plot of the estimated x(t) values;
- FIG. 9 illustrates a diagrammatic view of the optimization process for control;
- FIG. 10 illustrates a block diagram of the plant utilizing a multiple model feedback control network for predicting a trajectory over the control horizon;
- FIGS. 11 and 12 illustrate block diagrams of the implementation of the network for a dynamic model;
- FIG. 13 illustrates a block diagram of the dynamic model corresponding to FIG. 8;
- FIG. 14 illustrates a block diagram of the dynamic model utilizing a steady state model to fix the gain; and
- FIG. 15 illustrates an alternate embodiment of the dynamic model of FIG. 14.
- Referring now to FIG. 1, there is illustrated a block diagram of the system of the disclosed embodiment for optimizing/controlling the operation of a plant. A
plant 10 is illustrated having a vector input x(t) and a vector output y(t). The plant 10 is operable to receive the inputs x(t), perform the process and generate the outputs y(t). These inputs x(t) are variables that can be manipulated. In addition to these inputs, there are also inputs which cannot be measured and also some inputs that cannot be manipulated. These are not shown. From the standpoint of the outputs y(t), there are a number of types of outputs—there are measurable outputs and immeasurable outputs. In general, the output y(t) illustrated on an output 12 comprises all of the outputs, both measurable and immeasurable. - The values for operating the plant in the form of the x(t) variables are generated by a
controller 14. The controller 14 generates these values in response to information received from an optimizer 16. The optimizer 16 is operable to receive all or select ones of the outputs from the plant 10 and is also operable to receive the current inputs x(t). Although not shown, the optimizer 16 can also receive such things as immeasurable inputs and inputs that cannot be manipulated. - The
optimizer 16 is operable to provide an optimized set of input variables {circumflex over (x)}(t) input to the controller 14. This optimizer 16 operates to generate optimized values by utilizing a set of constraints and also some type of optimization objective. The optimization objective defines the desired result of the optimization operation. For example, it may be that certain economic costs need to be considered when optimizing the operation of the plant. Constraints are basically the restrictions placed upon the optimization process, such as gain, maximum range of values for the inputs x(t), etc., which will be described in more detail hereinbelow. - Referring now to FIG. 2, there is illustrated a block diagram of the
optimizer 16. The optimizer 16 includes a plurality of models 200 each operable to map the values x(t) through a representation of the plant 10 to provide a predicted value y(w)(t) on the output where “w” is the index of the model. The x(t) variables or values are mapped through the representation as follows:
- This will provide a plurality of the
models 200, each in general being different. As will be described hereinbelow, each of these models is trained to provide a stochastical method for optimization of the output with respect to the input by averaging over many (non-linear or linear)progression models 200. Each of themodels 200 is related to the other of themodels 200 by a stochastical relationship. During training, as will be described hereinbelow, themodels 200 are related to each other such that each of the models have parameters that are stochastically related to each other, the models defined by the relationship: - {right arrow over (y)}=F (w)({right arrow over (x)}) (0004)
- There are “w” of these models, such that there are also y(w)(t) outputs provided, one for each of the models F(w)x(t). The index “w” indicates these stochastically related models. The optimization operation utilizes these models in conjunction with the optimization objectives and constraints to generate an optimized value {circumflex over (x)}(t) which is averaged over all of the
models 200. - The method of optimizing an output with respect to inputs described hereinabove, with the option of being subject to constraints from first principles models, provides some advantages over the standard neural network methods primarily by giving rise to high quality solutions in the system identification phase in a parameter insensitive way that avoids overfitting. Furthermore, by having a clean statistical interpretation, the approach easily lends itself to estimating confidence levels and related quantities.
- The prediction operation will be described for the stochastic method in a more detailed manner in the following. The data is contained in a dataset D with an index n representing the portion thereof that is associated with training. Indices exceeding n (n+1, n+2, . . . ) refer to data not included in the training process, this being the testing data, and no index refers to an arbitrary data point. Subscripted values xp and yq refer to an arbitrary component of an x(t) or y(t) vector, respectively. Also in the formalism herein, it will be assumed that the model outputs y(t) are subject to Gaussian noise. This choice is just for pedagogical reasons—the method is valid for any distribution, and it should be understood that other distributions could exist.
- {right arrow over (y)} (w)(t)=F (w)({right arrow over (x)}(t), {right arrow over (ω)} (w)) (0005)
- P({right arrow over (ω)}|D)∝P(D|{right arrow over (ω)})P({right arrow over (ω)}) (0006)
- P(D|{right arrow over (ω)})=Πi=1 n P({right arrow over (y)} (i) |{right arrow over (x)} (i), {right arrow over (ω)})
- is the likelihood, P(ω) is a prior distribution of the parameters or model weights ω, and their product is the posterior distribution. Assuming (not necessary) also a Gaussian distribution for the likelihood distribution of the weights of the model, the average predicted output relationship is as follows:
- <{right arrow over (y)} (n+k)>=Σw=1 N w F (w)({right arrow over (x)} (n+k))e −E({right arrow over (ω)} (w) , α, β, D) (0007)
- where:
- E({right arrow over (ω)}, α, β, D)=βΣi=1 n({right arrow over (y)} (i) −F({right arrow over (x)} (i), {right arrow over (ω)}))2+αΣjωj 2+H({right arrow over (ω)}, D) (0008)
- the first term representing the summed square error over the dataset D with n being the number of patterns, and the second term corresponding to the prior penalizes large weights (a regularizer). The third term, also part of the prior, is written as a generic constraint that could include, for instance, fitting on different levels to first principles knowledge. The value of i ranges over the dataset D, with n being the number of patterns.
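A minimal sketch of such an error value E, with the three terms described above (summed square error, quadratic weight penalty, and a generic constraint term H); the linear model, parameter values, and function names are illustrative assumptions only:

```python
import numpy as np

def error_E(model, weights, X, Y, alpha, beta, H=0.0):
    # beta * summed square error over the dataset D
    # + alpha * weight-decay prior (penalizes large weights)
    # + H, a generic constraint term (e.g. first-principles knowledge)
    sse = np.sum((Y - model(X, weights)) ** 2)
    return beta * sse + alpha * np.sum(weights ** 2) + H

def linear_model(X, w):
    # trivial stand-in for a model F(x, w)
    return X @ w

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Y = np.array([1.0, 2.0, 3.0])
w = np.array([1.0, 2.0])       # fits the data exactly, so SSE = 0
E_val = error_E(linear_model, w, X, Y, alpha=0.1, beta=1.0)
# E_val = 0.1 * (1^2 + 2^2) = 0.5, entirely from the weight penalty
```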
- Referring now to FIG. 3, there is illustrated a block diagram of the average predicted output y(n+k)(t). Each of the
models 200 has the output thereof input to a weighting block 302 which applies the weighting function e−E(ω, α, β, D) to the output of each of the models 200. The output of each of the weighting blocks 302 is then input to a summing block 304 to provide the weighted average y(n+k). - In the situation wherein the models are utilized in a feedback mode, i.e., for the purpose of predicting input values, and which feedback is utilized for control, the gain is an important factor. Therefore, during the training of the model, it is necessary to take into account the gain as one constraint on the training. This is reflected in the term H(ω, D) which, when gain constraints are considered, results in the following relationship:
- H({right arrow over (ω)}, D)=Σi f(∂y q (w)(t)/∂x p(t)|{right arrow over (x)}(t)={right arrow over (x)} i (t)) (0009)
- where f( ) measures whether the argument satisfies the known constraint, and the index i in the sum indicates the xi(t) vector point at which the derivative is evaluated. The adjustable parameter families α and β are drawn from fixed prior distributions described by α0 and β0, and also it is noted that the derivative (dyq(t)(w)/dxp(t)) is the derivative for the model F(w)x(t) (in Equation (5)) summed over the (q,p) matrix of inputs/outputs. This provides the (q,p) matrix of gains for each model. It is noted that minimizing the error value E corresponds to standard neural network learning that would give rise to one solution (or network, or model) ω=ω*, these being the weights necessary to minimize the error.
- The models F(w)x(t) are generated for estimating <y(n+k)(t)> (in Equation (7)) by making moves in the high dimensional parameter space (ω, α, β). Since the distribution over F(w)x(n+k)(t) is averaged over a strongly peaked distribution, this is typically done using importance sampling Monte Carlo algorithms, such as the Langevin method, Metropolis-type algorithms with Hybrid Monte Carlo, and/or tempering extensions to ensure that the entire search space is properly probed. (A wide variety of other methods are of course possible and these noted methods are not considered exhaustive.) The error E for this model is monitored as the parameter updates proceed. Initially, its long-time average drops and then it flattens off to a “thermalized” static behavior, from which configurations are used to compute <y(n+k)> in Equation (7). In other words, the summation over Nw in Equation (7) is restricted to an interval [Nmin, Nmax], where Nmin (w=1) is given by the onset of static behavior, and Nmax is set by the required accuracy of <y(n+k)> and, in the case of process optimization, which is described hereinbelow, by the available computational time. This provides an ensemble of Nw=Nmax−Nmin models F(w)x(t) trained to be valid for each of the n data records over the dataset D. In the above, this ensemble was used for predicting the outputs y(n+k)(t) when only the corresponding inputs x(n+k)(t) are available (Equation (7)).
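A toy sketch of generating such an ensemble with a random-walk Metropolis sampler, keeping only configurations after an assumed "thermalization" onset; the quadratic energy function stands in for E(ω, α, β, D), and the step size and counts are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_ensemble(energy, w0, n_steps, n_min, step=0.05):
    # random-walk Metropolis over weight space, targeting exp(-energy(w));
    # configurations after the "thermalized" onset n_min form the ensemble
    w, e = w0, energy(w0)
    ensemble = []
    for i in range(n_steps):
        w_new = w + step * rng.normal(size=w.shape)
        e_new = energy(w_new)
        # accept downhill moves always, uphill moves with prob exp(e - e_new)
        if e_new < e or rng.random() < np.exp(e - e_new):
            w, e = w_new, e_new
        if i >= n_min:
            ensemble.append(w.copy())
    return ensemble

# hypothetical quadratic error surface standing in for E(w, alpha, beta, D)
energy = lambda w: np.sum(w ** 2)
models = metropolis_ensemble(energy, np.ones(4), n_steps=500, n_min=200)
# len(models) = N_max - N_min = 300 weight configurations
```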
- Referring now to FIG. 4, there is illustrated a diagrammatic view for the “thermalized” static behavior utilizing the training operation and, also referring to FIG. 5, there is illustrated a diagrammatic flow for the training operation. Both of FIGS. 4 and 5 will be referred to. With specific reference to FIG. 5, the training operation is initiated by a set of weights ω1, represented by a
block 502. These weights are utilized to generate a first model F(1)x(t), represented by a block 504. The model is then tested in a test block 506 by utilizing the test data in the portion of the dataset n+k, the data above the training data n. This generates a first error value E1. This data is then utilized to generate the next set of weights ω2. The above-noted directed random search techniques, such as the Langevin method, are utilized to generate these new weights by utilizing the weights from the previous model in the progression as the starting point. Therefore, they will have the relationship ω2=f(D, ω1). This is represented in a block 508. These weights are utilized to train a second model F(2)x(t), as represented in block 510, this representing a progression model. This is a model that has a stochastic relationship to the first model in block 504. This model is subsequently tested in a block 512 to determine the new error E2 for that model. A new set of weights ω3 is generated by the stochastic method, as represented by a block 514. This will be utilized to generate a third model F(3)x(t), as represented by a block 516. This will also be tested in a block 518 to generate an error value E3.
blocks block 520. This will be represented by the relationship ωmin=f(D, ωmin−1) as represented by theblock 520. This will be utilized to generated the model F(min)x(t), which is then tested at ablock 524 to generate the error value Emin. This represents the first model for the value Nmin such that for the next Nw−1 models up to the value of Nmax, there will be Nw models. The model associated with Nmax is represented inblock 526, which model is then tested inblock 528 to generate the error value Emax. In general, as represented by the diagram in the diagrammatic view of FIG. 4, there is a “sliding” group of Nw adjacent models maintained. Depending upon the processing capabilities of the system, this number can be any range. In the disclosed embodiment, as an example only, five hundred models are utilized. This results in the earliest model over five hundred being discarded such that only five hundred models are maintained. The training continues until the model inblock 522 and the model in block 562 are both within the region represented byreference numeral 404 in FIG. 4. Therefore, all of the models will be in the “thermalized” region. - In addition to being able to compute y(t)=F(w)x(t) from x(t) for each model in the ensemble (from which the final average <y(t)> is computed), another essential computation for most process applications is to be able to compute the (q, p) matrix of derivatives ∂yq (w)(t)/∂xp(t) for each model F(w)x(t) in the ensemble, at any vector point x(t), the (q,p) matrix representing the matrix of derivatives indexed by the inputs and outputs for a multi-input/multi-output system. These derivatives are necessary for at least two fundamental purposes:
- (1) sensitivity analysis, and
- (2) optimization and/or control
- Each of the Nw ensembles of F(w) models is a continuous function; therefore each derivative ∂yq (w)(t)/∂xp(t) may be evaluated at any vector point x(t) by the elementary rules of calculating derivatives.
-
- ∂<y q(t)>/∂x p(t)=Σw=1 N w [∂y q (w)(t)/∂x p(t)]e −E({right arrow over (ω)} (w) , α, β, D) (0010)
- Πi=1 n P(y (i) |x (i), {right arrow over (ω)})P({right arrow over (ω)}) (0011)
- In Bayesian terminology, any such estimate is called the MAP (maximum a posteriori) estimate. In order for this MAP estimate to significantly reduce computing time, it would be necessary to have access to the ensemble of models already sorted in posterior magnitude order: a sorted index to the models at the completion of the training procedure could quickly and easily be created. Since this would be done only once, the required computing time would be insignificant.
- Referring now to FIG. 6, there is illustrated a block diagram depicting the operation illustrated in Equation (10), the
models 200 each having outputs thereof input to a weighted average block to provide the weighted average output <y(t)>. In order to provide the derivatives, the values on the output of each of the models 200 must be back-propagated through the model to determine the derivative thereof, or any other technique that will provide that derivative may be utilized. The backpropagated method is basically a recursive approach. This is represented by a derivative block 600 for each of the models 200. The derivative block 600 is operable to receive the x(t) value and the output of the associated model 200 y(w)(t). The output of the derivative block 600 is ∂y(w)(t)/∂x(t). Each of the derivatives output by each of the blocks 600 are input to a weighting block 604 which is operable to provide a weight to each of the derivatives, which are then summed in a summing block 606. This provides the weighted average of the derivative ∂<y(t)>/∂x(t) over all of the models.
-
- <∂<y q(t)>/∂x p(t)>D=(1/n)Σj=1 n ∂<y q(t)>/∂x p(t)|{right arrow over (x)}(t)={right arrow over (x)} (j)(t) (0012)
- In addition, statistics over the dataset other than the average can often yield useful information, such as the median, the standard deviation, and so forth.
- Optimization/Control
- Process optimization ordinarily refers to determining the optimal input vector {circumflex over (x)}(t) that will minimize a defined objective function J while satisfying any defined constraint functions Cm. J is ordinarily a function of the process model and its variables, which expresses the desired characteristics of process operation, output product characteristics, and so forth. The Cm functions are more often (though not always) a function only of the input variables, which express relationships among the process variables which must hold for physical or desired operational reasons; for example, a mass-balance constraint might dictate that x1=x2+x3. A valid solution of a constrained optimization problem always satisfies the Cm relationship, and minimizes J as well as possible within those constraints.
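As an illustration of constrained optimization with the mass-balance constraint mentioned above, using SciPy's SLSQP routine as a stand-in for the commercial SQP/GRG codes discussed hereinbelow (the quadratic objective J is arbitrary):

```python
import numpy as np
from scipy.optimize import minimize

# arbitrary quadratic objective J standing in for a process objective
J = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 0.5) ** 2 + (x[2] - 0.5) ** 2

# mass-balance constraint from the text: x1 = x2 + x3
constraints = [{"type": "eq", "fun": lambda x: x[0] - x[1] - x[2]}]

res = minimize(J, x0=np.zeros(3), method="SLSQP", constraints=constraints)
# res.x satisfies x1 = x2 + x3 while minimizing J as well as possible
# within that constraint (here x is approximately [5/3, 5/6, 5/6])
```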
- “Optimization” typically means “steady-state optimization” (finding an optimal point in operating space using a steady-state model), while “control” typically means “dynamic control” (finding an optimal trajectory in operating space using a dynamic model). Both are “optimization problems.”
- In optimization or control, an optimization algorithm uses the process model to find the optimal {circumflex over (x)}(t), given the objective J and constraint Cm functions. Neural network models are in general nonlinear, so nonlinear optimization algorithms are used. Unconstrained or constrained optimization is performed depending upon whether or not any constraint functions are defined. Mathematically, unconstrained and constrained nonlinear optimizations are very different, and different optimization algorithms are used. Henceforth the general (and most typical in industrial processes) case of constrained nonlinear optimization will be assumed.
- Nonlinear constrained optimization algorithms that make use of derivatives generally execute much faster than those that do not. A variety of such nonlinear constrained optimization programs are commercially available. The most popular codes are based on the Sequential Quadratic Programming (SQP) or the Generalized Reduced Gradient (GRG) methods.
- A prototypical objective function is J=Σk(<y(n+k)>−y(n+k))2, i.e., the sum over all non-training datapoints of the squared difference between a desired (setpoint) output value y(n+k)(t) and the Bayesian model output <y(n+k)(t)>. A more general example of an objective function is one representing the (negative, as the objective function is to be minimized) profit of the process (by associating prices and costs with the input and output variables). One possibility would be to then use the resulting output variable values as setpoints for those output variables.
- ∂J/∂x p(t)=Σq[∂J/∂<y q(t)>][∂<y q(t)>/∂x p(t)] (0013)
- because the output variables <yq(t)> are referenced explicitly, the first factor ∂J/∂<yq(t)> is computable by the elementary rules of differentiation for each input xp(t). The second factor, however, involves the Bayesian ensemble of models, which relate <yq(t)> to xp(t), thus representing the sum over all models.
- ∂<y q(t)>/∂x p(t)=Σw=1 N w [∂y q (w)(t)/∂x p(t)]e −E({right arrow over (ω)} (w) , α, β, D) (0014)
- In each method, any nonlinear constrained optimization code, such as an SQP or GRG code, may be used to perform the optimization. Any such code searches the x-space for the x-value that will minimize the given J while satisfying the given Cm by iteratively passing to user-supplied subroutines an x(t) value (in general, different at each iteration) and receiving back the objective and constraint functions, and the derivatives, all evaluated at x(t). The derivatives may of course be computed in full or may be estimated to any degree of accuracy, as described hereinabove.
- There are at least two fundamentally different ways that optimization over a Bayesian ensemble of models may be carried out. Roughly speaking, method (1) performs a single optimization over the entire ensemble, and method (2) performs multiple optimizations, one for each model in the ensemble, and when finished combines the results.
- Optimization Method (1):
- In this method, the optimization routine performs a single optimization over all of the models in the ensemble and returns a single optimal value for {circumflex over (x)}(t). When the optimizer requests the values of the functions and derivatives evaluated at a point x(t), a user-supplied subroutine must compute the derivative values ∂<yq(t)>/∂xp(t) by applying the chain rule of Equation (13) to the entire ensemble of models. After some number of iterations of such subroutine calls, the single optimization procedure terminates with an optimal {circumflex over (x)}(t).
- Referring now to FIG. 7, there is illustrated a block diagram of the first optimization method. In this method, the
models 200 are provided, each for receiving the input value x(t) and outputting the output value y(t). There are provided Nw models F(w)(x(t)). The output of each of the models 200 is input to the block 600 which is operable to determine the derivative on the output thereof. Each of the derivatives from each of the blocks 600 for each of the models 200 are input to a weighted average block 702 which is operable to provide the weighted average of the derivative as set forth in Equation (10). This is then subjected to the chain rule of Equation (13) via a block 706 to provide the single optimal value for {circumflex over (x)}(t). Note that this is performed at each value of x(t), as noted hereinabove with respect to Equation (13). Also, it is noted that the weighted average operation is performed prior to applying the chain rule. To summarize method (1), the probability distribution P(ω|D) is provided for a single model with the other probability distribution then sampled utilizing the optimizer.
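Method (1)'s ordering, i.e., forming the weighted-average derivative over the ensemble before applying the chain rule, can be sketched as follows (the gain matrices, error values, and ∂J/∂<y> vector are hypothetical):

```python
import numpy as np

def ensemble_derivative(dy_dx, E):
    # posterior-weighted average of the per-model (q, p) gain matrices,
    # with weighting factors exp(-E^(w)) normalized over the ensemble
    w = np.exp(-np.asarray(E, dtype=float))
    w /= w.sum()
    return np.tensordot(w, np.asarray(dy_dx), axes=1)

def dJ_dx(dJ_dy, dy_dx, E):
    # chain rule: dJ/dx_p = sum_q (dJ/d<y_q>) * (d<y_q>/dx_p),
    # applied after the weighted average is formed
    return dJ_dy @ ensemble_derivative(dy_dx, E)

# hypothetical 2-output / 3-input system with three ensemble members
dy_dx = [np.ones((2, 3)), 2.0 * np.ones((2, 3)), np.ones((2, 3))]
grad = dJ_dx(np.array([1.0, 0.5]), dy_dx, E=[0.0, 0.0, 0.0])
```

An optimizer such as SLSQP would call a routine like `dJ_dx` at each iterate x(t) it proposes.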
- In this method, each model in the Bayesian ensemble is optimized separately, yielding an optimal {circumflex over (x)}(w)(t) for each model F(w)x(t). During the optimization of each model, an optimization process for a model F(w)x(t) requests function and derivative values at a point xp(t). The functions and derivative values returned are for that single model only, that is, ∂yq(t)(w)/∂xp(t). The optimization of each
model 200 terminates with an optimal value {circumflex over (x)}(w)(t) for that model, such that there are Nw such optimized values. At the completion of all optimizations, some operation, such as a weighted average, is performed over the set of {circumflex over (x)}(w)(t) values to output a single optimal {circumflex over (x)}(t), e.g.: - In addition, the distribution of {circumflex over (x)}(t) values may hold useful information for process operation in addition to the single averages. It should be understood that combinations of these two fundamental optimization methods are to be considered and that the disclosed embodiment is not exhaustive.
- Referring now to FIG. 8, there is illustrated a block diagram depicting the second method of optimization. In this diagrammatic view, the
models 200 andderivative blocks 600 are provided for generating the derivatives for each of the associatedmodels 200. The output of each of thederivative blocks 600 is, as distinguished from the embodiment of FIG. 4, input to ablock 800, wherein that value is optimized over the output such that the optimized value ∂J/∂y(w)(t) provides the value {circumflex over (x)}(w)(t) for each model. These are then processed through a weightedaverage block 804, which is operable to implement Equation (11) to provide a single optimal {circumflex over (x)}(t). To summarize this optimization method, the probability distribution P({circumflex over (x)}, {right arrow over (ω)}|D, J, C) provides the probability distribution over the {circumflex over (x)}(t), given D, J and possibly C. - Referring now to FIG. 8a, there is illustrated a simplified block diagram of the embodiment of FIG. 8, wherein a
single block 810 represents the combination of themodels 200,derivative block 600 and multiplication blocks 800. This, as noted hereinabove, is operable to provide a plurality of estimated values for eachmodel 200 in the form of {circumflex over (x)}(1)(t), wherein there are w estimated values for the input, one for each model. These are all input to ablock 812 which block is operable to provide some type of selection or averaging operation, this being similar to theblock 804 which performs specifically a weighted average. However, it should be understood that the selection operation inblock 812 can utilize any criteria. For example, it could be a weighted average as described hereinabove, or it could be some type of selection criteria that selected the best single value. FIG. 8b illustrates a plot of the estimated input values as a function the index value where it can be seen that there is a Gaussian distribution of such values. The important aspect of thisblock 804 is that any type of selection criteria can be utilized to provide some value that is a function of all of the estimated values or some selection criteria that eliminates certain ones of the values and selects other values for an averaging type operation. In such a manner, all of the estimated values need not be utilized for the selection criteria. - Dynamic Models
- The above discussion has been described with respect to steady-state process models. The indices k described hereinabove describe new data (n+k), whereas a dynamic system utilizes the index k to represent time intervals, which need not represent equally spaced time intervals. In general, a trajectory of output values {y(t+1) . . . y(t+kmax)}(yx(t)) is predicted from a current y(t) for each new “setpoint” y(t) along with the a corresponding trajectory of control input values (corresponding to the x(t) inputs for the steady state process) necessary to provide such yx(t), which trajectory takes a predetermined number of the time intervals to achieve. The trajectory of control input values is defined for k=1 to k=kmax as {u(t+1) . . . u(t+kmax)}(uk(t)). The system will then make a move along this trajectory for a first interval of time and even additional intervals of times by predicting the new u(t) values necessary to make such a move before the next dynamic prediction is made. This dynamic prediction could be made at each move interval such that a new trajectory for the setpoint is predicted from a new control input value u(t). As such, the new “setpoint” optimization could be performed at each time interval.
- When using the above-described optimization/control procedures for dynamic models, which are iterated out to the control horizon in time, the optimization is performed over the entire trajectory (time interval (t+1, t+kmax), where (t+1) represents the current time interval). The first step in the optimal trajectory is taken by the plant, and the whole optimization is begun again from that point. This is the generic model predictive control picture. The optimization over the trajectory is subject to an objective function and a variety of constraints, such as “hard” constraints on the trajectory which are wide at (t+1) and converge (with some specified tolerance) to the output setpoint at (t+kmax). In concept, the control optimization result is the optimal trajectory specified by values at each time increment. However, most model predictive control packages, including that described in U.S. patent application Ser. No. 08/643,464, filed May 6, 1996, and entitled “Method and Apparatus for Modeling Dynamic and Steady-State Processes for Prediction, Control and Optimization,” incorporated by reference herein, use “blocking” to speed up the optimization, i.e., the control horizon is divided into t intervals, usually spaced closer near (t+1) and wider near (t+kmax) (logarithmic spacing). The number of independent optimization variables is thus only m times the number of control input variables, which shortens computation time dramatically with (almost always) very little difference in the first increment of the optimal trajectory, and makes it computationally tractable. The same choice between the two fundamentally different optimization methods described above apply just as clearly in the dynamic case.
- Referring now to FIG. 9, there is illustrated a diagrammatic view of the trajectory of yk(t) determined in the optimization process for control. In this process, the time horizon is divided up into m increments or intervals, these being equal or unequal time segments. It can be seen that the dynamic prediction for the value of yk(t) is made along the increments from the current position in time (t+1) and extending out to the horizon at (t+kmax). This final y(t+kmax) value is often equal (within some tolerance) to the output setpoint, provided the optimal change to u(t) was made.
- Referring now to FIG. 10, there is illustrated a block diagram for the plant 10 utilizing a controller network 1002 in feedback. This is very similar to the optimizer 16 of FIG. 1, with the exception that it predicts a trajectory utilizing the cost function J, the constraints Cm, and various setpoints. The setpoints are typically in the form of a desired move in the output vector y(t). The controller network 1002 will receive as inputs select ones of the outputs y(t), and project the trajectory out over the control horizon in time along with the u(t) values necessary to achieve such trajectory. This will typically be predicted in time intervals such that a dynamic plant controller 1004 will be able to generate new input values u(t).
- In utilizing dynamic models, the models can be of differing types. In the disclosed embodiment, the dynamic model is a linear model which is defined by the following relationship:
- yk = G(yk−1, yk−2, uk−1−d, uk−2−d; a, b)  (16)
- where:
- yk−1, yk−2 are outputs at different points in time;
- uk−1−d, uk−2−d are input values at different points in time;
- d is a delay value; and
- a, b are the parameters of the linear model.
- One example of the linear model is set by the following relationship:
- yk = −a1yk−1 − a2yk−2 + b1uk−1−d + b2uk−2−d  (17)
- Although Equation (17) is set forth as a linear equation with a linear model, additional non-linear terms can be added to this equation to result in a non-linear model. However, the parameters of this model are set by the a's and b's, i.e., the parameter values of the model. This also pertains to the gain, as described in detail in U.S. patent application Ser. No. 08/643,464, which was incorporated by reference hereinabove.
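The behavior of the linear model of Equation (17) can be sketched by iterating it directly. The parameter values below are illustrative only; with this stable choice (poles at 0.4 and 0.5) the step response settles at the model's steady-state gain (b1 + b2)/(1 + a1 + a2) = 1.0:

```python
# Minimal simulation of the second-order linear model of Equation (17):
#   y_k = -a1*y_{k-1} - a2*y_{k-2} + b1*u_{k-1-d} + b2*u_{k-2-d}
# Parameter values are illustrative, not taken from the patent.
def simulate_arx(a, b, d, u, steps):
    """Iterate the model from rest; returns predicted outputs y_0 .. y_{steps-1}."""
    a1, a2 = a
    b1, b2 = b
    y = [0.0, 0.0]  # y_{-2}, y_{-1}: model starts at rest
    for k in range(steps):
        u1 = u[k - 1 - d] if k - 1 - d >= 0 else 0.0
        u2 = u[k - 2 - d] if k - 2 - d >= 0 else 0.0
        y.append(-a1 * y[-1] - a2 * y[-2] + b1 * u1 + b2 * u2)
    return y[2:]

# Unit-step response with delay d = 0
y = simulate_arx(a=(-0.9, 0.2), b=(0.2, 0.1), d=0, u=[1.0] * 100, steps=80)
```

The same iteration with non-linear terms appended to the update would give the non-linear variant mentioned above.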
- When identifying the stochastically-related models via the various techniques described hereinabove, the disclosed one being the Bayesian technique, the models are trained in substantially the same way as the non-linear neural networks described with respect to the steady-state process hereinabove. This will yield w models which are stochastically related by the following relationship:
- yk(w) = −a1(w)yk−1(w) − a2(w)yk−2(w) + b1(w)uk−1−d + b2(w)uk−2−d  (18)
- P(y|u, a, b)
- is the likelihood. P(a, b) is the prior distribution of the parameters (a, b) of the model, and their product is the posterior distribution, as was described hereinabove with respect to the steady-state case. All of the above-noted equations apply to the dynamic case. The only difference is that the input is now u(t) and the parameters of the model are (a, b), as compared to ω.
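As an illustrative sketch (not the patent's implementation), the posterior-weighted ensemble prediction over the stochastically related parameter sets (a, b)(w) of Equation (18) can be computed directly. The two parameter sets and the posterior weights below are made-up examples:

```python
# Posterior-weighted ensemble prediction <y_k> over N_w linear models of
# Equation (18).  Parameter sets (a1, a2, b1, b2) and weights are illustrative.
def one_step(params, y1, y2, u1, u2):
    a1, a2, b1, b2 = params
    return -a1 * y1 - a2 * y2 + b1 * u1 + b2 * u2

def ensemble_prediction(param_sets, weights, y1, y2, u1, u2):
    assert abs(sum(weights) - 1.0) < 1e-9, "posterior weights must sum to 1"
    preds = [one_step(p, y1, y2, u1, u2) for p in param_sets]
    return sum(w * p for w, p in zip(weights, preds)), preds

param_sets = [(-0.9, 0.2, 0.2, 0.1), (-0.8, 0.1, 0.3, 0.1)]
weights = [0.75, 0.25]   # normalized posterior weights for each model w
mean, preds = ensemble_prediction(param_sets, weights, 1.0, 1.0, 1.0, 1.0)
```

In a full implementation the weights would be the normalized posterior, the product of the likelihood and the prior P(a, b) as described above.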
- ∂&lt;y(t)&gt;/∂u(t) = Σw P(w) ∂y(w)(t)/∂u(t)  (22)
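The posterior-weighted average derivative of Equation (22), the weighted mean of the per-model derivatives ∂y(w)(t)/∂u(t), might be sketched as follows. The parameter sets, weights, and the central-difference check are illustrative assumptions; for the linear model the per-model derivative is simply b1(w):

```python
def model_step(params, y1, y2, u1, u2):
    # one-step prediction of the linear model of Equation (18)
    a1, a2, b1, b2 = params
    return -a1 * y1 - a2 * y2 + b1 * u1 + b2 * u2

def dy_du(params, y1, y2, u1, u2, eps=1e-6):
    # per-model derivative with respect to u_{k-1-d} (central difference);
    # for this linear model it equals b1 exactly
    hi = model_step(params, y1, y2, u1 + eps, u2)
    lo = model_step(params, y1, y2, u1 - eps, u2)
    return (hi - lo) / (2.0 * eps)

def average_derivative(param_sets, weights, y1, y2, u1, u2):
    # Equation (22): posterior-weighted average over the N_w models
    return sum(w * dy_du(p, y1, y2, u1, u2)
               for w, p in zip(weights, param_sets))
```

With an analytic or automatic-differentiation model the finite difference would be replaced by the exact derivative; the weighting step is unchanged.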
- Referring now to FIG. 11, there is illustrated a block diagram depicting the operation illustrated in Equation (22) for a dynamic model to determine the average derivative. This basically parallels the operation of the embodiment of FIG. 6, described hereinabove with respect to steady-state models. There are provided a plurality of dynamic models 1100, corresponding to the static models 200 described hereinabove, which are operable to provide a predicted value y(w)(t). The output of each of these models 1100 can be input to a weighted average block 1102 to provide the value &lt;y(t)&gt;. However, in order to provide the prediction for the average derivative ∂&lt;y(t)&gt;/∂u(t), it is necessary to optimize the derivative as described hereinabove in optimization method (1). This requires feeding both the input value u(t) and the output y(w)(t) for each model 1100 to a derivative block 1104. This provides the derivative ∂y(w)(t)/∂u(t) for each value of w. These derivatives for each of the models are then input to an averaging block 1106 to take the weighted average thereof, and then to a summing block 1108 to provide the average derivative ∂&lt;y(t)&gt;/∂u(t).
- Once the average derivative is determined for the dynamic model, it can be optimized utilizing optimization method (1) or optimization method (2) described hereinabove, except that a dynamic model is used. This is illustrated in FIG. 12, which parallels FIG. 7 for the static model. The derivatives of each of the models output from the derivative block 1104 are first subjected to the weighted average in a block 1200, representing the weighted-averaging operation described hereinabove with respect to FIG. 11.
- In optimization method (2), the dynamic model representation is illustrated in FIG. 13, which parallels FIG. 8. Each of the models 1100 has the derivative thereof determined by the derivative block 1104. However, rather than take the weighted average of the derivatives, each of the models has the derivative thereof optimized to provide an estimated control trajectory û(w)(t+kmin) . . . û(w)(t+kmax), denoted ûk(w)(t), for each value of w. These are the estimated values of the inputs for each model, which are then processed through a weighted average block 1304 which, as described hereinabove, is operable to perform some type of algorithm or selection process thereon. This can be any type of selection process for the model: it could be an averaging operation, or it could be a selection process. This provides a single control trajectory û(t). Given the control trajectory û(t), the first value thereof is input to the plant.
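Optimization method (2) as applied in FIG. 13, optimizing each model separately and then combining the per-model results, can be sketched with a one-step closed-form "trajectory." The linear-model inversion and the averaging combination below are illustrative assumptions; as noted above, the combination in block 1304 could equally be a selection process:

```python
def optimal_u(params, y_sp, y1, y2, u2):
    # invert the one-step linear model of Equation (18) for the input u
    # that drives this model's prediction exactly to the setpoint y_sp
    a1, a2, b1, b2 = params
    return (y_sp + a1 * y1 + a2 * y2 - b2 * u2) / b1

def combined_input(param_sets, weights, y_sp, y1, y2, u2):
    # one possible combination for block 1304: the posterior-weighted
    # average of the per-model optimal inputs u_hat^(w)
    return sum(w * optimal_u(p, y_sp, y1, y2, u2)
               for w, p in zip(weights, param_sets))

param_sets = [(-0.9, 0.2, 0.2, 0.1), (-0.8, 0.1, 0.3, 0.1)]
u_hat = combined_input(param_sets, [0.75, 0.25], 1.0, 1.0, 1.0, 1.0)
```

In the full trajectory case each model would be optimized over û(w)(t+kmin) . . . û(w)(t+kmax) rather than a single step, and only the first value of the combined û(t) would be applied to the plant.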
- kd = Kss
- This makes a dynamic model consistent with its steady-state counterpart, as described in U.S. patent application Ser. No. 08/643,464, which was incorporated by reference hereinabove. Therefore, each time the steady-state value changes such that the operating region of the steady-state model is different, this will correspond to a potentially different gain Kss for the steady-state model. This value can then be utilized to update the gain kd of the dynamic model and, therefore, compensate for the errors associated with a dynamic model, wherein the value of kd is determined based on perturbations of the plant about a given set of operating conditions. Since all operating conditions are not modeled, the step of varying the gain will account for changes in the steady-state starting points. With respect to the present application utilizing stochastically related models, it is necessary to determine the dynamic gain of each of the dynamic models 1100.
- Referring now to FIG. 14, there is illustrated a block diagram of the
optimizer 16 utilizing a steady-state model to fix the gain of the dynamic model 1100. The optimizer, as described hereinabove, is divided into a plurality of blocks for determining the derivative ∂y(w)(t)/∂x(t). These blocks are referred to with the reference numeral 1400. There are provided "w" of these blocks 1400, the output of each being input to a product block 1405, the outputs thereof summed in a summing block 1406 to provide on the output thereof the average or weighted derivative ∂&lt;y(t)&gt;/∂x(t).
- Each of the blocks 1400 has associated therewith the dynamic model 1100, with the input u(t) and the output y(t) input to the derivative block 1104. Additionally, a steady-state model 200 is provided for each of the dynamic models 1100 in each of the blocks 1400. Therefore, each of the models 200 is a stochastically related model F(w)x(t) which has an associated steady-state gain K(w)ss. This is a known gain for that model. In the embodiment illustrated, the steady-state model 200 for w=1 is associated with the G(1)u(t) model 1100. A gain modulator 1410 is provided for determining the dynamic gain k(1)d. In the preferred embodiment, as set forth in U.S. patent application Ser. No. 08/643,464, the dynamic gain is forced to be equal to the steady-state gain. This provides the dynamic gain k(1)d for the value of w=1.
- kd(w) = Kss(w), for each w
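Setting yk = yk−1 = yk−2 = yss and u = uss in Equation (17) gives the model's steady-state gain K = (b1 + b2)/(1 + a1 + a2). One simple way to force the dynamic gain to equal a given steady-state gain Kss, consistent with the role of the gain modulator 1410, is to rescale b1 and b2; this rescaling choice is illustrative, not necessarily that of the referenced application:

```python
def dynamic_gain(a, b):
    # steady-state gain of Equation (17): y_ss*(1 + a1 + a2) = (b1 + b2)*u_ss
    a1, a2 = a
    b1, b2 = b
    return (b1 + b2) / (1.0 + a1 + a2)

def match_gain(a, b, k_ss):
    """Rescale b1, b2 so the dynamic model's gain equals k_ss; the poles
    (set by a1, a2) are unchanged, so only the gain is modulated."""
    scale = k_ss / dynamic_gain(a, b)
    return (b[0] * scale, b[1] * scale)
```

Because only the b's are scaled, the model's dynamic response shape is preserved while its gain tracks the steady-state model's K(w)ss as the operating region moves.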
state model 200 was set equal to the index for thedynamic model 1100, it should be understood that, even though there are multiple ones of the progressive steady-state models 200 and multiple ones of the progressivedynamic models 1100, it is not necessary to match the indices. For example, it could be that the index w=1 could be matched to the maximum value for the index on thedynamic model 1100, such that F(500)x(t) is matched to thedynamic model 1100 with the minimum index G(1)u(t), wherein the maximum index is w=500. - In an alternate embodiment, as illustrated in FIG. 15, the gain of a single steady-state model1500 is utilized to provide a single steady-state gain Kss for all of the gain modulation modules 1410, such that only one steady-state model is required. This is not a progressive model. Alternatively, the dynamic model for each of the blocks 1400 could be the same with the steady-
state models 200 being progressive stochastically-related models. Therefore, there would be a plurality of blocks 1502 which contained only thedynamic model 1100, thederivative block 1104 and the gain modulator 1410. The steady-state gain Kss of the model 1500 would be input to each of the blocks 1502. - In summary, there has been provided a method and apparatus by which a stochastical method is utilized for optimizing y(t) with respect to x(t) through the use of averaging over multiple regression models F(w). This optimization is utilized to provide a single optimal vector for the values of x(t) which constitute inputs to a plant.
- Although the preferred embodiment has been described in detail, it should be understood that various changes, substitutions and alterations can be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (20)
1. A method for determining the optimum operation of a system, comprising the steps of:
receiving the outputs of the system and the measurable inputs to the system; and
optimizing select ones of the outputs as a function of the inputs by minimizing an objective function J to provide optimal values for select ones of the inputs;
wherein the step of optimizing includes the step of predicting the select ones of the outputs with a plurality of models of the system, each model operable to map the inputs through a representation of the system to provide predicted outputs corresponding to the select ones of the outputs which predicted outputs of each of the plurality of models are combined in accordance with a predetermined combination algorithm to provide a single output corresponding to each of the select ones of the outputs.
2. The method of claim 1 , wherein the optimal value of the outputs of the plurality of models is determined as a single averaged optimal output value for each of the select ones of the outputs.
3. The method of claim 1 , and further comprising the step of applying the optimal values of the select ones of the inputs to the corresponding inputs of the system after determination thereof.
4. The method of claim 1 , wherein the step of receiving the outputs of the system comprises receiving measurable outputs of the system.
5. The method of claim 1 , wherein the step of optimizing comprises a derivative-based optimization operation.
6. The method of claim 5 , wherein the step of optimizing comprises the steps of:
determining the average predicted output of the plurality of models <y(t)>;
determining the average derivative of the average predicted output <y(t)> with regards to the inputs x(t) as ∂<y(t)>/∂x(t);
the objective function J being a function of <y(t)> and determining a derivative of the objective function J with respect to <y(t)> as ∂J/∂<y(t)>;
determining with the chain rule the relationship ∂J/∂x(t); and
determining the minimum of J.
7. The method of claim 5 , wherein the average derivative of the average predicted output is weighted over the plurality of models.
8. The method of claim 2 , wherein the step of predicting the select ones of the outputs with the plurality of models of the system comprises predicting the output to a point forward in time as a trajectory.
9. A method for optimizing the parameters of a system having a vector input x(t) and a vector output y(t), comprising the steps of:
storing a representation of the system in a plurality of models, each model operable to map the inputs through a representation of the system to provide a predicted output, each of the models operable to predict the output of the system for a given input value of x(t);
providing predetermined optimization objectives; and
determining a single optimized input vector value {circumflex over (x)}(t) by applying a predetermined optimization algorithm to the plurality of models to achieve a minimum error to the predetermined optimization objective.
10. The method of claim 9 , wherein the step of determining comprises determining the derivative ∂y(t)/∂x(t) of each of the models and then determining an average of the derivatives ∂y(t)/∂x(t).
11. The method of claim 10 , wherein the step of determining the average of the derivative comprises determining the weighted average of the derivatives ∂y(t)/∂x(t).
13. The method of claim 9 , wherein the step of storing a representation of the system in a plurality of models comprises storing a representation of the system in a plurality of non-linear or linear networks, each operable to map the input x(t) to a predicted output through a stored representation of the system.
14. The method of claim 13 , wherein the stored representation of the system in each of the plurality of non-linear or linear networks are related in such a manner wherein the parameters of each of the linear or non-linear networks are stochastically related to each other.
15. The method of claim 14 , wherein the stochastic relationship is a Bayesian relationship.
16. The method of claim 9 , wherein the predetermined optimization algorithm is an iterative optimization algorithm.
17. The method of claim 9 , wherein the step of determining the single optimized input vector value {circumflex over (x)}(t) comprises determining the derivative of the predetermined optimization objective relative to the input vector x(t) as ∂J/∂x(t), where J represents the predetermined optimization objective.
18. The method of claim 9 , wherein the step of determining comprises determining the derivative ∂y(t)/∂x(t) of each of the models and then determining an average of the derivatives ∂y(t)/∂x(t).
20. The method of claim 19 , wherein the step of determining ∂J/∂<x(t)> comprises the steps of:
determining the weighted average of the predicted output of each of the models by the following relationship:
where P(y(i)|x(i), ω)P(ω) represents the posterior probability of the model indexed by w, and Nw represents the maximum number of models in the stochastic relationship, and wherein the stored representation of the system in each of the plurality of models are related in such a manner wherein the parameters of each of the models are stochastically related to each other;
determining the derivatives ∂J/∂<y(t)> as the variation of the predetermined optimization objective with respect to the predicted output y(t); and
determining by the chain rule the following:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/827,977 US20040199481A1 (en) | 1998-10-06 | 2004-04-20 | Bayesian neural networks for optimization and control |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10326998P | 1998-10-06 | 1998-10-06 | |
US09/290,791 US6725208B1 (en) | 1998-10-06 | 1999-04-12 | Bayesian neural networks for optimization and control |
US10/827,977 US20040199481A1 (en) | 1998-10-06 | 2004-04-20 | Bayesian neural networks for optimization and control |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/290,791 Continuation US6725208B1 (en) | 1998-10-06 | 1999-04-12 | Bayesian neural networks for optimization and control |
Publications (1)
Publication Number | Publication Date |
---|---|
US20040199481A1 true US20040199481A1 (en) | 2004-10-07 |
Family
ID=32072696
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/290,791 Expired - Lifetime US6725208B1 (en) | 1998-10-06 | 1999-04-12 | Bayesian neural networks for optimization and control |
US10/827,977 Abandoned US20040199481A1 (en) | 1998-10-06 | 2004-04-20 | Bayesian neural networks for optimization and control |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/290,791 Expired - Lifetime US6725208B1 (en) | 1998-10-06 | 1999-04-12 | Bayesian neural networks for optimization and control |
Country Status (1)
Country | Link |
---|---|
US (2) | US6725208B1 (en) |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040181498A1 (en) * | 2003-03-11 | 2004-09-16 | Kothare Simone L. | Constrained system identification for incorporation of a priori knowledge |
US20050137995A1 (en) * | 2002-08-16 | 2005-06-23 | Powitec Intelligent Technologies Gmbh | Method for regulating a thermodynamic process by means of neural networks |
US7184992B1 (en) * | 2001-11-01 | 2007-02-27 | George Mason Intellectual Properties, Inc. | Constrained optimization tool |
US20080195564A1 (en) * | 2007-02-13 | 2008-08-14 | Denso Corporation | Automotive air conditioner and method and apparatus for controlling automotive air conditioner |
US8265854B2 (en) | 2008-07-17 | 2012-09-11 | Honeywell International Inc. | Configurable automotive controller |
US8360040B2 (en) | 2005-08-18 | 2013-01-29 | Honeywell International Inc. | Engine controller |
US8504175B2 (en) * | 2010-06-02 | 2013-08-06 | Honeywell International Inc. | Using model predictive control to optimize variable trajectories and system control |
US8620461B2 (en) | 2009-09-24 | 2013-12-31 | Honeywell International, Inc. | Method and system for updating tuning parameters of a controller |
US8644961B2 (en) | 2005-12-12 | 2014-02-04 | Neuco Inc. | Model based control and estimation of mercury emissions |
US20140129491A1 (en) * | 2012-11-06 | 2014-05-08 | Rockwell Automation Technologies, Inc. | Empirical modeling with globally enforced general constraints |
CN104345637A (en) * | 2013-07-31 | 2015-02-11 | 罗伯特·博世有限公司 | Method and device for adapting to function module based on data |
CN105139274A (en) * | 2015-08-16 | 2015-12-09 | 东北石油大学 | Power transmission line icing prediction method based on quantum particle swarm and wavelet nerve network |
US20160105308A1 (en) * | 2014-10-10 | 2016-04-14 | DimensionalMechanics, Inc. | System, apparatus and methods for adaptive data transport and optimization of application execution |
US9650934B2 (en) | 2011-11-04 | 2017-05-16 | Honeywell spol.s.r.o. | Engine and aftertreatment optimization system |
US9677493B2 (en) | 2011-09-19 | 2017-06-13 | Honeywell Spol, S.R.O. | Coordinated engine and emissions control system |
US10036338B2 (en) | 2016-04-26 | 2018-07-31 | Honeywell International Inc. | Condition-based powertrain control system |
US10062354B2 (en) | 2014-10-10 | 2018-08-28 | DimensionalMechanics, Inc. | System and methods for creating virtual environments |
US10124750B2 (en) | 2016-04-26 | 2018-11-13 | Honeywell International Inc. | Vehicle security module system |
US10235479B2 (en) | 2015-05-06 | 2019-03-19 | Garrett Transportation I Inc. | Identification approach for internal combustion engine mean value models |
US10272779B2 (en) | 2015-08-05 | 2019-04-30 | Garrett Transportation I Inc. | System and approach for dynamic vehicle speed optimization |
US10309287B2 (en) | 2016-11-29 | 2019-06-04 | Garrett Transportation I Inc. | Inferential sensor |
US10415492B2 (en) | 2016-01-29 | 2019-09-17 | Garrett Transportation I Inc. | Engine system with inferential sensor |
US10423131B2 (en) | 2015-07-31 | 2019-09-24 | Garrett Transportation I Inc. | Quadratic program solver for MPC using variable ordering |
US10503128B2 (en) | 2015-01-28 | 2019-12-10 | Garrett Transportation I Inc. | Approach and system for handling constraints for measured disturbances with uncertain preview |
US10621291B2 (en) | 2015-02-16 | 2020-04-14 | Garrett Transportation I Inc. | Approach for aftertreatment system modeling and model identification |
US11057213B2 (en) | 2017-10-13 | 2021-07-06 | Garrett Transportation I, Inc. | Authentication system for electronic control unit on a bus |
US11156180B2 (en) | 2011-11-04 | 2021-10-26 | Garrett Transportation I, Inc. | Integrated optimization and control of an engine and aftertreatment system |
Families Citing this family (87)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6725208B1 (en) * | 1998-10-06 | 2004-04-20 | Pavilion Technologies, Inc. | Bayesian neural networks for optimization and control |
JP2001184430A (en) * | 1999-10-28 | 2001-07-06 | Citibank Na | Method and system for using bayesian belief network to guarantee completeness of data |
US6728695B1 (en) * | 2000-05-26 | 2004-04-27 | Burning Glass Technologies, Llc | Method and apparatus for making predictions about entities represented in documents |
US10204349B2 (en) | 2000-12-20 | 2019-02-12 | International Business Machines Corporation | Analyzing customer segments |
US10496938B2 (en) | 2000-12-20 | 2019-12-03 | Acoustic, L.P. | Generating product decisions |
US20100010870A1 (en) * | 2000-12-20 | 2010-01-14 | Karl Millar | System and Method for Tuning Demand Coefficients |
US9165270B2 (en) * | 2000-12-20 | 2015-10-20 | International Business Machines Corporation | Predicting likelihood of customer attrition and retention measures |
US7523047B1 (en) * | 2000-12-20 | 2009-04-21 | Demandtec, Inc. | Price optimization system |
US7302410B1 (en) * | 2000-12-22 | 2007-11-27 | Demandtec, Inc. | Econometric optimization engine |
US7062447B1 (en) | 2000-12-20 | 2006-06-13 | Demandtec, Inc. | Imputed variable generator |
US7877286B1 (en) | 2000-12-20 | 2011-01-25 | Demandtec, Inc. | Subset optimization system |
US7617119B1 (en) | 2000-12-20 | 2009-11-10 | Demandtec, Inc. | Price optimization with rule relaxation |
US9785953B2 (en) * | 2000-12-20 | 2017-10-10 | International Business Machines Corporation | System and method for generating demand groups |
US7899691B1 (en) | 2000-12-20 | 2011-03-01 | Demandtec, Inc. | Econometric engine |
US9773250B2 (en) * | 2000-12-20 | 2017-09-26 | International Business Machines Corporation | Product role analysis |
US7657470B1 (en) | 2000-12-20 | 2010-02-02 | Demandtec, Inc. | Financial model engine |
US7660734B1 (en) | 2000-12-20 | 2010-02-09 | Demandtec, Inc. | System for creating optimized promotion event calendar |
US8010404B1 (en) | 2000-12-22 | 2011-08-30 | Demandtec, Inc. | Systems and methods for price and promotion response analysis |
AU2002238801A1 (en) * | 2001-03-20 | 2002-10-03 | Trudell Medical International | Nebulizer apparatus and method |
US7092896B2 (en) * | 2001-05-04 | 2006-08-15 | Demandtec, Inc. | Interface for merchandise promotion optimization |
US6553352B2 (en) * | 2001-05-04 | 2003-04-22 | Demand Tec Inc. | Interface for merchandise price optimization |
US7130811B1 (en) | 2001-05-05 | 2006-10-31 | Demandtec, Inc. | Apparatus for merchandise promotion optimization |
US7085692B2 (en) * | 2001-10-11 | 2006-08-01 | Xerox Corporation | Learning systems and methods for market-based control of smart matter |
US7809581B1 (en) | 2001-11-30 | 2010-10-05 | Demandtec, Inc. | Rule relaxation and subset optimization system |
US7386519B1 (en) | 2001-11-30 | 2008-06-10 | Demandtec, Inc. | Intelligent clustering system |
US7249032B1 (en) | 2001-11-30 | 2007-07-24 | Demandtec Inc. | Selective merchandise price optimization mechanism |
US7249033B1 (en) | 2001-11-30 | 2007-07-24 | Demandtec Inc. | Apparatus and method for selective merchandise price optimization |
US7133882B1 (en) | 2002-08-05 | 2006-11-07 | Demandtec, Inc. | Method and apparatus for creating and using a master catalog |
WO2004090659A2 (en) * | 2003-04-10 | 2004-10-21 | Mukesh Dalal | Optimizing active decision making using simulated decision making |
US6736089B1 (en) * | 2003-06-05 | 2004-05-18 | Neuco, Inc. | Method and system for sootblowing optimization |
US7194320B2 (en) * | 2003-06-05 | 2007-03-20 | Neuco, Inc. | Method for implementing indirect controller |
JP2005141601A (en) * | 2003-11-10 | 2005-06-02 | Nec Corp | Model selection computing device, dynamic model selection device, dynamic model selection method, and program |
US8214271B2 (en) * | 2004-02-04 | 2012-07-03 | Neuco, Inc. | System and method for assigning credit to process inputs |
US20060052902A1 (en) * | 2004-08-27 | 2006-03-09 | Neuco, Inc. | Method and system for SNCR optimization |
US7500437B2 (en) * | 2004-08-27 | 2009-03-10 | Neuco, Inc. | Method and system for SCR optimization |
US7333861B2 (en) * | 2004-10-25 | 2008-02-19 | Neuco, Inc. | Method and system for calculating marginal cost curves using plant control models |
US7123971B2 (en) * | 2004-11-05 | 2006-10-17 | Pegasus Technologies, Inc. | Non-linear model with disturbance rejection |
US7584024B2 (en) * | 2005-02-08 | 2009-09-01 | Pegasus Technologies, Inc. | Method and apparatus for optimizing operation of a power generating plant using artificial intelligence techniques |
US20060230097A1 (en) * | 2005-04-08 | 2006-10-12 | Caterpillar Inc. | Process model monitoring method and system |
US8209156B2 (en) * | 2005-04-08 | 2012-06-26 | Caterpillar Inc. | Asymmetric random scatter process for probabilistic modeling system for product design |
US20060229852A1 (en) * | 2005-04-08 | 2006-10-12 | Caterpillar Inc. | Zeta statistic process method and system |
US20060229753A1 (en) * | 2005-04-08 | 2006-10-12 | Caterpillar Inc. | Probabilistic modeling system for product design |
US7565333B2 (en) * | 2005-04-08 | 2009-07-21 | Caterpillar Inc. | Control system and method |
US7877239B2 (en) * | 2005-04-08 | 2011-01-25 | Caterpillar Inc | Symmetric random scatter process for probabilistic modeling system for product design |
US8364610B2 (en) | 2005-04-08 | 2013-01-29 | Caterpillar Inc. | Process modeling and optimization method and system |
US20060229854A1 (en) * | 2005-04-08 | 2006-10-12 | Caterpillar Inc. | Computer system architecture for probabilistic modeling |
US20060293950A1 (en) * | 2005-06-28 | 2006-12-28 | Microsoft Corporation | Automatic ad placement |
US20070061144A1 (en) * | 2005-08-30 | 2007-03-15 | Caterpillar Inc. | Batch statistics process model method and system |
US7487134B2 (en) * | 2005-10-25 | 2009-02-03 | Caterpillar Inc. | Medical risk stratifying method and system |
US7499842B2 (en) | 2005-11-18 | 2009-03-03 | Caterpillar Inc. | Process model based virtual sensor and method |
US20070118487A1 (en) * | 2005-11-18 | 2007-05-24 | Caterpillar Inc. | Product cost modeling method and system |
US20070135938A1 (en) * | 2005-12-08 | 2007-06-14 | General Electric Company | Methods and systems for predictive modeling using a committee of models |
US7599750B2 (en) * | 2005-12-21 | 2009-10-06 | Pegasus Technologies, Inc. | Model based sequential optimization of a single or multiple power generating units |
US7505949B2 (en) * | 2006-01-31 | 2009-03-17 | Caterpillar Inc. | Process model error correction method and system |
US20070203810A1 (en) * | 2006-02-13 | 2007-08-30 | Caterpillar Inc. | Supply chain modeling method and system |
US9785951B1 (en) | 2006-02-28 | 2017-10-10 | International Business Machines Corporation | Scalable tuning engine |
US9858579B1 (en) | 2006-02-28 | 2018-01-02 | International Business Machines Corporation | Plan tuning engine |
US7756591B2 (en) * | 2006-04-25 | 2010-07-13 | Pegasus Technologies, Inc. | System for optimizing oxygen in a boiler |
US7577626B1 (en) | 2006-05-26 | 2009-08-18 | Georgios Mountrakis | Multi-scale radial basis function neural network |
EP2041141A2 (en) * | 2006-07-13 | 2009-04-01 | 4Sc Ag | Benzopyranopyrazoles |
US8478506B2 (en) | 2006-09-29 | 2013-07-02 | Caterpillar Inc. | Virtual sensor based engine control system and method |
US7483774B2 (en) * | 2006-12-21 | 2009-01-27 | Caterpillar Inc. | Method and system for intelligent maintenance |
US20080154811A1 (en) * | 2006-12-21 | 2008-06-26 | Caterpillar Inc. | Method and system for verifying virtual sensors |
US7787969B2 (en) * | 2007-06-15 | 2010-08-31 | Caterpillar Inc | Virtual sensor system and method |
US7831416B2 (en) * | 2007-07-17 | 2010-11-09 | Caterpillar Inc | Probabilistic modeling system for product design |
US7788070B2 (en) * | 2007-07-30 | 2010-08-31 | Caterpillar Inc. | Product design optimization method and system |
CN101802728A (en) * | 2007-08-17 | 2010-08-11 | 能源技术研究所 | System and method for empirical-ensemble-based virtual sensing of gaseous emissions |
US7542879B2 (en) * | 2007-08-31 | 2009-06-02 | Caterpillar Inc. | Virtual sensor based control system and method |
US8340824B2 (en) | 2007-10-05 | 2012-12-25 | Neuco, Inc. | Sootblowing optimization for improved boiler performance |
US7593804B2 (en) * | 2007-10-31 | 2009-09-22 | Caterpillar Inc. | Fixed-point virtual sensor control system and method |
US8036764B2 (en) | 2007-11-02 | 2011-10-11 | Caterpillar Inc. | Virtual sensor network (VSN) system and method |
US8224468B2 (en) | 2007-11-02 | 2012-07-17 | Caterpillar Inc. | Calibration certificate for virtual sensor network (VSN) |
US8209045B2 (en) * | 2008-04-07 | 2012-06-26 | Honeywell International Inc. | System and method for discrete supply chain control and optimization using model predictive control |
US8086640B2 (en) * | 2008-05-30 | 2011-12-27 | Caterpillar Inc. | System and method for improving data coverage in modeling systems |
US20090293457A1 (en) * | 2008-05-30 | 2009-12-03 | Grichnik Anthony J | System and method for controlling NOx reactant supply |
US8145334B2 (en) * | 2008-07-10 | 2012-03-27 | Palo Alto Research Center Incorporated | Methods and systems for active diagnosis through logic-based planning |
US8266092B2 (en) | 2008-07-10 | 2012-09-11 | Palo Alto Research Center Incorporated | Methods and systems for target value path identification |
US8165705B2 (en) * | 2008-07-10 | 2012-04-24 | Palo Alto Research Center Incorporated | Methods and systems for continuously estimating persistent and intermittent failure probabilities for production resources |
US8219437B2 (en) | 2008-07-10 | 2012-07-10 | Palo Alto Research Center Incorporated | Methods and systems for constructing production plans |
US7917333B2 (en) * | 2008-08-20 | 2011-03-29 | Caterpillar Inc. | Virtual sensor network (VSN) based control system and method |
US8359110B2 (en) * | 2009-03-23 | 2013-01-22 | Kuhn Lukas D | Methods and systems for fault diagnosis in observation rich systems |
US8793004B2 (en) | 2011-06-15 | 2014-07-29 | Caterpillar Inc. | Virtual sensor system and method for generating output parameters |
US8774909B2 (en) | 2011-09-26 | 2014-07-08 | Medtronic, Inc. | Episode classifier algorithm |
US8437840B2 (en) | 2011-09-26 | 2013-05-07 | Medtronic, Inc. | Episode classifier algorithm |
US20180024508A1 (en) | 2016-07-25 | 2018-01-25 | General Electric Company | System modeling, control and optimization |
US20220320861A1 (en) * | 2019-05-29 | 2022-10-06 | Tata Consultancy Services Limited | Method and system for optimum coal selection and power plant optimization |
US11880173B2 (en) * | 2020-08-17 | 2024-01-23 | 5G3I Ltd | Systems and methods for enhanced control of electronic circuits |
Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4486221A (en) * | 1981-10-28 | 1984-12-04 | Basf Aktiengesellschaft | Thiazolo[2,3-b]quinazolones and their use for influencing plant growth |
US4992942A (en) * | 1989-01-25 | 1991-02-12 | Bahm, Inc. | Apparatus and method for controlling a system, such as nutrient control system for feeding plants, based on actual and projected data and according to predefined rules |
US5023045A (en) * | 1989-02-07 | 1991-06-11 | Doryokuro Kakunenryo Kaihatsu Jigyodan | Plant malfunction diagnostic method |
US5159660A (en) * | 1990-08-09 | 1992-10-27 | Western Thunder | Universal process control using artificial neural networks |
US5465321A (en) * | 1993-04-07 | 1995-11-07 | The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration | Hidden markov models for fault detection in dynamic systems |
US5513097A (en) * | 1993-05-17 | 1996-04-30 | Siemens Aktiengesellschaft | Method and control device for controlling a process including the use of a neural network having variable network parameters |
US5586221A (en) * | 1994-07-01 | 1996-12-17 | Syracuse University | Predictive control of rolling mills using neural network gauge estimation |
US5659667A (en) * | 1995-01-17 | 1997-08-19 | The Regents Of The University Of California Office Of Technology Transfer | Adaptive model predictive process control using neural networks |
US5680513A (en) * | 1994-08-19 | 1997-10-21 | Hyland; David C. | Series parallel approach to identification of dynamic systems |
US5781432A (en) * | 1993-03-02 | 1998-07-14 | Pavilion Technologies, Inc. | Method and apparatus for analyzing a neural network within desired operating parameter constraints |
US5794192A (en) * | 1993-04-29 | 1998-08-11 | Panasonic Technologies, Inc. | Self-learning speaker adaptation based on spectral bias source decomposition, using very short calibration speech |
US5796920A (en) * | 1994-08-19 | 1998-08-18 | Harris Corporation | Multiprocessor system and method for identification and adaptive control of dynamic systems |
US5825646A (en) * | 1993-03-02 | 1998-10-20 | Pavilion Technologies, Inc. | Method and apparatus for determining the sensitivity of inputs to a neural network on output parameters |
US5867386A (en) * | 1991-12-23 | 1999-02-02 | Hoffberg; Steven M. | Morphological pattern recognition based controller system |
US5877954A (en) * | 1996-05-03 | 1999-03-02 | Aspen Technology, Inc. | Hybrid linear-neural network process control |
US6532454B1 (en) * | 1998-09-24 | 2003-03-11 | Paul J. Werbos | Stable adaptive control using critic designs |
US6725208B1 (en) * | 1998-10-06 | 2004-04-20 | Pavilion Technologies, Inc. | Bayesian neural networks for optimization and control |
US20040181499A1 (en) * | 1995-08-01 | 2004-09-16 | Guided Systems Technologies, Inc. | System and method for adaptive control of uncertain nonlinear processes |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5901246A (en) * | 1995-06-06 | 1999-05-04 | Hoffberg; Steven M. | Ergonomic man-machine interface incorporating adaptive pattern recognition based control system |
DE59700720D1 (en) * | 1996-02-09 | 1999-12-23 | Siemens Ag | METHOD FOR GENERATING THE REGULATOR PARAMETERS FROM A RESPONSE SIGNAL OF A REGULATED SYSTEM BY A COMPUTER |
US5933345A (en) | 1996-05-06 | 1999-08-03 | Pavilion Technologies, Inc. | Method and apparatus for dynamic and steady state modeling over a desired path between two end points |
US6381504B1 (en) * | 1996-05-06 | 2002-04-30 | Pavilion Technologies, Inc. | Method for optimizing a plant with multiple inputs |
US6438430B1 (en) * | 1996-05-06 | 2002-08-20 | Pavilion Technologies, Inc. | Kiln thermal and combustion control |
US6278899B1 (en) * | 1996-05-06 | 2001-08-21 | Pavilion Technologies, Inc. | Method for on-line optimization of a plant |
ATE205608T1 (en) * | 1996-06-21 | 2001-09-15 | Siemens Ag | METHOD AND SYSTEM FOR COMMISSIONING INDUSTRIAL PLANTS, PARTICULARLY IN THE BASIC MATERIALS INDUSTRY |
US6363289B1 (en) * | 1996-09-23 | 2002-03-26 | Pavilion Technologies, Inc. | Residual activation neural network |
DE19718262A1 (en) * | 1997-04-30 | 1998-11-05 | Schenck Panel Production Syste | Method and system for generating a process model of a technical problem |
US6185470B1 (en) * | 1997-11-07 | 2001-02-06 | Mcdonnell Douglas Corporation | Neural network predictive control method and system |
- 1999-04-12: US application US09/290,791 filed, granted as patent US6725208B1 (status: Expired - Lifetime)
- 2004-04-20: US application US10/827,977 filed, published as US20040199481A1 (status: Abandoned)
Cited By (40)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7184992B1 (en) * | 2001-11-01 | 2007-02-27 | George Mason Intellectual Properties, Inc. | Constrained optimization tool |
US20050137995A1 (en) * | 2002-08-16 | 2005-06-23 | Powitec Intelligent Technologies Gmbh | Method for regulating a thermodynamic process by means of neural networks |
US20040181498A1 (en) * | 2003-03-11 | 2004-09-16 | Kothare Simone L. | Constrained system identification for incorporation of a priori knowledge |
US8360040B2 (en) | 2005-08-18 | 2013-01-29 | Honeywell International Inc. | Engine controller |
US8644961B2 (en) | 2005-12-12 | 2014-02-04 | Neuco Inc. | Model based control and estimation of mercury emissions |
US7966280B2 (en) * | 2007-02-13 | 2011-06-21 | Denso Corporation | Automotive air conditioner and method and apparatus for controlling automotive air conditioner |
US20080195564A1 (en) * | 2007-02-13 | 2008-08-14 | Denso Corporation | Automotive air conditioner and method and apparatus for controlling automotive air conditioner |
US8265854B2 (en) | 2008-07-17 | 2012-09-11 | Honeywell International Inc. | Configurable automotive controller |
US8620461B2 (en) | 2009-09-24 | 2013-12-31 | Honeywell International, Inc. | Method and system for updating tuning parameters of a controller |
US9170573B2 (en) | 2009-09-24 | 2015-10-27 | Honeywell International Inc. | Method and system for updating tuning parameters of a controller |
US8504175B2 (en) * | 2010-06-02 | 2013-08-06 | Honeywell International Inc. | Using model predictive control to optimize variable trajectories and system control |
US10309281B2 (en) | 2011-09-19 | 2019-06-04 | Garrett Transportation I Inc. | Coordinated engine and emissions control system |
US9677493B2 (en) | 2011-09-19 | 2017-06-13 | Honeywell Spol, S.R.O. | Coordinated engine and emissions control system |
US9650934B2 (en) | 2011-11-04 | 2017-05-16 | Honeywell spol.s.r.o. | Engine and aftertreatment optimization system |
US11619189B2 (en) | 2011-11-04 | 2023-04-04 | Garrett Transportation I Inc. | Integrated optimization and control of an engine and aftertreatment system |
US11156180B2 (en) | 2011-11-04 | 2021-10-26 | Garrett Transportation I, Inc. | Integrated optimization and control of an engine and aftertreatment system |
US10739735B2 (en) | 2012-11-06 | 2020-08-11 | Rockwell Automation Technologies, Inc. | Empirical modeling with globally enforced general constraints |
US11675319B2 (en) | 2012-11-06 | 2023-06-13 | Rockwell Automation Technologies, Inc. | Empirical modeling with globally enforced general constraints |
US20140129491A1 (en) * | 2012-11-06 | 2014-05-08 | Rockwell Automation Technologies, Inc. | Empirical modeling with globally enforced general constraints |
US9147153B2 (en) * | 2012-11-06 | 2015-09-29 | Rockwell Automation Technologies, Inc. | Empirical modeling with globally enforced general constraints |
CN104345637A (en) * | 2013-07-31 | 2015-02-11 | 罗伯特·博世有限公司 | Method and device for adapting to function module based on data |
US10062354B2 (en) | 2014-10-10 | 2018-08-28 | DimensionalMechanics, Inc. | System and methods for creating virtual environments |
US10163420B2 (en) * | 2014-10-10 | 2018-12-25 | DimensionalMechanics, Inc. | System, apparatus and methods for adaptive data transport and optimization of application execution |
US20160105308A1 (en) * | 2014-10-10 | 2016-04-14 | DimensionalMechanics, Inc. | System, apparatus and methods for adaptive data transport and optimization of application execution |
US10503128B2 (en) | 2015-01-28 | 2019-12-10 | Garrett Transportation I Inc. | Approach and system for handling constraints for measured disturbances with uncertain preview |
US11687688B2 (en) | 2015-02-16 | 2023-06-27 | Garrett Transportation I Inc. | Approach for aftertreatment system modeling and model identification |
US10621291B2 (en) | 2015-02-16 | 2020-04-14 | Garrett Transportation I Inc. | Approach for aftertreatment system modeling and model identification |
US10235479B2 (en) | 2015-05-06 | 2019-03-19 | Garrett Transportation I Inc. | Identification approach for internal combustion engine mean value models |
US10423131B2 (en) | 2015-07-31 | 2019-09-24 | Garrett Transportation I Inc. | Quadratic program solver for MPC using variable ordering |
US11144017B2 (en) | 2015-07-31 | 2021-10-12 | Garrett Transportation I, Inc. | Quadratic program solver for MPC using variable ordering |
US11687047B2 (en) | 2015-07-31 | 2023-06-27 | Garrett Transportation I Inc. | Quadratic program solver for MPC using variable ordering |
US11180024B2 (en) | 2015-08-05 | 2021-11-23 | Garrett Transportation I Inc. | System and approach for dynamic vehicle speed optimization |
US10272779B2 (en) | 2015-08-05 | 2019-04-30 | Garrett Transportation I Inc. | System and approach for dynamic vehicle speed optimization |
CN105139274A (en) * | 2015-08-16 | 2015-12-09 | 东北石油大学 | Power transmission line icing prediction method based on quantum particle swarm and wavelet nerve network |
US10415492B2 (en) | 2016-01-29 | 2019-09-17 | Garrett Transportation I Inc. | Engine system with inferential sensor |
US11506138B2 (en) | 2016-01-29 | 2022-11-22 | Garrett Transportation I Inc. | Engine system with inferential sensor |
US10124750B2 (en) | 2016-04-26 | 2018-11-13 | Honeywell International Inc. | Vehicle security module system |
US10036338B2 (en) | 2016-04-26 | 2018-07-31 | Honeywell International Inc. | Condition-based powertrain control system |
US10309287B2 (en) | 2016-11-29 | 2019-06-04 | Garrett Transportation I Inc. | Inferential sensor |
US11057213B2 (en) | 2017-10-13 | 2021-07-06 | Garrett Transportation I, Inc. | Authentication system for electronic control unit on a bus |
Also Published As
Publication number | Publication date |
---|---|
US6725208B1 (en) | 2004-04-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6725208B1 (en) | Bayesian neural networks for optimization and control | |
Okut | Bayesian regularized neural networks for small n big p data | |
Ahmed et al. | Adaptive fuzzy logic-based framework for software development effort prediction | |
US20030055796A1 (en) | Combinatorial approach for supervised neural network learning | |
US20040093315A1 (en) | Neural network training | |
US7630868B2 (en) | Computer method and apparatus for constraining a non-linear approximator of an empirical process | |
US20170024642A1 (en) | System and method for training neural networks | |
US20030200189A1 (en) | Automatic neural-net model generation and maintenance | |
Carpenter et al. | Building robust simulation-based filters for evolving data sets | |
Eftekhari et al. | Extracting compact fuzzy rules for nonlinear system modeling using subtractive clustering, GA and unscented filter | |
US6985781B2 (en) | Residual activation neural network | |
US6363289B1 (en) | Residual activation neural network | |
Taveeapiradeecharoen et al. | Dynamic model averaging for daily forex prediction: A comparative study | |
US20230385666A1 (en) | Multi-source modeling with legacy data | |
Shemyakin et al. | Online identification of large-scale chaotic system | |
da Graça Marto et al. | Many-objective robust trajectory optimisation under epistemic uncertainty and imprecision | |
Murad et al. | Software Cost Estimation for Mobile Application Development-A Comparative Study of COCOMO Models | |
CN112862004B (en) | Power grid engineering cost control index prediction method based on variational Bayesian deep learning | |
Colò | Anomaly detection for cyber security: time series forecasting and deep learning | |
Polcz et al. | Efficient implementation of Gaussian process–based predictive control by quadratic programming | |
US20040039556A1 (en) | Filter models for dynamic control of complex processes | |
Herzallah et al. | Robust control of nonlinear stochastic systems by modelling conditional distributions of control signals | |
Repický et al. | Automated Selection of Covariance Function for Gaussian process Surrogate Models. | |
KR20190129422A (en) | Method and device for variational interference using neural network | |
Bornn et al. | A pseudo-marginal perspective on the ABC algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: SILICON VALLEY BANK, CALIFORNIA Free format text: SECURITY INTEREST;ASSIGNOR:PAVILION TECHNOLOGIES, INC.;REEL/FRAME:017240/0396 Effective date: 20051102 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
AS | Assignment |
Owner name: PAVILION TECHNOLOGIES, INC., TEXAS Free format text: RELEASE;ASSIGNOR:SILICON VALLEY BANK;REEL/FRAME:020609/0702 Effective date: 20080220 |