US20050036483A1 - Method and system for managing programs for web service system - Google Patents
- Publication number: US20050036483A1
- Authority: US (United States)
- Prior art keywords: case, identification information, information, processing, node
- Prior art date
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/3003—Monitoring arrangements specially adapted to the computing system or computing system component being monitored
- G06F11/3006—Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is distributed, e.g. networked systems, clusters, multiprocessor systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/0703—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
- G06F11/0706—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment
- G06F11/0748—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment in a remote unit communicating with a single-box computer node experiencing an error/fault
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/0703—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
- G06F11/0766—Error or fault reporting or storing
- G06F11/0784—Routing of error reports, e.g. with a specific transmission path or data flow
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/3055—Monitoring arrangements for monitoring the status of the computing system or of the computing system component, e.g. monitoring if the computing system is on, off, available, not available
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/06—Management of faults, events, alarms or notifications
Abstract
A Web service status management service is provided independently of processing nodes related to a Web service case. Information for uniquely identifying the status management service is added to messages transmitted and received between the processing nodes. Communication between each of the processing nodes and the status management service makes it possible for the status management service to record the status condition of a specific Web service case. Then, according to the recorded status condition, a notification for cancellation of the case can be directly transmitted to the processing nodes related to the specific Web service case.
Description
- The present application claims priority from Japanese application JP2003-207003 filed on Aug. 11, 2003, the content of which is hereby incorporated by reference into this application.
- The present invention relates to a service processing technology for managing a plurality of processing nodes that provide services.
- As a technology for multistage Web services that coordinate a plurality of sub Web services provided by processing nodes distributed over a plurality of servers, there is provided the technology described in David A. Chappell et al., "Java Web Services", O'Reilly & Associates, Inc., March 2002, page 6. When the need for canceling a Web service case arises due to a request from a client, the occurrence of an error in a processing node, or the like, and the cancellation event of the Web service case is to be notified to each of the processing nodes, sequential notification of the cancellation event through the transmission path of messages related to the Web service case becomes necessary. Such notification becomes necessary because of the characteristic of the Web service that neither the client nor each of the processing nodes can know the entire set of sub Web services related to the Web service case.
- In the Web service, a flow is not defined in advance, each of the processing nodes can determine its subsequent node, and centralized flow control by business flow servers is not performed. Thus, with the approach of a conventional business flow system, the status condition (progress condition) cannot be tracked, and the nodes to which cancellation of a service case should be notified cannot be known.
- An object of the present invention is therefore to manage a plurality of processing nodes that execute a Web service when the Web service is executed by the processing nodes.
- Another object of the present invention is to notify the processing nodes that execute a Web service of an error or a failure when the error or the failure has occurred in a Web service case.
- In order to achieve the above-mentioned objects, a Web service status management service that can be present independently of processing nodes related to a Web service case is provided. Information for uniquely identifying the status management service is added to messages transmitted and received between the processing nodes. Communication between each of the processing nodes and the status management service makes it possible for the status management service to record the status condition of a specific Web service case. Then, a unit for enabling direct transmission of a cancellation notification of the case to the processing nodes related to the specific Web service case in accordance with the recorded status condition is provided, thereby achieving the above-mentioned objects.
- The status management service is associated with each of the processing nodes by a specific Web service case and information included in messages related to the specific Web service case, transmitted and received between the processing nodes. By this information, the status management service can be uniquely identified. Accordingly, depending on each Web service case, the related processing nodes and the related status management service may differ.
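The association described above can be sketched minimally: each message carries information that uniquely identifies the status management service for its case, and notifications are routed by that information, so different cases may reach different status management services. In the following Python sketch the key names (`status_service`, `case_id`) and the service names are invented for illustration; the patent itself prescribes no concrete data format.

```python
# Hypothetical sketch: `status_service` stands in for the information that
# uniquely identifies the status management service carried in each message.

def route_notification(message, services):
    """Deliver a notification to the status management service named in it."""
    location = message["status_service"]   # uniquely identifies the service
    services[location].append(message["case_id"])
    return location

# Two cases whose messages name different status management services:
services = {"svc-A": [], "svc-B": []}
route_notification({"status_service": "svc-A", "case_id": "case-1"}, services)
route_notification({"status_service": "svc-B", "case_id": "case-2"}, services)
```

Because the routing key travels inside the message itself, no central registry of cases is needed to find the right service.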
- According to the present invention, when executing a service by a plurality of processing nodes, management of the processing nodes that execute the service becomes possible.
- Other objects, features and advantages of the invention will become apparent from the following description of the embodiments of the invention taken in conjunction with the accompanying drawings.
-
FIG. 1 is a diagram showing an entire configuration of the present invention; -
FIG. 2 is an explanatory drawing showing an example of execution of cancellation; -
FIG. 3 is an explanatory drawing showing a processing flow of a status management service; -
FIG. 4 is an explanatory drawing showing a processing flow of each processing node; -
FIG. 5 is an explanatory drawing showing an example of a case status table; -
FIG. 6 is an explanatory drawing showing an example of a message transmitted and received between processing nodes; -
FIG. 7 is an explanatory drawing showing an example of the content of a notification of transmission destination information; -
FIG. 8 is an explanatory drawing showing an example of the content of a notification of cancellation; and -
FIG. 9 is an explanatory drawing showing an example of a plurality of flows of service processing messages and notifications of transmission destination information when a plurality of status management services is present.
- 1. First Embodiment
- An embodiment of the present invention will be described below.
-
FIG. 1 is a diagram showing an entire configuration, for explaining the present invention. Referring to FIG. 1, a status management service 100 functions to manage status conditions of the case of a Web service constituted from a plurality of sub Web services provided by a plurality of processing nodes. A client 110 is a Web service terminal, and each of processing nodes 120 and 130 provides one of the sub Web services. The client 110, the processing nodes for the Web service, and the status management service (or a management node) are connected over a network 140. - The
status management service 100 includes a case status notification receiving and transmitting unit 101, a case database (DB) 102, and a case status processing unit 103. The case status notification receiving and transmitting unit 101 receives and transmits status information of the Web service case and a notification of cancellation of the case from each of the processing nodes. The case database 102 holds case status information. The case status processing unit 103 updates the case DB 102 in accordance with the information received by the case status notification receiving and transmitting unit 101. The case DB 102 stores contents as shown in a case status table 500 in FIG. 5, for example, which will be described hereinafter. - The
processing node 120 includes a node-specific processing unit 121, a message transmitting and receiving unit 122, and a case status notification transmitting and receiving unit 123. The processing node 130 includes a node-specific processing unit 131, a message transmitting and receiving unit 132, and a case status notification transmitting and receiving unit 133. Each of the node-specific processing units 121 and 131 performs processing specific to its node. Each of the message transmitting and receiving units 122 and 132 transmits and receives messages to and from the client 110 and the other node. Each of the case status notification transmitting and receiving units 123 and 133 transmits and receives notifications to and from the status management service 100. The processing node in this embodiment may be a computer, a logical computer, or a logical server that can run a program that processes a Web service, or may be the program or an object for processing the Web service. -
FIG. 2 shows a flow of messages for executing cancellation of the case of a Web service according to a message requesting cancellation of the case from the client. Processing of the status management service may be performed by a start node. This makes it possible for the start node to perform status management of service processing by respective nodes and cancellation management. - Referring to
FIG. 2, the Web service is constituted from a plurality of sub Web services. By a client 110 transmitting a message requesting execution of the Web service to a start node 201, the Web service is executed by the start node 201 and other processing nodes. The client 110 may also serve as the start node 201. - Each of
arrows 211 indicates a flow of a message transmitted and received between the nodes for execution of the Web service, and corresponds to a message 414 in FIG. 4, which will be described hereinafter. Each of symbols 212 indicates a state of having finished node-specific processing and waiting for the reception of a notification of the case completion or the case cancellation. Each of symbols 213 indicates a state of performing the node-specific processing. - A
message 221 is the message requesting cancellation of the Web service case being executed. The start node 201 that has received the message 221 transmits to the status management service 100 a notification 222 to the effect that a cancellation request has been made. - The
status management service 100 that has received the notification 222 transmits the notification of cancellation of the Web service case to the nodes involved in the case. This notification corresponds to a notification 419 in FIG. 4 or a notification 313 in FIG. 3, which will be described hereinafter, and is transmitted in accordance with information on the node states indicated by the symbols 212 and 213, which the status management service 100 records for the nodes involved in the case. -
FIG. 3 shows a processing flow of the status management service 100. First, at step 301, the status management service 100 receives the case registration information 302 from the start node 201 at the start of a Web service case. The case registration information 302 corresponds to the case registration information 404 in FIG. 4, which will be described hereinafter. - Next, at
step 303, the status management service 100 registers the case in the case DB. By registration of the case, a record 510 in FIG. 5, which will be described hereinafter, is created, and information is recorded in a case ID field 501, a deadline (expiration) field 502, and a start node field 503, respectively. Further, a sub-record 511 is created, information is recorded in a node field 504, and "in processing" is recorded in a status field 505 for the nodes involved in the Web service case. Incidentally, "in processing" corresponds to the state indicated by the symbol 213 in FIG. 2. - Next, at
step 304, the status management service 100 is brought to the state where a notification from each processing node is waited for, and at step 305, the status management service 100 receives the notification 306 from a certain processing node. The notification 306 corresponds to the notifications 409, 412, and 416 in FIG. 4, which will be described hereinafter. - Next, at
step 307, the status management service 100 checks the content of the notification. When the notification 306 has been determined to be the notification of completion of node processing, the status management service 100 updates the case DB at step 308. For updating the case DB, the status management service 100 extracts the record corresponding to the information in the case ID field 501 and the node field 504 in FIG. 5, which will be described hereinafter, according to the case ID and the name of the node included in the notification 306, and changes the information in the status field 505 of the node from "in processing" to "waiting for completion". "Waiting for completion" corresponds to the state indicated by the symbol 212 in FIG. 2. - Next, at
step 309, the status management service 100 extracts the case corresponding to the information in the case ID field 501 in FIG. 5, which will be described hereinafter, according to the case ID included in the notification 306. Then, the status management service 100 checks the status field 505 of the nodes involved in the case. If a node for which "in processing" is recorded is still present, the operation returns to step 304. If "waiting for completion" is recorded for all the nodes, the operation proceeds to step 310. Then, at step 310, the status management service 100 transmits a case completion notification 311 to each of the nodes involved in the case, thereby completing the processing related to the case. The case completion notification 311 corresponds to a notification 419 in FIG. 4, which will be described hereinafter, by which the operation proceeds from step 420 to step 421. - On the other hand, when the content of the notification has been determined to be a cancellation request at
step 307, the status management service 100 extracts the case corresponding to the information in the case ID field 501 in FIG. 5, which will be described hereinafter, according to the case ID included in the notification 306, and then transmits a cancellation notification 313 to the nodes involved in the case at step 312, thereby completing the processing related to the case. The cancellation notification 313 corresponds to the notification 419 in FIG. 4, which will be described hereinafter, by which the operation proceeds from step 420 to step 422. - Next, when the deadline for execution of the case recorded in the
deadline field 502 has been reached while a notification from each processing node is waited for at step 304, the operation proceeds to step 312, and the status management service 100 transmits the cancellation notification. When the content of the notification has been determined to be the information of a transmission destination at step 307, the status management service 100 extracts the case corresponding to the information in the case ID field 501 in FIG. 5, which will be described hereinafter, according to the case ID included in the notification 306. Then, a record such as the sub-record 512 or the sub-record 513 is added, and the node of the transmission destination included in the notification 306 is recorded in the node field 504. Then, "in processing" is recorded in the status field 505 of the node at step 314. "In processing" corresponds to the state indicated by the symbol 213 in FIG. 2. - Digital signature or encryption may be performed on the
notifications transmitted and received between each of the processing nodes and the status management service 100 described above. -
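The flow of FIG. 3 can be sketched in Python as follows. The notification kinds, key names, and the in-memory case DB below are illustrative assumptions; the patent specifies only the logical steps (registration at step 303, the dispatch at step 307, and the updates at steps 308-314).

```python
IN_PROCESSING = "in processing"        # state of symbol 213 in FIG. 2
WAITING = "waiting for completion"     # state of symbol 212 in FIG. 2

def register_case(case_db, case_id, deadline, start_node):
    """Step 303: create a record (cf. record 510, fields 501-505)."""
    case_db[case_id] = {"deadline": deadline, "start_node": start_node,
                        "nodes": {start_node: IN_PROCESSING}}

def handle_notification(case_db, n, send):
    """Steps 307-314: dispatch one notification 306; send() delivers notification 419."""
    case = case_db[n["case_id"]]
    if n["kind"] == "completed":           # notification 416 -> step 308
        case["nodes"][n["node"]] = WAITING
        if all(s == WAITING for s in case["nodes"].values()):
            for node in case["nodes"]:     # step 310: completion notification 311
                send(node, "complete")
    elif n["kind"] == "cancel":            # notification 409 -> step 312
        for node in case["nodes"]:         # cancellation notification 313
            send(node, "cancel")
    elif n["kind"] == "destination":       # notification 412 -> step 314
        case["nodes"][n["destination"]] = IN_PROCESSING
```

A case completes only once every involved node has reported completion; a single cancellation request fans out to all nodes the service has recorded for that case.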
FIG. 4 shows a processing flow of each of the processing nodes. When the processing node is the start node 201 in FIG. 2, the processing node first receives a message 402 requesting execution of a Web service case from the client 110 at step 401. - Next, at
step 403, the processing node transmits the case registration information 404 at the start of the Web service case to the status management service 100, and the operation proceeds to step 405. The case registration information 404 corresponds to the case registration information 302 in FIG. 3. - When the processing node is an intermediate node or an end node like the
nodes 202, 203, and 204 in FIG. 2, the processing node first receives the message 414 requesting execution of a sub Web service from the preceding node. Then, the operation proceeds to step 405. The message 414 corresponds to each of the arrows 211 in FIG. 2. - At
step 405, the processing node performs arbitrary processing specific to the node. This processing corresponds to the processing performed by the node-specific processing unit 121 or 131 in FIG. 1, and is executed by the function of the program or the object set in the node in advance. Which processing is to be performed is determined by analyzing an input message. - Next, at
step 407, the processing node determines whether the processing specific to the node in step 405 was properly performed or an error occurred in the processing. When it has been determined that an error occurred in the processing, the processing node transmits the cancellation request 409 to the status management service 100 at step 408, thereby completing the processing by the node related to the case. - On the other hand, when the processing node has determined at
step 407 that the processing was properly performed, the processing node determines at step 410 whether a subsequent node is present for the processing of the Web service case. When a subsequent node is present, the operation proceeds to step 411. When no subsequent node is present, the operation proceeds to step 415. - At
step 411, the processing node transmits the transmission destination information 412 to the status management service 100. The transmission destination information 412 corresponds to the notification 306 in FIG. 3, by which the operation proceeds from step 307 to step 314. Further, at step 413, the processing node transmits the message 414 requesting execution of a sub Web service to the subsequent node. Incidentally, a plurality of subsequent nodes may be present for execution of the sub Web service; in this case, the processing node sequentially transmits the message 414 to the subsequent nodes. - Next, at
step 415, the processing node transmits the notification 416 indicating completion of the processing by the node to the status management service 100. The notification 416 corresponds to the notification 306 in FIG. 3, by which the operation proceeds from step 307 to step 308. - Next, at
step 417, the processing node is brought to the state where a notification from the status management service 100 is waited for. Then, at step 418, the processing node receives the notification 419. The notification 419 corresponds to the notifications 311 and 313 in FIG. 3. - Next, at
step 420, the content of the notification 419 is checked. When the notification 419 has been determined to be the notification of completion, arbitrary processing for completion specific to the processing node, such as a database commit, is performed at step 421, thereby completing the processing by the node related to the case. - On the other hand, when the content of the notification has been determined to be the cancellation notification at
step 420, arbitrary processing for cancellation specific to the processing node, such as a database rollback, is performed at step 422, thereby completing the processing by the node related to the case. - Digital signature or encryption may be performed on the
notifications transmitted and received between each of the processing nodes and the status management service 100 described above. -
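A minimal Python sketch of the node-side flow (steps 405-416) might look as follows; `run_service`, `notify`, and `forward` are hypothetical callables standing in for the node-specific processing unit and the transmitting and receiving units, and the notification keys are invented for illustration.

```python
def process_case(case_id, run_service, next_nodes, notify, forward):
    """One node's handling of a case; returns the result, or None on error."""
    try:
        result = run_service()               # step 405: node-specific processing
    except Exception as exc:                 # step 407: an error occurred
        # step 408: cancellation request 409 to the status management service
        notify({"case_id": case_id, "kind": "cancel", "reason": str(exc)})
        return None
    for node in next_nodes:                  # steps 410-413
        notify({"case_id": case_id, "kind": "destination", "destination": node})
        forward(node, {"case_id": case_id, "request": result})  # message 414
    # step 415: notification 416 indicating completion of this node's processing
    notify({"case_id": case_id, "kind": "completed"})
    return result
```

After this point the node would wait for notification 419 and perform the commit (step 421) or rollback (step 422); that waiting state is omitted from the sketch.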
FIG. 5 shows an example of a case status table. The case status table is stored in the case DB 102. - Referring to
FIG. 5, the case status table 500 includes the case ID field 501 for describing a case ID for uniquely identifying a case, the deadline field 502 for describing a deadline for processing of the case, the start node field 503 for describing a start node for the case, a node field 504 for describing a list of nodes related to the case, and a status field 505 for describing processing statuses of the respective nodes related to the case. In the case status table 500, information in each of the record 510 and a record 520 corresponds to information on a single case. The records 510 and 520 are created at step 303 in FIG. 3. - Information in the sub-records 511, 512, and 513 and a sub-record 514 within the
record 510 corresponds to information on the nodes related to the case in the record 510, and is created at step 314 in FIG. 3. The statuses of these sub-records are updated at step 308. - The
record 510, for example, indicates the state shown in FIG. 2. The sub-record 511 corresponds to the start node 201 and indicates that the node is in the state of "waiting for completion", indicated by the symbol 212. The sub-record 512 corresponds to the end node 202 and indicates that the node is in the state of "in processing", indicated by the symbol 213. The sub-record 513 corresponds to the intermediate node 203 and indicates that the node is in the state of "waiting for completion", indicated by the symbol 212. The sub-record 514 corresponds to the intermediate node 204 and indicates that the node is in the state of "in processing", indicated by the symbol 213. -
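The state that record 510 describes can be written out as a literal; the key names and the concrete ID values below are assumptions for illustration, while the node statuses mirror FIG. 2.

```python
# Record 510 of case status table 500 (FIG. 5), as a Python literal.
record_510 = {
    "case_id": "case-510",                     # field 501 (value assumed)
    "deadline": "2003-08-31",                  # field 502 (value assumed)
    "start_node": "node-201",                  # field 503
    "nodes": {                                 # fields 504/505: sub-records 511-514
        "node-201": "waiting for completion",  # start node, symbol 212
        "node-202": "in processing",           # end node, symbol 213
        "node-203": "waiting for completion",  # intermediate node, symbol 212
        "node-204": "in processing",           # intermediate node, symbol 213
    },
}

# Step 309 amounts to asking whether any node is still "in processing":
still_processing = [n for n, s in record_510["nodes"].items()
                    if s == "in processing"]
```

Here the case is not yet complete, since two nodes are still processing.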
FIG. 6 shows an example of a message transmitted and received between the processing nodes, corresponding to the messages represented by the arrows 211 in FIG. 2 and the message 414 in FIG. 4. - Referring to
FIG. 6, a message 600 is constituted from a message header 610 and a message body 630. The message header 610 has an element 620 that includes information for controlling a series of messages related to a Web service case. The element 620 includes the status management service location information 621 used for the Web service case and a case ID 622 for uniquely identifying the Web service case. The case ID 622 corresponds to the case ID recorded in the case ID field 501 in FIG. 5. In addition to the status management service location information 621 and the case ID 622, the element 620 may include information on the deadline and the start node, like the information indicated by element 623. The message 600 may further include, within the message header 610 and the message body 630, other information specific to the Web service and the sub Web services related to the Web service. By setting positional information of the start node in the status management service location information 621, execution of the status management service by the start node becomes possible. -
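One possible serialization of message 600 is sketched below; the tag names (`Message`, `CaseControl`, and so on) and the example location URL are invented, since the patent fixes only the logical structure of the header 610, the control element 620, and the body 630.

```python
import xml.etree.ElementTree as ET

def build_message(service_location, case_id, body_text):
    """Assemble a message 600 with control element 620 in its header 610."""
    msg = ET.Element("Message")                    # message 600
    header = ET.SubElement(msg, "Header")          # message header 610
    ctrl = ET.SubElement(header, "CaseControl")    # element 620
    ET.SubElement(ctrl, "StatusServiceLocation").text = service_location  # 621
    ET.SubElement(ctrl, "CaseID").text = case_id   # 622
    ET.SubElement(msg, "Body").text = body_text    # message body 630
    return ET.tostring(msg, encoding="unicode")

xml = build_message("http://svc.example/status", "case-001",
                    "execute-sub-service")
```

Any node receiving such a message can read element 620 from the header to learn both which case it belongs to and where to send its status notifications.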
FIG. 7 shows an example of a notification indicating information of a transmission destination, transmitted from a processing node to the status management service 100. The transmission destination indicates the node subsequent to the processing node. This notification corresponds to the notification 412 in FIG. 4. Referring to FIG. 7, a notification 700 has an element 710 that includes information for controlling a series of messages related to a Web service. The element 710 includes at least a case ID 711 for uniquely identifying the Web service case and transmission destination information 720. The transmission destination information 720 further includes a message transmission source 721 and a message transmission destination 722. - The
status management service 100 that has received the notification 700 updates the case DB 102 according to the content of the transmission destination information 720 at step 314 in FIG. 3. -
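The notification 700 could take a shape like the following; the key names are assumed, and only the fields themselves (case ID 711, transmission destination information 720 with source 721 and destination 722) come from FIG. 7.

```python
def make_destination_notification(case_id, source_node, destination_node):
    """Build a notification 700 announcing the next node for a case."""
    return {
        "case_id": case_id,                   # case ID 711
        "destination_info": {                 # transmission destination info 720
            "source": source_node,            # message transmission source 721
            "destination": destination_node,  # message transmission dest. 722
        },
    }

n = make_destination_notification("case-001", "node-201", "node-203")
```

On receipt, the status management service would add the named destination node to the case's node list with status "in processing" (step 314).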
FIG. 8 shows an example of a cancellation notification transmitted and received between a processing node and the status management service 100, corresponding to the notification 313 in FIG. 3 or the notification 409 in FIG. 4. - Referring to
FIG. 8, a notification 800 has an element 810 that includes information for controlling a series of messages related to a Web service case. The element 810 includes at least a case ID 811 for uniquely identifying the Web service case and cancellation information 820. Further, the cancellation information 820 may include at least information 821 on a node that has requested cancellation and a cancellation reason 822. - The
status management service 100 that has received the notification 800 transmits the cancellation notification 313 to the respective nodes related to the Web service case, registered in the case DB 102, at step 312 in FIG. 3.
- 2. Second Embodiment
- Another embodiment of the present invention will be described below.
-
FIG. 9 illustrates an example showing a plurality of flows of service processing messages and notifications of transmission destination information when a plurality of status management services is present on a network. Referring to FIG. 9, flows 902 and 912 of service processing messages occur, each passing through a plurality of processing nodes according to messages requesting service processing transmitted from the respective clients. - The service processing message flow 902 passes through a
start node 921 and subsequent nodes, and notifications of transmission destination information 904 are transmitted to a status management service 903 in accordance with information for identifying the status management service described in the service processing messages included in the flow 902 during the processing at these nodes. Each of the notifications of transmission destination information 904 corresponds to the notification 412 in FIG. 4. The status conditions of the case related to the service processing message flow 902 are recorded in the status management service 903. Then, the processing shown in the embodiment described before can be performed on the case. - Likewise, the service processing message flow 912 passes through the
start node 921 and subsequent nodes, and notifications of transmission destination information 914 are transmitted to a status management service 913 in accordance with information for identifying the status management service described in the service processing messages included in the flow 912 during the processing at these nodes. Each of the notifications of transmission destination information 914 corresponds to the notification 412 in FIG. 4. The status conditions of the case related to the service processing message flow 912 are recorded in the status management service 913. Then, the processing shown in the embodiment described before can be performed on the case. - Though the service processing message flows 902 and 912 have the
common start node 921, the flows may have different start nodes. The clients and the status management services may likewise differ for each flow. - It should be further understood by those skilled in the art that although the foregoing description has been made on embodiments of the invention, the invention is not limited thereto and various changes and modifications may be made without departing from the spirit of the invention and the scope of the appended claims.
Claims (10)
1. A service processing method in a service processing system including a first node for performing processing in accordance with a received message, a second node for performing processing in accordance with the message received from the first node, a third node for performing processing in accordance with the message received from the second node, and a management node, the method comprising the steps of:
in the first node, processing a predetermined service based on a received processing request upon reception of the message including transmission source identification information, identification information on a case, and the processing request, transmitting a message including the received transmission source identification information and the received case identification information to the management node, and transmitting a message including the transmission source information and completion information indicating an error in processing of the predetermined service to the management node when the error occurs;
in the second node, processing a predetermined service in accordance with a received processing request upon reception of the message including the transmission source identification information, the case identification information, and a processing request, transmitting a message including the received transmission source identification information and the received case identification information to the management node, and transmitting a message including the transmission source information and the completion information indicating an error in processing of the predetermined service to the management node when the error occurs; and
in the management node, storing the transmission source identification information in a case database when the message received therein includes the transmission source information and the case identification information, and transmitting a message including cancellation information to the nodes corresponding to the stored transmission source information in accordance with the stored transmission source information when the received message includes the case identification information and the completion information indicating the error.
2. The service processing method according to claim 1, wherein the management node is the first node.
3. A service processing method in a service processing system including a first node for performing processing in accordance with a received message, a second node for performing processing in accordance with the message received from the first node, a third node for performing processing in accordance with the message received from the second node, and a management node, the method comprising the steps of:
in the first node, transmitting to the management node a message including transmission source identification information and identification information on a case, upon reception of the message including the transmission source identification information, the case identification information, and a processing request, transmitting to the second node the message including the transmission source identification information, the case identification information, transmission destination identification information, and a processing request, and transmitting to the management node a message including the case identification information, the transmission source information, and completion information indicating normal completion after processing of a predetermined service based on the received processing request is completed normally;
in the second node, transmitting to the management node a message including the transmission source identification information and the case identification information, upon reception of the message including the transmission source identification information, the case identification information, and a processing request, transmitting to the management node a message including the transmission source identification information, the case identification information, and completion information indicating normal completion after processing of a predetermined service in accordance with the received processing request is completed normally; and
in the management node, storing the case identification information, the transmission source identification information, and status information indicating that processing is being performed when the received message includes the transmission source identification information and the case identification information, changing and storing the status information corresponding to the case identification information and the transmission source identification information to status information indicating that completion of the case is waited for, when the received message includes the transmission source identification information, the transmission destination identification information, and the case identification information, and deleting the transmission source identification information corresponding to the case identification information and the status information corresponding to the transmission source identification information when the received message includes the transmission source identification information and the completion information indicating the normal completion.
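The status transitions claim 3 assigns to the management node amount to a small state machine keyed by case and source node. The sketch below is one possible reading; the status constants and method names are assumptions, not the patent's terminology.

```python
# Sketch of the management-node status table of claim 3. Illustrative only.
PROCESSING = "processing"            # "processing is being performed"
WAITING = "waiting-for-completion"   # "completion of the case is waited for"

class CaseTable:
    def __init__(self):
        self.status = {}  # (case_id, source_id) -> status string

    def on_report(self, case_id, source_id):
        # message carried source id + case id: record the case as in progress
        self.status[(case_id, source_id)] = PROCESSING

    def on_forward(self, case_id, source_id, dest_id):
        # message also carried a destination id: the sender now waits on the case
        self.status[(case_id, source_id)] = WAITING

    def on_normal_completion(self, case_id, source_id):
        # normal completion: delete the stored source id and its status
        self.status.pop((case_id, source_id), None)
```

Keying the table by the (case, source) pair lets the manager track several nodes participating in the same case independently.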
4. The service processing method according to claim 3 , wherein the first node processes the predetermined service in accordance with the received processing request upon reception of the message including the transmission source identification information, the case identification information, and the processing request, transmits to the management node the message including the transmission source identification information and the case identification information, and transmits to the management node a message including the transmission source identification information and completion information indicating an error in processing of the predetermined service when the error occurs;
the second node processes the predetermined service in accordance with the received processing request upon reception of the message including the transmission source identification information, the case identification information, and the processing request, transmits to the management node the message including the received transmission source identification information and the received case identification information, and transmits to the management node a message including the transmission source identification information and completion information indicating an error in processing of the predetermined service when the error occurs; and
the management node stores the transmission source identification information in a case database when the received message includes the transmission source identification information and the case identification information, and transmits to the nodes corresponding to the stored transmission source identification information a message including cancellation information in accordance with the stored transmission source identification information when the received message includes the case identification information and the completion information indicating the error.
5. The service processing method according to claim 3 , wherein the management node is the first node.
6. A service processing method in a service processing system including a plurality of nodes and a management node, wherein each of the nodes analyzes a message in response to input of the message, transmits to the management node information on a case and identification information on said each of the nodes, included in the message, and transmits to the management node a request for cancellation of the case when an error occurs in processing of a predetermined service by said each of the nodes; and
the management node stores the case information and the identification information on said each of the nodes in association with each other, in response to input of the case information and the identification information on said each of the nodes, analyzes the request for the cancellation in response to input of the request for the cancellation, and notifies the cancellation and the case information for which the cancellation has occurred to the nodes corresponding to the case information.
7. The service processing method according to claim 6 , wherein the management node is a start node.
8. A service processing system comprising a plurality of nodes and a management node, wherein each of the nodes comprises:
means for analyzing a message in response to input of the message, transmitting to the management node information on a case and identification information on said each of the nodes, included in the message, and transmitting to the management node a request for cancellation of the case when an error occurs in processing of a predetermined service by said each of the nodes; and
the management node comprises:
means for storing the case information and the identification information on said each of the nodes in association with each other, in response to input of the case information and the identification information on said each of the nodes, analyzing the request for the cancellation in response to input of the request for the cancellation, and notifying the cancellation and the case information for which the cancellation has occurred to the nodes corresponding to the case information.
9. A service processing program for a service processing system including a plurality of nodes and a management node, comprising:
a module, executed in each of the nodes, for analyzing a message in response to input of the message, transmitting to the management node information on a case and identification information on said each of the nodes, included in the message, and transmitting to the management node a request for cancellation of the case when an error occurs in processing of a predetermined service by said each of the nodes; and
a module, executed in the management node, for storing the case information and the identification information on said each of the nodes in association with each other, in response to input of the case information and the identification information on said each of the nodes, analyzing the request for the cancellation in response to input of the request for the cancellation, and notifying the cancellation and the case information for which the cancellation has occurred to the nodes corresponding to the case information.
10. A service processing method using a plurality of nodes and a management node, wherein the management node stores information on a case and identification information on each of the nodes in association with each other, in response to input of the case information and the identification information on said each of the nodes, analyzes a request for cancellation of the case in response to input of the request for the cancellation, and notifies the cancellation and the case information for which the cancellation has occurred to the nodes corresponding to the case information.
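Claims 6 through 10 generalize the same pattern: every node reports the case and its own identification information to the management node, requests cancellation on error, and the management node notifies all nodes associated with that case. The end-to-end sketch below illustrates this pattern under assumed names (`Manager`, `Node`, `handle`); it is not the patented implementation.

```python
# End-to-end sketch of the generalized cancellation pattern of claims 6-10.
# All class and method names are illustrative assumptions.
class Manager:
    def __init__(self):
        self.cases = {}      # case info -> [node ids], stored in association
        self.notified = []   # (node_id, case_id) cancellation notifications

    def register(self, case_id, node_id):
        self.cases.setdefault(case_id, []).append(node_id)

    def request_cancellation(self, case_id):
        # notify the cancellation, and the case it occurred for, to every
        # node recorded as corresponding to that case
        for node_id in self.cases.get(case_id, []):
            self.notified.append((node_id, case_id))

class Node:
    def __init__(self, node_id, manager):
        self.node_id = node_id
        self.manager = manager

    def handle(self, case_id, fail=False):
        # analyze the message, then report case info + own id to the manager
        self.manager.register(case_id, self.node_id)
        if fail:
            # error while processing the service: request cancellation
            self.manager.request_cancellation(case_id)
```

A failing node thus triggers cancellation notices to itself and to every earlier participant in the same case, which is the compensation behavior the claims describe.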
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003207003 | 2003-08-11 | ||
JP2003-207003 | 2003-08-11 |
Publications (1)
Publication Number | Publication Date |
---|---|
US20050036483A1 (en) | 2005-02-17 |
Family
ID=34131398
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US 10/892,182 (published as US20050036483A1, abandoned) | 2003-08-11 | 2004-07-16 | Method and system for managing programs for web service system |
Country Status (1)
Country | Link |
---|---|
US (1) | US20050036483A1 (en) |
- 2004-07-16: US application US 10/892,182 filed; published as US20050036483A1; status: Abandoned
Patent Citations (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6122664A (en) * | 1996-06-27 | 2000-09-19 | Bull S.A. | Process for monitoring a plurality of object types of a plurality of nodes from a management node in a data processing system by distributing configured agents |
US5774660A (en) * | 1996-08-05 | 1998-06-30 | Resonate, Inc. | World-wide-web server with delayed resource-binding for resource-based load balancing on a distributed resource multi-node network |
US6134589A (en) * | 1997-06-16 | 2000-10-17 | Telefonaktiebolaget Lm Ericsson | Dynamic quality control network routing |
US6070190A (en) * | 1998-05-11 | 2000-05-30 | International Business Machines Corporation | Client-based application availability and response monitoring and reporting for distributed computing environments |
US6574197B1 (en) * | 1998-07-03 | 2003-06-03 | Mitsubishi Denki Kabushiki Kaisha | Network monitoring device |
US6868442B1 (en) * | 1998-07-29 | 2005-03-15 | Unisys Corporation | Methods and apparatus for processing administrative requests of a distributed network application executing in a clustered computing environment |
US20020198996A1 (en) * | 2000-03-16 | 2002-12-26 | Padmanabhan Sreenivasan | Flexible failover policies in high availability computing systems |
US6609213B1 (en) * | 2000-08-10 | 2003-08-19 | Dell Products, L.P. | Cluster-based system and method of recovery from server failures |
US7181523B2 (en) * | 2000-10-26 | 2007-02-20 | Intel Corporation | Method and apparatus for managing a plurality of servers in a content delivery network |
US7047287B2 (en) * | 2000-10-26 | 2006-05-16 | Intel Corporation | Method and apparatus for automatically adapting a node in a network |
US20020065918A1 (en) * | 2000-11-30 | 2002-05-30 | Vijnan Shastri | Method and apparatus for efficient and accountable distribution of streaming media content to multiple destination servers in a data packet network (DPN) |
US7296268B2 (en) * | 2000-12-18 | 2007-11-13 | Microsoft Corporation | Dynamic monitor and controller of availability of a load-balancing cluster |
US6952766B2 (en) * | 2001-03-15 | 2005-10-04 | International Business Machines Corporation | Automated node restart in clustered computer system |
US6880100B2 (en) * | 2001-07-18 | 2005-04-12 | Smartmatic Corp. | Peer-to-peer fault detection |
US20030018927A1 (en) * | 2001-07-23 | 2003-01-23 | Gadir Omar M.A. | High-availability cluster virtual server system |
US20030177224A1 (en) * | 2002-03-15 | 2003-09-18 | Nguyen Minh Q. | Clustered/fail-over remote hardware management system |
US7080378B1 (en) * | 2002-05-17 | 2006-07-18 | Storage Technology Corporation | Workload balancing using dynamically allocated virtual servers |
US7206836B2 (en) * | 2002-09-23 | 2007-04-17 | Sun Microsystems, Inc. | System and method for reforming a distributed data system cluster after temporary node failures or restarts |
US7284147B2 (en) * | 2003-08-27 | 2007-10-16 | International Business Machines Corporation | Reliable fault resolution in a cluster |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110085443A1 (en) * | 2008-06-03 | 2011-04-14 | Hitachi, Ltd. | Packet Analysis Apparatus |
US20160127254A1 (en) * | 2014-10-30 | 2016-05-05 | Equinix, Inc. | Orchestration engine for real-time configuration and management of interconnections within a cloud-based services exchange |
US10129078B2 (en) * | 2014-10-30 | 2018-11-13 | Equinix, Inc. | Orchestration engine for real-time configuration and management of interconnections within a cloud-based services exchange |
US10230571B2 (en) | 2014-10-30 | 2019-03-12 | Equinix, Inc. | Microservice-based application development framework |
US10764126B2 (en) | 2014-10-30 | 2020-09-01 | Equinix, Inc. | Interconnection platform for real-time configuration and management of a cloud-based services exchange |
US11218363B2 (en) | 2014-10-30 | 2022-01-04 | Equinix, Inc. | Interconnection platform for real-time configuration and management of a cloud-based services exchange |
US11936518B2 (en) | 2014-10-30 | 2024-03-19 | Equinix, Inc. | Interconnection platform for real-time configuration and management of a cloud-based services exchange |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10938887B2 (en) | System and method for event driven publish-subscribe communications | |
US7530078B2 (en) | Certified message delivery and queuing in multipoint publish/subscribe communications | |
US8788565B2 (en) | Dynamic and distributed queueing and processing system | |
RU2363040C2 (en) | Message delivery between two terminal points with configurable warranties and features | |
US8418191B2 (en) | Application flow control apparatus | |
US7389350B2 (en) | Method, apparatus and computer program product for integrating heterogeneous systems | |
EP2335153B1 (en) | Queue manager and method of managing queues in an asynchronous messaging system | |
US6934247B2 (en) | Recovery following process or system failure | |
US20030135556A1 (en) | Selection of communication strategies for message brokers or publish/subscribe communications | |
US20120239620A1 (en) | Method and system for synchronization mechanism on multi-server reservation system | |
CN106506490B (en) | A kind of distributed computing control method and distributed computing system | |
US20100058355A1 (en) | Firewall data transport broker | |
US8458725B2 (en) | Computer implemented method for removing an event registration within an event notification infrastructure | |
JP4356018B2 (en) | Asynchronous messaging over storage area networks | |
KR20090001410A (en) | System and method for device management security of trap management object | |
US20020023088A1 (en) | Information routing | |
JP4259427B2 (en) | Service processing system, processing method therefor, and processing program therefor | |
KR101301447B1 (en) | Independent message stores and message transport agents | |
US20050036483A1 (en) | Method and system for managing programs for web service system | |
CN116542623A (en) | Business constraint relation management and control method and business relation management engine | |
US7461068B2 (en) | Method for returning a data item to a requestor | |
US7650410B2 (en) | Method and system for managing programs for Web service system | |
US20090313326A1 (en) | Device management using event | |
KR20040105588A (en) | Method with management of an opaque user identifier for checking complete delivery of a service using a set of servers | |
JP2003140987A (en) | System, method and program for supporting security audit |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |