CA1241765A - Distributed information backup system - Google Patents

Distributed information backup system

Info

Publication number
CA1241765A
CA1241765A
Authority
CA
Canada
Prior art keywords
data
data base
predetermined
base
accessing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired
Application number
CA000497315A
Other languages
French (fr)
Inventor
Raman Lakshmanan
Catherine A. Blackwell
Mahadevan Subramanian
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Iconectiv LLC
Original Assignee
Bell Communications Research Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bell Communications Research Inc filed Critical Bell Communications Research Inc
Application granted granted Critical
Publication of CA1241765A publication Critical patent/CA1241765A/en
Expired legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2097Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements maintaining the standby controller/processing unit updated
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2094Redundant storage or storage space

Abstract

Abstract of the Disclosure

A distributed information backup system is disclosed. The inventive backup system includes a first computer which accesses data from a central data base and periodically distributes the data to a predetermined multiplicity of other computers upon request therefrom.
Each of the other computers receives only a portion of the distributed data and thereupon updates a local data base with the data each receives. When the central data base is inaccessible, a data base user accesses one of the local data bases through one of the multiplicity of computers to obtain the data which it would ordinarily obtain from the central data base to perform his tasks.

Description

--" 12417~i5 Background of_the Invention The presen~ invention pertains to a distributed information backup system.
An enormous collection of data is required to carry out most administrative tasks in a technologically complex environment. In practice, such data originate in a large number of geographically dispersed locations. As such, many data base applications systems achieve advantages by storing the collection of data in a central data base. Through centralized storage, the efforts required to gather and administer the data, i.e. to update the data base, are greatly simplified and reduced. For these reasons, "main-frame" computer systems which store and provide access to enormous central data bases have been developed.
The data stored in such central data bases are often accessed by users who are located at many widely dispersed geographic locations. Access to the data is often critical for specific jobs being performed in the "field", i.e. many jobs simply cannot be carried out if the data in the central data base is not accessible to the geographically dispersed users. The central data base may be inaccessible for a variety of reasons, such as, hardware maintenance, data base updating, equipment failure and/or a failure in data communications between a user and the central data base.
To ensure data access, most data base systems provide a backup capability. In one typical system in the art, this backup is provided by duplicating the central data base itself. Alternatively, in other well-known systems in the art, backup is provided by duplicating the computer and/or communications facilities or by combinations of these two methods. Unfortunately, providing backup capabilities is expensive because of the extra hardware and software required for its implementation.
In other systems in the art, a central data base is accessed by users through one or more "front-end" processors which serve as a partial backup capability. These "front-end" processors contain only a small portion of the data that resides in the central data base. Thus, even if a user could access the "front-end" processors when the central data base was inaccessible, the data accessible through the "front-end" processors alone would be minimal.
This implementation is expensive and it is still vulnerable to failure in data communications.
Thus, a need exists for a reliable, inexpensive data base backup system that provides data accessibility when the centralized data base is inaccessible and when the communication link between users and the centralized data base is not operating properly.
Summary of the Invention

Apparatus fabricated in accordance with the present invention advantageously provides a reliable, inexpensive distributed information backup system.
The inventive backup system includes a first computer which accesses data from a central data base and periodically distributes the data to a predetermined multiplicity of other computers upon request therefrom.
Each of the other computers receives only a portion of the distributed data and thereupon updates a local data base with the data each receives -- update includes such tasks as adding new data to the local data base and/or deleting or altering existing data in the local data base. When the central data base is inaccessible, a data base user accesses one of the local data bases through one of the multiplicity of computers to obtain the data which it would ordinarily obtain from the central data base to perform his tasks.



In one embodiment of the present invention, the data distributed and incorporated into the local data bases comprises a subset of all the data in the central data base, this subset being only enough data to provide users with the capability of performing their tasks for the short time during which the central data base is inaccessible to users.
In a second embodiment of the present invention, when updates to the data in the central data base occur, they cause the first computer to access only predetermined portions of the data from the central data base for distribution, the predetermined portions of data being specified by the updates to the central data base. In such an embodiment, the data stored in the local data bases comprises a subset of all the data in the central data base.
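The distribution cycle just summarized can be illustrated in code. The following Python sketch is editorial and illustrative only, not part of the patent disclosure; the class and method names (CentralSite, LocalSite, on_update, distribute) are hypothetical. It shows a first computer extracting only the portions of the central data base affected by an update, grouping them by destination, and a receiving computer folding its portion into a local data base on request.

```python
# Illustrative sketch only -- not part of the patent text. All names are hypothetical.
# A central site extracts just the records touched by an update, groups them by
# destination site, and each destination folds its group into its local data base.

class CentralSite:
    def __init__(self, central_db):
        self.central_db = central_db          # {record_id: record}
        self.pending = {}                     # {site_id: {record_id: record}}

    def on_update(self, record_id, record, site_id):
        """An update to the central data base triggers extraction of only the
        predetermined (affected) portion for later distribution."""
        self.central_db[record_id] = record
        self.pending.setdefault(site_id, {})[record_id] = record

    def distribute(self, site_id):
        """Hand the accumulated subset for one site to that site on request."""
        return self.pending.pop(site_id, {})


class LocalSite:
    def __init__(self, site_id):
        self.site_id = site_id
        self.local_db = {}                    # subset of the central data base

    def request_and_update(self, central):
        """Periodic request: receive only this site's portion and update locally."""
        for record_id, record in central.distribute(self.site_id).items():
            self.local_db[record_id] = record


# Minimal usage example
central = CentralSite(central_db={})
ssc22 = LocalSite("SSC22")
central.on_update("ticket-1", {"circuit": "A", "status": "open"}, "SSC22")
ssc22.request_and_update(central)
print(ssc22.local_db)   # only SSC22's portion, not the whole central data base
```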
Brief Description of the Drawing

A complete understanding of the present invention may be gained by considering the following detailed description in conjunction with the accompanying drawing, in which:
FIG. 1 shows, in pictorial form, a centralized data base system without backup;
FIGS. 2a and 2b show, in pictorial form, a central data base system having a backup capability known to the art; and FIG. 3 shows, in pictorial form, a central data base system with a distributed data base backup system embodying the principles of the present invention.
To facilitate reader understanding, identical reference numerals are used to designate elements common to the figures.
Detailed Description

The present invention is best understood in the context of a specific application which utilizes an information backup system and will therefore be discussed in terms of the Circuit Installation and Maintenance Assistance Package (CIMAP).
CIMAP is a central data base system that mechanizes the overall administration, i.e. coordination and tracking, of installation and maintenance of message and special service circuits in a telephone network. In addition, it provides on-line information to assist in resolving circuit troubles. The Special Service Center (SSC) module in CIMAP provides on-line information, on request by users, to track trouble reports on special service circuits in various special service centers, i.e. locations which serve as administration and testing centers to coordinate activities concerning special service circuits in a geographical area. A telephone company requires twenty-four hour availability of the central data base in order to properly maintain special service circuits for two reasons: 1) many workers are assigned to an SSC for testing and repairing special service circuits; if the central data base is unavailable, then these workers are idle and an enormous expense is incurred; and 2) if a trouble exists on a special service circuit which renders the circuit unusable for a substantial period of time, for example an hour, the customer receives a rebate on his bill; if the central data base is unavailable, the circuit cannot be repaired and substantial revenues are lost.
Consequently, in CIMAP, the main objective of a backup system is to guarantee accessibility of the information needed to administer the repair of special service circuits by personnel in the SSCs during those times when the central data base is inaccessible, for example during maintenance of the central data base system or during failure of the communication link between an SSC and the central data base system.
Before describing the inventive backup system, we will first generally describe a typical central data base system operating without a backup system. FIG. 1 shows a central data base system 10 and several geographically dispersed SSCs 20, 21 and 22. The central data base system comprises computer 30, data base storage device 31 and communications controller 32. Computer 30 accepts data, formats and organizes it, and then transmits the data to storage device 31, for example an on-line disk storage system. Communications controller 32 interfaces communications link 33 from SSC 22 with computer 30.
Data access requests from SSC 22 are input into a multiplicity of user terminals 41-44. Terminals 41-44 are connected to terminal controller 45 which, in turn, interfaces with communications link 33. Thus, data access requests provided to terminals 41-44 are transmitted over communications link 33 to communications controller 32 and from there to computer 30. Computer 30 interprets the data access request and retrieves the information responsive to the request from storage device 31. The data is then transmitted through communications controller 32, over communications link 33, through terminal controller 45 to the terminal that originated the request. If communications link 33, communications controller 32, computer 30 and/or storage device 31 is inoperative, for any reason, the central data base is inaccessible to the terminals in SSC 22.
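As a rough editorial illustration of the dependency chain just described (not part of the original disclosure), the sketch below models the point that a request from an SSC terminal must traverse the communications link, the communications controller, the computer and the storage device; if any one element is out of service, the central data base is unreachable.

```python
# Illustrative sketch only. Every element in the access path (link, controller,
# computer, storage) must be in service, or the central data base is
# inaccessible from the SSC terminals.

def central_db_accessible(link_up, controller_up, computer_up, storage_up):
    """The central data base is reachable only if the whole chain is in service."""
    return link_up and controller_up and computer_up and storage_up

# A failure anywhere in the chain makes the data base inaccessible:
print(central_db_accessible(True, True, True, True))    # True
print(central_db_accessible(False, True, True, True))   # False (e.g. link 33 down)
```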
FIG. 2a shows one system known to the art for providing data base backup. In this system, computer 30 communicates with duplicate storage devices 51 and 52 through data base switch 53. Storage devices 51 and 52 both contain identical copies of the central data base.
If either storage device fails, then computer 30 communicates with the other storage device, according to well-known methods, by means of data base switch 53. This system will protect against inaccessibility due to failure of either data base storage device. However, the data base in this system is still prone to inaccessibility caused by failure or unavailability of communications link 33, communications controller 32, computer 30 and/or data base switch 53.
FIG. 2b shows another system known in the art for providing data base backup. In this system, computers 61 and 62 are identical and are connected to identical storage devices 51 and 52, respectively.
Communications controller 32 communicates with computers 61 and 62, according to well-known methods in the art, through computer switch 63. This system will protect against inaccessibility of the central data base due to failure of either computer 61 and storage device 51 or computer 62 and storage device 52. However, the data base in this system is still prone to inaccessibility caused by failure or unavailability of communications link 33, communications controller 32 and/or computer switch 63.
It is clear that duplication of the apparatus connected with the central data base, or even the communications network, substantially increases the cost of the entire central data base system. This large expense would be extremely burdensome for a company having relatively few SSCs among which the increased cost is shared.
The distributed information backup system embodying the principles of the present invention provides a backup system whose increased cost is largely determined by the number of distributed locations which it serves.
Further, the inventive system protects against inaccessibility of the central data base due to the unavailability of the computer, the storage device, the communications controller and/or the communications link.
When applied to CIMAP, the inventive information backup system is called the Short Outage System (SOS) and is shown in FIG. 3. (Note, the information backup system described below does not back up the entire CIMAP data base, but merely that portion which contains information used in testing and repairing troubles on special service circuits.) Here, communications controller 32 and communications link 33 function in the manner described above with respect to FIG. 1.
In this embodiment, computer 30, in addition to its previously described capabilities, includes a module which accesses data from the central data base and stores this data, grouped according to SSC, on storage device 31.
Then, computer 30 distributes this data, over communications links, to the various SSCs in response to periodic requests from computers located in the SSCs, for example computer 70.
As shown in FIG. 3, computer 30 sends the data stored on storage device 31 to terminal controller 72 in SSC 22 over communications link 33. Terminal controller 72 interfaces with computer 70 and terminals 41-44 through local area network 75 -- such local area networks are well-known in the art.
Computer 70, illustratively a personal computer, uses the data to update a local data base on storage device 71, illustratively an on-line disk. As a result of such updating, new information is added to the local data base and/or existing information is deleted or altered.
Note, the local data base stored on storage device 71 is not formed by merely copying the data distributed to SSC 22 onto storage device 71. Instead, the local data base is formed as the result of computer 70 updating the local data base with the incoming data. Since the local data base is updated, the distributed data merely comprises portions of the data contained in the central data base.
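A minimal editorial sketch (illustrative, not from the patent) of the updating just described: the receiving computer does not overwrite its storage with the incoming batch; it applies each incoming item as an addition, alteration or deletion against the existing local data base. The function name and item format are assumptions.

```python
# Illustrative sketch only. Incoming distributed data is applied as updates
# (add / alter / delete) against the local data base rather than copied over it.

def apply_batch(local_db, batch):
    """local_db: {ticket_id: ticket}; batch: list of (action, ticket_id, data)."""
    for action, ticket_id, data in batch:
        if action == "delete":
            local_db.pop(ticket_id, None)         # e.g. trouble cleared centrally
        elif ticket_id in local_db:
            local_db[ticket_id].update(data)      # alter existing information
        else:
            local_db[ticket_id] = dict(data)      # add new information
    return local_db

local = {"T1": {"status": "open"}}
apply_batch(local, [("alter", "T1", {"status": "testing"}),
                    ("add", "T2", {"status": "open"}),
                    ("delete", "T1", None)])
print(local)   # {'T2': {'status': 'open'}}
```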
Terminals 41-44 are networked locally within SSC 22 by means of local area network 75. Local area network 75 provides user access to the central data base through terminal controller 72 in the manner described above with respect to FIG. 1 and access to the local data base stored on storage device 71 through computer 70.

There are terminals in the SSCs, such as terminal 91, which are not connected to the local area network within an SSC and which communicate with the central data base directly by means of terminal controller 72. Further, terminals in SSCs other than SSC 22 can access data in the local data base stored on storage device 71. For example, terminal 47 in SSC 20 is connected to local area network 76 in SSC 20. Local area networks 75 and 76 are interconnected through a communications link and gateways 92 and 93 -- a gateway being a device which interconnects local area networks and is well-known in the art. Thus, by means of the connections shown in FIG. 3, a user in SSC 20 can access data on storage device 71.
In this embodiment of the inventive backup system, the data distributed to the SSCs and stored in the local data bases does not, when all taken together, comprise the totality of the data in the central data base. For example, the data does not include information on all the special service circuits in the telephone company. Instead, the data includes information on just those circuits which are in need of repair, and even then, only the portions of the circuit data relevant to the testing and repair. In addition, the data includes all the information necessary to enable personnel in the SSC to perform the essential repair administration tasks with regard to those circuits that they would be able to perform if the central data base were accessible.
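To illustrate the subsetting just described (a hypothetical editorial example, not part of the disclosure; the field names are assumptions), only circuits with an open trouble are selected, and for those circuits only the fields relevant to testing and repair are retained for distribution.

```python
# Illustrative sketch only. Only circuits in need of repair are selected, and for
# those circuits only the repair-relevant portions of the circuit data are kept.

REPAIR_FIELDS = ("trouble_report", "test_details", "layout", "ticket_log")  # hypothetical names

def repair_subset(circuits):
    """circuits: {circuit_id: record}; keep only troubled circuits, repair fields only."""
    return {
        cid: {f: rec[f] for f in REPAIR_FIELDS if f in rec}
        for cid, rec in circuits.items()
        if rec.get("trouble_report") is not None
    }

circuits = {
    "CKT-1": {"trouble_report": "no dial tone", "test_details": "...",
              "layout": "...", "ticket_log": [], "billing": "not needed for repair"},
    "CKT-2": {"trouble_report": None, "layout": "..."},   # no open trouble -> not sent
}
print(repair_subset(circuits))   # only CKT-1, without the billing field
```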
The manner in which the inventive backup system operates is best understood in the context of the special service circuit administration described briefly above.

When notification of a trouble on a special service circuit is received at computer 30, the computer produces a trouble ticket, i.e. an administrative indication of a trouble on a circuit. Computer 30 then updates the central data base trouble ticket file and accesses the central data base or storage device 31 to gather the specific information required for the repair work and for the administration of the repair work. This information, i.e. trouble reports, circuit test details, circuit layout details and a trouble ticket log, as shown in FIG. 3, is stored on storage device 31 according to the SSC which will administer the repair. Inasmuch as this information is essential to performing maintenance, SSC personnel must have access to this information from the backup system when the central data base is inaccessible.
As described, computer 30 groups the information on storage device 31 according to the SSC for which it is pertinent. Then, computer 30 distributes the data to the SSCs in response to periodic requests from the computers in the SSCs, for example every four (4) minutes. Only the information pertinent to a specific SSC is sent thereto, and a record is kept of which data was sent to which computer. This enables computer 30 to resend data upon request if something interfered with proper transmission the first time. Upon receipt of the information from the central data base that is pertinent to SSC 22, computer 70 updates the local data base on storage device 71. The above-described distribution of information from the central data base to the distributed local data bases goes on continuously. If the central data base is inaccessible from SSC 22, for any reason, then personnel can use terminals 41-44 to obtain the data they need by accessing the local data base stored on storage device 71.
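The grouping, periodic distribution and resend bookkeeping described above might be organized roughly as in the editorial sketch below (illustrative only; the Distributor class and its methods are assumptions, and the four-minute interval is merely the example given in the description).

```python
# Illustrative sketch only. Computer 30 groups pending items by SSC, answers each
# SSC's periodic request with that SSC's group, and keeps a record of what was
# sent so the same batch can be resent if transmission was disturbed.

class Distributor:
    def __init__(self):
        self.pending = {}     # {ssc_id: [items]} -- grouped on storage device 31
        self.last_sent = {}   # {ssc_id: [items]} -- record of what went where

    def queue(self, ssc_id, item):
        self.pending.setdefault(ssc_id, []).append(item)

    def serve_request(self, ssc_id, resend=False):
        """Answer an SSC's periodic (e.g. every four minutes) request for data."""
        if resend:                             # previous transmission failed
            return self.last_sent.get(ssc_id, [])
        batch = self.pending.pop(ssc_id, [])
        self.last_sent[ssc_id] = batch         # keep a record for possible resend
        return batch

d = Distributor()
d.queue("SSC22", ("add", "T7", {"circuit": "CKT-9"}))
first = d.serve_request("SSC22")                 # normal periodic request
again = d.serve_request("SSC22", resend=True)    # transmission failed -> same batch
assert first == again
```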
When the central data base is accessible and a trouble on a circuit has been cleared, i.e. the circuit has been repaired, the trouble ticket must be closed in the central data base -- not removed, but merely assigned the status corresponding to "trouble resolved". However, the trouble ticket must be removed from any local data base in which it is stored. For SSC 22, this is accomplished by computer 30 transmitting a "delete"
request to computer 70 during the period of time when the next batch of data is sent from the central data base to SSC 22.

" ~241L7~

The work performed in the SSC with the aid of the backup system, i.e. during the time the central data base is inaccessible, is logged into the backup system by computer 70 and stored on storage device 71. When the central data base becomes accessible again, the log of work performed during this time is sent, via local area network 75, terminal controller 72, communications link 33, and communications controller 32, to computer 30 so that the central data base can be updated.
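A rough editorial sketch (not from the patent; class and method names are assumptions) of this outage-time behaviour: while the central data base is unreachable, the SSC computer serves reads from the local data base and logs the work performed; when the central data base is reachable again, the accumulated log is forwarded so the central data base can be brought up to date.

```python
# Illustrative sketch only. During an outage, reads are served from the local data
# base and work is logged locally; when connectivity returns, the accumulated log
# is forwarded so the central data base can be updated.

class SscComputer:
    def __init__(self, local_db):
        self.local_db = local_db     # e.g. stored on storage device 71
        self.outage_log = []         # work performed while the central DB was down

    def read(self, ticket_id):
        return self.local_db.get(ticket_id)

    def record_work(self, entry):
        """Log a status report or ticket closure performed during the outage."""
        self.outage_log.append(entry)

    def resync(self, send_to_central):
        """Central data base reachable again: forward the log, then clear it."""
        for entry in self.outage_log:
            send_to_central(entry)
        self.outage_log.clear()

ssc = SscComputer({"T7": {"status": "testing"}})
ssc.record_work({"ticket": "T7", "status": "cleared"})   # done while central is down
ssc.resync(send_to_central=print)                        # forwarded when it is back up
```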
The above-described process can also be illustrated in terms of the following steps:
WHEN COMPUTER 30 IS ACCESSIBLE:
Step 1: Computer 30 receives notice of trouble on a circuit, generates a trouble ticket, and adds the trouble ticket to the central data base.
Step 2: Computer 30 determines which SSC will administer the testing and repair on the circuit, gathers the information needed for that task from the central data base, and stores that data, according to SSC, on storage device 31.
Step 3: Computer 30 receives notice that a trouble on a circuit has been cleared, closes the trouble ticket in the central data base, and stores a "delete" for that trouble ticket, according to SSC, on storage device 31.
Step 4: Computer 30 receives a report from the field concerning the status of a trouble ticket, inserts this new information on the trouble ticket stored in the central data base, and stores an "alter" for that trouble ticket, according to SSC, on storage device 31.
Step 5: Computer 30 receives a periodic request for data from an SSC, for example from computer 70 in SSC 22, and transmits the data stored on storage device 31 for SSC 22 thereto.
Step 6: Illustratively, computer 70, in SSC 22, receives the above-described data from computer 30 and updates the local data base stored on storage device 71 -- adds a trouble ticket and the associated data to the local data base, or deletes a trouble ticket and the associated data from the local data base, or alters a trouble ticket in the local data base.
WHEN COMPUTER 30 IS INACCESSIBLE:
Step 1: Illustratively, computer 70 recognizes that computer 30 is inaccessible and signals the users. A user accesses computer 70 via terminal 41 in SSC 22 to obtain the status of a trouble ticket and the associated information.
Step 2: Illustratively, a user accesses computer 70 via terminal 41 in SSC 22 to resolve a trouble ticket or report the status of the trouble; computer 70 stores an indication of the closed trouble ticket or the status on storage device 71.
WHEN COMPUTER 30 BECOMES ACCESSIBLE AFTER A PERIOD OF INACCESSIBILITY:
Step 1: Illustratively, computer 70 recognizes that computer 30 is accessible and signals the users.
Computer 70 accesses the "delete" and "status" information and transmits it to computer 30 so that computer 30 can update the central data base.
Thus, as described above, the inventive information backup system allows the users in the SSCs to perform their work if the central data base is inaccessible because of a failure in the central equipment or due to a failure in the communications link between the central data base and one, some or all of the SSCs.
It should also be clear that further embodiments of the present invention may be made by those skilled in the art without departing from the teachings of the present invention. For example, the periodic request for the distribution of data could be generated in computer 30 instead of the computers located in the SSCs.

Claims (12)

What is claimed is:
1. Information backup apparatus for a primary data base which comprises:
a plurality of storage means each containing a secondary data base, a multiplicity of means for accessing said secondary data bases, and means for accessing data from the primary data base, for storing the data in predetermined groups on a storage device, and for periodically distributing the predetermined groups of data to predetermined ones of said accessing means for updating said secondary data bases with data from the predetermined groups.
2. Apparatus in accordance with claim 1 wherein the means for accessing data and for storing data comprises a computer and the storage device comprises disk storage means.
3. Apparatus in accordance with claim 1 wherein at least one of said means for accessing said secondary data bases comprises a computer.
4. Apparatus in accordance with claim 3 wherein the means for periodically distributing comprises a further computer which accesses a predetermined group of data stored on the storage device and transmits the predetermined group to one of said means for accessing said secondary data bases in response to a periodic request therefrom.
5. Apparatus in accordance with claim 3 wherein the means for periodically distributing comprises a further computer which periodically accesses a predetermined group of data stored on the storage device and transmits the predetermined group to at least a predetermined one of said means for accessing said secondary data bases.
6. Apparatus in accordance with claim 1 wherein the means for accessing data from the primary data base comprises means for receiving updates to predetermined data in the primary data base and means for accessing other predetermined data from the primary data base in response thereto.
7. Apparatus in accordance with claim 6 wherein each of the means for accessing said secondary data bases comprises means for receiving data from a data entry device, the data having a relation to data already contained in a secondary data base, updating the secondary data base in response to receiving such data, storing such data on a storage device, and transmitting such data to means for updating the primary data base.
8. Method of information backup for a data base which comprises the steps of:
accessing the data base when predetermined data is added thereto to retrieve other predetermined data, storing the predetermined data and the other predetermined data in predetermined groups of data, periodically transmitting the predetermined groups of data to predetermined ones of a multiplicity of means for updating local data bases with data from the predetermined groups, and updating the local data bases with the predetermined data.
9. The method of claim 8 wherein the step of periodically transmitting comprises the step of transmitting the data from one of the predetermined groups in response to a periodic request from one of the multiplicity of means for updating.
10. The method of claim 8 which further comprises the steps of:
when the data base is inaccessible, receiving data having a relation to data already contained in the local data base, updating the local data base with such data, and storing such data and when the data base becomes accessible, transmitting such stored data to means for updating the data base.
11. An arrangement for providing information back up for a centralized data base having stored therein data related to remote locations comprising a storage device at a remote location containing a local data base, said local data base comprising some but not all of the data in said centralized data base, a plurality of terminal equipments, a local area network connected to said terminal equipments, means connected to said local area network for accessing said local data base, and means for periodically distributing predetermined groups of said data from said centralized data base to said means for accessing said local data base for updating said local data base with said data from said predetermined groups, said distributing means including a centralized computer connected to a storage device containing said centralized data base and communication means for interconnecting said local area network and said centralized computer.
12. A method of information back up for a centralized data base having stored therein data related to remote locations comprising the steps of accessing the centralized data base and storing predetermined data in predetermined groups, periodically transmitting the predetermined groups of data to individual ones of said remote locations, and updating local data bases at said remote locations with said predetermined data, said local data bases being smaller than said centralized data base.
CA000497315A 1985-07-10 1985-12-10 Distributed information backup system Expired CA1241765A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US753,619 1976-12-22
US06/753,619 US4710870A (en) 1985-07-10 1985-07-10 Central computer backup system utilizing localized data bases

Publications (1)

Publication Number Publication Date
CA1241765A true CA1241765A (en) 1988-09-06

Family

ID=25031437

Family Applications (1)

Application Number Title Priority Date Filing Date
CA000497315A Expired CA1241765A (en) 1985-07-10 1985-12-10 Distributed information backup system

Country Status (2)

Country Link
US (1) US4710870A (en)
CA (1) CA1241765A (en)

Families Citing this family (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2591774B1 (en) * 1985-11-06 1996-07-12 Canon Kk FILE SYSTEM
US5230073A (en) * 1986-07-21 1993-07-20 Bell Communications Research, Inc. System and method for accessing and updating a continuously broadcasted stored database
US4897781A (en) * 1987-02-13 1990-01-30 International Business Machines Corporation System and method for using cached data at a local node after re-opening a file at a remote node in a distributed networking environment
WO1989002631A1 (en) * 1987-09-08 1989-03-23 Digital Equipment Corporation Naming service for networked digital data processing system
DE3854384T2 (en) * 1987-11-30 1996-03-28 Ibm Method for operating a multiprocessor system using a shared virtual memory.
US5101348A (en) * 1988-06-23 1992-03-31 International Business Machines Corporation Method of reducing the amount of information included in topology database update messages in a data communications network
US5136707A (en) * 1988-10-28 1992-08-04 At&T Bell Laboratories Reliable database administration arrangement
DE3912078A1 (en) * 1989-04-13 1990-10-18 Telefonbau & Normalzeit Gmbh DIGITAL TELECOMMUNICATION SYSTEM
US5123089A (en) * 1989-06-19 1992-06-16 Applied Creative Technology, Inc. Apparatus and protocol for local area network
EP0405859B1 (en) * 1989-06-30 1997-09-17 Digital Equipment Corporation Method and apparatus for managing a shadow set of storage media
US5247618A (en) * 1989-06-30 1993-09-21 Digital Equipment Corporation Transferring data in a digital data processing system
US5210865A (en) * 1989-06-30 1993-05-11 Digital Equipment Corporation Transferring data between storage media while maintaining host processor access for I/O operations
US5239637A (en) * 1989-06-30 1993-08-24 Digital Equipment Corporation Digital data management system for maintaining consistency of data in a shadow set
US5163148A (en) * 1989-08-11 1992-11-10 Digital Equipment Corporation File backup system for producing a backup copy of a file which may be updated during backup
US5093911A (en) * 1989-09-14 1992-03-03 International Business Machines Corporation Storage and retrieval system
US5170480A (en) * 1989-09-25 1992-12-08 International Business Machines Corporation Concurrently applying redo records to backup database in a log sequence using single queue server per queue at a time
US5412801A (en) * 1990-01-17 1995-05-02 E-Net Gap recovery for off-site data storage and recovery systems
US6212557B1 (en) 1990-01-29 2001-04-03 Compaq Computer Corporation Method and apparatus for synchronizing upgrades in distributed network data processing systems
US5175727A (en) * 1990-04-16 1992-12-29 Maher John W Communication system network interconnecting a plurality of communication systems
US5544347A (en) 1990-09-24 1996-08-06 Emc Corporation Data storage system controlled remote data mirroring with respectively maintained data indices
US5263156A (en) * 1990-12-20 1993-11-16 Bell Communications Research, Inc. Parallel, distributed optimistic concurrency control certification using hardware filtering
US5668986A (en) * 1991-10-02 1997-09-16 International Business Machines Corporation Method and apparatus for handling data storage requests in a distributed data base environment
CA2078045C (en) * 1992-09-11 1999-11-16 Mark R. Sestak Global management of telephone directory
US5659691A (en) * 1993-09-23 1997-08-19 Virtual Universe Corporation Virtual reality network with selective distribution and updating of data to reduce bandwidth requirements
US5495607A (en) * 1993-11-15 1996-02-27 Conner Peripherals, Inc. Network management system having virtual catalog overview of files distributively stored across network domain
JP3302522B2 (en) 1994-12-26 2002-07-15 富士通株式会社 Database system and its information utilization support device
US6295491B1 (en) * 1995-03-24 2001-09-25 Motorola, Inc. Method of providing distributed operational control of a radio communication system
US5694596A (en) 1995-05-25 1997-12-02 Kangaroo, Inc. On-line database updating network system and method
US6240451B1 (en) * 1995-05-25 2001-05-29 Punch Networks Corporation Method and apparatus for automatically disseminating information over a network
US5897635A (en) * 1995-06-07 1999-04-27 International Business Machines Corp. Single access to common user/application information
US5799141A (en) * 1995-06-09 1998-08-25 Qualix Group, Inc. Real-time data protection system and method
US6728851B1 (en) * 1995-07-31 2004-04-27 Lexar Media, Inc. Increasing the memory performance of flash memory devices by writing sectors simultaneously to multiple flash memory devices
US5761500A (en) * 1996-04-18 1998-06-02 Mci Communications Corp. Multi-site data communications network database partitioned by network elements
US6044444A (en) * 1996-05-28 2000-03-28 Emc Corporation Remote data mirroring having preselection of automatic recovery or intervention required when a disruption is detected
US6052797A (en) * 1996-05-28 2000-04-18 Emc Corporation Remotely mirrored data storage system with a count indicative of data consistency
US5787415A (en) * 1996-10-30 1998-07-28 International Business Machines Corporation Low maintenance data delivery and refresh system for decision support system database
US7398286B1 (en) * 1998-03-31 2008-07-08 Emc Corporation Method and system for assisting in backups and restore operation over different channels
US6959368B1 (en) * 1999-06-29 2005-10-25 Emc Corporation Method and apparatus for duplicating computer backup data
TW454120B (en) 1999-11-11 2001-09-11 Miralink Corp Flexible remote data mirroring
US7752169B2 (en) * 2002-06-04 2010-07-06 International Business Machines Corporation Method, system and program product for centrally managing computer backups
US7047377B2 (en) 2002-08-20 2006-05-16 Gruintine Pueche, Inc. System and method for conducting an auction-based ranking of search results on a computer network
US7672986B2 (en) * 2004-02-12 2010-03-02 Microsoft Corporation Managing graphic databases
US7487188B2 (en) 2004-09-07 2009-02-03 Computer Associates Think, Inc. System and method for providing increased database fault tolerance
US20090138510A1 (en) * 2007-11-28 2009-05-28 Childress Rhonda L Method and apparatus for associating help desk ticket with affected data processing system
US9251012B2 (en) * 2008-01-18 2016-02-02 Tivo Inc. Distributed backup and retrieval system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4186439A (en) * 1977-12-29 1980-01-29 Casio Computer Co., Ltd. Electronic cash register for totalizing sales data on a time zone basis

Also Published As

Publication number Publication date
US4710870A (en) 1987-12-01

Similar Documents

Publication Publication Date Title
CA1241765A (en) Distributed information backup system
US5528677A (en) System for providing communications services in a telecommunications network
US5398277A (en) Flexible multiprocessor alarm data processing system
US6255945B1 (en) Communication path integrity supervision in a network system for automatic alarm data communication
CN100380337C (en) System and method for preventing access to data on a compromised remote device
CA2002018A1 (en) Autonomous expert system for directly maintaining remote telephone switching systems
US8346905B2 (en) Systems and methods for improved multisite management and reporting of converged communication systems and computer systems
US4811388A (en) Telecommunication network including a central back-up memory
US7584259B2 (en) System and method for providing service technicians access to dispatch information
JPH09146812A (en) Data base device
US9270735B2 (en) Systems and methods for improved multisite management and reporting of converged communication systems and computer systems
US6360095B1 (en) Home location register for a mobile telecommunications network
EP0890281B1 (en) A home location register for a mobile telecommunications network
Giunta et al. No. 4 ESS: Data/trunk administration and maintenance
Haas et al. Stored Program Controlled Network: 800 Service using SPC network capability—Network implementation and administrative functions
JPH1074157A (en) Distributed processor and distributed processing method
Chappell et al. Automated Repair Service Bureau: The Front‐End System
KR0123193B1 (en) Distribute real time data base output method
Beaumont et al. Transaction network, telephones, and terminals: Maintenance and administration
Rodriguez Transaction network, telephones, and terminals: Transaction network operational programs
KR940007843B1 (en) Schema managing method on dbms
Ellis et al. A Plan for Consolidation and Automation of Military Telecommunications on Oahu
Greene et al. Route control in AUTOVON electronic switching centers
Boroff et al. AO&M of Government Networks
Hunt et al. Movements Information Network (MINET) Testbed Design Study

Legal Events

Date Code Title Description
MKEX Expiry