US20080172472A1 - Peer Data Transfer Orchestration - Google Patents

Peer Data Transfer Orchestration

Info

Publication number
US20080172472A1
US20080172472A1 US12/054,838 US5483808A
Authority
US
United States
Prior art keywords
peer
data
peer system
transfer
systems
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/054,838
Inventor
Brian D. Goodman
John W. Rooney
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US12/054,838
Publication of US20080172472A1
Legal status: Abandoned

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L67/104 - Peer-to-peer [P2P] networks
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/06 - Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L67/104 - Peer-to-peer [P2P] networks
    • H04L67/1087 - Peer-to-peer [P2P] networks using cross-functional networking aspects
    • H04L67/1091 - Interfacing with client-server systems or between P2P systems
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L67/104 - Peer-to-peer [P2P] networks
    • H04L67/1074 - Peer-to-peer [P2P] networks for supporting data block transmission mechanisms
    • H04L67/1078 - Resource delivery mechanisms
    • H04L67/108 - Resource delivery mechanisms characterised by resources being split in blocks or fragments

Definitions

  • the data transfer plan 312 might take the form of a self-describing markup (i.e., in XML format) as shown in Table 2.
  • the first node (“<resource . . . />”) defines the resource and size of the total transfer.
  • the second node (<instruction> . . . </instruction>) defines the instructions and includes the unique identifier and the specific instructions for that peer. In this case, the peer is instructed to make two transfers.
  • the data transfer plan shown in Table 2 is written in XML. It includes similar content to that of the data transfer plan in Table 1.
  • the parent node is the data-transfer-plan. It includes at least two child nodes, resource and instruction.
  • the resource node describes the data the transfer plan refers to. It provides a URI to the data, identifying the protocol, server name, and data name. It also identifies the size of the data.
  • the instruction stanza has several child nodes: one or more peer nodes, each with at least one transfer node.
  • the peer node has a property called a unique identifier (UID) which currently maps to the IP address of the target peer.
  • the transfer node has two properties, start and end, identifying the data range which that peer is requested to transfer.
  • the URI to the resource may point to a grid system or multiple host systems that might be used to transfer the data. The host systems to transfer from might be specified in the instructions node or as part of the nature of implementation as in the grid system where the grid system dictates which peers to transfer from. Other transfer protocols are possible, such as FTP or NNTP, etc.
  • the numbers defining the range assigned to each peer could be specified in bytes, kilobytes, megabytes, etc.
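  • Table 2 itself is not reproduced here, so the short sketch below is only a hypothetical rendering of a plan with the structure described above: a data-transfer-plan parent, a resource node carrying the URI and size, and an instruction stanza whose peer nodes carry a UID and one or more transfer nodes with start and end properties. The exact element and attribute names are assumptions.

```python
# Hypothetical rendering of a Table 2 style data transfer plan in XML.
# Element and attribute names are assumptions based on the description above.
import xml.etree.ElementTree as ET

def build_plan(uri, size, assignments):
    """assignments maps a peer UID (here an IP address) to a list of (start, end) ranges."""
    plan = ET.Element("data-transfer-plan")
    ET.SubElement(plan, "resource", uri=uri, size=str(size))
    instruction = ET.SubElement(plan, "instruction")
    for uid, ranges in assignments.items():
        peer = ET.SubElement(instruction, "peer", uid=uid)
        for start, end in ranges:
            ET.SubElement(peer, "transfer", start=str(start), end=str(end))
    return ET.tostring(plan, encoding="unicode")

# A peer instructed to make two transfers, as in the description above.
print(build_plan("http://www.server.com/a_big_file.zip", 40000,
                 {"9.45.36.100": [(0, 10000), (10001, 20000)]}))
```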
  • call back notification from a peer 201 a , 201 b to the primary peer 200 offers an alternative to the data transfer plan 312 .
  • data transfer plans 312 might not be transmitted in whole to each peer 201 a , 201 b .
  • Individual peers 201 a , 201 b request the task to be performed and, when completed, ask for any other tasks to be performed. Transmitting the task list in whole offers opportunities for peers 201 a , 201 b to “collaborate” on accomplishing the task.
  • a peer system 201 a for example, might be transferring slowly but have more tasks.
  • Another peer system 201 b for example, might be transferring quickly, but not have any further tasks.
  • the primary peer system 200 can query the peer system 201 a , 201 b to establish a link and task transfer.
  • the peer 201 a , 201 b indicates to the primary (i.e., master) peer 200 that it is finished with the data transfer assigned. It can also ask for another segment of data to transfer.
  • the primary peer 200 queries the current state of data transfer from the local peers 201 a , 201 b and reassigns task or portions of tasks.
  • the connection from the primary peer 200 to the secondary peers 201 a , 201 b requires software running on each peer 201 a , 201 b capable of listening and responding to messages over the network 203 .
  • a third peer (not shown) might have been asked to transfer 10000-30000 bytes but has only been able to transfer 20000.
  • the primary peer 200 can assign 25000-30000 to an idle more advantageous peer 201 a , for example.
  • the primary peer system 200 can query for the final data transfer plan 312 to reconstitute the data by connecting to each peer 201 a , 201 b over the network 203 .
  • the primary peer 200 requests each part of a file from other peers 201 a , 201 b by connecting to each of the peers 201 a , 201 b over the network 203 using a suitable protocol supporting two way message transfer (send and respond).
  • Other peer systems can respond with the data stream or the pointer to the data stream by reading the data as it is stored in memory (hard disk, RAM, network storage) and writing to a network port (not shown) where the primary peer 200 is listening.
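  • As a minimal sketch of the reconstitution step, assuming each co-opted peer can serve its completed segment over HTTP with byte-range requests (the description leaves the retrieval protocol open, requiring only a send-and-respond protocol), the primary peer might stitch the segments back together as follows. The per-peer segment URLs and port are illustrative assumptions.

```python
# Minimal reconstitution sketch: fetch each byte range from the peer that
# transferred it and write it at the matching offset in the output file.
import urllib.request

def reconstitute(out_path, total_size, segments):
    """segments is a list of (segment_url, start, end) tuples, end exclusive."""
    with open(out_path, "wb") as out:
        out.truncate(total_size)  # pre-size the target file
        for segment_url, start, end in segments:
            req = urllib.request.Request(
                segment_url, headers={"Range": f"bytes={start}-{end - 1}"})
            with urllib.request.urlopen(req) as resp:
                out.seek(start)
                out.write(resp.read())

# reconstitute("a_big_file.zip", 40000,
#              [("http://9.45.36.100:8080/a_big_file.zip", 0, 10000),
#               ("http://9.45.36.101:8080/a_big_file.zip", 10000, 20000)])
```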
  • an optional step is to process ( 335 ) performance data to aid in the optimal selection of peer systems 201 a, 201 b.
  • This step may include submitting data to a central server 204 ( 1 ), for example.
  • the peer systems 201 a, 201 b could store the data locally on the immediate system's storage (RAM, hard drive, etc.). Further algorithms may be run to determine weightings of each known peer system 201 a, 201 b. For example, data on the average transfer speed of a given peer 201 a, for example, may be captured.
  • the peer 201 a may decide, through preprogrammed rules or through end-user intervention or preferences, to indicate that average transfer rate is important.
  • the peers 201 b, for example, with slow transfer rates are then selected last for co-option.
  • each peer 201 a , 201 b may process requests only by trusted peers (not shown).
  • Trusted peers may be managed centrally through central server policies or by end-user interaction. Prompting the end user that peer ‘X’ is requesting trust status is one method for building a list of trusted peers. The end user might grant trust to peer ‘X’ for this time only or always.
  • communication between peers 200 , 201 a , 201 b might be encrypted through well-known encryption techniques.
  • the embodiments of the invention can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment including both hardware and software elements.
  • the invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
  • the embodiments of the invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system.
  • a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • the medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium.
  • Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk.
  • Current examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W) and DVD.
  • a data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus.
  • the memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
  • I/O devices can be coupled to the system either directly or through intervening I/O controllers.
  • Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
  • A representative hardware environment for practicing the embodiments of the invention is depicted in FIG. 6.
  • the system comprises at least one processor or central processing unit (CPU) 10 .
  • the CPUs 10 are interconnected via system bus 12 to various devices such as a random access memory (RAM) 14 , read-only memory (ROM) 16 , and an input/output (I/O) adapter 18 .
  • the I/O adapter 18 can connect to peripheral devices, such as disk units 11 and tape drives 13 , or other program storage devices that are readable by the system.
  • the system can read the inventive instructions on the program storage devices and follow these instructions to execute the methodology of the embodiments of the invention.
  • the system further includes a user interface adapter 19 that connects a keyboard 15 , mouse 17 , speaker 24 , microphone 22 , and/or other user interface devices such as a touch screen device (not shown) to the bus 12 to gather user input.
  • a communication adapter 20 connects the bus 12 to a data processing network 25
  • a display adapter 21 connects the bus 12 to a display device 23 which may be embodied as an output device such as a monitor, printer, or transmitter, for example.
  • Enabling client-side orchestration allows segmented data transfer to be multiplied over two or more systems, moving the data transfer bottleneck to the total bandwidth capacity of the network and the capacity of the resource server to transmit the data. The resulting transfer is faster than a multi-segment transfer from a single system.
  • the ability for peers to cooperate and transfer parts of the same asset leverages both server topologies and network topologies to utilize all available resources.
  • Multiple servers 204 ( 1 ), 204 ( 2 ) . . . 204 ( x ) are capable of responding to multiple requests for the same asset and different parts of the same asset.
  • the peer network 203 provided by an embodiment of the invention is capable of requesting the same file or parts of files from different servers 204(1), 204(2) . . . 204(x). This utilizes software on each of the peers 201 a, 201 b and software on the servers 204(1), 204(2) . . . 204(x).
  • the software on the servers 204 ( 1 ), 204 ( 2 ) . . . 204 ( x ) is required to simply return all or part of a requested asset.
  • the software on the peers 201 a , 201 b is required to both submit and receive instructions of which part or parts of an asset to transfer and then subsequently transfer those parts to the primary peer 200 .
  • a central server (not shown) may include software that keeps track of the peers 201 a , 201 b for a given network 203 . Enabling peers 201 a , 201 b with the software allows for faster transfers as they are all cooperating to download the same asset from potentially multiple places, reducing the final transfer to a more local high performing transfer.
  • the embodiments of the invention provide a system and method for obtaining all the benefits of multi-sourced segmented data transfer while solving the traditional challenges of constrained bandwidth on a primary peer 200. Accordingly, the embodiments of the invention address peer segmented data transfer orchestration wherein local peers 201 a, 201 b are instructed by a primary peer 200 to participate in the process of data transfer.
  • the embodiments of the invention provide a data transfer system and method comprising a plurality of peer systems 200, 201 a, 201 b arranged in a computer network 203 and at least one data server 204(1), 204(2) . . . 204(x) comprising data and coupled to the plurality of peer systems. The plurality of peer systems comprise a first peer system 200 and at least one second peer system 201 a, 201 b. The first peer system 200 is adapted to instruct the at least one second peer system 201 a, 201 b to collaboratively transfer the data from the at least one data server 204(1), 204(2) . . . 204(x) to the first peer system 200, and the at least one second peer system 201 a, 201 b is adapted to transfer data from the at least one data server 204(1), 204(2) . . . 204(x) to the first peer system 200.
  • the data transfer system further comprises a peer directory 307 adapted to connect the plurality of peer systems 200 , 201 a , 201 b to one another.
  • the first peer system 200 is adapted to create a data transfer plan 312 adapted to identify data resources and transfer bandwidth capabilities of each of the at least one second peer system 201 a , 201 b . Additionally, the data transfer plan 312 comprises a URI, a peer identifier, and byte ranges associated with each of the at least one second peer system 201 a , 201 b . The first peer system 200 is further adapted to identify data to be transferred, identify the at least one second peer system 201 a , 201 b capable of transferring portions of the data, and create a data transfer plan 312 .
  • the at least one second peer system 201 a , 201 b is adapted to send the data transfer plan 312 to the at least one data server 204 ( 1 ), 204 ( 2 ) . . . 204 ( x ) and to provide a status message to the first peer system 200 .
  • the communication between the first peer system 200 and the at least one second peer system 201 a , 201 b occurs through web services.
  • the first peer system 200 is further adapted to reconstitute the data, wherein the reconstitution of the data is performed by transferring the data using compression.

Abstract

A system, method, service, and program storage device implementing a method of transferring data, wherein the method comprises arranging a plurality of peer systems in a computer network; coupling at least one data server, which preferably comprises data, to the plurality of peer systems, wherein the plurality of peer systems comprise a first peer system and at least one second peer system; the first peer system instructing the at least one second peer system to collaboratively transfer the data from the at least one data server to the first peer system; and the at least one second peer system transferring the data from the at least one data server to the first peer system. The plurality of peer systems is preferably grid enabled.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a Continuation of U.S. application Ser. No. 11/128,100 filed May 12, 2005, the complete disclosure of which, in its entirety, is herein incorporated by reference.
  • BACKGROUND
  • 1. Field of the Invention
  • The embodiments of the invention generally relate to network computing, and, more particularly, to network-based data transferring systems.
  • 2. Description of the Related Art
  • The transfer of data from one system to another is a fundamental aspect of network computing. With the advent of grid and localized orchestration of file distribution, the transfer of data from a first system to a second system (i.e., peer-to-peer data transfer) has increased considerably. Data transfer requests in a grid system are generally services performed by many systems.
  • However, there are generally two major problems with these approaches to data transfer. First, peers are bandwidth limited by the technology or configuration of their connection to the network. A peer is a computing system participating in a networked environment. Most peers are limited to a single connection to the network (e.g. ethernet port, wireless, etc.). A router or bridge that connects a peer to a larger network of peers often brokers that connection. Even if the peer happens to be a mainframe with multiple connections to multiple networks, there generally is a limitation to the amount of data that can be transferred to the peer. Specifically, each network connection can typically only sustain a maximum rate of transfer, and this is true for network hubs, switches, and bridges.
  • Second, the existing systems typically address orchestration servers. Approaching the problem from a server side perspective optimizes the data transfer load from one server to many. This provides optimal load distribution and higher transfer rates for client peers receiving the data. However, clients typically have limitations on how much data they can pull down at any one time.
  • With the popular reinvigoration of grid technologies, the exploitation of segmented data transfer has become a focus in leveraging peer networks. The shortcomings of the conventional approaches generally include the finite data transfer resource of the requesting system.
  • FIG. 1 illustrates a basic data transfer scenario, where data is transferred from a data serving system (second system/client) 100 to a requesting system (first system) 101. In this scenario, for example, the first system 101 requests a 1,000 MB file from the client 100, and the client 100 is constrained to 1 MB/second. The file will transfer in approximately 17 minutes in a best-case scenario using a local area network (LAN) 103. Data transfer is also dependent on the ability of the second system 100 to correctly transfer the data, the location, etc. The data transfer will require almost all of the bandwidth available from the first system 101 in order to accomplish this task. The best-case scenario is the same as the single transfer and could even be worse due to overhead. Often the best-case scenario is not possible, and the initial transfer of 1,000 MB expected in 17 minutes is more likely to take 83 minutes (transferring at 0.2 MB/sec), an 80% drop in effective transfer rate that makes the transfer take roughly five times as long.
  • The industry has generally established segmented data transfer as a popular way of increasing efficiency over traditional single-threaded transfer, as illustrated in FIG. 2. Segmented data transfers call upon multiple data sources 100 to service segments 101. The limitation to this approach is the fixed nature of the available bandwidth for a given server.
  • Peer-to-peer applications and architectures, such as the network illustrated in FIG. 3, offer a method of identifying data and transferring that data, often from multiple sources 160, to achieve the benefits of segmented file transfer. Again, this approach is generally limited by the physical configuration of the network bandwidth allocated to the requesting server 165. Accordingly, there remains a need for a novel peer-to-peer data transfer technique that overcomes the limitations of the conventional solutions.
  • SUMMARY
  • In view of the foregoing, an embodiment of the invention provides a data transfer system comprising a plurality of peer systems arranged in a computer network; and at least one data server comprising data and coupled to the plurality of peer systems, wherein the plurality of peer systems comprise a first peer system and at least one second peer system, wherein the first peer system is adapted to instruct the at least one second peer system to collaboratively transfer the data from the at least one data server to the first peer system, and wherein the at least one second peer system is adapted to transfer data from the at least one data server to the first peer system. The plurality of peer systems is preferably grid enabled. Moreover, the first peer system is preferably adapted to create a data transfer plan adapted to identify data resources and transfer bandwidth capabilities of each of the at least one second peer system, wherein the data transfer plan may comprise a uniform resource identifier (URI), a peer identifier, and byte ranges associated with each of the at least one second peer system. Furthermore, the first peer system may further be adapted to identify data to be transferred, identify the at least one second peer system capable of transferring portions of the data, and create a data transfer plan; and wherein the at least one second peer system is adapted to send the data transfer plan to the at least one data server and to provide a status message to the first peer system. Additionally, communication between the first peer system and the at least one second peer system may occur through web services. Also, the first peer system may be further adapted to reconstitute the data. The data transfer system further preferably comprises a peer directory adapted to connect the plurality of peer systems to one another.
  • Other embodiments of the invention provide a method of transferring data, a service of transferring data, and a program storage device readable by computer, tangibly embodying a program of instructions executable by the computer to perform a method of transferring data, wherein the method comprises arranging a plurality of peer systems in a computer network; coupling at least one data server, which preferably comprises data, to the plurality of peer systems, wherein the plurality of peer systems comprise a first peer system and at least one second peer system; the first peer system instructing the at least one second peer system to collaboratively transfer the data from the at least one data server to the first peer system; and the at least one second peer system transferring the data from the at least one data server to the first peer system. The plurality of peer systems is preferably grid enabled. The method further preferably comprises the first peer system creating a data transfer plan and identifying data resources and transfer bandwidth capabilities of each of the at least one second peer system, wherein the data transfer plan may comprise a uniform resource identifier (URI), a peer identifier, and byte ranges associated with each of the at least one second peer system. Furthermore, the method further preferably comprises the first peer system identifying data to be transferred, identifying the at least one second peer system capable of transferring portions of the data, and creating a data transfer plan; and wherein the method further preferably comprises the at least one second peer system sending the data transfer plan to the at least one data server and providing a status message to the first peer system. Additionally, communication between the first peer system and the at least one second peer system may occur through web services. The method further preferably comprises the first peer system reconstituting the data and using a peer directory to connect the plurality of peer systems to one another. Preferably, the reconstitution of the data is performed by transferring the data using compression.
  • Another embodiment of the invention provides a computer system comprising a computer network; at least one data server comprising data and coupled to the computer network; a grid enabled first peer system coupled to the computer network; a plurality of grid enabled second peer systems coupled to the computer network; and a peer directory adapted to connect the first peer system and the plurality of second peer systems to one another, wherein the first peer system is adapted to instruct the at least one second peer system to collaboratively transfer the data from the at least one data server to the first peer system, wherein the plurality of second peer systems are adapted to transfer data from the at least one data server to the first peer system, and wherein the first peer system is further adapted to identify data to be transferred, identify the at least one second peer system capable of transferring portions of the data, and create a data transfer plan; and wherein the at least one second peer system is adapted to send the data transfer plan to the at least one data server and to provide a status message to the first peer system.
  • These and other aspects of embodiments of the invention will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following description, while indicating preferred embodiments of the invention and numerous specific details thereof, is given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments of the invention without departing from the spirit thereof, and the invention includes all such modifications.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The embodiments of the invention will be better understood from the following detailed description with reference to the drawings, in which:
  • FIG. 1 illustrates a schematic diagram of a data transfer system;
  • FIG. 2 illustrates a schematic diagram of a multi-system data transfer system;
  • FIG. 3 illustrates a schematic diagram of a peer-to-peer network;
  • FIG. 4 illustrates a schematic diagram of a segmented data orchestration data transfer system according to an embodiment of the invention;
  • FIG. 5 illustrates a schematic diagram of a process flow of the segmented data orchestration data transfer system of FIG. 4 according to an embodiment of the invention; and
  • FIG. 6 illustrates a schematic diagram of a computer system according to an embodiment of the invention.
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS OF THE INVENTION
  • The embodiments of the invention and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. It should be noted that the features illustrated in the drawings are not necessarily drawn to scale. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments of the invention. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments of the invention may be practiced and to further enable those of skill in the art to practice the embodiments of the invention. Accordingly, the examples should not be construed as limiting the scope of the invention.
  • As mentioned, there remains a need for a novel peer-to-peer data transfer technique that overcomes the limitations of the conventional solutions. The embodiments of the invention achieve this by providing a peer-segmented data transfer orchestration allowing a single peer to coordinate the data transfer activity on behalf of one or more node peers, and specifically a system and method for enabling a first peer to orchestrate the data transfer behavior of a second peer to the benefit of the first peer. Referring now to the drawings, and more particularly to FIGS. 4 through 6 where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments of the invention.
  • With respect to the data transfer scenario in FIG. 4, the embodiments of the invention allow, in a LAN environment 203, a first peer system 200 to identify local peers (i.e., second peer systems) 201 a, 201 b capable of collaborative orchestrated segmented data transfer, and to send a series of instructions specifying which parts of, for example, a 1,000 MB file to transfer to the first peer system 200. FIG. 4 illustrates only two second peer systems 201 a, 201 b for ease of understanding. However, the embodiments of the invention may include an indefinite number of second peer systems. For example, if ten peers each transfer a 100 MB portion of the file, then a peer 201 a can transfer those portions from its data serving systems (i.e., data servers) 204(1), 204(2) . . . 204(x) to complete the transaction. Leveraging ten peers to transfer the 100 MB fragments in parallel takes approximately 8 minutes (at 0.2 MB/sec each) to transfer all 1,000 MB. Adding the 17 minutes needed to transfer the fragments locally to the first peer system 200 (the time of the conventional single transfer), the total time afforded by the embodiments of the invention is 25 minutes (8+17 minutes) versus 83 minutes for the conventional scenario, roughly a 70% reduction in transfer time over the conventional approaches.
  • The embodiments of the invention provide a system and method for obtaining all the benefits of multi-sourced segmented data transfer while solving the traditional challenges of constrained bandwidth on a first peer system 200. The embodiments of the invention address peer segmented data transfer orchestration wherein local peers 201 a, 201 b are instructed to participate in the process of data transfer as depicted in FIG. 4.
  • FIG. 5 illustrates a process in accordance with an embodiment of the invention, which includes the following steps, further described in greater detail below: data identification 301, peer identification 305, data transfer plan creation 310, instruction assignment 315, begin data transfer 320, listen for responses and potential status messages from peers 325, data reconstitution 330, and optionally metric and heuristic processing 335.
  • With reference to FIGS. 4 and 5, identifying the data (301) is synonymous with resolving the asset, which the primary peer 200 (of FIG. 4) is looking to download. This could be as simple as requesting information about the asset, minimally, that it exists and that it is of a certain size. In a grid based system, identifying the data might include connecting to a master asset server 204(1), for example, that manages the resources on the grid network and retrieving the list of grid nodes to pull the data from. The identification of data (301) involves using software running on a computer or device allowing for the identification (presence) of a uniform resource identifier (URI) to a desired asset. In the grid example, it involves at least one central server 204(1), for example, knowledgeable of all assets on the grid and which computers or nodes contain the assets. The identification of the resource could be performed by software running on a second system (not shown) capable of talking to the grid to get the list of servers 204(1), 204(2) . . . 204(x) from which the asset can be retrieved.
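  • As a minimal sketch of data identification (301) in the non-grid case, assuming the asset is reachable over HTTP, a HEAD request is one way to confirm that the asset behind a URI exists and to learn its size; the grid variant would instead query the master asset server, whose interface is deployment-specific and not shown here.

```python
# Sketch: confirm that the asset exists and learn its size via an HTTP HEAD
# request. The commented URI is the sample resource used later in Table 1.
import urllib.request

def identify(uri):
    req = urllib.request.Request(uri, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return int(resp.headers.get("Content-Length", 0))

# size = identify("http://www.server.com/a_big_file.zip")
```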
  • Peer identification (305) can be achieved in many ways. One way is to have a central server 204(1), for example, where all peers 201 a, 201 b register themselves. A peer directory 307 can respond with a list of peers 201 a, 201 b based on specific criteria such as location and performance. The identification and catalog of peers 201 a, 201 b in a network 203 may include basic web forms running on a primary peer 200 allowing users to add their IP address to a list downloadable through the web. Another example may include a central grid server 204(2), for example, wherein peer registration is embodied as nodes that happen to contain all or part of an asset of interest.
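  • The sketch below illustrates, under assumed field names, the kind of peer directory 307 described above: peers register themselves with a central service, and the primary peer asks for candidates filtered by criteria such as location and performance.

```python
# Illustrative peer directory: peers register, and the primary peer looks up
# candidates by location and a minimum observed transfer rate. Field names
# and thresholds are assumptions.
class PeerDirectory:
    def __init__(self):
        self.peers = {}  # ip -> {"location": str, "mbps": float}

    def register(self, ip, location, mbps):
        self.peers[ip] = {"location": location, "mbps": mbps}

    def lookup(self, location=None, min_mbps=0.0):
        return [ip for ip, info in self.peers.items()
                if (location is None or info["location"] == location)
                and info["mbps"] >= min_mbps]

directory = PeerDirectory()
directory.register("9.45.36.100", "lan-203", 1.0)
directory.register("9.45.36.101", "lan-203", 0.2)
print(directory.lookup(location="lan-203", min_mbps=0.5))  # ['9.45.36.100']
```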
  • The creation (310) of a data transfer plan 312 involves breaking up a large file into smaller tasks and assigning each task to identified peers 201 a, 201 b. The smaller tasks are to transfer a subset of the larger file. The decision of which peer 201 a, 201 b receives which portion or how many portions (i.e., “chunks”) is determined by the primary peer 200. This could be performed by force (e.g. divide equally amongst all peers 201 a, 201 b) or with some logic (e.g. the peer directory 307 shows a particular peer 201 a, for example, as having four ethernet connections and bridging multiple networks so it is assigned five times the amount of work). The data transfer plan 312 is preferably a text file including eXtensible markup language (XML) detailing the instructions for other peers 201 a, 201 b to consume. Software is required on each peer system 201 a, 201 b to allow the primary peer 200 the ability to connect over a suitable network 203 or similar connection to other peers 201 a, 201 b.
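  • A minimal sketch of the plan-creation step (310) follows, assuming the primary peer knows the total resource size and a relative weight for each identified peer (e.g., a multi-homed peer could be weighted several times higher); the weighting scheme shown is illustrative, not a prescribed policy.

```python
# Sketch of splitting a resource into byte ranges weighted per peer; the
# last peer absorbs any rounding remainder. Weights are illustrative.
def make_assignments(total_size, peer_weights):
    """peer_weights maps peer IP -> relative weight. Returns IP -> (start, end), end exclusive."""
    total_weight = sum(peer_weights.values())
    assignments, offset = {}, 0
    peers = list(peer_weights.items())
    for i, (ip, weight) in enumerate(peers):
        end = total_size if i == len(peers) - 1 else offset + round(total_size * weight / total_weight)
        assignments[ip] = (offset, end)
        offset = end
    return assignments

# A peer bridging multiple networks gets five times the work, as in the example above.
print(make_assignments(40000, {"9.45.36.100": 1, "9.45.36.101": 1, "9.45.36.102": 5}))
```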
  • After the data transfer plan 312 is sent (315) to each of the peers 201 a, 201 b, each peer 201 a, for example, listens (325) for instructions from other peers 201 b, for example, and responds as best as it is able. At this point, metrics could be posted back to the peer directory 307 to ensure that each peer 201 a, 201 b is not given too many tasks. Alternatively, the peer 201 a, for example, can reject the work item, and the primary peer 200 would be responsible for asking for more peers 201 b, for example, or adjusting the workload. In a preferred implementation, peers 201 a, for example, listen on a network socket for instructions from other peers 201 b, for example. The process of listening on a network socket is well known in the art and requires suitable software on each peer 201 a, 201 b.
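  • The following sketch shows one way a co-opted peer might listen on a network socket for a plan and either accept or reject the work, as described above. Reading the whole plan in a single receive and replying with a plain-text ACCEPT/REJECT are simplifying assumptions.

```python
# Sketch of a co-opted peer waiting for a plan from the primary peer and
# accepting or rejecting the work.
import socket

def serve_one_plan(port=9000, busy=False):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("", port))
        srv.listen()
        conn, addr = srv.accept()
        with conn:
            plan_xml = conn.recv(65536).decode("utf-8")
            if busy:
                conn.sendall(b"REJECT")  # primary peer must re-plan or co-opt other peers
            else:
                conn.sendall(b"ACCEPT")
                # ...parse plan_xml and begin transferring the assigned byte ranges...
            return plan_xml
```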
  • The primary peer 200 listens for the completion of the task. Upon completion, each peer 201 a, 201 b notifies the primary peer 200 of the task status, and the primary peer 200 begins the process of reconstituting the data (330), presumably on the local or optimal network 203. Listening for the completion of the task requires software (possibly embeddable in an appliance) listening on a port for other peers 201 a, 201 b notifying the primary peer 200 of job completion. Additionally, the primary peer 200 could maintain the socket connection for the full duration of the transaction. Alternatively, the peers 201 a, 201 b might leverage a publish/subscribe system for exchanging messages. Publish/subscribe style messaging allows for the efficient broadcast of messages from one to many, but can also facilitate one-to-one messaging in a straightforward, generic way.
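  • As a sketch of the completion-listening behavior, assuming peers report status with a simple DONE text message over TCP (a publish/subscribe system could equally be used, as noted above), the primary peer might wait for all expected notifications like this:

```python
# Sketch: the primary peer waits until every co-opted peer has reported a
# DONE message over TCP. The message format is an assumption.
import socket

def wait_for_completions(expected_peers, port=9001):
    done = set()
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("", port))
        srv.listen()
        while len(done) < expected_peers:
            conn, addr = srv.accept()
            with conn:
                msg = conn.recv(1024).decode("utf-8")  # e.g. "DONE 9.45.36.100 0-10000"
                if msg.startswith("DONE"):
                    done.add(msg)
    return done
```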
  • The final step is to report the performance witnessed by the primary peer 200 so that it can be added into the metrics and algorithms (335) the peer directory 307 uses in returning peer lists. Metric and heuristic processing (335) is an optional component of the embodiments of the invention intended to make the peer-to-peer system less arbitrary. Reporting back performance (335) requires software on the peers 201 a, 201 b and a central server (directory) 204(1), for example. The directory 204(1), for example, listens for feedback on peers 201 a, 201 b it knows about. In a preferred embodiment, the primary peer 200 connects over the network 203 to the directory servers 204(1), 204(2) . . . 204(x) using the Transmission Control Protocol/Internet Protocol (TCP/IP) and submits performance data in the form of an XML document describing the time of interaction, the asset, the peer, and the associated performance metric.
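  • A minimal sketch of this performance-reporting step follows, assuming a directory listening on a known TCP port and an illustrative XML layout for the time of interaction, asset, peer, and metric; the element names and port are assumptions, not specified above.

```python
# Sketch of submitting a performance report as a small XML document over a
# TCP connection to the directory.
import socket
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def report_performance(directory_host, asset, peer_ip, rate_mb_per_sec, port=9002):
    report = ET.Element("performance-report")
    ET.SubElement(report, "time").text = datetime.now(timezone.utc).isoformat()
    ET.SubElement(report, "asset").text = asset
    ET.SubElement(report, "peer").text = peer_ip
    ET.SubElement(report, "metric", name="transfer-rate-mb-per-sec").text = str(rate_mb_per_sec)
    with socket.create_connection((directory_host, port), timeout=5) as conn:
        conn.sendall(ET.tostring(report))

# report_performance("directory.example.com", "a_big_file.zip", "9.45.36.100", 0.8)
```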
  • The identification of peers (305) encompasses several solutions, such as a community server, a master directory, a seeded list, and peer discovery. A community server approach is a server-centric model where peers connect to a main server to accomplish peer awareness. Similarly, a master directory stores all the known peers, but may not provide the services that the community server offers. Seeded lists are groups of random peer identifiers enabling decentralized discovery of the network 203. Peer discovery is accomplished by several techniques, the simplest of which is pinging the subnet to discover peers 201 a, 201 b. Pinging occurs when a system 200, 201 a, 201 b is connected to a network 203 and sends a specific message requesting acknowledgement. When pinging a subnet, a primary system 200 is not addressing a specific system on the network 203. Rather, it is sending a message to any system 201 a, 201 b on the network 203 and looking for which systems respond. Various characteristics contribute to an overall weighting of each peer 201 a, 201 b; examples include ping response time, average past task completion performance, or geography. These peer characteristics can optionally be provided through a common server or peer directory 307. A simple subnet sweep is sketched below.
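  • As a rough sketch of subnet-based discovery, the code below sweeps an assumed /24 subnet and records response times, probing an assumed peer-service TCP port rather than sending ICMP echo requests (which require raw-socket privileges); the subnet prefix, port, and timeout are all assumptions.
    import socket
    import time

    def discover_peers(prefix="192.168.1", port=9000, timeout=0.2):
        """Return (address, response_time) pairs for hosts answering on the peer port, fastest first."""
        peers = []
        for host in range(1, 255):
            address = f"{prefix}.{host}"
            started = time.monotonic()
            try:
                with socket.create_connection((address, port), timeout=timeout):
                    # Response time can contribute to the peer's overall weighting.
                    peers.append((address, time.monotonic() - started))
            except OSError:
                continue   # no peer software listening at this address
        return sorted(peers, key=lambda item: item[1])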
  • Client peers 201 a, 201 b can either be brokered through a common server 204(1), for example, or report back to a common server 204(1), for example, on the current characteristics of a specific data transfer. For example, if a primary peer 200 wants to transfer the file “data.zip” to a requesting user, the peers 201 a, 201 b that have that file might be known, but the best peers will typically be local peers. For example, if the peer is in the U.S. north east corridor, then transferring from China or Japan is less optimal than transferring from Toronto, Canada. In addition, there may be local peers that do not have the bandwidth to help or are too busy, in which case other local peers are more advantageous. Identifying peers 201 a, 201 b with the exact file may be performed simply by file name, but generally requires other attributes to match, such as file size, timestamp, author, checksum, MD5 hash, or digital signature; one such attribute check is sketched below.
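  • A minimal sketch of such an attribute check follows, comparing file name, size, and an MD5 digest; the record format is an assumption made only for this illustration.
    import hashlib
    import os

    def describe_asset(path):
        """Summarize a local copy of an asset by name, size, and MD5 digest."""
        md5 = hashlib.md5()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(1 << 20), b""):
                md5.update(block)
        return {"name": os.path.basename(path),
                "size": os.path.getsize(path),
                "md5": md5.hexdigest()}

    def same_asset(local_record, remote_record):
        # The file name alone is not enough; size and digest must also agree.
        return all(local_record[key] == remote_record[key] for key in ("name", "size", "md5"))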
  • The creation and transfer (310) of a data transfer plan 312 to peers 201 a, 201 b identifies the resource in question and the portions required for transfer by each peer 201 a, 201 b. In one form, this data transfer plan 312 is embodied as a list with the URI, the peer identifier, and the byte ranges which that peer requests. Table 1 illustrates a sample data transfer plan in accordance with an embodiment of the invention.
  • TABLE 1
    Sample data transfer plan
    http://www.server.com/a_big_file.zip, 40000
    9.45.36.100,0,10000
    9.45.36.101,10001,20000
    9.45.36.102,20001,30000
  • The first line in Table 1 provides a URI to the data, identifying the protocol, server name, data name, and resource size. The second, third, and fourth lines of Table 1 identify the IP addresses of the co-opted peers and the data range which each peer is requested to transfer. For example, the second line states that peer 9.45.36.100 requests bytes 0 through 10000 of “a_big_file.zip” from www.server.com using a hypertext transfer protocol (HTTP) connection. Other transfer protocols are possible, such as the File Transfer Protocol (FTP) or the Network News Transfer Protocol (NNTP). The numbers defining the range assigned to each peer could be specified in bytes, kilobytes, megabytes, etc. Additionally, the URI to the resource might point to a grid system or to multiple host systems that could be used to transfer the data. The host systems to transfer from could be specified in the instructions or determined by the nature of the implementation, as in a grid system where the grid dictates which peers to transfer from. A sketch of parsing this plan format follows.
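  • The comma-separated plan of Table 1 can be parsed into a simple in-memory structure, as in the sketch below; the dictionary layout mirrors the one used in the earlier sketches and is an assumption for illustration.
    def parse_plan_text(text):
        """Parse a Table 1-style plan: a resource line followed by peer,start,end lines."""
        lines = [line.strip() for line in text.strip().splitlines() if line.strip()]
        uri, size = [field.strip() for field in lines[0].split(",")]
        instructions = []
        for line in lines[1:]:
            peer, start, end = [field.strip() for field in line.split(",")]
            instructions.append({"peer": peer, "start": int(start), "end": int(end)})
        return {"resource": uri, "size": int(size), "instructions": instructions}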
  • Optionally, the data transfer plan 312 might take the form of a self-describing markup (i.e., in XML format) as shown in Table 2. The first node (“<resource . . . />”) defines the resource and size of the total transfer. The second node (<instruction> . . . </instruction>) defines the instructions and includes the unique identifier and the specific instructions for that peer. In this case, the peer is instructed to make two transfers.
  • TABLE 2
    Sample data transfer plan in XML format
    <data-transfer-plan>
    <resource uri="http://www.server.com/a_big_file.zip" size="40000"/>
    <instruction>
    <peer uid="9.45.36.100"/>
    <transfer start="0" end="10000"/>
    <transfer start="30000" end="40000"/>
    </instruction>
    <instruction>
    <peer uid="9.45.36.101"/>
    <transfer start="10001" end="20000"/>
    </instruction>
    <instruction>
    <peer uid="9.45.36.102"/>
    <transfer start="20001" end="30000"/>
    </instruction>
    </data-transfer-plan>
  • The data transfer plan shown in Table 2 is written in XML and includes content similar to that of the data transfer plan in Table 1. The parent node is data-transfer-plan. It includes at least two child nodes, resource and instruction. The resource node describes the data the transfer plan refers to: it provides a URI to the data, identifying the protocol, server name, and data name, and it also identifies the size of the data. Each instruction node (or stanza) has several child nodes: a peer node and at least one transfer node. The peer node has a property called a unique identifier (UID), which here maps to the IP address of the target peer. The transfer node has two properties, start and end, identifying the data range which that peer is requested to transfer. In the case of the first instruction node, there are two transfer nodes, indicating that peer 9.45.36.100 is being asked to transfer more than one segment of the associated data. Attributes and node values may be used interchangeably; for example, <peer> might have a child node <uid> instead of an attribute <peer uid=“ ”>. Additionally, the URI to the resource may point to a grid system or to multiple host systems that might be used to transfer the data. The host systems to transfer from might be specified in the instruction node or determined by the nature of the implementation, as in a grid system where the grid dictates which peers to transfer from. Other transfer protocols are possible, such as FTP or NNTP. The numbers defining the range assigned to each peer could be specified in bytes, kilobytes, megabytes, etc. A sketch of parsing this XML plan follows.
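  • The XML form of the plan can be read with an ordinary XML parser, as in the sketch below, which uses the Python standard library and handles the case of a peer carrying more than one transfer node; the flattened output structure is an assumption for illustration.
    import xml.etree.ElementTree as ET

    def parse_plan_xml(xml_text):
        """Parse a Table 2-style data-transfer-plan document into a flat task list."""
        root = ET.fromstring(xml_text)
        resource = root.find("resource")
        plan = {"resource": resource.get("uri"), "size": int(resource.get("size")), "instructions": []}
        for instruction in root.findall("instruction"):
            peer = instruction.find("peer").get("uid")
            for transfer in instruction.findall("transfer"):
                plan["instructions"].append({"peer": peer,
                                             "start": int(transfer.get("start")),
                                             "end": int(transfer.get("end"))})
        return plan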
  • Again with reference to FIGS. 4 and 5, callback notification from a peer 201 a, 201 b to the primary peer 200 offers an alternative to the data transfer plan 312. Additionally, data transfer plans 312 might not be transmitted in whole to each peer 201 a, 201 b; individual peers 201 a, 201 b instead request a task to be performed and, when it is completed, ask for any other tasks to be performed. Transmitting the task list in whole offers opportunities for peers 201 a, 201 b to “collaborate” on accomplishing the task. For example, one peer system 201 a might be transferring slowly but still have tasks remaining, while another peer system 201 b might be transferring quickly but have no further tasks. Given the complete data transfer plan 312, the primary peer system 200 can query the peer systems 201 a, 201 b to establish a link and a task transfer.
  • In another example, a peer 201 a, 201 b indicates to the primary (i.e., master) peer 200 that it is finished with the data transfer assigned to it. It can also ask for another segment of data to transfer. The primary peer 200 queries the current state of the data transfer from the local peers 201 a, 201 b and reassigns tasks or portions of tasks. The connection from the primary peer 200 to the secondary peers 201 a, 201 b requires software running on each peer 201 a, 201 b capable of listening and responding to messages over the network 203. For example, a third peer (not shown) might have been asked to transfer 10000-30000 bytes but has only been able to transfer 20000 bytes. The primary peer 200 can assign 25000-30000 to an idle, more advantageous peer 201 a, for example; this reassignment is sketched below. Upon completion, the primary peer system 200 can query for the final data transfer plan 312 to reconstitute the data by connecting to each peer 201 a, 201 b over the network 203. Alternatively, the primary peer 200 requests each part of a file from the other peers 201 a, 201 b by connecting to each of the peers 201 a, 201 b over the network 203 using a suitable protocol supporting two-way message transfer (send and respond). The other peer systems can respond with the data stream, or with a pointer to the data stream, by reading the data as it is stored in memory (hard disk, RAM, network storage) and writing it to a network port (not shown) where the primary peer 200 is listening.
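  • The reassignment step can be sketched as follows; the plan structure matches the earlier sketches, and splitting the unfinished range at its midpoint (so the lagging peer keeps the front half and the idle peer takes the back half, as in the 25000-30000 example above) is an illustrative policy rather than a requirement of the embodiments.
    def reassign_tail(plan, lagging_peer, last_byte_done, idle_peer):
        """Give the back half of a lagging peer's unfinished byte range to an idle peer."""
        for task in plan["instructions"]:
            if task["peer"] == lagging_peer and task["start"] <= last_byte_done < task["end"]:
                midpoint = (last_byte_done + task["end"]) // 2
                new_task = {"peer": idle_peer, "start": midpoint, "end": task["end"]}
                task["end"] = midpoint   # the lagging peer now finishes only the front half
                plan["instructions"].append(new_task)
                return new_task
        return None   # nothing found to reassign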
  • As previously mentioned, an optional step is to process (330) performance data to aid in the optimal selection of peer systems 201 a, 201 b. This step may include submitting data to a central server 204(1), for example. Alternatively, the peer systems 201 a, 201 b could store the data locally on the immediate system's storage (RAM, hard drive, etc.). Further algorithms may be run to determine weightings for each known peer system 201 a, 201 b. For example, data on the average transfer speed of a given peer 201 a, for example, may be captured. The peer 201 a may decide, through preprogrammed rules or through end-user intervention or preferences, that average transfer rate is important; the peers 201 b, for example, with slow transfer rates are then selected last for co-option. A sketch of such a weighting follows.
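  • The sketch below keeps a running record of observed transfer rates per peer and ranks candidates so that the slowest are co-opted last; storing the samples locally (rather than on a central server) and using a simple arithmetic mean are assumptions made for illustration.
    from collections import defaultdict

    class PeerMetrics:
        """Local record of per-peer transfer rates used to rank candidate peers."""
        def __init__(self):
            self._samples = defaultdict(list)

        def record(self, peer_id, bytes_per_second):
            self._samples[peer_id].append(bytes_per_second)

        def average(self, peer_id):
            samples = self._samples.get(peer_id)
            return sum(samples) / len(samples) if samples else 0.0

        def rank(self, candidates):
            # Fastest first; peers with no history (average 0.0) fall to the end.
            return sorted(candidates, key=self.average, reverse=True)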
  • Another optional step is for each peer 201 a, 201 b to process requests only from trusted peers (not shown). Trusted peers may be managed centrally through central server policies or by end-user interaction. Prompting the end user that peer ‘X’ is requesting trust status is one method of building a list of trusted peers; the end user might grant trust for this time only or choose to always trust peer ‘X’, as sketched below. Additionally, communication between peers 200, 201 a, 201 b might be encrypted through well-known encryption techniques.
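  • A minimal sketch of such a trust check follows; the in-memory trust list, the prompt wording, and the single-character answers are assumptions, and a central policy server could supply the same decisions.
    trusted_peers = set()   # peers the end user has chosen to always trust

    def is_trusted(peer_id, prompt=input):
        """Return True if the request from peer_id should be processed."""
        if peer_id in trusted_peers:
            return True
        answer = prompt(f"Peer {peer_id} is requesting trust status "
                        f"[a]lways / this time [o]nly / [n]o: ").strip().lower()
        if answer == "a":
            trusted_peers.add(peer_id)
        return answer in ("a", "o")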
  • The embodiments of the invention can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment including both hardware and software elements. In a preferred embodiment, the invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
  • Furthermore, the embodiments of the invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer-readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W) and DVD.
  • A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
  • Input/output (I/O) devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
  • A representative hardware environment for practicing the embodiments of the invention is depicted in FIG. 6. This schematic drawing illustrates a hardware configuration of an information handling/computer system in accordance with the embodiments of the invention. The system comprises at least one processor or central processing unit (CPU) 10. The CPUs 10 are interconnected via system bus 12 to various devices such as a random access memory (RAM) 14, read-only memory (ROM) 16, and an input/output (I/O) adapter 18. The I/O adapter 18 can connect to peripheral devices, such as disk units 11 and tape drives 13, or other program storage devices that are readable by the system. The system can read the inventive instructions on the program storage devices and follow these instructions to execute the methodology of the embodiments of the invention. The system further includes a user interface adapter 19 that connects a keyboard 15, mouse 17, speaker 24, microphone 22, and/or other user interface devices such as a touch screen device (not shown) to the bus 12 to gather user input. Additionally, a communication adapter 20 connects the bus 12 to a data processing network 25, and a display adapter 21 connects the bus 12 to a display device 23 which may be embodied as an output device such as a monitor, printer, or transmitter, for example.
  • Enabling client-side orchestration allows segmented data transfer to be multiplied over two or more systems, moving the data transfer bottleneck to the total bandwidth capacity of the network and the capacity of the resource server to transmit the data. The resulting transfer is faster than a multi-segment transfer from a single system. The ability of peers to cooperate and transfer parts of the same asset leverages both server topologies and network topologies to utilize all available resources. Multiple servers 204(1), 204(2) . . . 204(x) are capable of responding to multiple requests for the same asset and for different parts of the same asset. The peer network 203 provided by an embodiment of the invention is capable of requesting the same file, or parts of files, from different servers 204(1), 204(2) . . . 204(x). This utilizes software on each of the peers 201 a, 201 b and software on the servers 204(1), 204(2) . . . 204(x).
  • First, the software on the servers 204(1), 204(2) . . . 204(x) is required simply to return all or part of a requested asset. Second, the software on the peers 201 a, 201 b is required both to submit and to receive instructions as to which part or parts of an asset to transfer, and then subsequently to transfer those parts to the primary peer 200. A central server (not shown) may include software that keeps track of the peers 201 a, 201 b for a given network 203. Enabling peers 201 a, 201 b with the software allows for faster transfers, as they are all cooperating to download the same asset from potentially multiple places, reducing the final transfer to a more local, high-performing transfer. The embodiments of the invention provide a system and method for obtaining all the benefits of multi-sourced segmented data transfer while solving the traditional challenge of constrained bandwidth on a primary peer 200. Accordingly, the embodiments of the invention address peer segmented data transfer orchestration wherein local peers 201 a, 201 b are instructed by a primary peer 200 to participate in the process of data transfer.
  • Generally, the embodiments of the invention provide a data transfer system and method comprising a plurality of peer systems 200, 201 a, 201 b arranged in a computer network 203 and at least one data server 204(1), 204(2) . . . 204(x) comprising data and coupled to the plurality of peer systems 200, 201 a, 201 b, wherein the plurality of peer systems comprise a first peer system 200 and at least one second peer system 201 a, 201 b, wherein the first peer system 200 is adapted to instruct the at least one second peer system 201 a, 201 b to collaboratively transfer the data from the at least one data server 204(1), 204(2) . . . 204(x) to the first peer system 200, and wherein the at least one second peer system 201 a, 201 b is adapted to transfer data from the at least one data server 204(1), 204(2) . . . 204(x) to the first peer system 200. Preferably, the plurality of peer systems 200, 201 a, 201 b is grid enabled. The data transfer system further comprises a peer directory 307 adapted to connect the plurality of peer systems 200, 201 a, 201 b to one another.
  • The first peer system 200 is adapted to create a data transfer plan 312 adapted to identify data resources and transfer bandwidth capabilities of each of the at least one second peer system 201 a, 201 b. Additionally, the data transfer plan 312 comprises a URI, a peer identifier, and byte ranges associated with each of the at least one second peer system 201 a, 201 b. The first peer system 200 is further adapted to identify data to be transferred, identify the at least one second peer system 201 a, 201 b capable of transferring portions of the data, and create a data transfer plan 312. Moreover, the at least one second peer system 201 a, 201 b is adapted to send the data transfer plan 312 to the at least one data server 204(1), 204(2) . . . 204(x) and to provide a status message to the first peer system 200. Preferably, the communication between the first peer system 200 and the at least one second peer system 201 a, 201 b occurs through web services. Moreover, the first peer system 200 is further adapted to reconstitute the data, wherein the reconstitution of the data is performed by transferring the data using compression.
  • The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments of the invention that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments of the invention have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments of the invention can be practiced with modification within the spirit and scope of the appended claims.

Claims (35)

1. A data transfer system comprising:
a plurality of peer systems arranged in a computer network; and
at least one data server comprising data and coupled to said plurality of peer systems,
wherein said plurality of peer systems comprise a first peer system and at least one second peer system,
wherein said first peer system is adapted to instruct said at least one second peer system to collaboratively transfer said data from said at least one data server to said first peer system, and
wherein said at least one second peer system is adapted to transfer data from said at least one data server to said first peer system.
2. The data transfer system of claim 1, wherein said plurality of peer systems is grid enabled.
3. The data transfer system of claim 1, wherein said first peer system is adapted to create a data transfer plan adapted to identify data resources and transfer bandwidth capabilities of each of said at least one second peer system.
4. The data transfer system of claim 3, wherein said data transfer plan comprises a uniform resource identifier (URI), a peer identifier, and byte ranges associated with each of said at least one second peer system.
5. The data transfer system of claim 1, wherein said first peer system is further adapted to identify data to be transferred, identify said at least one second peer system capable of transferring portions of said data, and create a data transfer plan; and wherein said at least one second peer system is adapted to send said data transfer plan to said at least one data server and to provide a status message to said first peer system.
6. The data transfer system of claim 1, wherein communication between said first peer system and said at least one second peer system occurs through web services.
7. The data transfer system of claim 1, wherein said first peer system is further adapted to reconstitute said data.
8. The data transfer system of claim 1, further comprising a peer directory adapted to connect said plurality of peer systems to one another.
9. A method of transferring data, said method comprising:
arranging a plurality of peer systems in a computer network;
coupling at least one data server comprising data to said plurality of peer systems, wherein said plurality of peer systems comprise a first peer system and at least one second peer system;
said first peer system instructing said at least one second peer system to collaboratively transfer said data from said at least one data server to said first peer system; and
said at least one second peer system transferring the data from said at least one data server to said first peer system.
10. The method of claim 9, wherein said plurality of peer systems is grid enabled.
11. The method of claim 9, further comprising said first peer system creating a data transfer plan and identifying data resources and transfer bandwidth capabilities of each of said at least one second peer system.
12. The method of claim 11, wherein said data transfer plan comprises a uniform resource identifier (URI), a peer identifier, and byte ranges associated with each of said at least one second peer system.
13. The method of claim 9, further comprising said first peer system identifying data to be transferred, identifying said at least one second peer system capable of transferring portions of said data, and creating a data transfer plan; and wherein said method further comprises said at least one second peer system sending said data transfer plan to said at least one data server and providing a status message to said first peer system.
14. The method of claim 9, wherein communication between said first peer system and said at least one second peer system occurs through web services.
15. The method of claim 9, further comprising said first peer system reconstituting said data.
16. The method of claim 9, further comprising using a peer directory to connect said plurality of peer systems to one another.
17. The method of claim 15, wherein the reconstitution of said data is performed by transferring said data using compression.
18. A program storage device readable by computer, tangibly embodying a program of instructions executable by said computer to perform a method of transferring data, said method comprising:
arranging a plurality of peer systems in a computer network;
coupling at least one data server comprising data to said plurality of peer systems, wherein said plurality of peer systems comprise a first peer system and at least one second peer system;
said first peer system instructing said at least one second peer system to collaboratively transfer said data from said at least one data server to said first peer system; and
said at least one second peer system transferring the data from said at least one data server to said first peer system.
19. The program storage device of claim 18, wherein said plurality of peer systems is grid enabled.
20. The program storage device of claim 18, wherein said method further comprises said first peer system creating a data transfer plan and identifying data resources and transfer bandwidth capabilities of each of said at least one second peer system.
21. The program storage device of claim 20, wherein said data transfer plan comprises a uniform resource identifier (URI), a peer identifier, and byte ranges associated with each of said at least one second peer system.
22. The program storage device of claim 18, wherein said method further comprises said first peer system identifying data to be transferred, identifying said at least one second peer system capable of transferring portions of said data, and creating a data transfer plan; and wherein said method further comprises said at least one second peer system sending said data transfer plan to said at least one data server and providing a status message to said first peer system.
23. The program storage device of claim 18, wherein communication between said first peer system and said at least one second peer system occurs through web services.
24. The program storage device of claim 18, wherein said method further comprises said first peer system reconstituting said data.
25. The program storage device of claim 18, wherein said method further comprises using a peer directory to connect said plurality of peer systems to one another.
26. The program storage device of claim 24, wherein the reconstitution of said data is performed by transferring said data using compression.
27. A service of transferring data, said service comprising:
arranging a plurality of peer systems in a computer network;
coupling at least one data server comprising data to said plurality of peer systems, wherein said plurality of peer systems comprise a first peer system and at least one second peer system;
said first peer system instructing said at least one second peer system to collaboratively transfer said data from said at least one data server to said first peer system; and
said at least one second peer system transferring the data from said at least one data server to said first peer system.
28. The service of claim 27, wherein said plurality of peer systems is grid enabled.
29. The service of claim 27, further comprising said first peer system creating a data transfer plan and identifying data resources and transfer bandwidth capabilities of each of said at least one second peer system.
30. The service of claim 29, wherein said data transfer plan comprises a uniform resource identifier (URI), a peer identifier, and byte ranges associated with each of said at least one second peer system.
31. The service of claim 27, further comprising said first peer system identifying data to be transferred, identifying said at least one second peer system capable of transferring portions of said data, and creating a data transfer plan; and wherein said service further comprises said at least one second peer system sending said data transfer plan to said at least one data server and providing a status message to said first peer system.
32. The service of claim 27, wherein communication between said first peer system and said at least one second peer system occurs through web services.
33. The service of claim 27, further comprising said first peer system reconstituting said data, wherein the reconstitution of said data is performed by transferring said data using compression.
34. The service of claim 27, further comprising using a peer directory to connect said plurality of peer systems to one another.
35. A computer system comprising:
a computer network;
at least one data server comprising data and coupled to said computer network;
a grid enabled first peer system coupled to said computer network;
a plurality of grid enabled second peer systems coupled to said computer network; and
a peer directory adapted to connect said first peer system and said plurality of second peer systems to one another,
wherein said first peer system is adapted to instruct said at least one second peer system to collaboratively transfer said data from said at least one data server to said first peer system,
wherein said plurality of second peer systems are adapted to transfer data from said at least one data server to said first peer system, and
wherein said first peer system is further adapted to identify data to be transferred, identify said at least one second peer system capable of transferring portions of said data, and create a data transfer plan; and wherein said at least one second peer system is adapted to send said data transfer plan to said at least one data server and to provide a status message to said first peer system.
US12/054,838 2005-05-12 2008-03-25 Peer Data Transfer Orchestration Abandoned US20080172472A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/054,838 US20080172472A1 (en) 2005-05-12 2008-03-25 Peer Data Transfer Orchestration

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/128,100 US7490140B2 (en) 2005-05-12 2005-05-12 Peer data transfer orchestration
US12/054,838 US20080172472A1 (en) 2005-05-12 2008-03-25 Peer Data Transfer Orchestration

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/128,100 Continuation US7490140B2 (en) 2005-05-12 2005-05-12 Peer data transfer orchestration

Publications (1)

Publication Number Publication Date
US20080172472A1 true US20080172472A1 (en) 2008-07-17

Family

ID=37420451

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/128,100 Expired - Fee Related US7490140B2 (en) 2005-05-12 2005-05-12 Peer data transfer orchestration
US12/054,838 Abandoned US20080172472A1 (en) 2005-05-12 2008-03-25 Peer Data Transfer Orchestration

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/128,100 Expired - Fee Related US7490140B2 (en) 2005-05-12 2005-05-12 Peer data transfer orchestration

Country Status (6)

Country Link
US (2) US7490140B2 (en)
EP (1) EP1880302A4 (en)
JP (1) JP2009501456A (en)
CN (1) CN101313292A (en)
TW (1) TW200703024A (en)
WO (1) WO2006124084A2 (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005234666A (en) * 2004-02-17 2005-09-02 Nec Corp PoC SYSTEM, PoC SERVER AND PoC CLIENT
US8620713B2 (en) * 2005-07-15 2013-12-31 Sap Ag Mechanism to control delegation and revocation of tasks in workflow system
US8078686B2 (en) * 2005-09-27 2011-12-13 Siemens Product Lifecycle Management Software Inc. High performance file fragment cache
US7797722B2 (en) * 2006-05-26 2010-09-14 Sony Corporation System and method for content delivery
EP2210188A1 (en) * 2007-11-05 2010-07-28 Limelight Networks, Inc. End to end data transfer
US9203928B2 (en) 2008-03-20 2015-12-01 Callahan Cellular L.L.C. Data storage and retrieval
US8458285B2 (en) * 2008-03-20 2013-06-04 Post Dahl Co. Limited Liability Company Redundant data forwarding storage
US20100088268A1 (en) * 2008-10-02 2010-04-08 International Business Machines Corporation Encryption of data fragments in a peer-to-peer data backup and archival network
US9307020B2 (en) * 2008-10-02 2016-04-05 International Business Machines Corporation Dispersal and retrieval of data fragments in a peer-to-peer data backup and archival network
US8935355B2 (en) * 2008-10-02 2015-01-13 International Business Machines Corporation Periodic shuffling of data fragments in a peer-to-peer data backup and archival network
WO2011026661A1 (en) * 2009-09-03 2011-03-10 International Business Machines Corporation Shared-bandwidth multiple target remote copy
EP2372576A1 (en) 2010-03-30 2011-10-05 British Telecommunications public limited company Federated file distribution
US20110246658A1 (en) * 2010-04-05 2011-10-06 International Business Machines Coporation Data exchange optimization in a peer-to-peer network
US9686355B2 (en) * 2010-12-20 2017-06-20 Microsoft Technology Licensing, Llc Third party initiation of communications between remote parties
US9042386B2 (en) * 2012-08-14 2015-05-26 International Business Machines Corporation Data transfer optimization through destination analytics and data de-duplication
CN103780647A (en) * 2012-10-23 2014-05-07 苏州联讯达软件有限公司 Touch screen task relaying method and system
US20150156264A1 (en) * 2013-12-04 2015-06-04 International Business Machines Corporation File access optimization using strategically partitioned and positioned data in conjunction with a collaborative peer transfer system
WO2019232750A1 (en) * 2018-06-07 2019-12-12 Guan Chi Network communication method and system, and peer
CN113657960A (en) * 2020-08-28 2021-11-16 支付宝(杭州)信息技术有限公司 Matching method, device and equipment based on trusted asset data

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7089301B1 (en) * 2000-08-11 2006-08-08 Napster, Inc. System and method for searching peer-to-peer computer networks by selecting a computer based on at least a number of files shared by the computer
JP2002268979A (en) * 2001-03-07 2002-09-20 Nippon Telegr & Teleph Corp <Ntt> Method/device for downloading, downloading program and recording medium with the program recorded thereon
US7272645B2 (en) * 2001-05-25 2007-09-18 Sbc Technology Resources, Inc. Method of improving the reliability of peer-to-peer network downloads
EP1413119B1 (en) * 2001-08-04 2006-05-17 Kontiki, Inc. Method and apparatus for facilitating distributed delivery of content across a computer network
US20030233455A1 (en) * 2002-06-14 2003-12-18 Mike Leber Distributed file sharing system
JP2004080145A (en) * 2002-08-12 2004-03-11 Canon Inc Image server system and its image reproducing method
JP4233328B2 (en) * 2003-01-08 2009-03-04 日立ソフトウエアエンジニアリング株式会社 File download method and system using peer-to-peer technology
KR100427143B1 (en) * 2003-01-17 2004-04-14 엔에이치엔(주) Method for Transmitting and Dowloading Streaming Data
JP2005018159A (en) * 2003-06-23 2005-01-20 Fujitsu Ltd Storage system construction support device, storage system construction support method and storage system construction support program
US7627678B2 (en) * 2003-10-20 2009-12-01 Sony Computer Entertainment America Inc. Connecting a peer in a peer-to-peer relay network

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060288408A1 (en) * 1996-10-17 2006-12-21 Graphon Corporation Virtual private network
US5996025A (en) * 1997-10-31 1999-11-30 International Business Machines Corp. Network transparent access framework for multimedia serving
US6510553B1 (en) * 1998-10-26 2003-01-21 Intel Corporation Method of streaming video from multiple sources over a network
US6801947B1 (en) * 2000-08-01 2004-10-05 Nortel Networks Ltd Method and apparatus for broadcasting media objects with guaranteed quality of service
US20020099844A1 (en) * 2000-08-23 2002-07-25 International Business Machines Corporation Load balancing and dynamic control of multiple data streams in a network
US20030078964A1 (en) * 2001-06-04 2003-04-24 Nct Group, Inc. System and method for reducing the time to deliver information from a communications network to a user
US20040255048A1 (en) * 2001-08-01 2004-12-16 Etai Lev Ran Virtual file-sharing network
US20030084280A1 (en) * 2001-10-25 2003-05-01 Worldcom, Inc. Secure file transfer and secure file transfer protocol
US20030188019A1 (en) * 2002-03-27 2003-10-02 International Business Machines Corporation Providing management functions in decentralized networks
US20030191811A1 (en) * 2002-04-05 2003-10-09 Tony Hashem Method and system for transferring files using file transfer protocol
US20030204605A1 (en) * 2002-04-26 2003-10-30 Hudson Michael D. Centralized selection of peers as media data sources in a dispersed peer network
US20030204613A1 (en) * 2002-04-26 2003-10-30 Hudson Michael D. System and methods of streaming media files from a dispersed peer network to maintain quality of service
US20030204602A1 (en) * 2002-04-26 2003-10-30 Hudson Michael D. Mediated multi-source peer content delivery network architecture
US20030233464A1 (en) * 2002-06-10 2003-12-18 Jonathan Walpole Priority progress streaming for quality-adaptive transmission of data
US20040057379A1 (en) * 2002-09-20 2004-03-25 Fei Chen Method and apparatus for identifying delay causes in traffic traversing a network
US20060085823A1 (en) * 2002-10-03 2006-04-20 Bell David A Media communications method and apparatus
US20040088369A1 (en) * 2002-10-31 2004-05-06 Yeager William J. Peer trust evaluation using mobile agents in peer-to-peer networks
US20040172476A1 (en) * 2003-02-28 2004-09-02 Chapweske Justin F. Parallel data transfer over multiple channels with data order prioritization
US20070028133A1 (en) * 2005-01-28 2007-02-01 Argo-Notes, Inc. Download method for file by bit torrent protocol

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090144362A1 (en) * 2007-12-01 2009-06-04 Richmond Evan P Systems and methods for providing desktop messaging and end-user profiling
US20100169334A1 (en) * 2008-12-30 2010-07-01 Microsoft Corporation Peer-to-peer web search using tagged resources
US8583682B2 (en) * 2008-12-30 2013-11-12 Microsoft Corporation Peer-to-peer web search using tagged resources
US20140115093A1 (en) * 2012-10-22 2014-04-24 Digi International Inc. Remote data exchange and device management with efficient file replication over heterogeneous communication transports

Also Published As

Publication number Publication date
JP2009501456A (en) 2009-01-15
WO2006124084A2 (en) 2006-11-23
US20060259573A1 (en) 2006-11-16
US7490140B2 (en) 2009-02-10
WO2006124084A3 (en) 2008-07-31
TW200703024A (en) 2007-01-16
CN101313292A (en) 2008-11-26
EP1880302A2 (en) 2008-01-23
EP1880302A4 (en) 2010-05-26

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION