CN102483707B - Systems and methods for retaining source IP in a load-balancing multi-core environment - Google Patents

Systems and methods for retaining source IP in a load-balancing multi-core environment

Info

Publication number
CN102483707B
CN102483707B (application CN201080036985.3A)
Authority
CN
China
Prior art keywords
core
port
packet
client
server
Prior art date
Legal status
Active
Application number
CN201080036985.3A
Other languages
Chinese (zh)
Other versions
CN102483707A (en)
Inventor
D·格尔
Current Assignee
Citrix Systems Inc
Original Assignee
Citrix Systems Inc
Priority date
Filing date
Publication date
Application filed by Citrix Systems Inc filed Critical Citrix Systems Inc
Publication of CN102483707A
Application granted
Publication of CN102483707B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/22 - Parsing or analysis of headers
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals, considering the load
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 - Routing or path finding of packets in data switching networks
    • H04L45/74 - Address processing for routing
    • H04L45/745 - Address table lookup; Address filtering

Abstract

Described herein are methods and systems for distributing requests and responses across a multi-core system. Each core executes a packet engine that further processes the packets assigned to that core. A flow distributor executing in the multi-core system forwards a client request to the packet engine on a core that is selected based on a value generated by applying a hash to a tuple comprising the client IP address, client port, server IP address and server port identified in the request. The packet engine retains the client IP address, selects a first port of the core, and determines whether a hash of a tuple comprising those values identifies the selected core. The client request is then modified so that it carries a tuple comprising the client IP address, the server IP address, the first port and the server port.

Description

Systems and methods for retaining source IP in a load-balancing multi-core environment
Related application
This application claims priority to U.S. Patent Application No. 12/489,165, entitled "Systems and Methods for Retaining Source IP in Load Balancing Multi-Core Environment", filed on June 22, 2009, which is incorporated herein by reference in its entirety.
Technical field
The present application relates generally to data communication networks. In particular, it relates to systems and methods for distributing packets received by a multi-core system to the cores of that multi-core system.
Background
In a multi-core system, any of the cores may perform the same or a different set of functions. The multi-core system may deploy a receive-side scaler, such as Microsoft's receive-side scaling technology, to distribute packets received from a network interface card to any of the cores for processing. The receive-side scaler has no knowledge of the functions each core is performing: it receives network packets from the network interface card and forwards each packet to a core according to a predetermined function. A network packet may be part of a transaction or series of packets within some context, and because of the way the receive-side scaler distributes traffic, some of those packets may be sent to different cores. This can cause an imbalance in how function sets are executed and how processing is distributed across the multi-core system.
Summary of the invention
A multi-core system may balance network traffic across one or more of its cores. Such a multi-core system can be included in an appliance or a computing system and can comprise any number of cores or processors. In some embodiments, the multi-core system distributes network traffic according to a flow distribution model such as functional parallelism, in which each core of the multi-core system is assigned a different function, or data parallelism, in which each core of the multi-core system is assigned a different device or module. These distribution schemes do not take the network traffic itself into account, so the resulting traffic distribution is often neither even nor symmetric. What is needed is a distribution scheme that can distribute network traffic substantially evenly and symmetrically across one or more cores of the multi-core system.
In some cases, distributing network traffic across one or more cores requires changing attributes of that traffic to ensure that returning traffic is routed back to the originating core. Guaranteeing symmetry between the core that issues a request and the core that receives the corresponding response reduces unnecessary copying and caching of packet data, and provides even request and response flows into and out of the multi-core system. Some systems achieve symmetry by changing the tuple associated with a packet, for example by modifying the source IP address and/or source port. In some cases, however, a back-end system may require that the source IP address and/or source port remain unchanged. In those cases, a system is needed that can preserve these packet attributes while still ensuring that a request and its response are processed by substantially the same core of the multi-core system.
The packets that make up the network traffic distributed across the cores of a multi-core system are sometimes fragmented, in which case the multi-core system receives packet fragments rather than complete packets. A system is therefore needed that can process packet fragments while still distributing network traffic evenly and symmetrically across the cores of the multi-core system.
In one aspect, described herein are embodiments of a method for providing symmetric request and response processing in one packet engine of a plurality of packet engines, each of which executes on a corresponding core of a plurality of cores of a multi-core system intermediary to a client and a server. A packet engine executing on a first core of the multi-core system receives, from a flow distributor, a client request destined for the server. The first core is selected by the flow distributor based on a hash of a first tuple comprising the client internet protocol (IP) address, client port, server IP address and server port identified in the client request. The packet engine selects a first IP address from one or more IP addresses of the first core, and selects a first port from a plurality of ports of the first core. The packet engine then determines that a hash of a second tuple, comprising at least the first IP address and the first port, identifies the first core. The packet engine then identifies that the first port is available and modifies the client request so that the first IP address is identified as the client IP address and the first port is identified as the client port.
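The paragraph above is the heart of the scheme: the flow distributor hashes the request's 4-tuple to pick a core, and the packet engine on that core rewrites the request with one of its own IP/port pairs chosen so that the rewritten (and mirrored response) tuple hashes back to the same core. The following is a minimal sketch of that idea, not the patented implementation; the core count, the CRC-based order-insensitive hash and all helper names are assumptions made for illustration.

```python
import zlib

NUM_CORES = 4  # illustrative; the appliance may have any number of cores

def tuple_hash(ip_a, port_a, ip_b, port_b):
    # Order-insensitive toy hash so a request tuple and the mirrored response
    # tuple map to the same value; a real flow distributor (e.g. RSS) would
    # use its own hash function.
    ends = sorted([f"{ip_a}:{port_a}", f"{ip_b}:{port_b}"])
    return zlib.crc32("|".join(ends).encode())

def core_for(ip_a, port_a, ip_b, port_b, num_cores=NUM_CORES):
    return tuple_hash(ip_a, port_a, ip_b, port_b) % num_cores

def select_source_port(core_id, core_ip, server_ip, server_port,
                       candidate_ports, in_use):
    # Pick an available port owned by this core whose rewritten tuple still
    # hashes back to core_id, so the server's response returns to this core.
    for port in candidate_ports:
        if (core_ip, port) in in_use:
            continue
        if core_for(core_ip, port, server_ip, server_port) == core_id:
            return port
    return None

if __name__ == "__main__":
    client_ip, client_port = "203.0.113.7", 51515
    server_ip, server_port = "198.51.100.10", 80
    first_core = core_for(client_ip, client_port, server_ip, server_port)
    core_ip = "10.0.0.5"  # hypothetical IP address owned by the selected core
    port = select_source_port(first_core, core_ip, server_ip, server_port,
                              range(32768, 33268), set())
    if port is not None:
        # Modified request tuple: (core_ip, port, server_ip, server_port)
        assert core_for(core_ip, port, server_ip, server_port) == first_core
```

Because the toy hash ignores the order of the two endpoints, the server's response (whose source and destination are swapped) hashes to the same core as the rewritten request, which is the symmetry the method is after.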
In some embodiments, the packet engine transmits the modified client request to the server.
In some embodiments, the flow distributor receives a response to the client request from the server and distributes the response to the first core of the packet engine based on a hash of a third tuple comprising the client IP address, client port, server IP address and server port identified in the response.
In some embodiments, the packet engine determines that the hash of the first tuple identifies the first core on which the packet engine executes. In other embodiments, the packet engine determines that the hash of the second tuple identifies the first core on which the packet engine executes.
In some embodiments, the packet engine determines that the first port is unavailable, selects a second port from the plurality of ports of the first core, determines that the second port is available, and determines that a hash of a fourth tuple, comprising at least the first IP address and the second port, identifies the first core. The packet engine then modifies the client request so that the first IP address is identified as the client IP address and the second port is identified as the client port.
In one embodiment, the packet engine determines that the first port is unavailable, selects a second IP address from the one or more IP addresses of the first core, selects a second port from the plurality of ports of the first core, and determines that a hash of a fifth tuple, comprising at least the second IP address and the second port, identifies the first core. The packet engine then modifies the client request so that the second IP address is identified as the client IP address and the second port is identified as the client port.
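A sketch of the fallback behaviour described in the two embodiments above, reusing core_for() from the earlier sketch: if the first port is unavailable, further ports of the same core-owned IP are tried, and if necessary further core-owned IP addresses, until a free (ip, port) pair is found that also hashes back to this core. The table and helper names are illustrative assumptions.

```python
def select_rewrite_pair(core_id, core_ips, server_ip, server_port,
                        ports_by_ip, in_use):
    for ip in core_ips:                       # first IP address, then alternates
        for port in ports_by_ip.get(ip, []):  # first port, then alternates
            if (ip, port) in in_use:
                continue                      # port already allocated
            if core_for(ip, port, server_ip, server_port) == core_id:
                in_use.add((ip, port))        # record the allocation
                return ip, port
    return None                               # no owned pair maps to this core
```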
In some embodiments, the packet engine selects the first IP address from a predetermined set of IP addresses of the first core. In other embodiments, the packet engine selects the first port from a port table comprising available ports. In some embodiments, each port included in the port table is selected based in part on one or more hashes of a local IP address of the first core and the local ports associated with each local IP address.
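One possible reading of the port-table construction above, sketched as a precomputation step: a port is placed in a core's table only if hashing it together with one of that core's local IP addresses maps back to that core. The hash and data layout are illustrative assumptions, not the patent's.

```python
import zlib

def build_port_tables(local_ips_by_core, num_cores, ports=range(1024, 65536)):
    # local_ips_by_core: {core_id: [local ip addresses owned by that core]}
    tables = {core_id: [] for core_id in local_ips_by_core}
    for core_id, local_ips in local_ips_by_core.items():
        for ip in local_ips:
            for port in ports:
                if zlib.crc32(f"{ip}:{port}".encode()) % num_cores == core_id:
                    tables[core_id].append((ip, port))
    return tables
```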
In many embodiments, the flow distributor executes within the multi-core system. In some embodiments, the multi-core system comprises at least two cores, and each core stores a port table of the ports available on that core.
In one embodiment, the first core is selected by the flow distributor based in part on the hash of the first tuple. In other embodiments, the packet engine updates a port allocation table to indicate that the first port has been allocated to the packet.
In some aspects, described herein is a system for providing symmetric request and response processing in one packet engine of a plurality of packet engines, each of which executes on a corresponding core of a plurality of cores of a multi-core system intermediary to a client and a server. The system can comprise the multi-core system, located intermediary to the client and the server and comprising a plurality of cores. A flow distributor that receives a request from the client to the server can execute within the multi-core system and select a first core based on a hash of a first tuple comprising the client IP address, client port, server IP address and server port identified in the client request. A packet engine executing on the first core of the multi-core system can receive the client request from the flow distributor. The packet engine can then select a first IP address from one or more IP addresses of the first core and a first port from a plurality of ports of the first core, determine that a hash of a second tuple comprising at least the first IP address and the first port identifies the first core, identify that the first port is available, and modify the client request so that the first IP address is identified as the client IP address and the first port is identified as the client port.
In yet another aspect, described herein are embodiments of a method for maintaining the client IP address and client port while a flow distributor directs network packets to one packet engine of a plurality of packet engines, each of which executes on a core of a plurality of cores of a multi-core system intermediary to the client and the server. A packet engine executing on a first core of the multi-core system receives a client request from the flow distributor, the client request identifying a first tuple comprising the client IP address, client port, server IP address and server port. The flow distributor selects the first core to receive the client request based on a hash of the first tuple. The flow distributor also receives a response to the client request forwarded to the server by the packet engine; the response is generated by the server and comprises a second tuple, and the hash of that second tuple identifies a second core different from the first core whose packet engine received the request. The flow distributor forwards the received response to a second packet engine of the second core. A rule of the flow distributor executing on the second core then directs the response received by the second core to the first core.
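A sketch of the steering rule described above, with assumed data structures: when the tuple hash delivers a response to a core that does not own the connection, that core's flow-distributor rule places the packet where the owning core can read it and notifies the owner with a core-to-core message. This is an illustration of the idea, not the appliance's internals.

```python
def steer_response(local_core_id, packet, connection_owner, shared_buffers,
                   send_core_message):
    key = (packet["src_ip"], packet["src_port"],
           packet["dst_ip"], packet["dst_port"])
    owner = connection_owner.get(key, local_core_id)
    if owner == local_core_id:
        return "process locally"
    shared_buffers[owner].append(packet)            # storage the first core can access
    send_core_message(owner, ("response ready", key))
    return f"directed to core {owner}"
```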
In some embodiments, forwarding the received response to the second packet engine further comprises the second packet engine of the second core storing one or more network packets of the response in a storage unit accessible by the first core. The one or more network packets can be stored in a shared buffer accessible by each core of the multi-core system.
In yet another embodiment, the second core sends a message to the first core, the message identifying the response to be processed by the packet engine of the first core.
In some embodiments, the second packet engine of the second core determines that the response corresponds to a request that was not processed by the second packet engine. This determination can comprise computing a hash of a tuple of the response, the hash identifying the first core. The determination can also comprise looking up a port in a port allocation table to identify the first core.
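A sketch of the two determinations mentioned above: the second core can either recompute the tuple hash of the response (core_for() and NUM_CORES from the earlier sketch) or look the response's destination port up in a port-allocation table that records which core allocated it. The table layout is an assumption for illustration.

```python
def owner_by_hash(packet, num_cores=NUM_CORES):
    return core_for(packet["src_ip"], packet["src_port"],
                    packet["dst_ip"], packet["dst_port"], num_cores)

def owner_by_port_table(packet, port_allocations):
    # port_allocations maps (local_ip, local_port) -> core id that allocated it
    return port_allocations.get((packet["dst_ip"], packet["dst_port"]))
```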
In some embodiments, the packet engine on the first core forwards the client request to the server. The client IP address and client port of the first tuple can be maintained when the client request is forwarded.
In some embodiments, the response can comprise a second tuple that comprises at least the client IP address and the client port of the first tuple. In some embodiments, the hash applied to the first tuple is substantially the same as the hash applied to the second tuple.
In one embodiment, the flow distributor selects the first core based in part on the hash of the first tuple.
In some embodiments, the client IP address is maintained responsive to the packet engine being configured to maintain the client IP address. In these embodiments, the packet engine can be configured to maintain the client IP address responsive to a security policy requiring that the client IP address be maintained. In other embodiments, the client port is maintained responsive to the packet engine being configured to maintain the client port. In these embodiments, the packet engine can be configured to maintain the client port responsive to a security policy requiring that the client port be maintained.
In other aspects, described herein is a method for directing fragmented network packets, by a flow distributor, to one packet engine of a plurality of packet engines, each of which executes on a corresponding core of a plurality of cores of a multi-core system intermediary to the client and the server. A packet engine executing on a first core of the multi-core system receives a client request from the flow distributor, the client request identifying a first tuple comprising the client IP address, client port, server IP address and server port. The flow distributor can select the first core to receive the client request based on a hash of the first tuple. The flow distributor can receive, from the server, a plurality of fragments of a response to the client request forwarded to the server by the packet engine on the first core. Responsive to a second hash, computed by the flow distributor over the source IP address and destination IP address identified by the fragments, the flow distributor can distribute the fragments of the response to a second core. A second packet engine of the second core can then store the fragments and perform one or more fragmentation actions on them. A rule of the flow distributor operating on the second core then determines to direct the fragments received by the second core to the first core.
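A sketch of the fragment path described above, with toy data structures: fragments are steered by a hash over the IP pair only (non-first fragments carry no transport ports), buffered on whichever core that hash selects, and handed to the owning core once reassembly is complete. Field names and helpers are assumptions for illustration.

```python
import zlib

def core_for_fragments(src_ip, dst_ip, num_cores):
    return zlib.crc32(f"{src_ip}|{dst_ip}".encode()) % num_cores

def fragments_complete(frags):
    frags = sorted(frags, key=lambda f: f["offset"])
    if frags[-1]["more_fragments"]:
        return False
    expected = 0
    for f in frags:
        if f["offset"] != expected:
            return False
        expected += len(f["payload"])
    return True

def handle_fragment(frag, buffers, connection_owner, hand_off):
    key = (frag["src_ip"], frag["dst_ip"], frag["ip_id"])
    buffers.setdefault(key, []).append(frag)
    if not fragments_complete(buffers[key]):
        return "waiting for more fragments"
    frags = sorted(buffers.pop(key), key=lambda f: f["offset"])
    first = frags[0]                    # transport ports live in the first fragment
    packet = {"src_ip": first["src_ip"], "dst_ip": first["dst_ip"],
              "src_port": first["src_port"], "dst_port": first["dst_port"],
              "payload": b"".join(f["payload"] for f in frags)}
    owner = connection_owner.get((packet["src_ip"], packet["src_port"],
                                  packet["dst_ip"], packet["dst_port"]))
    hand_off(owner, packet)             # e.g. shared buffer plus core-to-core message
    return f"reassembled and directed to core {owner}"
```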
In some embodiments, storing the fragments further comprises the second packet engine assembling the fragments.
In other embodiments, determining to direct the fragments to the first core further comprises the second packet engine storing the assembled fragments in a storage unit accessible by the first core. In some embodiments, the method also comprises the second core sending a message to the first core instructing the first core to process the assembled fragments.
In some embodiments, determining to direct the fragments to the first core further comprises the second core determining that the first core established the connection. In one embodiment, performing a fragmentation action comprises performing an assembly action; in other embodiments, it comprises performing a bridging action.
In some embodiments, the fragments may be directed to the first core.
In some embodiments, the flow distributor assembles a portion of the fragments and then extracts the source IP address and destination IP address of the second tuple from the assembled portion. In other embodiments, the flow distributor assembles a portion of the fragments until the header of the response has been assembled, and then extracts the source IP address and destination IP address of the second tuple from the header of the assembled response.
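A simplified illustration of the "assemble only until the header is available" variant above, treating the buffered leading fragments simply as the first bytes of the datagram as seen on the wire: accumulate bytes until a full IPv4 header can be parsed, then read the source and destination addresses out of it. This is a toy parser under that assumption, not the appliance's.

```python
import socket
import struct

def ips_from_leading_bytes(leading_fragments):
    data = b"".join(f["payload"] for f in
                    sorted(leading_fragments, key=lambda f: f["offset"]))
    if len(data) < 20:                       # minimum IPv4 header not yet present
        return None
    header_len = (data[0] & 0x0F) * 4
    if len(data) < header_len:
        return None
    src, dst = struct.unpack("!4s4s", data[12:20])
    return socket.inet_ntoa(src), socket.inet_ntoa(dst)
```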
In yet another aspect, described herein are embodiments of a method for providing symmetric request and response processing in one packet engine of a plurality of packet engines while maintaining the client's IP address and proxying the client port, each packet engine executing on a core of a plurality of cores of a multi-core system intermediary to the client and the server. A packet engine executing on a first core of the multi-core system receives a client request from the flow distributor, the client request identifying a first tuple comprising the client IP address, client port, server IP address and server port. Responsive to a first hash of the first tuple, the flow distributor forwards the request to the first core. The packet engine can determine to proxy the client port of the request and to maintain the client IP address. The packet engine can also compute a second hash of the client IP address and the destination IP address to select a port allocation table from a plurality of port allocation tables. Having selected the port allocation table, the packet engine can determine that a hash of a second tuple, comprising at least a first available port from the selected port allocation table and the client IP address, identifies the first core. The packet engine can then modify the client port of the client request to identify the first port.
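A sketch of the "keep the client IP, proxy only the client port" variant described above, reusing core_for() from the earlier sketch: a second hash over the client and destination IPs picks one of several port-allocation tables, and a free port is taken from that table such that the rewritten tuple still hashes to the core handling the request. The table layout and helper names are assumptions made for illustration.

```python
import zlib

def proxy_client_port(core_id, client_ip, client_port, server_ip, server_port,
                      port_tables):
    # port_tables: list of {port: available?} dicts, one per table
    table = port_tables[zlib.crc32(f"{client_ip}|{server_ip}".encode())
                        % len(port_tables)]
    for port, available in table.items():
        if not available:
            continue
        if core_for(client_ip, port, server_ip, server_port) == core_id:
            table[port] = False      # update the selected port-allocation table
            # the request keeps client_ip; only the source port is rewritten
            return (client_ip, port, server_ip, server_port)
    return None
```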
In some embodiments, the packet engine transmits the modified client request to the server. In some embodiments, the packet engine transmits the modified client request to a server located at the destination IP address. In other embodiments, the packet engine determines that the first port of the selected port allocation table is unavailable; upon making this determination, the packet engine selects a second port from the selected port allocation table and determines that the second port is available. The packet engine can also determine that the first port is unavailable by determining that the first port is in use.
In one embodiment, the method also comprises storing a plurality of port allocation tables on each core of the multi-core system. Each port allocation table can be associated with a proxy IP address of the core on which that table is stored. The packet engine can select a port allocation table based in part on a hash of the client IP address and the destination address of a first packet.
In some embodiments, the flow distributor receives a first packet and a second packet, and forwards the first packet to a first core of the multi-core system based in part on a hash of a first tuple comprising at least the first client IP address and the first destination address of the first packet. The flow distributor then forwards the second packet to a second core of the multi-core system based in part on a hash of a second tuple comprising at least the second client IP address and the second destination address of the second packet.
In one embodiment, the method also comprises updating the selected port allocation table to mark the first port as unavailable.
In some aspects, described herein is a system for providing symmetric request and response processing in one packet engine of a plurality of packet engines while maintaining the client's IP address and proxying the client port, each packet engine executing on a core of a plurality of cores of a multi-core system intermediary to the client and the server. The system can comprise the multi-core system located intermediary to the client and the server. The system can also comprise a flow distributor that receives a request from the client to the server and selects a first core based on a hash of a first tuple comprising the client IP address, client port, server IP address and server port identified in the client request. A packet engine executing on the first core of the multi-core system can receive the client request from the flow distributor and determine to proxy the client port of the request while maintaining the client IP address. The packet engine then computes a second hash of the client IP address and the destination IP address to select one of a plurality of port allocation tables, and determines that a hash of a second tuple, comprising at least a first available port from the selected port allocation table and the client IP address, identifies the first core. The packet engine then modifies the client port of the client request to identify the first port.
The details of various embodiments of the methods and systems described herein are set forth in the accompanying drawings and in the description below.
Brief description of the drawings
The foregoing and other objects, aspects, features and advantages of the present invention will become more apparent and better understood by reference to the following description taken in conjunction with the accompanying drawings, in which:
Figure 1A is a block diagram of an embodiment of a network environment in which a client accesses a server via an appliance;
Figure 1B is a block diagram of an embodiment of an environment for delivering a computing environment from a server to a client via an appliance;
Figure 1C is a block diagram of another embodiment of an environment for delivering a computing environment from a server to a client via an appliance;
Figure 1D is a block diagram of another embodiment of an environment for delivering a computing environment from a server to a client via an appliance;
Figures 1E-1H are block diagrams of embodiments of a computing device;
Figure 2A is a block diagram of an embodiment of an appliance for processing communications between a client and a server;
Figure 2B is a block diagram of another embodiment of an appliance for optimizing, accelerating, load-balancing and routing communications between a client and a server;
Figure 3 is a block diagram of an embodiment of a client for communicating with a server via an appliance;
Figure 4A is a block diagram of an embodiment of a virtualization environment;
Figure 4B is a block diagram of another embodiment of a virtualization environment;
Figure 4C is a block diagram of an embodiment of a virtual appliance;
Figure 5A is a block diagram of embodiments of approaches to implementing parallelism in a multi-core system;
Figure 5B is a block diagram of an embodiment of a system utilizing a multi-core system;
Figure 5C is a block diagram of another embodiment of an aspect of a multi-core system;
Figure 6A is a block diagram of an embodiment of a multi-core system;
Figure 6B is a block diagram of an embodiment of a core within a multi-core system;
Figures 7A-7C are flow diagrams of embodiments of a method for distributing data packets across a multi-core system;
Figure 8 is a flow diagram of an embodiment of a method for distributing data packets across a multi-core system based on a hash;
Figure 9 is a flow diagram of an embodiment of a method for distributing data packets across a multi-core system via core-to-core messaging;
Figures 10A-10B are flow diagrams of an embodiment of a method for distributing data packets across a multi-core system while maintaining the client IP address and client port;
Figures 11A-11B are flow diagrams of an embodiment of a method for distributing packet fragments across a multi-core system;
Figure 12A is a flow diagram of an embodiment of a method for distributing data packets across a multi-core system while maintaining the client IP address; and
Figure 12B is a flow diagram of an embodiment of a method for selecting a port allocation table.
The features and advantages of the present invention will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements.
Detailed description
For purposes of reading the description of the various embodiments below, the following descriptions of the sections of the specification and their respective contents may be helpful:
- Section A describes a network environment and computing environment which may be useful for practicing embodiments described herein;
- Section B describes embodiments of systems and methods for delivering a computing environment to a remote user;
- Section C describes embodiments of systems and methods for accelerating communications between a client and a server;
- Section D describes embodiments of systems and methods for virtualizing an application delivery controller;
- Section E describes embodiments of systems and methods for providing a multi-core architecture and environment; and
- Section F describes embodiments of systems and methods for distributing data packets across a multi-core architecture and environment.
A. Network and computing environment
Prior to discussing the specifics of embodiments of the systems and methods of an appliance and/or client, it may be helpful to discuss the network and computing environments in which such embodiments may be deployed. Referring now to Figure 1A, an embodiment of a network environment is depicted. In brief overview, the network environment comprises one or more clients 102a-102n (also generally referred to as local machines 102, or clients 102) in communication with one or more servers 106a-106n (also generally referred to as servers 106, or remote machines 106) via one or more networks 104, 104' (generally referred to as network 104). In some embodiments, a client 102 communicates with a server 106 via an appliance 200.
Although Figure 1A shows a network 104 and a network 104' between the clients 102 and the servers 106, the clients 102 and the servers 106 may be on the same network 104. The networks 104 and 104' can be the same type of network or different types of networks. The network 104 and/or the network 104' can be a local-area network (LAN), such as a company intranet, a metropolitan area network (MAN), or a wide area network (WAN), such as the Internet or the World Wide Web. In one embodiment, network 104' may be a private network and network 104 may be a public network. In some embodiments, network 104 may be a private network and network 104' a public network. In yet another embodiment, networks 104 and 104' may both be private networks. In some embodiments, clients 102 may be located at a branch office of a corporate enterprise and communicate via a WAN connection over the network 104 with the servers 106 located at a corporate data center.
The network 104 and/or 104' may be any type and/or form of network and may include any of the following: a point-to-point network, a broadcast network, a wide area network, a local area network, a telecommunications network, a data communication network, a computer network, an ATM (Asynchronous Transfer Mode) network, a SONET (Synchronous Optical Network) network, an SDH (Synchronous Digital Hierarchy) network, a wireless network and a wireline network. In some embodiments, the network 104 may comprise a wireless link, such as an infrared channel or satellite band. The topology of the network 104 and/or 104' may be a bus, star, or ring network topology. The network 104 and/or 104' and network topology may be of any such network or network topology known to those ordinarily skilled in the art that is capable of supporting the operations described herein.
As shown in Figure 1A, the appliance 200, which may also be referred to as an interface unit 200 or gateway 200, is shown between the networks 104 and 104'. In some embodiments, the appliance 200 may be located on network 104. For example, a branch office of a corporate enterprise may deploy an appliance 200 at the branch office. In other embodiments, the appliance 200 may be located on network 104'. For example, an appliance 200 may be located at a corporate data center. In yet another embodiment, a plurality of appliances 200 may be deployed on network 104. In some embodiments, a plurality of appliances 200 may be deployed on network 104'. In one embodiment, a first appliance 200 communicates with a second appliance 200'. In other embodiments, the appliance 200 could be a part of any client 102 or server 106 on the same or a different network 104, 104' as the client 102. One or more appliances 200 may be located at any point in the network or network communications path between a client 102 and a server 106.
In some embodiments, the appliance 200 comprises any of the network devices referred to as Citrix NetScaler devices, manufactured by Citrix Systems, Inc. of Ft. Lauderdale, Florida. In other embodiments, the appliance 200 includes any of the product embodiments referred to as WebAccelerator and BigIP manufactured by F5 Networks, Inc. of Seattle, Washington. In yet another embodiment, the appliance 205 includes any of the DX acceleration device platforms and/or the SSL VPN series of devices, such as SA700, SA2000, SA4000, and SA6000, manufactured by Juniper Networks, Inc. of Sunnyvale, California. In yet another embodiment, the appliance 200 includes any application acceleration and/or security-related appliances and/or software manufactured by Cisco Systems, Inc. of San Jose, California, such as the Cisco ACE Application Control Engine Module service software and network modules, and the Cisco AVS Series Application Velocity System.
In one embodiment, the system may include multiple, logically-grouped servers 106. In these embodiments, the logical group of servers may be referred to as a server farm 38. In some of these embodiments, the servers 106 may be geographically dispersed. In some cases, a farm 38 may be administered as a single entity. In other embodiments, the server farm 38 comprises a plurality of server farms 38. In one embodiment, the server farm executes one or more applications on behalf of one or more clients 102.
The servers 106 within each farm 38 can be heterogeneous. One or more of the servers 106 can operate according to one type of operating system platform (e.g., WINDOWS NT, manufactured by Microsoft Corp. of Redmond, Washington), while one or more of the other servers 106 can operate according to another type of operating system platform (e.g., Unix or Linux). The servers 106 of each farm 38 do not need to be physically proximate to another server 106 in the same farm 38. Thus, the group of servers 106 logically grouped as a farm 38 may be interconnected using a wide-area network (WAN) connection or a metropolitan-area network (MAN) connection. For example, a farm 38 may include servers 106 physically located on different continents or in different regions of a continent, country, state, city, campus, or room. Data transmission speeds between servers 106 in the farm 38 can be increased if the servers 106 are connected using a local-area network (LAN) connection or some form of direct connection.
Servers 106 may be file servers, application servers, web servers, proxy servers, or gateway servers. In some embodiments, a server 106 may have the capacity to function as either an application server or as a master application server. In one embodiment, a server 106 may include an Active Directory. The clients 102 may also be referred to as client nodes or endpoints. In some embodiments, a client 102 has the capacity to function both as a client node seeking access to applications on a server and as an application server providing access to hosted applications for other clients 102a-102n.
In some embodiments, a client 102 communicates with a server 106. In one embodiment, the client 102 communicates directly with one of the servers 106 in a farm 38. In yet another embodiment, the client 102 executes a program neighborhood application to communicate with a server 106 in a farm 38. In still another embodiment, the server 106 provides the functionality of a master node. In some embodiments, the client 102 communicates with the server 106 in the farm 38 through a network 104. Over the network 104, the client 102 can, for example, request execution of various applications hosted by the servers 106a-106n in the farm 38 and receive output of the results of the application execution for display. In some embodiments, only the master node provides the functionality required to identify and provide address information associated with a server 106' hosting a requested application.
In one embodiment, the server 106 provides the functionality of a web server. In yet another embodiment, the server 106a receives requests from the client 102, forwards the requests to a second server 106b, and responds to the request from the client 102 with a response to the request from the server 106b. In still another embodiment, the server 106 acquires an enumeration of applications available to the client 102 and address information associated with a server 106 hosting an application identified by the enumeration of applications. In yet another embodiment, the server 106 presents the response to the request to the client 102 using a web interface. In one embodiment, the client 102 communicates directly with the server 106 to access the identified application. In yet another embodiment, the client 102 receives application output data, such as display data, generated by execution of the identified application on the server 106.
Referring now to Figure 1B, an embodiment of a network environment deploying multiple appliances 200 is depicted. A first appliance 200 may be deployed on a first network 104 and a second appliance 200' on a second network 104'. For example, a corporate enterprise may deploy a first appliance 200 at a branch office and a second appliance 200' at a data center. In yet another embodiment, the first appliance 200 and the second appliance 200' are deployed on the same network 104. For example, a first appliance 200 may be deployed for a first server farm 38, and a second appliance 200 may be deployed for a second server farm 38'. In another example, a first appliance 200 may be deployed at a first branch office while the second appliance 200' is deployed at a second branch office'. In some embodiments, the first appliance 200 and the second appliance 200' work in cooperation or in conjunction with each other to accelerate network traffic or the delivery of applications and data between a client and a server.
Referring now to Figure 1C, another embodiment of a network environment is depicted in which the appliance 200 is deployed with one or more other types of appliances, for example between one or more WAN optimization appliances 205, 205'. For example, a first WAN optimization appliance 205 is shown between networks 104 and 104', and a second WAN optimization appliance 205' may be deployed between the appliance 200 and one or more servers 106. By way of example, a corporate enterprise may deploy a first WAN optimization appliance 205 at a branch office and a second WAN optimization appliance 205' at a data center. In some embodiments, the appliance 205 may be located on network 104'. In other embodiments, the appliance 205' may be located on network 104. In some embodiments, the appliance 205' may be located on network 104' or network 104''. In one embodiment, the appliances 205 and 205' are on the same network. In yet another embodiment, the appliances 205 and 205' are on different networks. In another example, a first WAN optimization appliance 205 may be deployed for a first server farm 38 and a second WAN optimization appliance 205' for a second server farm 38'.
In one embodiment, the appliance 205 is a device for accelerating, optimizing or otherwise improving the performance, operation, or quality of service of any type and form of network traffic, such as traffic to and/or from a WAN connection. In some embodiments, the appliance 205 is a performance enhancing proxy. In other embodiments, the appliance 205 is any type and form of WAN optimization or acceleration device, sometimes also referred to as a WAN optimization controller. In one embodiment, the appliance 205 is any of the product embodiments referred to as WANScaler manufactured by Citrix Systems, Inc. of Ft. Lauderdale, Florida. In other embodiments, the appliance 205 includes any of the product embodiments referred to as BIG-IP link controller and WANjet manufactured by F5 Networks, Inc. of Seattle, Washington. In yet another embodiment, the appliance 205 includes any of the WX and WXC WAN acceleration device platforms manufactured by Juniper Networks, Inc. of Sunnyvale, California. In some embodiments, the appliance 205 includes any of the Steelhead line of WAN optimization appliances manufactured by Riverbed Technology of San Francisco, California. In other embodiments, the appliance 205 includes any of the WAN-related devices manufactured by Expand Networks Inc. of Roseland, New Jersey. In one embodiment, the appliance 205 includes any of the WAN-related appliances manufactured by Packeteer Inc. of Cupertino, California, such as the PacketShaper, iShared, and SkyX product embodiments provided by Packeteer. In yet another embodiment, the appliance 205 includes any WAN-related appliances and/or software manufactured by Cisco Systems, Inc. of San Jose, California, such as the Cisco Wide Area Network Application Services software and network modules, and Wide Area Network engine appliances.
In one embodiment, the appliance 205 provides application and data acceleration services for branch-office or remote-office users. In one embodiment, the appliance 205 includes optimization of Wide Area File Services (WAFS). In yet another embodiment, the appliance 205 accelerates the delivery of files, such as via the Common Internet File System (CIFS) protocol. In other embodiments, the appliance 205 provides caching in memory and/or storage to accelerate the delivery of applications and data. In one embodiment, the appliance 205 provides compression of network traffic at any level of the network stack or at any protocol or network layer. In yet another embodiment, the appliance 205 provides transport layer protocol optimizations, flow control, performance enhancements or modifications and/or management to accelerate the delivery of applications and data over a WAN connection. For example, in one embodiment, the appliance 205 provides Transmission Control Protocol (TCP) optimizations. In other embodiments, the appliance 205 provides optimizations, flow control, performance enhancements or modifications and/or management for any session or application layer protocol.
In yet another embodiment, the appliance 205 encodes any type and form of data or information into custom or standard TCP and/or IP header fields or option fields of a network packet to announce its presence, functionality or capability to another appliance 205'. In yet another embodiment, an appliance 205' may communicate with another appliance 205' using data encoded in TCP and/or IP header fields or options. For example, an appliance may use TCP options or IP header fields or options to communicate one or more parameters to be used by the appliances 205, 205' in performing functionality such as WAN acceleration, or for working in conjunction with each other.
In some embodiments, the appliance 200 preserves any of the information encoded in the TCP and/or IP headers and/or option fields communicated between appliances 205 and 205'. For example, the appliance 200 may terminate a transport layer connection traversing the appliance 200, such as a transport layer connection between a client and a server traversing appliances 205 and 205'. In one embodiment, the appliance 200 identifies and preserves any encoded information in a transport layer packet transmitted by a first appliance 205 via a first transport layer connection and communicates the transport layer packet with the encoded information to a second appliance 205' via a second transport layer connection.
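An illustrative sketch of the idea above: packing a small capability announcement into a TCP option as (kind, length, data) bytes. Option kind 254 (an RFC 6994 experimental kind) and the 2-byte flag field are used here purely for illustration; the appliances' actual encoding is product-specific and not described by this example.

```python
import struct

def encode_capability_option(capability_flags: int) -> bytes:
    data = struct.pack("!H", capability_flags)     # 2-byte capability bitmap
    return struct.pack("!BB", 254, 2 + len(data)) + data

def decode_capability_option(option: bytes):
    if len(option) < 4 or option[0] != 254 or option[1] != len(option):
        return None
    (flags,) = struct.unpack("!H", option[2:4])
    return flags
```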
Refer now to Fig. 1 D, describe the network environment for transmitting and/or operate the computing environment in client computer 102.In certain embodiments, server 106 comprises the application transfer system 190 for transmitting computing environment or application and/or data file to one or more client computer 102.Generally speaking, client computer 10 is communicated with server 106 with equipment 200 by network 104,104 '.Such as, client computer 102 can reside in the telecottage of company, such as branch offices, and server 106 can reside in corporate data center.Client computer 102 comprises client proxy 120 and computing environment 15.Computing environment 15 can perform or operate for accessing, processing or the application of usage data file.Computing environment 15, application and/or data file can be transmitted via equipment 200 and/or server 106.
In certain embodiments, equipment 200 speed-up computation environment 15 or its any part are to the transmission of client computer 102.In one embodiment, equipment 200 is by the transmission of application transfer system 190 speed-up computation environment 15.Such as, embodiment described herein can be used accelerate the transmission from company's central data center to the stream of remote user positions (branch offices of such as company) application (streaming application) and the accessible data file of this application.In yet another embodiment, equipment 200 accelerates the transport layer flow between client computer 102 and server 106.Equipment 200 can be provided for the speed technology of any transport layer useful load accelerated from server 106 to client computer 102, such as: 1) transport layer connection pool, 2) transport layer connects multiplexed, and 3) transmission control protocol buffering, 4) compression and 5) high-speed cache.In certain embodiments, equipment 200 provides the load balance of server 106 in response to the request from client computer 102.In other embodiments, equipment 200 serves as agency or access services device provides access to one or more server 106.In yet another embodiment, equipment 200 provides the secure virtual private network from the first network 104 of client computer 102 to the second network 104 ' of server 106 to connect, and such as SSL VPN connects.In other embodiments, equipment 200 provides the connection between client computer 102 and server 106 and the application firewall safety, the control and management that communicate.
In certain embodiments, based on multiple manner of execution and based on the arbitrary authentication vs. authorization strategy applied by policy engine 195, application transfer management system 190 provides the application tranmission techniques of desktop computing environment being sent to long-range or other user.Use these technology, long-distance user can obtain computing environment from any network connection device 100 and the application that stores of access services device and data file.In one embodiment, apply transfer system 190 can perform on a server 106 resident or thereon.In yet another embodiment, apply transfer system 190 can reside on multiple server 106a-106n or perform thereon.In certain embodiments, apply transfer system 190 to perform in server zone 38.In one embodiment, the server 106 performing application transfer system 190 also can store or provide application and data file.In yet another embodiment, first group of one or more server 106 can perform application transfer system 190, and different server 106n can store or provide application and data file.In certain embodiments, each application in transfer system 190, application and data file can be resident or be positioned at different servers.In yet another embodiment, apply transfer system 190 any part can resident, perform or be stored in or be distributed to equipment 200 or multiple equipment.
Client computer 102 can comprise the computing environment 15 for performing the application using or process data file.Client computer 102 asks application from server 106 and data file by network 104,104 ' and equipment 200.In one embodiment, equipment 200 can by the request forward from client computer 102 to server 106.Such as, client computer 102 may not have local storage or local addressable application and data file.In response to request, application transfer system 190 and/or server 106 can transmit application and data file to client computer 102.Such as, in one embodiment, application can be transmitted as application stream by server 106, to operate in computing environment 15 on client 102.
In certain embodiments, the CitrixAccess Suite that transfer system 190 comprises Citrix Systems company is applied tMany portion (such as MetaFrame or Citrix PresentationServer tM), and/or Microsoft's exploitation any one in Windows Terminal Service.In one embodiment, applying transfer system 190 can by remote display protocol or otherwise by based on remote computation or calculate based on server and transmit one or more and be applied to client computer 102 or user.In yet another embodiment, application transfer system 190 can transmit one or more by application stream and be applied to client computer or user.
In one embodiment, application transfer system 190 comprises policy engine 195, its for control and management to the access of application, the selection of application execution method and the transmission of application.In certain embodiments, policy engine 195 determines one or more application that user or client computer 102 can be accessed.In yet another embodiment, policy engine 195 determines how application should be sent to user or client computer 102, such as manner of execution.In certain embodiments, application transfer system 190 provides multiple tranmission techniques, the method for therefrom selective gist execution, and such as server-based computing, local stream transmission or transmission application perform for this locality to client computer 120.
In one embodiment, the execution of client computer 102 request applications and the application transfer system 190 comprising server 106 select the method for executive utility.In certain embodiments, server 106 is from client computer 102 acceptance certificate.In yet another embodiment, server 106 receives the request enumerated for useful application from client computer 102.In one embodiment, respond the reception of this request or certificate, application transfer system 190 enumerate for client computer 102 can multiple application programs.Application transfer system 190 receives the request of the application cited by performing.One of application transfer system 190 method selecting predetermined quantity performs cited application, the such as strategy of response policy engine.Application transfer system 190 can select the method performing application, makes client computer 102 receive the application produced by the application program on execution server 106 and exports data.Application transfer system 190 can select the method performing application, makes local machine 10 local executive utility after retrieval comprises multiple application files of application.In yet another embodiment, application transfer system 190 can select the method performing application, to be applied to client computer 102 by network 104 stream transmission.
Client computer 102 can perform, operate or otherwise provide application, described application can be the software of any type and/or form, program or executable instruction, the executable instruction of the web browser of such as any type and/or form, the client computer based on web, client-server application, thin-client computing client, ActiveX control or java applet or other type any that can perform on client 102 and/or form.In certain embodiments, application can be representative client 102 perform on a server 106 based on server or based on long-range application.In one embodiment, server 106 can use any thin-client or remote display protocol carry out display translation to client computer 102, described thin-client or remote display protocol be such as by being positioned at independent computing architecture (ICA) agreement of Citrix Systems Company of Florida State Ft.Lauderdale or the RDP (RDP) of being produced by the Microsoft being positioned at State of Washington Redmond.Application can use the agreement of any type, and it can be, such as, and HTTP client computer, FTP client computer, Oscar client computer or Telnet client computer.In other embodiments, application comprises the software of any type of being correlated with that communicate with VoIP, such as soft IP phone.In a further embodiment, application comprises the arbitrary application relating to real-time data communication, such as, for the application of Streaming video and/or audio frequency.
In some embodiments, a server 106 or server farm 38 may run one or more applications, such as an application providing thin-client computing or a remote display presentation application. In one embodiment, the server 106 or server farm 38 executes as an application any portion of the Citrix Access Suite by Citrix Systems, Inc. (such as MetaFrame or Citrix Presentation Server), and/or any of the Microsoft Windows Terminal Services manufactured by the Microsoft Corporation. In one embodiment, the application is an ICA client developed by Citrix Systems, Inc. of Fort Lauderdale, Florida. In other embodiments, the application includes a Remote Desktop (RDP) client developed by Microsoft Corporation of Redmond, Washington. Additionally, the server 106 may run an application, which, for example, may be an application server providing email services such as Microsoft Exchange manufactured by the Microsoft Corporation of Redmond, Washington, a web or Internet server, a desktop sharing server, or a collaboration server. In some embodiments, any of the applications may comprise any type of hosted service or product, such as GoToMeeting provided by Citrix Online Division, Inc. of Santa Barbara, California, WebEx provided by WebEx, Inc. of Santa Clara, California, or Microsoft Office Live Meeting provided by Microsoft Corporation of Redmond, Washington.
Still with reference to figure 1D, an embodiment of network environment can comprise monitoring server 106A.Monitoring server 106A can comprise the performance monitoring service 198 of any type and form.Performance monitoring service 198 can comprise monitoring, measurement and/or management software and/or hardware, comprises Data Collection, set, analysis, management and report.In one embodiment, performance monitoring service 198 comprises one or more monitoring agent 197.Monitoring agent 197 comprises on the device at such as client computer 102, server 106 or equipment 200 and 205, to perform monitoring, any software of measurement and data collection activity, hardware or its combination.In certain embodiments, monitoring agent 197 comprises the script of such as Visual Basic script or any type of Javascript and form.In one embodiment, monitoring agent 197 performs relative to any application of device and/or user transparent.In certain embodiments, monitoring agent 197 is relative to application or client computer unobtrusively mounted and operation.In yet another embodiment, the installation of monitoring agent 197 and operation there is no need for any equipment of this application or device.
In certain embodiments, monitoring agent 197 is with preset frequency monitoring, measurement and collection data.In other embodiments, monitoring agent 197 is based on detecting that the event of any type and form is monitored, measured and collect data.Such as, monitoring agent 197 can collect data when detecting the request of web page or receiving http response.In another example, monitoring agent 197 can collect data when arbitrary user's incoming event that such as mouse is clicked being detected.Monitoring agent 197 can report or provide any data of monitoring, measure or collecting to monitor service 198.In one embodiment, monitoring agent 197 sends information to monitor service 198 according to arrangement of time or preset frequency.In yet another embodiment, monitoring agent 197 sends information to monitor service 198 when event being detected.
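The two collection modes just described, a preset frequency versus detection of an event, can be illustrated with a minimal sketch. This hypothetical Python class is not the monitoring agent 197; its sample format and method names are invented, and it only shows how scheduled and event-driven samples might be accumulated and later reported.

```python
import time
from typing import Callable, Dict, List

class MonitoringAgentSketch:
    """Toy data collector: samples on a fixed schedule and on observed events."""

    def __init__(self, interval_seconds: float):
        self.interval = interval_seconds
        self.samples: List[Dict] = []
        self._last_scheduled = 0.0

    def on_event(self, name: str, detail: str) -> None:
        # Event-driven collection, e.g. a web page request or an HTTP response.
        self.samples.append({"trigger": name, "detail": detail, "ts": time.time()})

    def tick(self, read_metric: Callable[[], float]) -> None:
        # Scheduled collection at a preset frequency.
        now = time.time()
        if now - self._last_scheduled >= self.interval:
            self.samples.append({"trigger": "schedule", "value": read_metric(), "ts": now})
            self._last_scheduled = now

agent = MonitoringAgentSketch(interval_seconds=1.0)
agent.on_event("http_response", "GET /index.html -> 200")
agent.tick(read_metric=lambda: 0.42)   # e.g. a response-time reading
print(len(agent.samples), "samples collected")
```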
In certain embodiments, monitor service 198 and/or monitoring agent 197 are monitored and performance measurement such as any Internet resources of client computer, server, server zone, equipment 200, equipment 205 or network connection or the carrying out of network infrastructure elements.In one embodiment, monitor service 198 and/or monitoring agent 197 perform monitoring and the performance measurement of any transport layer connection that such as TCP or UDP connects.In yet another embodiment, monitor service 198 and/or monitoring agent 197 are monitored and measure network latency.In yet another embodiment, monitor service 198 and/or monitoring agent 197 are monitored and Measurement bandwidth utilization.
In other embodiments, the monitor service 198 and/or monitoring agent 197 monitors and measures end-user response times. In some embodiments, the monitor service 198 performs monitoring and performance measurement of an application. In another embodiment, the monitor service 198 and/or monitoring agent 197 performs monitoring and performance measurement of any session or connection to the application. In one embodiment, the monitor service 198 and/or monitoring agent 197 monitors and measures performance of a browser. In another embodiment, the monitor service 198 and/or monitoring agent 197 monitors and measures performance of HTTP based transactions. In some embodiments, the monitor service 198 and/or monitoring agent 197 monitors and measures performance of a Voice over IP (VoIP) application or session. In other embodiments, the monitor service 198 and/or monitoring agent 197 monitors and measures performance of a remote display protocol application, such as an ICA client or RDP client. In yet another embodiment, the monitor service 198 and/or monitoring agent 197 monitors and measures performance of any type and form of streaming media. In a further embodiment, the monitor service 198 and/or monitoring agent 197 monitors and measures performance of a hosted application or a Software-As-A-Service (SaaS) delivery model.
In certain embodiments, monitor service 198 and/or monitoring agent 197 perform and apply relevant one or more affairs, the monitoring of request or response and performance measurement.In other embodiments, any part of application layer storehouse is monitored and measured to monitor service 198 and/or monitoring agent 197, and such as any .NET or J2EE calls.In one embodiment, monitor service 198 and/or monitoring agent 197 are monitored and measured database or SQL affairs.In yet another embodiment, monitor service 198 and/or monitoring agent 197 are monitored and are measured any method, function or application programming interface (API) and call.
In one embodiment, monitor service 198 and/or monitoring agent 197 to the one or more equipment via such as equipment 200 and/or equipment 205 from the application of server to client machine and/or the transmission of data is monitored and performance measurement.In certain embodiments, the performance of the transmission of virtualization applications is monitored and measured to monitor service 198 and/or monitoring agent 197.In other embodiments, the performance of the transmission of streaming application is monitored and measured to monitor service 198 and/or monitoring agent 197.In yet another embodiment, monitor service 198 and/or monitoring agent 197 are monitored and are measured and transmit desktop application to client computer and/or the performance performing desktop application on a client.In yet another embodiment, the performance of monitor service 198 and/or monitoring agent 197 monitoring and measuring customer machine/server application.
In one embodiment, the monitor service 198 and/or monitoring agent 197 is designed and constructed to provide application performance management for the application delivery system 190. For example, the monitor service 198 and/or monitoring agent 197 may monitor, measure and manage the performance of the delivery of applications via Citrix Presentation Server. In this example, the monitor service 198 and/or monitoring agent 197 monitors individual ICA sessions. The monitor service 198 and/or monitoring agent 197 may measure the total and per-session system resource usage, as well as application and networking performance. The monitor service 198 and/or monitoring agent 197 may identify the active servers for a given user and/or user session. In some embodiments, the monitor service 198 and/or monitoring agent 197 monitors back-end connections between the application delivery system 190 and an application and/or database server. The monitor service 198 and/or monitoring agent 197 may measure network latency, delay and volume per user session or ICA session.
In some embodiments, the monitor service 198 and/or monitoring agent 197 measures and monitors memory usage for the application delivery system 190, such as total memory usage, per user session and/or per process. In other embodiments, the monitor service 198 and/or monitoring agent 197 measures and monitors CPU usage of the application delivery system 190, such as total CPU usage, per user session and/or per process. In another embodiment, the monitor service 198 and/or monitoring agent 197 measures and monitors the time required to log in to an application, a server, or the application delivery system, such as Citrix Presentation Server. In one embodiment, the monitor service 198 and/or monitoring agent 197 measures and monitors the duration a user is logged into an application, a server, or the application delivery system 190. In some embodiments, the monitor service 198 and/or monitoring agent 197 measures and monitors active and inactive session counts for an application, server or application delivery system session. In yet another embodiment, the monitor service 198 and/or monitoring agent 197 measures and monitors user session latency.
In yet a further embodiment, the monitor service 198 and/or monitoring agent 197 measures and monitors any type and form of server metrics. In one embodiment, the monitor service 198 and/or monitoring agent 197 measures and monitors metrics related to system memory, CPU usage, and disk storage. In another embodiment, the monitor service 198 and/or monitoring agent 197 measures and monitors metrics related to page faults, such as page faults per second. In other embodiments, the monitor service 198 and/or monitoring agent 197 measures and monitors round-trip time metrics. In yet another embodiment, the monitor service 198 and/or monitoring agent 197 measures and monitors metrics related to application crashes, errors and/or hangs.
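As a rough illustration only, the kinds of server metrics listed above could be gathered into a snapshot record such as the hypothetical Python sketch below. The field names, the page-fault calculation and the round-trip input are invented assumptions and do not describe the actual monitor service 198 or monitoring agent 197.

```python
import os
import time
from dataclasses import dataclass, asdict

@dataclass
class ServerMetrics:
    # Illustrative counters only; a real agent would read OS- or product-specific sources.
    timestamp: float
    cpu_load_1min: float
    page_faults_per_sec: float
    round_trip_ms: float

def sample_metrics(prev_faults: int, curr_faults: int, elapsed_s: float,
                   round_trip_ms: float) -> ServerMetrics:
    load = os.getloadavg()[0] if hasattr(os, "getloadavg") else 0.0
    return ServerMetrics(
        timestamp=time.time(),
        cpu_load_1min=load,
        page_faults_per_sec=(curr_faults - prev_faults) / max(elapsed_s, 1e-9),
        round_trip_ms=round_trip_ms,
    )

print(asdict(sample_metrics(prev_faults=1000, curr_faults=1150,
                            elapsed_s=10.0, round_trip_ms=12.5)))
```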
In some embodiments, the monitor service 198 and monitoring agent 197 include any of the product embodiments referred to as EdgeSight manufactured by Citrix Systems, Inc. of Ft. Lauderdale, Florida. In another embodiment, the performance monitoring service 198 and/or monitoring agent 197 includes any portion of the product embodiments referred to as the TrueView product suite manufactured by the Symphoniq Corporation of Palo Alto, California. In one embodiment, the performance monitoring service 198 and/or monitoring agent 197 includes any portion of the product embodiments referred to as the TeaLeaf CX product suite manufactured by the TeaLeaf Technology Inc. of San Francisco, California. In other embodiments, the performance monitoring service 198 and/or monitoring agent 197 includes any portion of the business service management products, such as the BMC Performance Manager and Patrol products, manufactured by BMC Software, Inc. of Houston, Texas.
The client computer 102, server 106 and equipment 200 may be deployed as and/or executed on any type and form of computing device, such as a computer, network device or appliance capable of communicating on any type and form of network and performing the operations described herein. Figs. 1E and 1F depict block diagrams of a calculation element 100 useful for practicing an embodiment of the client computer 102, server 106 or equipment 200. As shown in Figs. 1E and 1F, each calculation element 100 includes a central processing unit 101 and a main memory unit 122. As shown in Fig. 1E, a calculation element 100 may include a visual display device 124, a keyboard 126 and/or a pointing device 127, such as a mouse. Each calculation element 100 may also include additional optional elements, such as one or more input/output devices 130a-130b (generally referred to using reference numeral 130), and a cache memory 140 in communication with the central processing unit 101.
The central processing unit 101 is any logic circuitry that responds to and processes instructions fetched from the main memory unit 122. In many embodiments, the central processing unit is provided by a microprocessor unit, such as: those manufactured by Intel Corporation of Mountain View, California; those manufactured by Motorola Corporation of Schaumburg, Illinois; those manufactured by Transmeta Corporation of Santa Clara, California; the RS/6000 processors manufactured by International Business Machines of White Plains, New York; or those manufactured by Advanced Micro Devices of Sunnyvale, California. The calculation element 100 may be based on any of these processors, or any other processor capable of operating as described herein.
Main storage unit 122 can be can store data and allow microprocessor 101 directly to access one or more memory chips of any memory location, such as static RAM (SRAM), burst SRAM or synchronization burst SRAM (BSRAM), dynamic RAM DRAM, fast page mode DRAM (FPM DRAM), enhancement mode DRAM (EDRAM), growth data exports RAM (EDO RAM), growth data exports DRAM (EDO DRAM), Burst Extended Data exports DRAM (BEDO DRAM), enhancement mode DRAM (EDRAM), synchronous dram (SDRAM), JEDEC SRAM, PC100 SDRAM, double data rate SDRAM (DDR SDRAM), enhancement mode SRAM (ESDRAM), synchronization link DRAM (SLDRAM), direct Rambus DRAM (DRDRAM) or ferroelectric RAM (FRAM).Primary memory 122 can based on any one of above-mentioned storage chip, or other available storage chip any that can run as described herein.In embodiment in fig. ie, processor 101 is communicated with primary memory 122 by system bus 150 (being described in more detail below).Fig. 1 F describes the embodiment of the calculation element 100 that processor wherein is directly communicated with primary memory 122 by port memory 103.Such as, in figure 1f, primary memory 122 can be DRDRAM.
Fig. 1 F describes the embodiment that primary processor 101 wherein is directly communicated with cache memory 140 by the second bus, and the second bus is sometimes also referred to as dorsal part bus.In other embodiments, primary processor 101 uses system bus 150 to communicate with cache memory 140.Cache memory 140 had usually than primary memory 122 response time faster, and was usually provided by SRAM, BSRAM or EDRAM.In embodiment in figure 1f, processor 101 is communicated with multiple I/O device 130 by local system bus 150.Can use various different bus that CPU (central processing unit) 101 is connected to any I/O device 130, described bus comprises VESA VL bus, isa bus, eisa bus, MCA (MCA) bus, pci bus, PCI-X bus, PCI-Express bus or NuBus.For the embodiment that I/O device is video display 124, processor 101 can use advanced graphics port (AGP) to communicate with display 124.Fig. 1 F describes an embodiment of the computing machine 100 that primary processor 101 is directly communicated with I/O device 130b by super transmission (HyperTransport), fast I/O or InfiniBand.Fig. 1 F also describes and mixes local bus and the embodiment directly communicated wherein: processor 101 uses local interconnect bus to communicate with I/O device 130b, directly communicates with I/O device 130a simultaneously.
Calculation element 100 can support any suitable erecting device 116, such as receive as 3.5 inches, the floppy disk of floppy disk 5.25 inch disk or ZIP disk, CD-ROM drive, CD-R/RW driver, DVD-ROM driver, multiple format tape drive, USB device, hard disk drive or be suitable for install as any client proxy 120 or the software of its part and other device any of program.Calculation element 100 can also comprise memory storage 128, such as one or more hard disk drive or Redundant Array of Independent Disks (RAID), for storing operating system and other related software, and for storing the Application Software Program of any program such as relating to client proxy 120.Or, any one of erecting device 116 can be used as memory storage 128.In addition, operating system and software can run from the bootable medium of such as bootable CD, such as for a bootable CD of GNU/Linux, this bootable CD can obtain as GNU/Linux distribution from knoppix.net.
In addition, calculation element 100 can comprise by the network interface 118 of multiple connecting interface to LAN (Local Area Network) (LAN), wide area network (WAN) or the Internet, described multiple connection includes but not limited to standard telephone line, LAN or wide-area network link (such as 802.11, T1, T3,56kb, X.25), broadband connection (as ISDN, frame relay, ATM), wireless connections or above-mentioned some combinations that any or all connects.Network interface 118 can comprise built-in network adapter, network interface unit, PCMCIA network card, card bus network adapter, wireless network adapter, USB network adapter, modulator-demodular unit or be applicable to calculation element 100 interface to can communicating and performing any miscellaneous equipment of the network of any type of operation described herein.Various I/O device 130a-130n can be comprised in calculation element 100.Input media comprises keyboard, mouse, Trackpad, trace ball, microphone and plotting sheet.Output unit comprises video display, loudspeaker, ink-jet printer, laser printer and thermal printer.As referring to figure 1e, I/O device 130 can be controlled by I/O controller 123.I/O controller can control one or more I/O device, such as keyboard 126 and indicating device 127 (as mouse or light pen).In addition, I/O device can also provide memory storage 128 for calculation element 100 and/or install medium 116.In other embodiments, calculation element 100 can provide USB to connect to receive hand-held USB memory storage, such as, by being positioned at California Los Alamitos, the USB flash memory driver line of equipment produced of Twintech Industry company.
In certain embodiments, calculation element 100 can comprise multiple display device 124a-124n or coupled, and these display device can be identical or different type and/or form separately.Thus, any one I/O device 130a-130n and/or I/O controller 123 can comprise the combination of suitable hardware, software or the hardware and software of arbitrary type and/or form, to be connected to support, to allow or to provide and to use multiple display device 124a-124n by calculation element 100.Such as, calculation element 100 can comprise the video adapter of any type and/or form, video card, driver and/or storehouse, with display device 124a-124n interface, communication, be connected or otherwise use display device.In one embodiment, video adapter can comprise multiple connector with multiple display device 124a-124n interface.In other embodiments, calculation element 100 can comprise multiple video adapter, and each video adapter is connected with one or more in display device 124a-124n.In certain embodiments, any portion of the operating system of calculation element 100 can be arranged to and use multiple display 124a-124n.In other embodiments, one or more in display device 124a-124n can be provided by other calculation element one or more, such as such as by calculation element 100a and 100b that network is connected with calculation element 100.These embodiments can comprise the software being designed and being configured to the arbitrary type display device of another computing machine being used as the second display device 124a of calculation element 100.Those of ordinary skill in the art can be familiar with and understand various method and the embodiment that calculation element 100 can be configured to have multiple display device 124a-124n.
In a further embodiment, I/O device 130 can be the bridge 170 between system bus 150 and external communication bus, and described external communication bus such as usb bus, Apple Desktop Bus, RS-232 is connected in series, SCSI bus, FireWire bus, FireWire800 bus, industry ethernet, AppleTalk bus, Gigabit Ethernet bus, asynchronous transfer mode bus, HIPPI bus, Super HIPPI bus, SerialPlus bus, SCI/LAMP bus, Fiber Channel bus or serial SCSI bus.
Operate under the control of the operating system of the usual scheduling in control task of that class calculation element 100 described in Fig. 1 E and 1F and the access to system resource.Calculation element 100 can run any operating system, as windows operating system, the Unix of different release version and (SuSE) Linux OS, for the MAC of any version of macintosh computer any embedded OS, any real time operating system, any open source operating system, any proprietary operating systems, any operating system for mobile computing device, or any other can run on the computing device and complete the operating system of operation described here.Typical operating system comprises: WINDOWS 3.x, WINDOWS 95, WINDOWS 98, WINDOWS 2000, WINDOWS NT 3.51, WINDOWS NT 4.0, WINDOWS CE and WINDOWSXP, and all these are produced by the Microsoft being positioned at State of Washington Redmond; The MacOS produced by the Apple computer being positioned at California Cupertino; The OS/2 produced by the International Business Machine Corporation (IBM) being positioned at New York Armonk; And by being positioned at the (SuSE) Linux OS that can freely use of Caldera company issue or the Unix operating system of any type and/or form of Utah State Salt Lake City, and other.
In other embodiments, calculation element 100 can have different processor, operating system and the input equipment that meet this device.Such as, in one embodiment, computing machine 100 be by the Treo180 of Palm Company, 270,1060,600 or 650 smart phones.In this embodiment, Treo smart phone operates under the control of PalmOS operating system, and comprises stylus input media and five navigation device.In addition, calculation element 100 can be any workstation, desktop computer, on knee or notebook, server, handheld computer, mobile phone, other computing machine any, maybe can communicate and have enough processor abilities and memory capacity to perform calculating or the telecommunication installation of other form of operation described herein.
As shown in Figure 1G, the calculation element 100 may comprise multiple processors and may provide functionality for simultaneous execution of multiple instructions, or for simultaneous execution of one instruction on more than one piece of data. In some embodiments, the calculation element 100 may comprise a parallel processor with one or more cores. In one of these embodiments, the calculation element 100 is a shared memory parallel device, with multiple processors and/or multiple processor cores, accessing all available memory as a single global address space. In another of these embodiments, the calculation element 100 is a distributed memory parallel device with multiple processors, each accessing local memory only. In still another of these embodiments, the calculation element 100 has both some memory that is shared and some memory that can only be accessed by particular processors or subsets of processors. In still another of these embodiments, the calculation element 100, such as a multi-core microprocessor, combines two or more independent processors into a single package, often a single integrated circuit (IC). In yet another of these embodiments, the calculation element 100 includes a chip having a CELL BROADBAND ENGINE architecture and including a Power processor element and a plurality of synergistic processing elements, the Power processor element and the plurality of synergistic processing elements being linked together by an internal high-speed bus, which may be referred to as an element interconnect bus.
In certain embodiments, processor is provided for the function multiple data slice being performed simultaneously to single instruction (SIMD).In other embodiments, processor is provided for the function multiple data slice being performed simultaneously to multiple instruction (MIMD).In another embodiment, processor can use the combination in any of SIMD and MIMD core in single assembly.
In certain embodiments, calculation element 100 can comprise graphics processing unit.Shown in Fig. 1 H in one of these embodiments, calculation element 100 comprises at least one CPU (central processing unit) 101 and at least one graphics processing unit.These embodiments another in, calculation element 100 comprises at least one parallel processing element and at least one graphics processing unit.These embodiments another in, calculation element 100 comprises multiple processing units of any type, and one in multiple processing unit comprises graphics processing unit.
In some embodiments, a first calculation element 100a executes an application on behalf of a user of a client computing device 100b. In another embodiment, a calculation element 100 executes a virtual machine, which provides an execution session within which applications execute on behalf of a user of a client computing device 100b. In one of these embodiments, the execution session is a hosted desktop session. In another of these embodiments, the calculation element 100 executes a terminal services session. The terminal services session may provide a hosted desktop environment. In still another of these embodiments, the execution session provides access to a computing environment, which may comprise one or more of: an application, a plurality of applications, a desktop application, and a desktop session in which one or more applications may execute.
B. equipment framework
Fig. 2 A illustrates an example embodiment of equipment 200.There is provided equipment 200 framework of Fig. 2 A only for example, be not intended to as restrictive framework.As shown in Figure 2, equipment 200 comprises hardware layer 206 and is divided into the software layer of user's space 202 and kernel spacing 204.
The hardware layer 206 provides the hardware elements upon which programs and services within the kernel space 204 and user space 202 are executed. The hardware layer 206 also provides the structures and elements that allow programs and services within the kernel space 204 and user space 202 to communicate data both internally and externally with respect to the equipment 200. As shown in Figure 2, the hardware layer 206 includes a processing unit 262 for executing software programs and services, a memory 264 for storing software and data, network ports 266 for transmitting and receiving data over a network, and an encryption processor 260 for performing functions related to Secure Sockets Layer processing of data transmitted and received over the network. In some embodiments, the central processing unit 262 may perform the functions of the encryption processor 260 in a single processor. Additionally, the hardware layer 206 may comprise multiple processors for each of the processing unit 262 and the encryption processor 260. The processor 262 may include any of the processors 101 described above in conjunction with Figs. 1E and 1F. For example, in one embodiment, the equipment 200 comprises a first processor 262 and a second processor 262'. In other embodiments, the processor 262 or 262' comprises a multi-core processor.
Although the hardware layer 206 of the equipment 200 is illustrated with an encryption processor 260, the processor 260 may be a processor for performing functions related to any encryption protocol, such as the Secure Sockets Layer (SSL) or Transport Layer Security (TLS) protocol. In some embodiments, the processor 260 may be a general purpose processor (GPP), and in further embodiments, may have executable instructions for performing processing of any security related protocol.
Although the hardware layer 206 of equipment 200 includes some element in Fig. 2, but the hardware components of equipment 200 or assembly can comprise the element of any type of calculation element and form, hardware or software, such as composition graphs 1E and the 1F calculation element 100 that illustrates and discuss herein.In certain embodiments, equipment 200 can comprise calculating or the network equipment of server, gateway, router, switch, bridge or other type, and has any hardware related to this and/or software element.
The operating system of the equipment 200 allocates, manages, or otherwise segregates the available system memory into kernel space 204 and user space 202. In the example software architecture 200, the operating system may be any type and/or form of Unix operating system, although the invention is not so limited. As such, the equipment 200 can be running any operating system, such as any of the versions of the Microsoft Windows operating systems, the different releases of the Unix and Linux operating systems, any version of the Mac OS for Macintosh computers, any embedded operating system, any network operating system, any real-time operating system, any open source operating system, any proprietary operating system, any operating system for mobile computing devices or network devices, or any other operating system capable of running on the equipment 200 and performing the operations described herein.
The kernel space 204 is reserved for running the kernel 230, including any device drivers, kernel extensions or other kernel related software. As known to those skilled in the art, the kernel 230 is the core of the operating system, and provides access, control, and management of resources and hardware-related elements of the equipment 200. In accordance with an embodiment of the equipment 200, the kernel space 204 also includes a number of network services or processes working in conjunction with a cache manager 232, sometimes also referred to as the integrated cache, the benefits of which are described in further detail herein. Additionally, the embodiment of the kernel 230 will depend on the embodiment of the operating system installed, configured, or otherwise used by the equipment 200.
In one embodiment, the equipment 200 comprises one network stack 267, such as a TCP/IP based stack, for communicating with the client computer 102 and/or the server 106. In one embodiment, the network stack 267 is used to communicate with a first network, such as network 108, and a second network 110. In some embodiments, the equipment 200 terminates a first transport layer connection, such as a TCP connection of a client computer 102, and establishes a second transport layer connection to a server 106 for use by the client computer 102, e.g., the second transport layer connection is terminated at the equipment 200 and the server 106. The first and second transport layer connections may be established via a single network stack 267. In other embodiments, the equipment 200 may comprise multiple network stacks, for example 267 and 267', and the first transport layer connection may be established or terminated at one network stack 267, and the second transport layer connection at the second network stack 267'. For example, one network stack may be for receiving and transmitting network packets on a first network, and another network stack for receiving and transmitting network packets on a second network. In one embodiment, the network stack 267 comprises a buffer 243 for queuing one or more network packets for transmission by the equipment 200.
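A very small sketch can make the two-connection arrangement concrete: accept the client's connection, open a separate connection to the server, and relay bytes between them. The Python sketch below is a generic, simplified TCP relay under invented assumptions (backend address, port, no error handling or policy), not the network stack 267 or the equipment 200 itself.

```python
import socket
import threading

BACKEND = ("192.0.2.10", 80)   # hypothetical server 106 address

def pipe(src: socket.socket, dst: socket.socket) -> None:
    # Copy bytes until one side closes; each direction runs in its own thread.
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    finally:
        dst.close()

def serve(listen_port: int) -> None:
    with socket.create_server(("0.0.0.0", listen_port)) as listener:
        while True:
            client_sock, _ = listener.accept()                 # first transport layer connection
            server_sock = socket.create_connection(BACKEND)    # second transport layer connection
            threading.Thread(target=pipe, args=(client_sock, server_sock), daemon=True).start()
            threading.Thread(target=pipe, args=(server_sock, client_sock), daemon=True).start()

if __name__ == "__main__":
    serve(8080)
```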
As shown in Figure 2, the kernel space 204 includes the cache manager 232, a high speed layer 2-7 integrated packet engine 240, a crypto engine 234, a policy engine 236 and multi-protocol compression logic 238. Running these components or processes 232, 240, 234, 236 and 238 in kernel space 204 or kernel mode instead of the user space 202 improves the performance of each of these components, alone and in combination. Kernel operation means that these components or processes 232, 240, 234, 236 and 238 run in the core address space of the operating system of the equipment 200. For example, running the crypto engine 234 in kernel mode improves encryption performance by moving encryption and decryption operations into the kernel, thereby reducing the number of transitions between the memory space or a kernel thread in kernel mode and the memory space or a thread in user mode. For example, data obtained in kernel mode may not need to be passed or copied to a process or thread running in user mode, such as from a kernel level data structure to a user level data structure. In another aspect, the number of context switches between kernel mode and user mode is also reduced. Additionally, synchronization of and communications between any of the components or processes 232, 240, 234, 236 and 238 can be performed more efficiently in the kernel space 204.
In certain embodiments, any part of assembly 232,240,234,236 and 238 can be run or operate in kernel spacing 204, and the other parts of these assemblies 232,240,234,236 and 238 can be run or operate in user's space 202.In one embodiment, equipment 200 uses kernel-level data to provide the access of any part to one or more network packet, such as, comprises the network packet from the request of client computer 102 or the response from server 106.In certain embodiments, can by Packet engine 240 by the transport layer driver interface of network stack 267 or filtrator acquisition kernel-level data.Kernel-level data can comprise by the addressable any interface of the kernel spacing 204 relevant to network stack 267 and/or data, the network traffics being received by network stack 267 or sent or grouping.In other embodiments, any assembly or process 232,240,234,236 and 238 can use kernel-level data to carry out the operation of the needs of executive module or process.In an example, when using kernel-level data, assembly 232,240,234,236 and 238 runs in kernel mode 204, and In yet another embodiment, when using kernel-level data, assembly 232,240,234,236 and 238 runs in user model.In certain embodiments, kernel-level data can be copied or be delivered to the second kernel-level data, or the user-level data structure of any expectation.
The cache manager 232 may comprise software, hardware or any combination of software and hardware to provide cache access, control and management of any type and form of content, such as objects or dynamically generated objects served by the originating servers 106. The data, objects or content processed and stored by the cache manager 232 may comprise data in any format, such as a markup language, or data communicated via any protocol. In some embodiments, the cache manager 232 duplicates original data stored elsewhere, or data previously computed, generated or transmitted, in which the original data requires a longer access time to fetch, compute or otherwise obtain relative to reading a cache memory element. Once the data is stored in the cache memory element, subsequent operations can be performed by accessing the cached copy rather than refetching or recomputing the original data, thereby reducing the access time. In some embodiments, the cache memory element may comprise a data object in the memory 264 of the equipment 200. In other embodiments, the cache memory element may comprise memory having a faster access time than the memory 264. In another embodiment, the cache memory element may comprise any type and form of storage element of the equipment 200, such as a portion of a hard disk. In some embodiments, the processing unit 262 may provide cache memory for use by the cache manager 232. In yet further embodiments, the cache manager 232 may use any portion and combination of memory, storage, or a processing unit for caching data, objects, and other content.
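The basic behavior just described, serve a stored copy when one exists, otherwise fetch the original and remember it, can be sketched in a few lines. The following Python class is a generic memoizing cache with expiry and explicit invalidation, offered only as an illustration under invented names and a fixed time-to-live; it is not the cache manager 232.

```python
import time
from typing import Any, Callable, Dict, Tuple

class SimpleCache:
    """Store copies of previously fetched objects and serve them until they expire."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store: Dict[str, Tuple[float, Any]] = {}

    def get(self, key: str, fetch_original: Callable[[], Any]) -> Any:
        entry = self._store.get(key)
        if entry and time.time() - entry[0] < self.ttl:
            return entry[1]                      # cache hit: skip the slow fetch
        value = fetch_original()                 # cache miss: obtain the original data
        self._store[key] = (time.time(), value)
        return value

    def invalidate(self, key: str) -> None:
        # e.g. in response to an invalidation command or an expired invalidation period
        self._store.pop(key, None)

cache = SimpleCache(ttl_seconds=10.0)
print(cache.get("/index.html", fetch_original=lambda: "<html>expensive to generate</html>"))
print(cache.get("/index.html", fetch_original=lambda: "never called on a hit"))
```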
Furthermore, the cache manager 232 includes any logic, functions, rules, or operations to perform any embodiment of the techniques of the equipment 200 described herein. For example, the cache manager 232 includes logic or functionality to invalidate objects based on the expiration of an invalidation time period, or upon receipt of an invalidation command from a client computer 102 or server 106. In some embodiments, the cache manager 232 may operate as a program, service, process or task executing in the kernel space 204, and in other embodiments, in the user space 202. In one embodiment, a first portion of the cache manager 232 executes in the user space 202 while a second portion executes in the kernel space 204. In some embodiments, the cache manager 232 can comprise any type of general purpose processor (GPP), or any other type of integrated circuit, such as a Field Programmable Gate Array (FPGA), Programmable Logic Device (PLD), or Application Specific Integrated Circuit (ASIC).
The policy engine 236 may include, for example, an intelligent statistical engine or other programmable applications. In one embodiment, the policy engine 236 provides a configuration mechanism to allow a user to identify, specify, define or configure a caching policy. The policy engine 236, in some embodiments, also has access to memory to support data structures such as lookup tables or hash tables to enable user-selected caching policy decisions. In other embodiments, the policy engine 236 may comprise any logic, rules, functions or operations to determine and provide access, control and management of objects, data or content being cached by the equipment 200, in addition to access, control and management of security, network traffic, network access, compression, or any other function or operation performed by the equipment 200. Further examples of specific caching policies are described herein.
Crypto engine 234 comprises any logic of any safety-related protocol for manipulating such as SSL or TLS or the process of any function that wherein relates to, business rules, function or operation.Such as, crypto engine 234 is encrypted and is deciphered the network packet transmitted by equipment 200, or its any part.Crypto engine 234 also representative client 102a-102n, server 106a-106n or equipment 200 can arrange or set up SSL or TLS connection.Therefore, crypto engine 234 provides unloading and the acceleration of SSL process.In one embodiment, crypto engine 234 uses tunnel protocol to be provided in the VPN (virtual private network) between client computer 102a-102n and server 106a-106n.In certain embodiments, crypto engine 234 communicates with encryption processor 260.In other embodiments, crypto engine 234 comprises the executable instruction operated on encryption processor 260.
The multi-protocol compression engine 238 comprises any logic, business rules, functions or operations for compressing one or more protocols of a network packet, such as any of the protocols used by the network stack 267 of the equipment 200. In one embodiment, the multi-protocol compression engine 238 compresses bi-directionally between the client computers 102a-102n and the servers 106a-106n any TCP/IP based protocol, including Messaging Application Programming Interface (MAPI) (email), File Transfer Protocol (FTP), HyperText Transfer Protocol (HTTP), Common Internet File System (CIFS) protocol (file transfer), Independent Computing Architecture (ICA) protocol, Remote Desktop Protocol (RDP), Wireless Application Protocol (WAP), Mobile IP protocol, and Voice over IP (VoIP) protocol. In other embodiments, the multi-protocol compression engine 238 provides compression of HyperText Markup Language (HTML) based protocols, and in some embodiments provides compression of any markup language, such as the Extensible Markup Language (XML). In one embodiment, the multi-protocol compression engine 238 provides compression of any high-performance protocol, such as any protocol designed for equipment 200 to equipment 200 communications. In another embodiment, the multi-protocol compression engine 238 compresses any payload of, or any communication using, a modified transport control protocol, such as Transaction TCP (T/TCP), TCP with selective acknowledgements (TCP-SACK), TCP with large windows (TCP-LW), a congestion prediction protocol such as the TCP-Vegas protocol, and a TCP spoofing protocol.
As such, the multi-protocol compression engine 238 accelerates performance for users accessing applications via desktop clients, such as Microsoft Outlook and non-web thin clients, for example any client launched by popular enterprise applications like Oracle, SAP and Siebel, and even mobile clients, such as the Pocket PC. In some embodiments, the multi-protocol compression engine 238, by executing in the kernel mode 204 and integrating with the packet processing engine 240 that accesses the network stack 267, is able to compress any protocol carried by the TCP/IP protocol, such as any application layer protocol.
The high speed layer 2-7 integrated packet engine 240, also generally referred to as the packet processing engine or packet engine, is responsible for managing the kernel-level processing of packets received and transmitted by the equipment 200 via the network ports 266. The high speed layer 2-7 integrated packet engine 240 may comprise a buffer for queuing one or more network packets during processing, such as for receipt of a network packet or transmission of a network packet. Additionally, the high speed layer 2-7 integrated packet engine 240 is in communication with one or more network stacks 267 to send and receive network packets via the network ports 266. The high speed layer 2-7 integrated packet engine 240 works in conjunction with the crypto engine 234, cache manager 232, policy engine 236 and multi-protocol compression logic 238. In particular, the crypto engine 234 is configured to perform SSL processing of packets, the policy engine 236 is configured to perform functions related to traffic management, such as request-level content switching and request-level cache redirection, and the multi-protocol compression logic 238 is configured to perform functions related to compression and decompression of data.
The high speed layer 2-7 integrated packet engine 240 includes a packet processing timer 242. In one embodiment, the packet processing timer 242 provides one or more time intervals to trigger the processing of incoming, i.e., received, or outgoing, i.e., transmitted, network packets. In some embodiments, the high speed layer 2-7 integrated packet engine 240 processes network packets responsive to the timer 242. The packet processing timer 242 provides any type and form of signal to the packet engine 240 to notify, trigger, or communicate a time related event, interval or occurrence. In many embodiments, the packet processing timer 242 operates in the order of milliseconds, such as 100ms, 50ms or 25ms. For example, in some embodiments, the packet processing timer 242 provides time intervals or otherwise causes a network packet to be processed by the high speed layer 2-7 integrated packet engine 240 at a 10ms time interval, while in other embodiments at a 5ms time interval, and in further embodiments at time intervals as short as 3, 2, or 1ms. The high speed layer 2-7 integrated packet engine 240 may be interfaced, integrated or in communication with the crypto engine 234, cache manager 232, policy engine 236 and multi-protocol compression engine 238 during operation. As such, any of the logic, functions, or operations of the crypto engine 234, cache manager 232, policy engine 236 and multi-protocol compression engine 238 may be performed responsive to the packet processing timer 242 and/or the packet engine 240. Therefore, any of the logic, functions, or operations of the crypto engine 234, cache manager 232, policy engine 236 and multi-protocol compression engine 238 may be performed at the granularity of the time intervals provided via the packet processing timer 242, for example, at a time interval of less than or equal to 10ms. For example, in one embodiment, the cache manager 232 may perform invalidation of any cached objects responsive to the high speed layer 2-7 integrated packet engine 240 and/or the packet processing timer 242. In another embodiment, the expiry or invalidation time of a cached object can be set to the same order of granularity as the time interval of the packet processing timer 242, such as every 10ms.
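One way to picture the timer-driven operation above is as a loop that wakes at a fixed granularity, drains the packet queue, and gives periodic work such as cache expiry a chance to run. The Python sketch below is purely illustrative, with an invented queue, interval and callback list; it is not the packet engine 240 or timer 242.

```python
import time
from collections import deque
from typing import Callable, Deque, List

class PacketEngineSketch:
    """Toy timer-driven packet engine: process queued packets every `interval_ms`."""

    def __init__(self, interval_ms: int):
        self.interval_ms = interval_ms
        self.rx_queue: Deque[bytes] = deque()
        self.periodic_callbacks: List[Callable[[], None]] = []  # e.g. cache expiry checks

    def run(self, iterations: int) -> None:
        for _ in range(iterations):
            while self.rx_queue:
                packet = self.rx_queue.popleft()
                self.process(packet)
            for callback in self.periodic_callbacks:
                callback()                     # work performed at timer granularity
            time.sleep(self.interval_ms / 1000.0)

    def process(self, packet: bytes) -> None:
        print(f"processed {len(packet)} bytes")

engine = PacketEngineSketch(interval_ms=10)
engine.rx_queue.extend([b"\x00" * 64, b"\x00" * 1500])
engine.periodic_callbacks.append(lambda: None)  # placeholder for e.g. cache invalidation
engine.run(iterations=1)
```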
In contrast to the kernel space 204, the user space 202 is the memory area or portion of the operating system used by user mode applications or programs running in user mode. A user mode application may not access the kernel space 204 directly and must use service calls in order to access kernel services. As shown in Figure 2, the user space 202 of the equipment 200 includes a graphical user interface (GUI) 210, a command line interface (CLI) 212, shell services 214, a health monitoring program 216, and daemon services 218. The GUI 210 and CLI 212 provide a means by which a system administrator or other user can interact with and control the operation of the equipment 200, such as via the operating system of the equipment 200. The GUI 210 or CLI 212 can comprise code running in the user space 202 or the kernel space 204. The GUI 210 may be any type and form of graphical user interface and may be presented via text, graphics or otherwise, by any type of program or application, such as a browser. The CLI 212 may be any type and form of command line or text-based interface, such as a command line provided by the operating system. For example, the CLI 212 may comprise a shell, which is a tool that enables users to interact with the operating system. In some embodiments, the CLI 212 may be provided via a bash, csh, tcsh, or ksh type shell. The shell services 214 comprise the programs, services, tasks, processes or executable instructions to support interaction with the equipment 200 or the operating system by a user via the GUI 210 and/or CLI 212.
The health monitoring program 216 is used to monitor, check, report and ensure that network systems are functioning properly and that users are receiving requested content over a network. The health monitoring program 216 comprises one or more programs, services, tasks, processes or executable instructions to provide logic, rules, functions or operations for monitoring any activity of the equipment 200. In some embodiments, the health monitoring program 216 intercepts and inspects any network traffic passed via the equipment 200. In other embodiments, the health monitoring program 216 interfaces by any suitable means and/or mechanism with one or more of the following: the crypto engine 234, cache manager 232, policy engine 236, multi-protocol compression logic 238, packet engine 240, daemon services 218, and shell services 214. As such, the health monitoring program 216 may call any application programming interface (API) to determine a state, status, or health of any portion of the equipment 200. For example, the health monitoring program 216 may ping or send a status inquiry on a periodic basis to check if a program, process, service or task is active and currently running. In another embodiment, the health monitoring program 216 may check any status, error or history logs provided by any program, process, service or task to determine any condition, status or error of any portion of the equipment 200.
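The periodic "is this still running?" check can be illustrated with a small sketch that attempts a TCP connection to each monitored service and records the result. The service list, addresses and the connect-based check are invented assumptions for illustration; this is not the health monitoring program 216.

```python
import socket
from typing import Dict, List, Tuple

SERVICES: List[Tuple[str, str, int]] = [
    # (name, host, port) -- hypothetical processes or services to check
    ("web", "127.0.0.1", 80),
    ("dns", "127.0.0.1", 53),
]

def check_once(timeout_s: float = 1.0) -> Dict[str, bool]:
    """Return a health verdict per service based on whether its port accepts a connection."""
    status: Dict[str, bool] = {}
    for name, host, port in SERVICES:
        try:
            with socket.create_connection((host, port), timeout=timeout_s):
                status[name] = True
        except OSError:
            status[name] = False
    return status

if __name__ == "__main__":
    for service, healthy in check_once().items():
        print(f"{service}: {'up' if healthy else 'down'}")
```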
Daemon services 218 are programs that run continuously or in the background and handle periodic service requests received by the equipment 200. In some embodiments, a daemon service may forward the requests to other programs or processes, such as another daemon service 218, as appropriate. As known to those skilled in the art, a daemon service 218 may run unattended to perform continuous or periodic system-wide functions, such as network control, or to perform any desired task. In some embodiments, one or more daemon services 218 run in the user space 202, while in other embodiments, one or more daemon services 218 run in the kernel space.
Referring now to Fig. 2B, another embodiment of the equipment 200 is depicted. In brief overview, the equipment 200 provides one or more of the following services, functionality or operations: SSL VPN connectivity 280, switching/load balancing 284, Domain Name Service resolution 286, acceleration 288 and an application firewall 290 for communications between one or more client computers 102 and one or more servers 106. Each of the servers 106 may provide one or more network related services 270a-270n (referred to as services 270). For example, a server 106 may provide an http service 270. The equipment 200 comprises one or more virtual servers or virtual internet protocol servers, referred to as a vServer 275, vS 275, VIP server, or just VIP 275a-275n (also referred to herein as a vServer 275). The vServer 275 receives, intercepts or otherwise processes communications between a client computer 102 and a server 106 in accordance with the configuration and operation of the equipment 200.
The vServer 275 may comprise software, hardware or any combination of software and hardware. The vServer 275 may comprise any type and form of program, service, task, process or executable instructions operating in user mode 202, kernel mode 204 or any combination thereof in the equipment 200. The vServer 275 includes any logic, functions, rules, or operations to perform any embodiment of the techniques described herein, such as SSL VPN 280, switching/load balancing 284, Domain Name Service resolution 286, acceleration 288 and an application firewall 290. In some embodiments, the vServer 275 establishes a connection to a service 270 of a server 106. The service 270 may comprise any program, application, process, task or set of executable instructions capable of connecting to and communicating with the equipment 200, client computer 102 or vServer 275. For example, the service 270 may comprise a web server, http server, ftp, email or database server. In some embodiments, the service 270 is a daemon process or network driver for listening, receiving and/or sending communications for an application, such as email, a database or an enterprise application. In some embodiments, the service 270 may communicate on a specific IP address, or IP address and port.
In certain embodiments, one or more strategy of vServer 275 application strategy engine 236 is to the network service between client computer 102 and server 106.In one embodiment, this strategy is relevant to vServer 275.In yet another embodiment, this strategy is based on user or user's group.In yet another embodiment, strategy is for general and be applied to one or more vServer 275a-275n, any user communicated with by equipment 200 or user's group.In certain embodiments, the strategy of policy engine has the condition of any this strategy of content application based on communication, the context of the stem in content such as Internet protocol address, port, protocol type, the grouping of communication or field or communication, such as user, user's group, vServer 275, transport layer connect and/or the mark of client computer 102 or server 106 or attribute.
In other embodiments, equipment 200 communicates or interface with policy engine 236, to determine checking and/or the mandate of long-distance user or remote client 102, with access from server 106 computing environment 15, application and/or data file.In yet another embodiment, equipment 200 communicates with policy engine 236 or alternately, to determine checking and/or the mandate of long-distance user or remote client 102, makes application transfer system 190 transmit one or more computing environment 15, application and/or data file.In yet another embodiment, equipment 200 is set up VPN or SSL VPN based on the checking of policy engine 236 couples of long-distance users or remote client 102 and/or mandate and is connected.In an embodiment, equipment 200 is based on the policy control network traffics of policy engine 236 and communication session.Such as, based on policy engine 236, equipment 200 can control computing environment 15, application or the access of data file.
In some embodiments, the vServer 275 establishes a transport layer connection, such as a TCP or UDP connection, with a client computer 102 via the client agent 120. In one embodiment, the vServer 275 listens for and receives communications from the client computer 102. In other embodiments, the vServer 275 establishes a transport layer connection, such as a TCP or UDP connection, with a server 106. In one embodiment, the vServer 275 establishes the transport layer connection to an internet protocol address and port of a service 270 running on the server 106. In another embodiment, the vServer 275 associates a first transport layer connection to a client computer 102 with a second transport layer connection to the server 106. In some embodiments, the vServer 275 establishes a pool of transport layer connections to a server 106 and multiplexes client requests via the pooled transport layer connections.
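Connection pooling of the kind described above can be sketched as a small object that keeps already-open server connections and hands them out to successive client requests instead of opening a new connection each time. This Python sketch is a generic illustration with invented names and a hypothetical server address; it is not the vServer 275.

```python
import socket
from collections import deque
from contextlib import contextmanager
from typing import Deque, Tuple

class ServerConnectionPool:
    """Reuse open transport layer connections to one server across many client requests."""

    def __init__(self, server_addr: Tuple[str, int], max_idle: int = 8):
        self.server_addr = server_addr
        self.max_idle = max_idle
        self._idle: Deque[socket.socket] = deque()

    @contextmanager
    def connection(self):
        sock = self._idle.popleft() if self._idle else socket.create_connection(self.server_addr)
        try:
            yield sock                       # caller sends one client's request on this socket
        finally:
            if len(self._idle) < self.max_idle:
                self._idle.append(sock)      # keep it open for the next request
            else:
                sock.close()

pool = ServerConnectionPool(("192.0.2.10", 80))   # hypothetical server 106
# Usage (requires a reachable server):
# with pool.connection() as s:
#     s.sendall(b"GET / HTTP/1.1\r\nHost: example\r\n\r\n")
```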
In some embodiments, the equipment 200 provides an SSL VPN connection 280 between a client computer 102 and a server 106. For example, a client computer 102 on a first network 104 requests establishment of a connection to a server 106 on a second network 104'. In some embodiments, the second network 104' is not routable from the first network 104. In other embodiments, the client computer 102 is on a public network 104 and the server 106 is on a private network 104', such as a corporate network. In one embodiment, the client agent 120 intercepts communications of the client computer 102 on the first network 104, encrypts the communications, and transmits the communications via a first transport layer connection to the equipment 200. The equipment 200 associates the first transport layer connection on the first network 104 with a second transport layer connection to the server 106 on the second network 104'. The equipment 200 receives the intercepted communication from the client agent 120, decrypts the communication, and transmits the communication to the server 106 on the second network 104' via the second transport layer connection. The second transport layer connection may be a pooled transport layer connection. As such, the equipment 200 provides an end-to-end secure transport layer connection for the client computer 102 between the two networks 104, 104'.
In one embodiment, the equipment 200 hosts an intranet internet protocol or IntranetIP 282 address of the client computer 102 on the virtual private network 104. The client computer 102 has a local network identifier, such as an internet protocol (IP) address and/or host name, on the first network 104. When connected to the second network 104' via the equipment 200, the equipment 200 establishes, assigns or otherwise provides an IntranetIP 282 address for the client computer 102 on the second network 104', which is a network identifier such as an IP address and/or host name. Using the established IntranetIP 282 of the client computer, the equipment 200 listens for and receives on the second or private network 104' any communications directed towards the client computer 102. In one embodiment, the equipment 200 acts as or on behalf of the client computer 102 on the second private network 104. For example, in another embodiment, a vServer 275 listens for and responds to communications to the IntranetIP 282 of the client computer 102. In some embodiments, if a calculation element 100 on the second network 104' transmits a request, the equipment 200 processes the request as if it were the client computer 102. For example, the equipment 200 may respond to a ping to the client computer's IntranetIP 282. In another embodiment, the equipment may establish a connection, such as a TCP or UDP connection, with a calculation element 100 on the second network 104' that requests a connection with the client computer's IntranetIP 282.
In some embodiments, the equipment 200 provides one or more of the following acceleration techniques 288 for communications between the client computer 102 and the server 106: 1) compression; 2) decompression; 3) Transmission Control Protocol pooling; 4) Transmission Control Protocol multiplexing; 5) Transmission Control Protocol buffering; and 6) caching. In one embodiment, the equipment 200 relieves the servers 106 of much of the processing load caused by repeatedly opening and closing transport layer connections to client computers 102 by opening one or more transport layer connections with each server 106 and maintaining these connections to allow repeated data accesses by client computers via the Internet. This technique is referred to herein as "connection pooling".
In some embodiments, in order to seamlessly splice communications from a client computer 102 to a server 106 via a pooled transport layer connection, the equipment 200 translates or multiplexes communications by modifying sequence numbers and acknowledgment numbers at the transport layer protocol level. This is referred to as "connection multiplexing". In some embodiments, no application layer protocol interaction is required. For example, in the case of an in-bound packet (that is, a packet received from a client computer 102), the source network address of the packet is changed to that of an output port of the equipment 200, and the destination network address is changed to that of the intended server. In the case of an out-bound packet (that is, a packet received from a server 106), the source network address is changed from that of the server 106 to that of an output port of the equipment 200, and the destination address is changed from that of the equipment 200 to that of the requesting client computer 102. The sequence numbers and acknowledgment numbers of the packet are also translated to the sequence numbers and acknowledgments expected by the client computer 102 on the transport layer connection of the equipment 200 to the client computer 102. In some embodiments, the packet checksum of the transport layer protocol is recalculated to account for these translations.
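The address and sequence-number translation just described can be illustrated by rewriting a simplified packet record. Real connection multiplexing operates on TCP headers and recomputes checksums; the dataclass-based "packet" and the delta values below are invented stand-ins meant only to show the bookkeeping, not the equipment's implementation.

```python
from dataclasses import dataclass

@dataclass
class TcpPacketSketch:
    src_ip: str
    dst_ip: str
    seq: int
    ack: int

def rewrite_inbound(pkt: TcpPacketSketch, appliance_ip: str, server_ip: str,
                    seq_delta: int, ack_delta: int) -> TcpPacketSketch:
    """Client -> server direction: the source becomes the appliance's output port address,
    the destination becomes the chosen server, and sequence/acknowledgment numbers are
    shifted onto the server-side connection."""
    return TcpPacketSketch(
        src_ip=appliance_ip,
        dst_ip=server_ip,
        seq=(pkt.seq + seq_delta) & 0xFFFFFFFF,   # 32-bit wraparound like real TCP
        ack=(pkt.ack + ack_delta) & 0xFFFFFFFF,
    )

inbound = TcpPacketSketch(src_ip="198.51.100.7", dst_ip="203.0.113.1", seq=1000, ack=500)
print(rewrite_inbound(inbound, appliance_ip="203.0.113.1", server_ip="192.0.2.10",
                      seq_delta=42, ack_delta=-7))
```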
In yet another embodiment, the equipment 200 provides switching or load balancing functionality 284 for communications between client computers 102 and servers 106. In some embodiments, the equipment 200 distributes traffic and directs client requests to a server 106 based on layer 4 payload or application layer request data. In one embodiment, although the network layer, or layer 2, of the network packet identifies a destination server 106, the equipment 200 determines the server 106 to which to distribute the network packet using data and application information carried as the payload of the transport layer packet. In one embodiment, a health monitoring program 216 of the equipment 200 monitors the health of the servers to determine the server 106 to which to distribute a client request. In some embodiments, if the equipment 200 detects that a server 106 is unavailable or has a load exceeding a predetermined threshold, the equipment 200 can direct or distribute client requests to another server 106.
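A minimal sketch of this kind of health-aware server selection is shown below, assuming a simple list of candidate servers with a reported load and an availability flag; the data layout and the threshold value are illustrative assumptions only.

def pick_server(servers, load_threshold=0.8):
    # Select a back-end server for a client request, skipping servers that the
    # health monitor has marked unavailable or whose load exceeds the threshold.
    candidates = [s for s in servers if s["healthy"] and s["load"] < load_threshold]
    if not candidates:
        raise RuntimeError("no healthy server below the load threshold")
    # Least-loaded selection; a real device could also weigh layer-7 request data.
    return min(candidates, key=lambda s: s["load"])

servers = [
    {"name": "srv-a", "healthy": True,  "load": 0.35},
    {"name": "srv-b", "healthy": False, "load": 0.10},
    {"name": "srv-c", "healthy": True,  "load": 0.90},
]
print(pick_server(servers)["name"])   # -> srv-a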
In some embodiments, the equipment 200 acts as a domain name service (DNS) resolver or otherwise provides resolution of DNS requests from client computers 102. In some embodiments, the equipment intercepts DNS requests transmitted by a client computer 102. In one embodiment, the equipment 200 responds to a client computer's DNS request with the IP address of the equipment 200 or an IP address hosted by it. In this embodiment, the client computer 102 sends network communications for the domain name to the equipment 200. In yet another embodiment, the equipment 200 responds to a client computer's DNS request with the IP address of a second equipment 200' or an IP address hosted by it. In some embodiments, the equipment 200 responds to a client computer's DNS request with the IP address of a server 106 determined by the equipment 200.
In yet another embodiment, the equipment 200 provides application firewall functionality 290 for communications between client computers 102 and servers 106. In one embodiment, the policy engine 236 provides rules for detecting and blocking illegitimate requests. In some embodiments, the application firewall 290 protects against denial of service (DoS) attacks. In other embodiments, the equipment inspects the content of intercepted requests to identify and block application-based attacks. In some embodiments, the rules/policy engine 236 comprises one or more application firewall or security control policies for providing protection against multiple categories and types of web- or Internet-based vulnerabilities, such as one or more of the following: 1) buffer overflow, 2) CGI-BIN parameter manipulation, 3) form/hidden field manipulation, 4) forceful browsing, 5) cookie or session poisoning, 6) broken access control lists (ACLs) or weak passwords, 7) cross-site scripting (XSS), 8) command injection, 9) SQL injection, 10) error-triggered sensitive information leaks, 11) insecure use of cryptography, 12) server misconfiguration, 13) back doors and debug options, 14) web site defacement, 15) platform or operating system vulnerabilities, and 16) zero-day exploits. In one embodiment, the application firewall 290 provides HTML form field protection, in the form of inspecting or analyzing the network communication, for one or more of the following: 1) required fields are returned, 2) no added fields are allowed, 3) read-only and hidden field enforcement, 4) drop-down list and radio button field conformance, and 5) form-field maximum length enforcement. In some embodiments, the application firewall 290 ensures that cookies are not modified. In other embodiments, the application firewall 290 protects against forceful browsing by enforcing legal URLs.
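A toy illustration of the HTML form-field checks listed above might look like the following sketch; the rule names and the flat dictionary representation of a form are assumptions made for the example, not the patent's own mechanism.

def check_form_submission(expected_fields, submitted):
    # Apply a few of the form-field checks described above to one submission.
    # expected_fields maps field name -> {"required", "read_only", "value", "max_len"};
    # submitted is the posted data. Returns a list of violations (empty means it passes).
    violations = []
    for name, spec in expected_fields.items():
        if spec.get("required") and name not in submitted:
            violations.append("missing required field: " + name)
        if spec.get("read_only") and submitted.get(name) != spec.get("value"):
            violations.append("read-only/hidden field modified: " + name)
        if name in submitted and len(submitted[name]) > spec.get("max_len", 1024):
            violations.append("field exceeds maximum length: " + name)
    for name in submitted:
        if name not in expected_fields:
            violations.append("unexpected added field: " + name)
    return violations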
In other embodiments, the application firewall 290 protects any confidential information contained in a network communication. The application firewall 290 may inspect or analyze any network communication in accordance with the rules or policies of the engine 236 to identify confidential information in any field of a network packet. In some embodiments, the application firewall 290 identifies one or more occurrences in a network communication of a credit card number, password, social security number, name, patient identification code, contact information, or age. An encoded portion of the network communication may comprise such occurrences or confidential information. Based on these occurrences, in one embodiment, the application firewall 290 may take a policy action on the network communication, such as preventing the network communication from being transmitted. In yet another embodiment, the application firewall 290 may rewrite, remove or otherwise mask the identified occurrence or confidential information.
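For illustration, a crude pattern-based scan for a couple of the items listed above could look like the following sketch; the regular expressions are simplistic placeholders rather than production-quality detectors, and the masking policy is an assumption of the example.

import re

# Deliberately simple, illustrative patterns for confidential data.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_confidential(text):
    # Replace every match of a confidential pattern with asterisks and report which
    # pattern names fired, so a policy action (block, rewrite, log) can be taken.
    hits = []
    for name, pattern in PATTERNS.items():
        if pattern.search(text):
            hits.append(name)
            text = pattern.sub(lambda m: "*" * len(m.group()), text)
    return text, hits

masked, hits = mask_confidential("card 4111 1111 1111 1111, ssn 123-45-6789")
print(hits)   # -> ['credit_card', 'ssn']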
Still with reference to figure 2B, equipment 200 can comprise as composition graphs 1D above the performance monitoring agency 197 that discusses.In one embodiment, equipment 200 receives monitoring agent 197 from the monitor service 198 such as described in Fig. 1 D or monitoring server 106.In certain embodiments, equipment 200 preserves monitoring agent 197 in the memory storage of such as disk, for sending any client computer or server that communicate with equipment 200 to.Such as, in one embodiment, equipment 200 sends monitoring agent 197 to client computer when receiving the request of setting up transport layer connection.In other embodiments, equipment 200 sends monitoring agent 197 when setting up and being connected with the transport layer of client computer 102.In yet another embodiment, equipment 200 sends monitoring agent 197 to client computer when tackling or detect the request to web page.In yet another embodiment, equipment 200 sends monitoring agent 197 to client computer or server in response to the request of monitoring server 198.In one embodiment, equipment 200 sends monitoring agent 197 to the second equipment 200 ' or equipment 205.
In other embodiments, equipment 200 performs monitoring agent 197.In one embodiment, monitoring agent 197 measurement and monitoring perform on the device 200 any application, program, process, service, task or thread performance.Such as, performance and the operation of vServers 275A-275N can be monitored and measure to monitoring agent 197.In yet another embodiment, the performance of any transport layer connection of monitoring agent 197 measurement and monitoring equipment 200.In certain embodiments, monitoring agent 197 measurement and monitoring is by the performance of any user conversation of equipment 200.In one embodiment, any Virtual Private Network connection of monitoring agent 197 measurement and monitoring by the such as SSL VPN session of equipment 200 and/or the performance of session.In a further embodiment, the internal memory of monitoring agent 197 measurement and monitoring equipment 200, CPU and disk use and performance.In yet another embodiment, the performance of any speed technology 288 performed by equipment 200 of monitoring agent 197 measurement and monitoring such as SSL unloading, connection pool and multiplexed, high-speed cache and compression.In certain embodiments, the performance of arbitrary load balance of being performed by equipment 200 of monitoring agent 197 measurement and monitoring and/or content exchange 284.In other embodiments, the performance that the application firewall 290 that monitoring agent 197 measurement and monitoring is performed by equipment 200 is protected and processed.
C. client proxy
Referring now to Fig. 3, an embodiment of the client proxy 120 is described. The client computer 102 includes a client proxy 120 for establishing and exchanging communications with the equipment 200 and/or a server 106 via the network 104. In brief overview, the client computer 102 operates on a calculation element 100 that has an operating system with a kernel mode 302 and a user mode 303, and a network stack 310 with one or more layers 310a-310b. The client computer 102 may have installed and/or execute one or more applications. In some embodiments, one or more applications communicate with the network 104 via the network stack 310. One of the applications, such as a web browser, may also include a first program 322. For example, the first program 322 may be used in some embodiments to install and/or execute the client proxy 120, or any portion thereof. The client proxy 120 includes an interception mechanism, or blocker 350, for intercepting network communications from the one or more applications at the network stack 310.
The network stack 310 of the client computer 102 may comprise any type and form of software, hardware, or combination thereof, for providing connectivity to and communications with a network. In one embodiment, the network stack 310 comprises a software implementation of a network protocol suite. The network stack 310 may comprise one or more network layers, such as any of the network layers of the Open Systems Interconnection (OSI) communications model as those skilled in the art recognize and appreciate. As such, the network stack 310 may comprise any type and form of protocol for any of the following layers of the OSI model: 1) physical link layer, 2) data link layer, 3) network layer, 4) transport layer, 5) session layer, 6) presentation layer, and 7) application layer. In one embodiment, the network stack 310 may comprise the transmission control protocol (TCP) over the network layer protocol of the Internet Protocol (IP), generally referred to as TCP/IP. In some embodiments, the TCP/IP protocol may be carried over the Ethernet protocol, which may comprise any family of IEEE wide area network (WAN) or local area network (LAN) protocols, such as those covered by IEEE 802.3. In some embodiments, the network stack 310 comprises any type and form of wireless protocol, such as IEEE 802.11 and/or mobile Internet Protocol.
In view of a TCP/IP-based network, any TCP/IP-based protocol may be used, including Messaging Application Programming Interface (MAPI) (email), File Transfer Protocol (FTP), HyperText Transfer Protocol (HTTP), Common Internet File System (CIFS) protocol (file transfer), Independent Computing Architecture (ICA) protocol, Remote Desktop Protocol (RDP), Wireless Application Protocol (WAP), Mobile IP protocol, and Voice over IP (VoIP) protocol. In yet another embodiment, the network stack 310 comprises any type and form of transmission control protocol, such as a modified transport control protocol, for example Transaction TCP (T/TCP), TCP with selective acknowledgements (TCP-SACK), TCP with large windows (TCP-LW), a congestion prediction protocol such as the TCP-Vegas protocol, and a TCP spoofing protocol. In other embodiments, the network stack 310 may use any type and form of user datagram protocol (UDP), such as UDP over IP, for example for voice communications or real-time data communications.
Furthermore, the network stack 310 may include one or more network drivers supporting one or more layers, such as a TCP driver or a network layer driver. A network layer driver may be included as part of the operating system of the calculation element 100 or as part of any network interface card or other network access component of the calculation element 100. In some embodiments, any of the network drivers of the network stack 310 may be customized, modified or adapted to provide a customized or modified portion of the network stack 310 in support of any of the techniques described herein. In other embodiments, the accelerated procedure 302 is designed and constructed to operate with or work in conjunction with the network stack 310, the network stack 310 being installed or otherwise provided by the operating system of the client computer 102.
Network stack 310 comprises the interface of any type and form, for any information and the data that receive, obtain, provide or otherwise access relates to the network service of client computer 102.In one embodiment, application programming interface (API) is comprised with the interface of network stack 310.Interface also can comprise any function call, hook or strobe utility, the interfacing of event or callback mechanism or any type.Network stack 310 can receive or provide to the function of network stack 310 by interface or operate relevant any type and the data structure of form, such as object.Such as, data structure can comprise the information relevant to network packet and data or one or more network packet.In certain embodiments, data structure is included in a part for the network packet of the protocol layer process of network stack 310, the network packet of such as transport layer.In certain embodiments, data structure 325 comprises kernel rank data structure, and in other embodiments, data structure 325 comprises user model data structure.That kernel-level data can comprise acquisition or the data structure relevant to a part for the network stack 310 operated in kernel mode 302 or the network driver operated in kernel mode 302 or other software or by any data structure run or operate in the service of kernel mode of operating system, process, task, thread or other executable instruction and obtain or receive.
In addition, the some parts of network stack 310 can perform at kernel mode 302 or operate, such as, and data link or network layer, and other parts perform at user model 303 or operate, the such as application layer of network stack 310.Such as, the Part I 310a of network stack can provide to application and access the user model of network stack 310, and the Part II 310a of network stack 310 provides the access to network.In certain embodiments, the Part I 310a of network stack can comprise one or more more top of network stack 310, any layer of such as layer 5-7.In other embodiments, the Part II 310b of network stack 310 comprises one or more lower layer, any layer of such as layer 1-4.Each Part I 310a of network stack 310 and Part II 310b can comprise any part of network stack 310, be positioned at any one or more network layers, be in user model 203, kernel mode 202, or its combination, or in any part of network layer or to the point of interface of network layer, or any part of user model 203 and kernel mode 202 or the point of interface to user model 203 and kernel mode 202.
The blocker 350 may comprise software, hardware, or any combination of software and hardware. In one embodiment, the blocker 350 intercepts a network communication at any point in the network stack 310, and redirects or transmits the network communication to a destination desired, managed or controlled by the blocker 350 or the client proxy 120. For example, the blocker 350 may intercept a network communication of a network stack 310 of a first network and transmit the network communication to the equipment 200 for transmission on a second network 104. In some embodiments, the blocker 350 comprises any type of blocker 350, such as a driver constructed and designed to interface with and work with the network stack 310. In some embodiments, the client proxy 120 and/or blocker 350 operates at one or more layers of the network stack 310, such as at the transport layer. In one embodiment, the blocker 350 comprises a filter driver, hooking mechanism, or any form and type of suitable network driver interface that interfaces to the transport layer of the network stack, such as via the transport driver interface (TDI). In some embodiments, the blocker 350 interfaces to a first protocol layer, such as the transport layer, and another protocol layer, such as any layer above the transport protocol layer, for example an application protocol layer. In one embodiment, the blocker 350 may comprise a driver complying with the Network Driver Interface Specification (NDIS), or an NDIS driver. In yet another embodiment, the blocker 350 may comprise a mini-filter or a mini-port driver. In one embodiment, the blocker 350, or portion thereof, operates in kernel mode 202. In yet another embodiment, the blocker 350, or portion thereof, operates in user mode 203. In some embodiments, a portion of the blocker 350 operates in kernel mode 202 while another portion of the blocker 350 operates in user mode 203. In other embodiments, the client proxy 120 operates in user mode 203 but interfaces via the blocker 350 to a kernel-mode driver, process, service, task or portion of the operating system, for example to obtain a kernel-level data structure 225. In other embodiments, the blocker 350 is a user-mode application or program, such as an application.
In one embodiment, the blocker 350 intercepts any transport layer connection request. In these embodiments, the blocker 350 executes a transport layer application programming interface (API) call to set the destination information, such as a destination IP address and/or port, to a desired location. In this manner, the blocker 350 intercepts and redirects the transport layer connection to an IP address and port controlled or managed by the blocker 350 or the client proxy 120. In one embodiment, the blocker 350 sets the destination information of the connection to a local IP address and port of the client computer 102 on which the client proxy 120 is listening. For example, the client proxy 120 may comprise a proxy service listening on a local IP address and port for redirected transport layer communications. In some embodiments, the client proxy 120 then transmits the redirected transport layer communication to the equipment 200.
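The redirection step described above can be illustrated, very roughly, by the sketch below, in which a local proxy listens on a loopback address and an intercept function rewrites a connection request's destination to that listener. The function names, the loopback address and the connection-table bookkeeping are assumptions of the example, not the patent's implementation.

import itertools

# Hypothetical local listener the client-side proxy uses for redirected connections.
PROXY_ADDR = ("127.0.0.1", 5555)
_conn_ids = itertools.count(1)

def intercept_connect(dest_ip, dest_port, connection_table):
    # Model the interception of a transport layer connection request.
    # A real blocker would do this inside a transport-layer hook or filter driver;
    # here we only model the bookkeeping: remember where the application wanted to
    # go (so the proxy can forward it later) and point the connection at the proxy.
    conn_id = next(_conn_ids)
    connection_table[conn_id] = (dest_ip, dest_port)   # original destination
    return conn_id, PROXY_ADDR                         # connect here instead

table = {}
print(intercept_connect("203.0.113.10", 443, table))   # -> (1, ('127.0.0.1', 5555))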
In some embodiments, the blocker 350 intercepts domain name service (DNS) requests. In one embodiment, the client proxy 120 and/or blocker 350 resolves the DNS request. In yet another embodiment, the blocker transmits an intercepted DNS request to the equipment 200 for DNS resolution. In one embodiment, the equipment 200 resolves the DNS request and communicates the DNS response to the client proxy 120. In some embodiments, the equipment 200 resolves the DNS request via another equipment 200' or a DNS server 106.
In yet another embodiment, the client proxy 120 may comprise two agents 120 and 120'. In one embodiment, a first agent 120 may comprise a blocker 350 operating at the network layer of the network stack 310. In some embodiments, the first agent 120 intercepts network layer requests, such as Internet Control Message Protocol (ICMP) requests (for example, ping and traceroute). In other embodiments, a second agent 120' may operate at the transport layer and intercept transport layer communications. In some embodiments, the first agent 120 intercepts communications at one layer of the network stack 310 and interfaces with the second agent 120', or communicates the intercepted communication to the second agent 120'.
The client proxy 120 and/or blocker 350 may operate at or interface with a protocol layer in a manner transparent to any other protocol layer of the network stack 310. For example, in one embodiment, the blocker 350 operates at or interfaces with the transport layer of the network stack 310 transparently to any protocol layer below the transport layer, such as the network layer, and to any protocol layer above the transport layer, such as the session, presentation or application layer protocols. This allows the other protocol layers of the network stack 310 to operate as desired, and without modification, to use the blocker 350. As such, the client proxy 120 and/or blocker 350 can interface with the transport layer to secure, optimize, accelerate, route or load balance any communication provided via any protocol carried by the transport layer, such as any application layer protocol over TCP/IP.
In addition, client proxy 120 and/or blocker can operate in the mode that other calculation element any of the user to any application, client computer 102 and the such as server communicated with client computer 102 is transparent or dock with it on network stack 310.Client proxy 120 and/or blocker 350 can be mounted without the need to the mode of amendment application and/or to perform on client 102.In certain embodiments, the user of client computer 102 or the calculation element that communicates with client computer 102 do not recognize the existence of client proxy 120 and/or blocker 350, execution or operation.Equally, in certain embodiments, relative to application, the user of client computer 102, such as server another calculation element or on the protocol layer connected by blocker 350 and/or under any protocol layer install, perform and/or operate client proxy 120 and/or blocker 350 pellucidly.
The client proxy 120 includes an accelerated procedure 302, a stream client computer 306, a collect agency 304 and/or a monitoring agent 197. In one embodiment, the client proxy 120 comprises an Independent Computing Architecture (ICA) client, or any portion thereof, developed by Citrix Systems Inc. of Fort Lauderdale, Florida, and is also referred to as an ICA client. In some embodiments, the client proxy 120 comprises an application stream client computer 306 for streaming an application from a server 106 to the client computer 102. In some embodiments, the client proxy 120 comprises an accelerated procedure 302 for accelerating communications between the client computer 102 and a server 106. In yet another embodiment, the client proxy 120 includes a collect agency 304 for performing endpoint detection/scanning and for collecting endpoint information for the equipment 200 and/or server 106.
In certain embodiments, accelerated procedure 302 comprises the client-side accelerated procedure for performing one or more speed technology, to accelerate, strengthen or otherwise improve the communication of client computer and server 106 and/or the access to server 106, such as access the application provided by server 106.The logic of the executable instruction of accelerated procedure 302, function and/or operation can perform one or more following speed technology: 1) multi-protocols compression, 2) transmission control protocol pond, 3) transmission control protocol is multiplexed, 4) transmission control protocol buffering, and 5) by the high-speed cache of cache manger.In addition, accelerated procedure 302 can perform encryption and/or the deciphering of any communication being received by client computer 102 and/or sent.In certain embodiments, accelerated procedure 302 in an integrated fashion or form perform one or more speed technology.In addition, accelerated procedure 302 arbitrary agreement that can carry the useful load of the network packet as transport layer protocol or multi-protocols perform compression.
The stream client computer 306 comprises an application, program, process, service, task or executable instructions for receiving and executing an application streamed from a server 106. A server 106 may stream one or more application data files to the stream client computer 306 for playing, executing or otherwise causing the application to be executed on the client computer 102. In some embodiments, the server 106 transmits a set of compressed or packaged application data files to the stream client computer 306. In some embodiments, the plurality of application files are compressed and stored on a file server within an archive file, such as a CAB, ZIP, SIT, TAR, JAR or other archive. In one embodiment, the server 106 decompresses, unpackages or unarchives the application files and transmits the files to the client computer 102. In yet another embodiment, the client computer 102 decompresses, unpackages or unarchives the application files. The stream client computer 306 dynamically installs the application, or portion thereof, and executes the application. In one embodiment, the stream client computer 306 may be an executable program. In some embodiments, the stream client computer 306 may be able to launch another executable program.
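A minimal sketch of the receive, unpack and execute sequence described above is given below, assuming the streamed application arrives as a ZIP archive and exposes a single entry-point executable; the file names and launch convention are illustrative assumptions.

import subprocess
import zipfile
from pathlib import Path

def run_streamed_app(archive_path, install_dir, entry_point="app.exe"):
    # Unpack a streamed application archive into a local directory and launch it.
    # entry_point is an assumed name for the application's main executable.
    install_dir = Path(install_dir)
    install_dir.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive_path) as archive:
        archive.extractall(install_dir)          # dynamic, on-demand install
    return subprocess.Popen([str(install_dir / entry_point)])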
The collect agency 304 comprises an application, program, process, service, task or executable instructions for identifying, obtaining and/or collecting information about the client computer 102. In some embodiments, the equipment 200 transmits the collect agency 304 to the client computer 102 or client proxy 120. The collect agency 304 may be configured according to one or more policies of the policy engine 236 of the equipment. In other embodiments, the collect agency 304 transmits the information collected on the client computer 102 to the equipment 200. In one embodiment, the policy engine 236 of the equipment 200 uses the collected information to determine and provide access, authentication and authorization control of the client computer's connection to the network 104.
In one embodiment, the collect agency 304 comprises an endpoint detection and scanning mechanism that identifies and determines one or more attributes or characteristics of the client computer. For example, the collect agency 304 may identify and determine any one or more of the following client-side attributes: 1) the operating system and/or a version of the operating system, 2) a service pack of the operating system, 3) a running service, 4) a running process, and 5) a file. The collect agency 304 may also identify and determine the presence or version of any one or more of the following software on the client computer: 1) antivirus software, 2) personal firewall software, 3) anti-spam software, and 4) Internet security software. The policy engine 236 may have one or more policies based on any one or more of the attributes or characteristics of the client computer or the client-side attributes.
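An endpoint scan of this kind could be sketched, purely illustratively, as a function that gathers a handful of client attributes plus a policy check that gates access on them. The attribute names, the inventory argument and the "require antivirus and firewall" policy are assumptions of the example.

import platform

def scan_endpoint(installed_software):
    # Collect a few of the client-side attributes listed above.
    # installed_software stands in for whatever inventory mechanism the agent uses.
    return {
        "os": platform.system(),
        "os_version": platform.version(),
        "antivirus": "antivirus" in installed_software,
        "personal_firewall": "firewall" in installed_software,
    }

def policy_allows_access(attributes):
    # Example policy (an assumption): require antivirus and a personal firewall.
    return attributes["antivirus"] and attributes["personal_firewall"]

attrs = scan_endpoint({"antivirus", "firewall"})
print(policy_allows_access(attrs))   # -> True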
In certain embodiments, client proxy 120 comprise as composition graphs 1D and 2B the monitoring agent 197 discussed.Monitoring agent 197 can be any type of such as Visual Basic or java script and the script of form.In one embodiment, the performance of any part of monitoring agent 197 monitoring and measuring customer machine agency 120.Such as, in certain embodiments, the performance of accelerated procedure 302 is monitored and measured to monitoring agent 197.In yet another embodiment, the performance of stream client computer 306 is monitored and measured to monitoring agent 197.In other embodiments, the performance of monitoring agent 197 monitoring and measurement collection agency 304.In yet another embodiment, the performance of blocker 350 is monitored and measured to monitoring agent 197.In certain embodiments, monitoring agent 197 is monitored and any resource of the such as storer of measuring customer machine 102, CPU and disk.
Monitoring agent 197 can monitor the performance with any application of measuring customer machine.In one embodiment, monitoring agent 197 is monitored and the performance of browser on measuring customer machine 102.In certain embodiments, the performance of any application transmitted via client proxy 120 is monitored and measured to monitoring agent 197.In other embodiments, final user's response time of monitoring agent 197 measurement and monitoring application, such as, based on response time or the http response time of web.The performance of ICA or RDP client computer can be monitored and measure to monitoring agent 197.In yet another embodiment, the index of monitoring agent 197 measurement and monitoring user conversation or utility cession.In certain embodiments, monitoring agent 197 measurement and monitoring ICA or RDP session.In one embodiment, monitoring agent 197 measurement and monitoring equipment 200 acceleration transmit application and/or data to the performance in the process of client computer 102.
In certain embodiments, still with reference to figure 3, the first program 322 may be used for automatically, silently, pellucidly or otherwise install and/or perform client proxy 120 or its part, such as blocker 350.In one embodiment, the first program 322 comprises plug in component, such as ActiveX control or Java control or script, and it is loaded into application and is performed by application.Such as, the first program comprises applies by web browser the ActiveX control being loaded into and running, such as, in the context of storage space or application.In yet another embodiment, the first program 322 comprises executable instruction sets, and this executable instruction sets is loaded into by the application of such as browser and performs.In one embodiment, the first program 322 comprises the program that is designed and constructs to install client proxy 120.In certain embodiments, the first program 322 is obtained from another calculation element by network, downloads or subscribing client agency 120.In yet another embodiment, the first program 322 is installed as the installation procedure of the program of network-driven or plug and play manager in the operating system in client computer 102.
D. for providing the system and method for virtualization applications transfer control
Referring now to Fig. 4A, a block diagram depicts one embodiment of a virtualized environment 400. In brief overview, a calculation element 100 includes a supervisory routine layer, a virtualization layer and a hardware layer. The supervisory routine layer includes a supervisory routine 401 (also referred to as a virtualization manager) that allocates and manages access to a number of physical resources in the hardware layer (such as the processor 421 and disk 428) by at least one virtual machine executing in the virtualization layer. The virtualization layer includes at least one operating system 410 and a plurality of virtual resources allocated to the at least one operating system 410. Virtual resources may include, without limitation, a plurality of virtual processors 432a, 432b, 432c (generally 432) and virtual disks 442a, 442b, 442c (generally 442), as well as virtual resources such as virtual memory and virtual network interfaces. The plurality of virtual resources and the operating system may be referred to as a virtual machine 406. A virtual machine 406 may include a control operation system 405 in communication with the supervisory routine 401 and used to execute applications for managing and configuring other virtual machines on the calculation element 100.
In greater detail, the supervisory routine 401 may provide virtual resources to an operating system in any manner that simulates the operating system having access to a physical device. The supervisory routine 401 may provide virtual resources to any number of client operating systems 410a, 410b (generally 410). In some embodiments, a calculation element 100 executes one or more supervisory routines. In these embodiments, a supervisory routine may be used to emulate virtual hardware, partition physical hardware, virtualize physical hardware and execute virtual machines that provide access to computing environments. Supervisory routines may include those manufactured by VMWare, Inc. of Palo Alto, California, USA; the XEN supervisory routine (an open-source product whose development is overseen by the open-source Xen.org community); the HyperV, VirtualServer or Virtual PC supervisory routines provided by Microsoft, or others. In some embodiments, a calculation element 100 executing a supervisory routine that creates a virtual machine platform on which client operating systems may execute is referred to as a host server. In one of these embodiments, for example, the calculation element 100 is a XEN SERVER provided by Citrix Systems, Inc. of Fort Lauderdale, Florida.
In some embodiments, a supervisory routine 401 executes within an operating system executing on the calculation element. In one of these embodiments, a calculation element executing an operating system and a supervisory routine 401 may be said to have a host operating system (the operating system executing on the calculation element) and a client operating system (an operating system executing within a computing resource partition provided by the supervisory routine 401). In other embodiments, the supervisory routine 401 interacts directly with hardware on the calculation element instead of executing on a host operating system. In one of these embodiments, the supervisory routine 401 may be said to be executing on "bare metal", referring to the hardware comprising the calculation element.
In some embodiments, supervisory routine 401 can produce the virtual machine 406a-c (being generically and collectively referred to as 406) that operating system 410 performs wherein.In one of these embodiments, supervisory routine 401 loaded virtual machine reflection is to create virtual machine 406.These embodiments another in, supervisory routine 401 is executive operating system 410 in virtual machine 406.Still these embodiments another in, virtual machine 406 executive operating system 410.
In some embodiments, supervisory routine 401 controls processor scheduling and the internal memory division of the virtual machine 406 performed on calculation element 100.In one of these embodiments, supervisory routine 401 controls the execution of at least one virtual machine 406.These embodiments another in, supervisory routine 401 presents the abstract of at least one hardware resource provided by calculation element 100 at least one virtual machine 406.In other embodiments, whether and how supervisory routine 401 controls concurrent physical processor ability to be presented to virtual machine 406.
The control operation system 405 may execute at least one application for managing and configuring the client operating systems. In one embodiment, the control operation system 405 may execute an administrative application, such as an application including a user interface providing administrators with access to functionality for managing the execution of virtual machines, including functionality for executing a virtual machine, terminating execution of a virtual machine, or identifying a type of physical resource to allocate to a virtual machine. In another embodiment, the supervisory routine 401 executes the control operation system 405 within a virtual machine 406 created by the supervisory routine 401. In another embodiment, the control operation system 405 executes in a virtual machine 406 that is authorized to directly access physical resources on the calculation element 100. In some embodiments, a control operation system 405a on a calculation element 100a may exchange data with a control operation system 405b on a calculation element 100b via communications between a supervisory routine 401a and a supervisory routine 401b. In this way, one or more calculation elements 100 may exchange data with one or more other calculation elements 100 regarding processors and other physical resources available in a pool of resources. In one of these embodiments, this functionality allows a supervisory routine to manage a pool of resources distributed across a plurality of physical compute devices. In another of these embodiments, a plurality of supervisory routines manage one or more client operating systems executing on one calculation element 100.
In an embodiment, control operation system 405 performs on the authorized virtual machine 406 mutual with at least one client operating system 410.In another embodiment, client operating system 410 is communicated with control operation system 405 by supervisory routine 401, with request access disk or network.Still In yet another embodiment, client operating system 410 communicates by the communication channel set up by supervisory routine 401 with control operation system 405, such as, by the multiple shared storage pages provided by supervisory routine 401.
In some embodiments, control operation system 405 comprises the network backend driver for directly communicating with the network hardware provided by calculation element 100.In one of these embodiments, network backend drive processes is from least one virtual machine request of at least one client operating system 110.In other embodiments, control operation system 405 comprises the block back driver for communicating with the memory element on calculation element 100.In one of these embodiments, block back driver reads and writes data from memory element based at least one request received from client operating system 410.
An embodiment, control operation system 405 comprises instrument storehouse 404.In other embodiments, instrument storehouse 404 provides following function: communicate with other control operation systems 405 (being such as positioned on the second calculation element 100b) alternately with supervisory routine 401, or virtual machine 406b, the 406c on Management Calculation device 100.In another embodiment, instrument storehouse 404 comprises self-defined application, and it is for providing the management function of improvement to the keeper of virtual machine cluster.In some embodiments, at least one in instrument storehouse 404 and control operation system 405 comprises Administration API, and it is provided for Remote configuration and the interface of the virtual machine 406 that controlling calculation device 100 runs.In other embodiments, control operation system 405 is communicated with supervisory routine 401 by instrument storehouse 404.
In one embodiment, the supervisory routine 401 executes a client operating system 410 within a virtual machine 406 created by the supervisory routine 401. In another embodiment, the client operating system 410 provides a user of the calculation element 100 with access to resources within a computing environment. In another embodiment, a resource includes a program, an application, a document, a file, a plurality of applications, a plurality of files, an executable program file, a desktop environment, a computing environment, or another resource made available to a user of the calculation element 100. In another embodiment, the resource may be delivered to the calculation element 100 via a plurality of access methods including, but not limited to, conventional installation directly on the calculation element 100, delivery to the calculation element 100 via a method for application streaming, delivery to the calculation element 100 of output data generated by executing the resource on a second calculation element 100' and communicated to the calculation element 100 via a presentation layer protocol, delivery to the calculation element 100 of output data generated by executing the resource in a virtual machine executing on a second calculation element 100', or execution from a removable storage device connected to the calculation element 100 (such as a USB device) or via a virtual machine executing on the calculation element 100 that generates output data. In some embodiments, the calculation element 100 transmits the output data generated by the execution of the resource to another calculation element 100'.
In one embodiment, a client operating system 410, in conjunction with the virtual machine on which it executes, forms a fully virtualized virtual machine that is not aware that it is a virtual machine; such a machine may be referred to as a "Domain U HVM (Hardware Virtual Machine) virtual machine". In another embodiment, a fully virtualized machine includes software emulating a Basic Input/Output System (BIOS) in order to execute an operating system within the fully virtualized machine. In yet another embodiment, a fully virtualized machine may include a driver that provides functionality by communicating with the supervisory routine 401; in such an embodiment, the driver may be aware that it executes within a virtualized environment. In another embodiment, a client operating system 410, in conjunction with the virtual machine on which it executes, forms a paravirtualized virtual machine that is aware that it is a virtual machine; such a machine may be referred to as a "Domain U PV virtual machine". In another embodiment, a paravirtualized machine includes additional drivers that a fully virtualized machine does not include. In another embodiment, the paravirtualized machine includes the network back-end driver and the block back-end driver included in a control operation system 405, as described above.
Refer now to Fig. 4 B, block diagram describes an embodiment of the multiple networked computing device in system, and wherein, at least one physical host performs virtual machine.In general, system comprises Management Unit 404 and supervisory routine 401.System comprises multiple calculation element 100, multiple virtual machine 406, multiple supervisory routine 401, multiple Management Unit (being also called instrument storehouse 404 or Management Unit 404) and physical resource 421,428.Each of multiple physical machine 100 can be provided as the calculation element 100 that as above composition graphs 1E-1H and Fig. 4 A describes.
Specifically, physical disk 428 is provided by calculation element 100, stores virtual disk 442 at least partially.In some embodiments, virtual disk 442 and multiple physical disk 428 are associated.In one of these embodiments, one or more calculation element 100 can exchange the data of other physical resources available in relevant processor or resource pool with other calculation elements 100 one or more, allows supervisory routine to manage the resource pool be distributed in multiple physical compute devices.In some embodiments, the calculation element 100 performed thereon by virtual machine 406 is called physical host 100 or main frame 100.
A supervisory routine executes on a processor of the calculation element 100. The supervisory routine allocates to a virtual disk an amount of access to the physical disk. In one embodiment, the supervisory routine 401 allocates an amount of space on the physical disk. In another embodiment, the supervisory routine 401 allocates a plurality of pages on the physical disk. In some embodiments, the supervisory routine provides the virtual disk 442 as part of a process of initializing and executing a virtual machine 450.
In one embodiment, the Management Unit 404a is referred to as a pool Management Unit 404a. In another embodiment, a management operating system 405a, which may be referred to as a control operation system 405a, includes the Management Unit. In some embodiments, the Management Unit is referred to as an instrument storehouse; in one of these embodiments, the Management Unit is the instrument storehouse 404 described above in connection with Fig. 4A. In other embodiments, the Management Unit 404 provides a user interface for receiving, from a user such as an administrator, an identification of a virtual machine 406 to provision and/or execute. In still other embodiments, the Management Unit 404 provides a user interface for receiving, from a user such as an administrator, a request to migrate a virtual machine 406b from one physical machine 100 to another. In further embodiments, the Management Unit 404a identifies the calculation element 100b on which to execute a requested virtual machine 406d and instructs the supervisory routine 401b on the identified calculation element 100b to execute the identified virtual machine; as such, the Management Unit may be referred to as a pool Management Unit.
Referring now to Fig. 4C, embodiments of a virtual application delivery controller or virtual unit 450 are described. In brief overview, any of the functionality and/or embodiments of the equipment 200 described above in connection with Figs. 2A and 2B (such as the application delivery controller) may be deployed in any embodiment of the virtualized environment described above in connection with Figs. 4A and 4B. Instead of being deployed in the form of the equipment 200, the functionality of the application delivery controller is deployed in a virtualized environment 400 on any calculation element 100, such as a client computer 102, server 106 or equipment 200.
Referring now to Fig. 4C, a block diagram of an embodiment of a virtual unit 450 operating in a supervisory routine 401 of a server 106 is described. As with the equipment 200 of Figs. 2A and 2B, the virtual unit 450 may provide functionality for availability, performance, offload and security. For availability, the virtual unit may perform load balancing between layers 4 and 7 of the network and perform intelligent service health monitoring. For the performance increases achieved via network traffic acceleration, the virtual unit may perform caching and compression. For offloading processing from any server, the virtual unit may perform connection multiplexing and connection pooling and/or SSL processing. For security, the virtual unit may perform any of the application firewall functionality and SSL VPN functionality of the equipment 200.
Any of the modules of the equipment 200 described in connection with Fig. 2A may be packaged, combined, designed or constructed in the form of a virtualized equipment delivery controller 450 that is deployable as a software module or component executing in a virtualized environment 300, or a non-virtualized environment, on any server, such as an off-the-shelf server. For example, the virtual unit may be provided in the form of an installation package to install on a computing device. Referring to Fig. 2A, any of the cache manager 232, policy engine 236, compression 238, crypto engine 234, Packet engine 240, GUI 210, CLI 212 and shell services 214 may be designed and constructed as a component or module to run on any operating system of a computing device and/or of a virtualized environment 300. Instead of using the encryption processor 260, processor 262, memory 264 and network stack 267 of the equipment 200, the virtual unit 450 may use any of these resources as provided by the virtualized environment 400 or as otherwise available on the server 106.
Still referring to Fig. 4C, and in brief overview, any one or more vServers 275A-275N may operate or execute in a virtualized environment 400 of any type of calculation element 100, such as a server 106. Any of the modules and functionality of the equipment 200 described in connection with Fig. 2B may be designed and constructed to operate in either a virtualized or a non-virtualized environment of a server. Any of the vServer 275, SSL VPN 280, Intranet IP 282, switching 284, DNS 286, acceleration 238, APP FW 290 and monitoring agent may be packaged, combined, designed or constructed in the form of an application delivery controller 450 deployable as one or more software modules or components executing in a device and/or virtualized environment 400.
In some embodiments, server can perform multiple virtual machine 406a-406b in virtualized environment, and each virtual machine runs the identical or different embodiment of virtual application transfer control 450.In some embodiments, server can perform the one or more virtual units 450 on one or more virtual machine on of a multiple core processing system core.In some embodiments, server can perform the one or more virtual units 450 on one or more virtual machine on each processor of multi-processor device.
E. the system and method for multicore architecture is provided
In accordance with Moore's Law, the number of transistors that may be placed on an integrated circuit roughly doubles every two years. However, CPU speed increases may reach a plateau; for example, CPU speeds have remained in the range of approximately 3.5-4 GHz since about 2005. In some cases, CPU manufacturers may not rely on CPU speed increases to gain additional performance. Some CPU manufacturers add additional cores to their processors to provide additional performance. Products that rely on CPUs for performance gains, such as those of software and networking vendors, may improve their performance by leveraging these multi-core CPUs. Software designed and constructed for a single CPU may be redesigned and/or rewritten to take advantage of multi-threaded, parallel or multi-core architectures.
In some embodiments, a multi-core architecture of the equipment 200, referred to as nCore or multi-core technology, allows the equipment to break the single-core performance barrier and leverage the power of multi-core CPUs. In the architecture described in connection with Fig. 2A, a single network or Packet engine is run. The multiple cores of the nCore technology and architecture allow multiple Packet engines to run concurrently and/or in parallel. With a Packet engine running on each core, the equipment architecture leverages the processing capacity of the additional cores. In some embodiments, this provides up to a seven-fold increase in performance and scalability.
Fig. 5A illustrates some embodiments of work, tasks, load or network traffic distributed across one or more processor cores according to a class of parallelism or parallel computing scheme, such as a functional parallelism scheme, a data parallelism scheme or a flow-based data parallelism scheme. In brief overview, Fig. 5A illustrates an embodiment of a multiple nucleus system, such as an equipment 200' with n cores numbered 1 through N. In one embodiment, work, load or network traffic may be distributed among a first core 505A, a second core 505B, a third core 505C, a fourth core 505D, a fifth core 505E, a sixth core 505F, a seventh core 505G, and so on, such that the distribution spans all n cores 505N (hereinafter referred to collectively as cores 505) or two or more of the n cores. There may be multiple VIPs 275, each running on a respective core of the plurality of cores. There may be multiple Packet engines 240, each running on a respective core of the plurality of cores. Any of the approaches used may lead to different, varying or similar workload or performance levels 515 across any of the cores. For a functional parallelism approach, each core runs a different function of the functionality provided by the Packet engine, a VIP 275 or the equipment 200. In a data parallelism approach, data may be distributed across the cores based on the network interface card (NIC) or VIP 275 receiving the data. In another data parallelism approach, processing may be distributed across the cores by distributing data flows to the cores.
In the further details of Fig. 5 A, in some embodiments, according to function parallelization mechanism 500, load, work or network traffics can be distributed between multiple core 505.Function parallelization mechanism can based on each core performing one or more corresponding function.In some embodiments, first endorses execution first function, and the second core performs the second function simultaneously.In function parallelization method, the function that multiple nucleus system will perform divided according to functional and be distributed to each core.In some embodiments, function parallelization mechanism can be called tasks in parallel mechanism, and can realize at each processor or check when same data or different pieces of information perform different process or function.Core or processor can perform identical or different code.Under certain situation, different execution threads or code can operationally intercom mutually.Can carry out communicating that data are passed to next thread as a part for workflow from a thread.
In some embodiments, distributing work across the cores 505 according to functional parallelism 500 can comprise distributing network traffic according to a particular function, such as network input/output management (NW I/O) 510A, secure sockets layer (SSL) encryption and decryption 510B, and transmission control protocol (TCP) functions 510C. This may lead to a work, performance or computational load 515 based on the amount or level of functionality being used. In some embodiments, distributing work across the cores 505 according to data parallelism 540 can comprise distributing an amount of work 515 based on distributing data associated with a particular hardware or software component. In some embodiments, distributing work across the cores 505 according to flow-based data parallelism 520 can comprise distributing data based on a context or flow, such that the workloads 515A-N on the cores can be similar, substantially equal or relatively evenly distributed.
In the case of the functional parallelism approach, each core may be configured to run one or more of the functionalities provided by the Packet engine or VIP of the equipment. For example, core 1 may perform network I/O processing for the equipment 200' while core 2 performs TCP connection management for the equipment. Likewise, core 3 may perform SSL offloading while core 4 may perform layer 7 or application layer processing and traffic management. Each core may perform the same function or different functions. Each core may perform more than one function. Any core may run any of the functionality, or portions thereof, identified and/or described in connection with Figs. 2A and 2B. In this approach, the work across the cores may be divided by function in either a coarse-grained or a fine-grained manner. In some cases, as illustrated in Fig. 5A, division by function may lead different cores to run at different levels of performance or load 515.
The functions or tasks may be distributed in any arrangement or scheme. For example, Fig. 5B illustrates a first core, Core 1 505A, processing applications and processes associated with the network I/O functionality 510A. Network traffic associated with network I/O can, in some embodiments, be associated with a particular port number. Thus, outgoing and incoming packets having a port destination associated with NW I/O 510A are directed to Core 1 505A, which is dedicated to handling all network traffic associated with the NW I/O port. Similarly, Core 2 505B is dedicated to handling functionality associated with SSL processing, and Core 4 505D may be dedicated to handling all TCP level processing and functionality.
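A toy dispatcher for this port-based, function-oriented assignment might look like the sketch below; the port-to-core table is an invented example, not a mapping taken from the patent.

# Hypothetical mapping from destination port to the core dedicated to that function.
PORT_TO_CORE = {
    80: 1,    # core 1: network I/O (e.g. plain HTTP)
    443: 2,   # core 2: SSL processing
}
TCP_CORE = 4  # core 4: all remaining TCP-level processing

def core_for_packet(dst_port):
    # Return the core that should process a packet, based on its destination port.
    return PORT_TO_CORE.get(dst_port, TCP_CORE)

print(core_for_packet(443))   # -> 2
print(core_for_packet(8080))  # -> 4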
Although Fig. 5 A illustrates the function as network I/O, SSL and TCP, also other functions can be distributed to core.These other functions can comprise described herein one or more function or operation.Such as, any function that composition graphs 2A and 2B describes can be distributed on core based on function basis.Under certain situation, a VIP275A may operate on the first core, and meanwhile, the 2nd VIP 275B with different configuration may operate on the second core.In some embodiments, each core 505 can process specific function, and each like this core 505 can process the process be associated with this specific function.Such as, Core2 505B can unload by treatment S SL, and Core4 505D can process application layer process and traffic management simultaneously.
In other embodiments, work, load or network traffic may be distributed among the cores 505 according to any type or form of data parallelism 540. In some embodiments, data parallelism may be achieved in the multiple nucleus system by each core performing the same task or function on different pieces of distributed data. In some embodiments, a single execution thread or code controls the operations on all pieces of data. In other embodiments, different threads or instructions control the operation but may execute the same code. In some embodiments, data parallelism is achieved from the perspective of a Packet engine, vServers (VIPs) 275A-C, network interface cards (NICs) 542D-E and/or any other networking hardware or software included in, or associated with, the equipment 200. For example, each core may run the same Packet engine or VIP code or configuration but operate on a different set of distributed data. Each piece of networking hardware or software can receive different, varying or substantially the same amount of data, and as a result may have a varying, different or relatively equal amount of load 515.
In the case of a data parallelism approach, the work may be divided and distributed based on VIPs, NICs and/or the data flows of the VIPs or NICs. In one of these approaches, the work of the multiple nucleus system may be divided or distributed among the VIPs by having each VIP work on a distributed set of data. For example, each core may be configured to run one or more VIPs. Network traffic may be distributed to the core of each VIP handling that traffic. In another of these approaches, the work of the equipment may be divided or distributed among the cores based on which NIC receives the network traffic. For example, network traffic of a first NIC may be distributed to a first core while network traffic of a second NIC may be distributed to a second core. In some cases, a core may process data from multiple NICs.
While FIG. 5A shows a single vServer associated with a single core 505, as is the case with VIP1 275A, VIP2 275B and VIP3 275C, in some embodiments a single vServer may be associated with one or more cores 505. Conversely, one or more vServers may be associated with a single core 505. Associating a vServer with a core 505 may include that core 505 processing all functions associated with that particular vServer. In some embodiments, each core executes a VIP having the same code and configuration. In other embodiments, each core executes a VIP having the same code but a different configuration. In still other embodiments, each core executes a VIP having different code and the same or a different configuration.
As with vServers, NICs may also be associated with particular cores 505. In many embodiments, a NIC may be connected to one or more cores 505 such that, when the NIC receives or transmits data packets, a particular core 505 handles the processing involved in receiving and transmitting those packets. In one embodiment, a single NIC is associated with a single core 505, as is the case with NIC1 542D and NIC2 542E. In other embodiments, one or more NICs may be associated with a single core 505. In still other embodiments, a single NIC may be associated with one or more cores 505. In these embodiments, the load may be distributed among the one or more cores 505 such that each core 505 processes a substantially similar amount of load. A core 505 associated with a NIC may process all the functions and/or data associated with that particular NIC.
While distributing work across cores based on the data of VIPs or NICs provides a degree of independence, in some embodiments it can lead to unbalanced use of the cores, as illustrated by the varying loads 515 of FIG. 5A.
In some embodiments, load, work or network traffic may be distributed among the cores 505 based on any type or form of data flow. In another of these approaches, the work may be divided or distributed among cores based on data flows. For example, network traffic between a client and a server traversing the appliance may be distributed to, and processed by, one core of the plurality of cores. In some cases, the core initially establishing the session or connection may be the core to which network traffic for that session or connection is distributed. In some embodiments, the data flow is based on any unit or portion of network traffic, such as a transaction, a request/response communication, or traffic originating from an application on a client. In this way, in some embodiments, the data flows between clients and servers traversing the appliance 200' may be distributed more evenly than with the other approaches.
In flow-based data parallelism 520, the distribution of data is related to any type of data flow, such as a request/response pair, a transaction, a session, a connection or an application communication. For example, network traffic between a client and a server traversing the appliance may be distributed to, and processed by, one core of the plurality of cores. In some cases, the core initially establishing the session or connection may be the core to which network traffic for that session or connection is distributed. The distribution of data flows may be such that each core 505 carries a substantially equal or relatively evenly distributed amount of load, data or network traffic.
In some embodiments, the data flow is based on any unit or portion of network traffic, such as a transaction, a request/response communication, or traffic originating from an application on a client. In this way, in some embodiments, the data flows between clients and servers traversing the appliance 200' may be distributed more evenly than with the other approaches. In one embodiment, the amount of data may be distributed based on a transaction or a series of transactions. This transaction, in some embodiments, may be between a client and a server and may be characterized by an IP address or other packet identifier. For example, Core 1 505A may be dedicated to transactions between a particular client and a particular server, so that the load 515A on Core 1 505A comprises the network traffic associated with the transactions between that client and server. The network traffic is allocated to Core 1 505A by routing all data packets originating from either that particular client or server to Core 1 505A.
While work or load may be distributed to the cores based in part on transactions, in other embodiments load or work may be allocated on a per-packet basis. In these embodiments, the appliance 200 may intercept data packets and allocate each packet to the core 505 having the least amount of load. For example, the appliance 200 may allocate a first incoming data packet to Core 1 505A because the load 515A on Core 1 is less than the loads 515B-N on the rest of the cores 505B-N. Once the first data packet is allocated to Core 1 505A, the load 515A on Core 1 505A increases in proportion to the amount of processing resources needed to process the first data packet. When the appliance 200 intercepts a second data packet, the appliance 200 may allocate the load to Core 4 505D because Core 4 505D has the second least amount of load. Allocating data packets to the core with the least amount of load can, in some embodiments, ensure that the loads 515A-N distributed to each core 505 remain substantially equal.
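The per-packet scheme above can be pictured with a short, hypothetical C sketch; the load counters and the cost accounting are illustrative assumptions rather than the appliance's actual bookkeeping.

```c
#include <stddef.h>

#define NUM_CORES 8

/* Illustrative per-core load counters standing in for loads 515A-N. */
static unsigned long core_load[NUM_CORES];

/* Pick the core with the least load for the next intercepted packet. */
static size_t pick_least_loaded_core(void)
{
    size_t best = 0;
    for (size_t i = 1; i < NUM_CORES; i++)
        if (core_load[i] < core_load[best])
            best = i;
    return best;
}

/* After the packet is assigned, the chosen core's load grows in
 * proportion to the processing the packet requires. */
static void account_packet(size_t core, unsigned long cost)
{
    core_load[core] += cost;
}
```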
In other embodiments, load may be allocated on a per-unit basis, where a section of network traffic is distributed to a particular core 505. The example above illustrates load balancing on a per-packet basis. In other embodiments, load may be allocated based on a number of packets, for example allocating every 10, 100 or 1000 packets to the core 505 having the least amount of traffic. The number of packets allocated to a core 505 may be a number determined by an application, a user or an administrator, and may be any number greater than zero. In still other embodiments, load is allocated based on a time metric, such that packets are distributed to a particular core 505 for a predetermined period of time. In these embodiments, packets may be distributed to a particular core 505 for five milliseconds, or for any period of time determined by a user, program, system, administrator or otherwise. After the predetermined period of time elapses, packets are transmitted to a different core 505 for the next period of time.
Flow-based data-parallel methods for distributing work, load or network traffic among the one or more cores 505 may comprise any combination of the above embodiments. These methods may be carried out by any part of the appliance 200, by an application or set of executable instructions executing on one of the cores 505, such as the packet engine, or by any application, program or agent executing on a computing device in communication with the appliance 200.
The functional and data parallelism computing schemes illustrated in FIG. 5A may be combined in any manner to produce a hybrid parallelism or distributed processing scheme that encompasses functional parallelism 500, data parallelism 540, flow-based data parallelism 520 or any portions thereof. In some cases, the multi-core system may use load-balancing schemes of any type or form to distribute load among the one or more cores 505. The load-balancing scheme may be used in combination with any of the functional and data parallelism schemes or combinations thereof.
FIG. 5B illustrates an embodiment of a multi-core system 545, which may be any one or more systems, appliances, devices or components of any type or form. This system 545 may, in some embodiments, be included within an appliance 200 having one or more processing cores 505A-N. The system 545 may further include one or more packet engines (PE) or packet processing engines (PPE) 548A-N communicating with a memory bus 556. The memory bus may be used to communicate with the one or more processing cores 505A-N. Also included within the system 545 may be one or more network interface cards (NIC) 552 and a flow distributor 550, which may further communicate with the one or more processing cores 505A-N. The flow distributor 550 may comprise a Receiver Side Scaler (RSS) or Receiver Side Scaling (RSS) module 560.
Referring further to FIG. 5B, and in more detail, in one embodiment the packet engines 548A-N may comprise any portion of the appliance 200 described herein, such as any portion of the appliance described in FIGs. 2A and 2B. The packet engines 548A-N may, in some embodiments, comprise any of the following elements: the packet engine 240, the network stack 267, the cache manager 232, the policy engine 236, the compression engine 238, the encryption engine 234, the GUI 210, the CLI 212, the shell services 214, the monitoring programs 216, and any other software or hardware element able to receive data packets from the data bus 556 or any of the one or more cores 505A-N. In some embodiments, the packet engines 548A-N may comprise one or more vServers 275A-N, or any portion thereof. In other embodiments, the packet engines 548A-N may provide any combination of the following functionalities: SSL VPN 280, intranet IP 282, switching 284, DNS 286, packet acceleration 288, APP FW 280, monitoring such as that provided by the monitoring agent 197, functionalities associated with functioning as a TCP stack, load balancing, SSL offloading and processing, content switching, policy evaluation, caching, compression, encoding, decompression, decoding, application firewall functionality, XML processing and acceleration, and SSL VPN connectivity.
In some embodiments, a packet engine 548A-N may be associated with a particular server, user, client or network. When a packet engine 548 becomes associated with a particular entity, the packet engine 548 may process data packets associated with that entity. For example, if a packet engine 548 is associated with a first user, the packet engine 548 will process and operate on packets generated by the first user or on packets having a destination address associated with the first user. Similarly, the packet engine 548 may choose not to be associated with a particular entity, in which case the packet engine 548 may process and otherwise operate on any data packets not generated by that entity or not destined for that entity.
In some instances, the packet engines 548A-N may be configured to carry out any of the functional and/or data parallelism schemes illustrated in FIG. 5A. In these instances, the packet engines 548A-N may distribute functions or data among the processing cores 505A-N so that the distribution follows the parallelism or distribution scheme. In some embodiments, a single packet engine 548A-N carries out a load-balancing scheme, while in other embodiments one or more packet engines 548A-N carry out a load-balancing scheme. Each core 505A-N, in one embodiment, may be associated with a particular packet engine 548 such that load balancing may be carried out by the packet engines. Load balancing may, in this embodiment, require that each packet engine 548A-N associated with a core 505 communicate with the other packet engines associated with cores, so that the packet engines 548A-N can collectively determine where to distribute load. One embodiment of this process may include an arbiter that receives votes for load from each packet engine. The arbiter may distribute load to each packet engine 548A-N based in part on the age of each engine's vote and, in some cases, on a priority value associated with the current amount of load on the engine's associated core 505.
Any of the packet engines running on the cores may run in user mode, kernel mode or any combination thereof. In some embodiments, the packet engine operates as an application or program running in user space or application space. In these embodiments, the packet engine may use any type or form of interface to access any functionality provided by the kernel. In some embodiments, the packet engine operates in kernel mode or as part of the kernel. In some embodiments, a first portion of the packet engine operates in user mode while a second portion of the packet engine operates in kernel mode. In some embodiments, a first packet engine on a first core executes in kernel mode while a second packet engine on a second core executes in user mode. In some embodiments, the packet engine, or any portion thereof, operates on or in conjunction with a NIC or any driver thereof.
In some embodiments, the memory bus 556 may be any type or form of memory or computer bus. While a single memory bus 556 is depicted in FIG. 5B, the system 545 may comprise any number of memory buses 556. In one embodiment, each packet engine 548 may be associated with one or more individual memory buses 556.
In some embodiments, the NIC 552 may be any of the network interface cards or mechanisms described herein. The NIC 552 may have any number of ports and may be designed and constructed to connect to any type and form of network 104. While a single NIC 552 is illustrated, the system 545 may comprise any number of NICs 552. In some embodiments, each core 505A-N may be associated with one or more individual NICs 552; thus, each core 505 may be associated with a single NIC 552 dedicated to that particular core 505. The cores 505A-N may comprise any of the processors described herein. Further, the cores 505A-N may be configured according to any of the core 505 configurations described herein, and may have any of the core 505 functionalities described herein. While FIG. 5B illustrates seven cores 505A-G, the system 545 may include any number of cores 505. In particular, the system 545 may comprise N cores, where N is an integer greater than zero.
A core may have or use memory that is allocated or assigned to that core. The memory may be considered private or local memory of that core, accessible only by that core. A core may also have or use memory that is shared or assigned to multiple cores; such memory may be considered public or shared memory accessible by more than one core. A core may use any combination of private and public memory. With separate address spaces for each core, some level of coordination that would otherwise be needed when using a common address space is eliminated. With a separate address space, a core can work on information and data in its own address space without concern for conflicts with other cores. Each packet engine may have a separate memory pool for TCP and/or SSL connections.
Still referring to FIG. 5B, any of the functionality and/or embodiments of the cores 505 described above in conjunction with FIG. 5A may be deployed in any embodiment of the virtualized environment described above in conjunction with FIGs. 4A and 4B. Instead of deploying the functionality of the cores 505 in the form of physical processors 505, such functionality may be deployed in a virtualized environment 400 on any computing device 100, such as a client 102, a server 106 or an appliance 200. In other embodiments, instead of deploying the functionality of the cores 505 in the form of an appliance or a single device, the functionality may be deployed across multiple devices in any arrangement. For example, one device may comprise two or more cores and another device may comprise two or more cores. For example, a multi-core system may include a cluster of computing devices, a server farm or a network of computing devices. In some embodiments, instead of deploying the functionality of the cores 505 in the form of cores, the functionality may be deployed on a plurality of processors, for example a plurality of single-core processors.
In one embodiment, the cores 505 may be any type or form of processor. In some embodiments, a core may function substantially similarly to any processor or central processing unit described herein. In some embodiments, the cores 505 may comprise any portion of any processor described herein. While FIG. 5A illustrates seven cores, there may be any N number of cores within an appliance 200, where N is an integer greater than one. In some embodiments, the cores 505 may be installed within a common appliance 200, while in other embodiments the cores 505 may be installed within one or more appliances 200 communicatively connected to each other. In some embodiments, the cores 505 comprise graphics processing capabilities, while in other embodiments the cores 505 provide general processing capabilities. The cores 505 may be installed physically near each other and/or may be communicatively connected to each other. The cores may be connected by any type and form of bus or subsystem physically and/or communicatively coupled to the cores for transferring data to, from and/or between the cores.
Each core 505 may comprise software for communicating with other cores, and in some embodiments a core manager (not shown) may facilitate communication between the cores 505. In some embodiments, the kernel may provide core management. The cores may interface or communicate with each other using a variety of interface mechanisms. In some embodiments, core-to-core messaging may be used to communicate between cores, such as a first core sending a message or data to a second core via a bus or subsystem connected to the cores. In some embodiments, cores may communicate via a shared memory interface of any kind or form. In one embodiment, there may be one or more memory locations shared among all the cores. In some embodiments, each core may have a separate memory location shared with each other core. For example, a first core may have a first shared memory with a second core and a second shared memory with a third core. In some embodiments, cores may communicate via any type of programming or API, such as function calls through the kernel. In some embodiments, the operating system may recognize and support multi-core devices and provide interfaces and APIs for inter-core communication.
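As a purely illustrative sketch of one shared-memory option mentioned above, the following C fragment outlines a single-producer, single-consumer message ring between two cores; the ring layout, message format and memory-barrier choice are assumptions and do not describe the system's actual inter-core interface.

```c
#include <stdint.h>
#include <stdbool.h>

#define RING_SLOTS 256

struct core_msg  { uint32_t type; uint32_t len; uint64_t data; };

struct core_ring {                 /* resides in memory shared by two cores */
    volatile uint32_t head, tail;
    struct core_msg   slot[RING_SLOTS];
};

/* Sending core enqueues a message; returns false if the ring is full. */
static bool ring_send(struct core_ring *r, const struct core_msg *m)
{
    uint32_t next = (r->head + 1) % RING_SLOTS;
    if (next == r->tail)
        return false;
    r->slot[r->head] = *m;
    __sync_synchronize();          /* publish the slot before the index */
    r->head = next;
    return true;
}

/* Receiving core dequeues a message; returns false if nothing pending. */
static bool ring_recv(struct core_ring *r, struct core_msg *m)
{
    if (r->tail == r->head)
        return false;
    *m = r->slot[r->tail];
    __sync_synchronize();
    r->tail = (r->tail + 1) % RING_SLOTS;
    return true;
}
```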
The flow distributor 550 may be any application, program, library, script, task, service, process or any type and form of executable instructions executing on any type or form of hardware. In some embodiments, the flow distributor 550 may be any circuit design or structure for performing any of the operations and functions described herein. In some embodiments, the flow distributor distributes, forwards, routes, controls and/or manages the distribution of data packets among the multiple cores 505 and/or among the packet engines or VIPs running on the cores. The flow distributor 550 may, in some embodiments, be referred to as an interface master. In one embodiment, the flow distributor 550 comprises a set of executable instructions executing on a core or processor of the appliance 200. In another embodiment, the flow distributor 550 comprises a set of executable instructions executing on a computing machine in communication with the appliance 200. In some embodiments, the flow distributor 550 comprises a set of executable instructions executing on a NIC, such as firmware. In other embodiments, the flow distributor 550 comprises any combination of software and hardware for distributing data packets among cores or processors. In one embodiment, the flow distributor 550 executes on at least one of the cores 505A-N, while in other embodiments a separate flow distributor 550 assigned to each core 505A-N executes on its associated core 505A-N. The flow distributor may use any type and form of statistical or probabilistic algorithm or decision making to balance flows across the cores. Device hardware, such as a NIC, or the cores themselves may be designed and constructed to support sequential operations across the NICs and/or cores.
In embodiments where the system 545 comprises one or more flow distributors 550, each flow distributor 550 may be associated with a processor 505 or a packet engine 548. The flow distributors 550 may comprise an interface mechanism that allows each flow distributor 550 to communicate with the other flow distributors 550 executing within the system 545. In one instance, the one or more flow distributors 550 determine how to balance load by communicating with each other. This process may operate substantially as described above, with votes submitted to an arbiter which then determines which flow distributor 550 should receive the load. In other embodiments, a first flow distributor 550' may identify the load on its associated core and determine whether to forward a first data packet to the associated core based on any of the following criteria: whether the load on the associated core is above a predetermined threshold; whether the load on the associated core is below a predetermined threshold; whether the load on the associated core is less than the load on the other cores; or any other metric that can be used to determine, based in part on the amount of load on a processor, where to forward data packets.
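Under stated assumptions, the forwarding decision described above might look like the following C sketch, which combines two of the listed criteria (a load threshold and a comparison against the other cores); the load array and threshold value are hypothetical.

```c
#include <stdbool.h>
#include <stddef.h>

/* Decide whether a flow distributor keeps a packet on its own core or
 * forwards it elsewhere, based on the criteria listed above. */
static bool keep_packet_on_core(const unsigned long load[], size_t ncores,
                                size_t my_core, unsigned long threshold)
{
    /* Criterion 1: the associated core is above a predetermined threshold. */
    if (load[my_core] > threshold)
        return false;

    /* Criterion 2: another core carries less load than this one. */
    for (size_t i = 0; i < ncores; i++)
        if (i != my_core && load[i] < load[my_core])
            return false;

    return true;    /* otherwise process the packet on the associated core */
}
```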
The flow distributor 550 may distribute network traffic among the cores 505 according to any of the distribution, computing or load-balancing methods described herein. In one embodiment, the flow distributor may distribute network traffic according to the functional parallelism distribution scheme 500, the data parallelism load distribution scheme 540, the flow-based data parallelism distribution scheme 520, any combination of these distribution schemes, or any load-balancing scheme for distributing load among multiple processors. The flow distributor 550 may thus act as a load distributor by taking in data packets and distributing them across the processors according to the operative load-balancing or distribution scheme. In one embodiment, the flow distributor 550 may comprise one or more operations, functions or logic to determine how to distribute packets, work or load accordingly. In another embodiment, the flow distributor 550 may comprise one or more sub-operations, functions or logic that can identify a source address and a destination address associated with a data packet and distribute packets accordingly.
In some embodiments, the flow distributor 550 may comprise a receive-side scaling (RSS) network driver module 560, or any type and form of executable instructions, which distributes data packets among the one or more cores 505. The RSS module 560 may comprise any combination of hardware and software. In some embodiments, the RSS module 560 works in conjunction with the flow distributor 550 to distribute data packets across the cores 505A-N or among multiple processors in a multi-processor network. The RSS module 560 may execute within the NIC 552 in some embodiments, and in other embodiments may execute on any one of the cores 505.
In some embodiments, the RSS module 560 uses the Microsoft receive-side scaling (RSS) scheme. In one embodiment, RSS is a Microsoft Scalable Networking initiative technology that enables receive processing to be balanced across multiple processors in the system while maintaining in-order delivery of the data. RSS may use any type or form of hashing scheme to determine the core or processor for processing a network packet.
The RSS module 560 may apply any type or form of hash function, such as the Toeplitz hash function. The hash function may be applied to the hash type or to any sequence of values. The hash function may be a secure hash of any security level or may be otherwise cryptographically secure. The hash function may use a hash key; the size of the key depends on the hash function. For the Toeplitz hash, the hash key size is 40 bytes for IPv6 and 16 bytes for IPv4.
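For illustration only, a minimal C sketch of the Toeplitz hash in its commonly published form follows; the key contents and the tuple layout fed to it are placeholders, and nothing here should be read as the appliance's actual RSS configuration.

```c
#include <stdint.h>
#include <stddef.h>

/* Toeplitz hash over an input byte sequence. A 16-byte key is assumed
 * for IPv4 tuples and a 40-byte key for IPv6, as noted above. */
static uint32_t toeplitz_hash(const uint8_t *key, size_t keylen,
                              const uint8_t *data, size_t datalen)
{
    uint32_t hash = 0;
    /* v holds the 32 key bits currently aligned with the input bit. */
    uint32_t v = ((uint32_t)key[0] << 24) | ((uint32_t)key[1] << 16) |
                 ((uint32_t)key[2] << 8)  |  (uint32_t)key[3];

    for (size_t i = 0; i < datalen; i++) {
        for (int b = 0; b < 8; b++) {
            if (data[i] & (1u << (7 - b)))
                hash ^= v;
            v <<= 1;
            /* pull the next key bit into the window */
            if (i + 4 < keylen && (key[i + 4] & (1u << (7 - b))))
                v |= 1;
        }
    }
    return hash;
}
```

A typical input would be the bytes of a four-tuple (for example source IP, destination IP, source port, destination port) concatenated in a fixed order; the exact ordering used in practice is an assumption here.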
The hash function may be designed or constructed based on any one or more criteria or design goals. In some embodiments, a hash function may be used that provides an even distribution of hash results for different hash inputs and different hash types, including TCP/IPv4, TCP/IPv6, IPv4 and IPv6 headers. In some embodiments, a hash function may be used that provides an even distribution of hash results when a small number of buckets is present (for example, two or four). In some embodiments, a hash function may be used that provides a random distribution of hash results when a large number of buckets is present (for example, 64 buckets). In some embodiments, the hash function is determined based on its level of computational or resource usage. In some embodiments, the hash function is determined based on the difficulty of implementing the hash in hardware. In some embodiments, the hash function is determined based on the difficulty with which a malicious remote host could send packets that would all hash to the same bucket.
RSS may generate hashes from input of any type and form, such as a sequence of values. This sequence of values may include any portion of the network packet, such as any header, field or payload of the network packet, or portions thereof. In some embodiments, the hash input may be referred to as a hash type, and the input may include any tuple of information associated with a network packet or data flow, for example any of the following: a four-tuple comprising at least two IP addresses and two ports, a four-tuple comprising any four sets of values, a six-tuple, a two-tuple, and/or any other sequence of numbers or values. The following are examples of hash types that may be used by RSS:
- A four-tuple of source TCP port, source IP version 4 (IPv4) address, destination TCP port and destination IPv4 address.
- A four-tuple of source TCP port, source IP version 6 (IPv6) address, destination TCP port and destination IPv6 address.
- A two-tuple of source IPv4 address and destination IPv4 address.
- A two-tuple of source IPv6 address and destination IPv6 address.
- A two-tuple of source IPv6 address and destination IPv6 address, including support for parsing IPv6 extension headers.
The hash result, or any portion thereof, may be used to identify a core or entity, such as a packet engine or VIP, for distributing the network packet. In some embodiments, one or more hash bits or a hash mask may be applied to the hash result. The hash bits or mask may be any number of bits or bytes. A NIC may support any number of bits, for example seven bits. The network stack may set the actual number of bits to be used during initialization, the number being between 1 and 7, inclusive.
The hash result may be used to identify the core or entity via any type and form of table, such as a bucket table or an indirection table. In some embodiments, the table is indexed by the number of bits of the hash result. The range of the hash mask may effectively define the size of the indirection table. Any portion of the hash result, or the hash result itself, may be used to index the indirection table. The values in the table may identify any of the cores or processors, for example by a core or processor identifier. In some embodiments, all of the cores of the multi-core system are identified in the table; in other embodiments, only a portion of the cores of the multi-core system are identified in the table. The indirection table may comprise any number of buckets, for example 2 to 128 buckets, which may be indexed by the hash mask. Each bucket may comprise a range of index values identifying a core or processor. In some embodiments, the flow controller and/or the RSS module may rebalance the network load by changing the indirection table.
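A brief C sketch, under assumed sizes, shows how a masked hash result could index an indirection table to pick a core; the 7-bit mask, the 128 buckets and the width of the core identifier are illustrative only.

```c
#include <stdint.h>

#define RSS_HASH_BITS  7                      /* e.g. a NIC supporting 7 bits */
#define RSS_TABLE_SIZE (1u << RSS_HASH_BITS)  /* 128 buckets                  */

/* Each bucket holds the identifier of a core (or packet engine). */
static uint8_t indirection_table[RSS_TABLE_SIZE];

static inline uint8_t core_for_hash(uint32_t hash_result)
{
    uint32_t bucket = hash_result & (RSS_TABLE_SIZE - 1); /* apply hash mask */
    return indirection_table[bucket];
}

/* Rebalancing network load only requires rewriting table entries,
 * for example pointing some buckets at a less loaded core. */
static inline void rebalance_bucket(uint32_t bucket, uint8_t new_core)
{
    indirection_table[bucket % RSS_TABLE_SIZE] = new_core;
}
```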
In some embodiments, the multi-core system 575 does not include an RSS driver or RSS module 560. In some of these embodiments, a software steering module (not shown), or a software embodiment of the RSS module within the system, may operate in conjunction with, or as part of, the flow distributor 550 to steer packets to the cores 505 within the multi-core system 575.
In some embodiments, the flow distributor 550 executes within any module or program on the appliance 200, or on any one of the cores 505 and any of the devices or components included within the multi-core system 575. In some embodiments, the flow distributor 550' may execute on the first core 505A, while in other embodiments the flow distributor 550" may execute on the NIC 552. In still other embodiments, an instance of the flow distributor 550' may execute on each core 505 included in the multi-core system 575. In this embodiment, each instance of the flow distributor 550' may communicate with the other instances of the flow distributor 550' to forward packets back and forth across the cores 505. Situations exist in which the response to a request packet is not processed by the same core, that is, the first core processes the request while a second core processes the response. In these situations, the instances of the flow distributor 550' may intercept the packet and forward it to the desired or correct core 505, that is, the flow distributor 550' may forward the response to the first core. Multiple instances of the flow distributor 550' may execute on any number or combination of cores 505.
The flow distributor may operate responsive to any one or more rules or policies. The rules may identify a core or packet processing engine to receive a network packet, data or data flow. The rules may identify any type and form of tuple information related to a network packet, such as a four-tuple of source and destination IP addresses and source and destination ports. Based on a received packet matching the tuple specified by a rule, the flow distributor may forward the packet to a core or packet engine. In some embodiments, the packet is forwarded to a core via shared memory and/or core-to-core messaging.
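As a hedged illustration of such a rule, the following C sketch matches a packet's four-tuple against a rule list and returns the target core; the field names and the zero-as-wildcard convention are assumptions introduced here, not part of the described system.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

struct four_tuple {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
};

struct flow_rule {
    struct four_tuple match;   /* 0 in a field acts as a wildcard (assumed) */
    unsigned target_core;      /* core / packet engine to receive the packet */
};

static bool rule_matches(const struct flow_rule *r, const struct four_tuple *t)
{
    return (!r->match.src_ip   || r->match.src_ip   == t->src_ip)   &&
           (!r->match.dst_ip   || r->match.dst_ip   == t->dst_ip)   &&
           (!r->match.src_port || r->match.src_port == t->src_port) &&
           (!r->match.dst_port || r->match.dst_port == t->dst_port);
}

/* Return the core named by the first matching rule, or -1 if none match,
 * in which case the default distribution scheme would apply. */
static int core_for_packet(const struct flow_rule *rules, size_t nrules,
                           const struct four_tuple *t)
{
    for (size_t i = 0; i < nrules; i++)
        if (rule_matches(&rules[i], t))
            return (int)rules[i].target_core;
    return -1;
}
```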
While FIG. 5B shows the flow distributor 550 executing within the multi-core system 575, in some embodiments the flow distributor 550 may execute on a computing device or appliance located remotely from the multi-core system 575. In such an embodiment, the flow distributor 550 may communicate with the multi-core system 575 to receive data packets and distribute the packets across the one or more cores 505. In one embodiment, the flow distributor 550 receives data packets destined for the appliance 200, applies a distribution scheme to the received data packets and distributes the data packets to the one or more cores 505 of the multi-core system 575. In one embodiment, the flow distributor 550 may be included in a router or other appliance, such that the router may target particular cores 505 by altering metadata associated with each packet so that each packet is targeted towards a sub-node of the multi-core system 575. In such an embodiment, CISCO's vn-tag mechanism may be used to alter or tag each packet with the appropriate metadata.
FIG. 5C illustrates an embodiment of a multi-core system 575 comprising one or more processing cores 505A-N. In brief overview, one of the cores 505 may be designated as a control core 505A and may serve as the control plane 570 for the other cores 505. The other cores may be secondary cores operating in a data plane, while the control core provides the control plane. The cores 505A-N share a global cache 580. While the control core provides the control plane, the other cores in the multi-core system form or provide the data plane. These cores perform data processing functionality on network traffic, while the control core provides initialization, configuration and control of the multi-core system.
Still referring to FIG. 5C, and in more detail, the cores 505A-N as well as the control core 505A may be any processor described herein. Furthermore, the cores 505A-N and the control core 505A may be any processor able to function within the system described in FIG. 5C. Further, the cores 505A-N may be any core or group of cores described herein. The control core may be a different type of core or processor than the other cores. In some embodiments, the control core may run a different packet engine, or have a packet engine configured differently, than the packet engines of the other cores.
Any portion of the memory of each of the cores may be allocated to, or used for, a global cache shared by the cores. In brief overview, a predetermined percentage or predetermined amount of each core's memory may be used for the global cache. For example, 50% of each core's memory may be dedicated or allocated to the shared global cache. That is, in the illustrated embodiment, 2 GB of each core excluding the control plane core or core 1 may be used to form a 28 GB shared global cache. The configuration of the control plane, for example via configuration services, may determine the amount of memory used for the shared global cache. In some embodiments, each core may provide a different amount of memory for the global cache. In other embodiments, a core may not provide any memory to, or use, the global cache. In some embodiments, any of the cores may also have a local cache in memory not allocated to the global shared memory. Each of the cores may store any portion of network traffic in the global shared cache. Each of the cores may check the cache for any content to use in a request or response. Any of the cores may obtain content from the global shared cache to use in a data flow, request or response.
The global cache 580 may be any type or form of memory or storage element, such as any memory or storage element described herein. In some embodiments, the cores 505 may have access to a predetermined amount of memory (i.e., 32 GB or any other amount of memory commensurate with the system 575). The global cache 580 may be allocated from that predetermined amount of memory while the remaining available memory is allocated among the cores 505. In other embodiments, each core 505 may have a predetermined amount of memory. The global cache 580 may comprise an amount of memory allocated from each core 505. This amount of memory may be measured in bytes or as a percentage of the memory allocated to each core 505. Thus, the global cache 580 may comprise 1 GB of memory from the memory associated with each core 505, or may comprise 20 percent or one-half of the memory associated with each core 505. In some embodiments, only a portion of the cores 505 provide memory to the global cache 580, while in other embodiments the global cache 580 may comprise memory not allocated to the cores 505.
Each core 505 may use the global cache 580 to store network traffic or cache data. In some embodiments, the packet engines of the cores use the global cache to cache and use data stored by the plurality of packet engines. For example, the cache manager of FIG. 2A and the caching functionality of FIG. 2B may use the global cache to share data for acceleration. For example, each of the packet engines may store responses, such as HTML data, in the global cache. Any of the cache managers operating on a core may access the global cache to serve cached responses to client requests.
In some embodiments, the cores 505 may use the global cache 580 to store a port allocation table, which may be used to determine data flow based in part on ports. In other embodiments, the cores 505 may use the global cache 580 to store an address lookup table or any other table or list that may be used by the flow distributor to determine where to direct incoming and outgoing data packets. In some embodiments, the cores 505 may read from and write to the cache 580, while in other embodiments the cores 505 may only read from, or only write to, the cache. The cores may use the global cache to perform core-to-core communications.
The global cache 580 may be sectioned into individual memory sections, where each section may be dedicated to a particular core 505. In one embodiment, the control core 505A may receive a greater amount of the available cache, while the other cores 505 receive varying amounts of access to the global cache 580.
In some embodiments, the system 575 may comprise a control core 505A. While FIG. 5C illustrates core 1 505A as the control core, the control core may be any core within the appliance 200 or multi-core system. Further, while only a single control core is depicted, the system 575 may comprise one or more control cores, each having a level of control over the system. In some embodiments, one or more control cores may each control a particular aspect of the system 575. For example, one core may control which distribution scheme to use, while another core may determine the size of the global cache 580.
The control plane of the multi-core system may be the designation and configuration of one core as a dedicated management core or as a master core. This control plane core may provide control, management and coordination of the operation and functionality of the plurality of cores in the multi-core system. This control plane core may provide control, management and coordination of the allocation and use of the memory system across the plurality of cores in the multi-core system, including the initialization and configuration of the memory system. In some embodiments, the control plane includes the flow distributor for controlling the assignment of data flows to cores and the distribution of network packets to cores based on data flows. In some embodiments, the control plane core runs a packet engine, while in other embodiments the control plane core is dedicated to the control and management of the other cores of the system.
The control core 505A may exercise a level of control over the other cores 505, such as determining how much memory should be allocated to each core 505, or determining which core should be assigned to handle a particular function or hardware/software entity. The control core 505A, in some embodiments, may exercise control over those cores 505 within the control plane 570. Thus, there may exist processors outside of the control plane 570 that are not controlled by the control core 505A. Determining the boundaries of the control plane 570 may include maintaining, by the control core 505A or an agent executing within the system 575, a list of the cores controlled by the control core 505A. The control core 505A may control any of the following: initialization of a core; determining when a core is unavailable; re-distributing load to other cores 505 when one core fails; determining which distribution scheme to implement; determining which core should receive network traffic; determining how much cache should be allocated to each core; determining whether to assign a particular function or element to a particular core; determining whether to permit cores to communicate with one another; determining the size of the global cache 580; and any other determination of a function, configuration or operation of the cores within the system 575.
F. Systems and Methods for Distributing Data Packets Across a Multi-Core Architecture and System
1. Multi-Core System and Architecture for Distributing Data Packets
The system and architecture described in FIG. 5B provide an overview of one possible multi-core system 545 that can evenly distribute requests and responses among the packet engines executing on the multiple cores of the multi-core system 545. The system has many other aspects that, in some embodiments, can facilitate the even distribution of requests and responses, and that can also implement security policies requiring maintenance of the client IP address or the client port number, and other system configurations. Where the multi-core architecture 545 handles fragmented requests and/or responses, additional objects and structures are needed to process and track the fragmented data packets.
FIG. 6A illustrates an embodiment of a multi-core system 545. In most embodiments, the system 545 may include one or more network interface cards (NIC) 552 that may execute or include an RSS module 560. The NIC 552 may communicate with one or more cores 505, where each core may execute a packet engine 548 and/or a flow distributor 550. In some embodiments, each core 505 may store one or more port allocation tables 604, and may include one or more ports 632 and one or more internet protocol (IP) addresses 630.
Continuing with FIG. 6A, and in more detail, in one embodiment the multi-core system 545 may be any multi-core system 545 described herein. In particular, the multi-core system 545 may be any multi-core system 545 described in FIGs. 5B-5C. The multi-core system 545 may execute on an appliance 200, a client, a server or any other computing machine that executes the multi-core system 545 described herein. While the multi-core system 545 illustrated in FIG. 6A includes multiple cores 505 and a NIC 552, in some embodiments the multi-core system 545 may include additional devices and may execute additional programs, clients and modules.
In one embodiment, the multi-core system 545 may include a NIC 552, such as any NIC described herein. While FIG. 6A depicts a multi-core system 545 having a single NIC 552, in some embodiments the multi-core system 545 may have multiple NICs 552. These NICs 552 may be of the same type, or in other embodiments may be different types of NICs 552. The NIC(s) 552 may communicate with one or more of the processing cores 505 of the multi-core system 545. For example, the NIC 552 may communicate with each of the first core 505A, the second core 505B, the third core 505C, the fourth core 505D, the fifth core 505E, the sixth core 505F, the seventh core 505G and any "N" core 505N, where "N" is an integer greater than zero.
In other embodiments, the NIC 552 may communicate with a single core 505 or a subset of the cores 505. For example, the NIC 552 may communicate with the first core 505A, or with cores one through four 505A-505D. In embodiments where the multi-core system 545 includes multiple NICs 552, each NIC 552 may communicate with one or more cores 505. For example, a first NIC 552 may communicate with cores one through four 505A-505D, while a second NIC 552 may communicate with cores five through seven 505E-505G. In other embodiments where the multi-core system 545 includes multiple NICs 552, one or more NICs 552 may communicate with the cores 505 while the other NICs 552 perform alternative functions, communicate with other systems or devices within the multi-core system 545, or serve as redundant NICs 552 used as backups should a primary NIC 552 fail. In some embodiments, the NIC 552 may interface with the network and with the multi-core system 545 via transmit and receive queues, without needing to understand the architecture of the cores 505 or the multi-core system 545. In these embodiments, the NIC 552 simply transmits the data packets stored in the NIC transmit queue and receives the network packets transmitted over the network.
In some embodiments, the NIC 552 executes the RSS module 560, such as any RSS module 560 described herein. The RSS module 560 applies a hash function to a tuple or sequence of values comprising any combination of the following: a client IP address; a client port; a destination IP address; a destination port; or any other value associated with the source or destination of a data packet. In some embodiments, the value resulting from applying the hash function to the tuple identifies a core 505 within the multi-core system 545. The RSS module 560 may use this property of the hash function to distribute packets among the multiple cores 505 of the multi-core system 545. By distributing packets among the multiple cores 505 in the multi-core system 545, the RSS module 560 can distribute network traffic substantially evenly among the cores 505, in a manner similar to flow-based data parallelism.
The cores 505 within the multi-core system 545 may be any core 505 described herein. In one embodiment, the multi-core system 545 may include any "N" number of cores, where "N" is an integer greater than zero. In other embodiments, the multi-core system 545 may include eight cores. The cores 505 may be dedicated to processing some of the functions of executing programs or services and, in some embodiments, may be dedicated to processing data packets received or transmitted by certain devices or program modules. In some embodiments, each core 505 may execute any of the following: a packet engine 548, such as any packet engine 548 described herein, or a flow distributor 550, such as any flow distributor 550 described herein. In other embodiments, each core 505 stores, in an associated repository, any of the following: a port allocation table; a listing of the ports of the core 505; or a listing of the IP addresses of the core 505.
In one embodiment, each core 505 executes a packet engine 548A-N, which may include any vServer 275 described herein. A packet engine 548A-N may be included on each core 505, and collectively the packet engines 548A-N may be referred to as a packet engine 548. In some embodiments, the packet engine 548 alters or modifies the tuple of a data packet according to flow distribution rules executed by each packet engine 548. In one embodiment, the packet engine 548 replaces the client IP address in the tuple of a data packet received by the packet engine 548 with an IP address 630A-B of the core 505 on which that packet engine 548 executes. In another embodiment, the packet engine 548 replaces the client port in the tuple of a received data packet with a port 632A-B selected from among the ports 632A-B of the core 505 on which that packet engine 548 executes. In other embodiments, the packet engine 548 maintains all aspects of the data packet, including the contents of the packet's tuple. In some embodiments, the packet engine 548 communicates with one or more servers 106 to forward received data packets destined for those servers 106. Similarly, in some embodiments, the packet engine 548 communicates with one or more clients 102 to forward received data packets destined for those clients 102.
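One hypothetical way to express the tuple modification that preserves the client IP address while substituting a core-owned port is sketched below in C; the structure layout and the select_core_port() helper are assumed for illustration (see the port allocation table sketch that follows).

```c
#include <stdint.h>

struct tuple {
    uint32_t client_ip;    /* preserved, maintaining the source IP       */
    uint16_t client_port;  /* replaced by a port owned by this core      */
    uint32_t server_ip;
    uint16_t server_port;
};

/* Assumed helper: picks an available port 632 of this core, for example
 * one chosen so that the modified tuple still identifies this core. */
uint16_t select_core_port(unsigned core_id, const struct tuple *t);

static void rewrite_tuple_for_core(struct tuple *t, unsigned core_id)
{
    /* client_ip and server_ip are left untouched; only the client port
     * in the tuple is substituted with a core-selected port. */
    t->client_port = select_core_port(core_id, t);
}
```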
In some embodiments, each core 505 accesses, via the packet engine 548 executing on that core 505 or via any other module or object, a storage repository allocated to that core 505 or a shared repository available to all cores 505 in the multi-core system 545. In this way, each module, program, client and/or object executing on a core 505 may access any repository accessible to that core 505. In one embodiment, the port allocation tables 604A-N are stored in a repository either shared with, or dedicated to, a particular core 505. A single core 505 may have one or more port allocation tables 604A-N (collectively, port allocation tables 604), where each port allocation table 604 lists the available and unavailable ports on a particular core 505A. In one embodiment, a core 505 may have one port allocation table 604, while in other embodiments a core 505 may have 64 or 256 port allocation tables 604. For example, port allocation table A 604A on core 1 505A may store entries indicating the status of each port 632A-B on core 1 505A. The status of each port 632A-B may include any of the following characteristics: whether the port is open or closed; whether the port has been assigned, i.e., whether the port is available or unavailable; whether the port is within a pre-allocated range; and any other relevant characteristic of the port. Thus, if packet engine A 548A on core 1 505A wants to determine whether a particular port is open and/or available, packet engine A 548A may query port allocation table A 604A to determine whether the desired port is open and/or available.
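A minimal sketch, assuming a simple flag-per-port layout, of a port allocation table 604 and the availability query described above; the field names, flags and table size are illustrative assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

#define PORTS_PER_TABLE 65536

struct port_entry {
    unsigned open         : 1;   /* port is open vs. closed             */
    unsigned assigned     : 1;   /* already allocated to a transaction  */
    unsigned preallocated : 1;   /* falls within a pre-allocated range  */
};

/* One port allocation table 604, tracking the status of each port 632. */
struct port_alloc_table {
    struct port_entry entry[PORTS_PER_TABLE];
};

/* Query the table the way a packet engine might before assigning a port. */
static bool port_is_available(const struct port_alloc_table *t, uint16_t port)
{
    return t->entry[port].open && !t->entry[port].assigned;
}

/* Once a port is given to a data packet or transaction, mark it so that
 * later queries see it as unavailable. */
static void port_assign(struct port_alloc_table *t, uint16_t port)
{
    t->entry[port].assigned = 1;
}
```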
In instances where a core 505 has multiple port allocation tables 604, each port allocation table may be associated with a value or other unique identifier. In one embodiment, each port allocation table 604 has an identifying value that may be determined by applying a hash function to a portion of the tuple of a data packet. Thus, any hash described herein may be applied, by the packet engine 548 or the flow distributor 550, to any combination of the client IP address, client port, destination IP address and/or destination port to determine a unique value for that data packet. This unique value in turn identifies a port allocation table 604 on the core 505. For example, if packet engine 548B on core 2 505B wants to assign a port to a received data packet, the packet engine 548B first applies a hash to the client IP address and the destination IP address identified in the data packet. Based on the result of the hash, the packet engine 548B selects a port allocation table 604 from the one or more port allocation tables 604 on core 2 505B, and selects a port 632C-D based on a review of the selected port allocation table 604.
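The table-selection step in the core 2 example might be sketched as follows; the number of tables per core and the placeholder hash (standing in for any of the hashes described herein) are assumptions.

```c
#include <stdint.h>

#define TABLES_PER_CORE 64   /* e.g. 64 or 256 tables per core, as described */

struct port_alloc_table;     /* layout as in the previous sketch */

/* Placeholder mixing function standing in for any hash described herein. */
static uint32_t tuple_hash(uint32_t client_ip, uint32_t dest_ip)
{
    uint32_t h = client_ip ^ (dest_ip * 2654435761u);  /* illustrative only */
    return h ^ (h >> 16);
}

/* Hash the client IP and destination IP of the packet, then use the
 * result to pick one of this core's port allocation tables 604. */
static struct port_alloc_table *
select_table(struct port_alloc_table *tables[TABLES_PER_CORE],
             uint32_t client_ip, uint32_t dest_ip)
{
    return tables[tuple_hash(client_ip, dest_ip) % TABLES_PER_CORE];
}
```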
In some embodiments, the port allocation tables 604 may be dynamically altered by a packet engine 548, a flow distributor 550 or another program, service or device, based on changes made to the ports 632 of a core 505 or based on the assignment of ports 632 to data packets or transactions. In one embodiment, when a portion of the ports is allocated to a particular port allocation table 604 within a core 505, or allocated to a particular core 505, that port allocation table 604 is updated to reflect the allocation. The update may consist of updating the entries of the affected ports 632 to reflect the allocation, or updating the affected ports 632 so that the allocated portion of ports 632 is listed as open while all other ports 632 are listed as closed. In other embodiments, once a port is assigned to a data packet or transaction between two machines, the port allocation table 604 is updated to reflect the assignment by listing the status of the assigned port as, for example, closed or unavailable and, in some cases, by identifying the data packet or transaction.
In some embodiments, each packet engine 548 or core 505 may be designated or assigned one or more port numbers 632, or otherwise associated with one or more port numbers 632 (collectively referred to as ports 632). A port number may be a logical data structure for an endpoint in a network and, in some embodiments, may be referred to as a port. In some embodiments, the port number may be included in the header of a data packet and may indicate the process to which the data packet is to be delivered. While FIG. 6A depicts each core 505 as having two ports 632, each core 505 may include multiple ports 632, i.e., hundreds of ports 632 and in some cases thousands or millions of ports 632. In most embodiments, a port 632 is identified by a unique value or number. Assigning a data packet or transaction to a port 632 may include updating the header of the data packet or transaction to reflect the unique value or number associated with the assigned port 632. In many embodiments, the ports 632 are tracked in the port allocation table 604 of each core 505. While each core 505 may have its own set of ports 632, the values or numbers associated with each port 632 may repeat from core to core. For example, core 3 505C may have ports 1 through 3000, and core 5 505E may also have ports 1 through 3000. The uniqueness of each port on core 3 505C and core 5 505E stems from the fact that the ports of core 3 505C are associated with one or more IP addresses dedicated to core 3 505C, while the ports of core 5 505E are associated with one or more IP addresses dedicated to core 5 505E.
Similarly, each packet engine 548 or core 505 may be given, assigned, associated with or otherwise host one or more IP addresses 630A-B. While FIG. 6A depicts each core 505 as having two IP addresses 630 (collectively, IP addresses 630), each core 505 may have any "N" number of IP addresses 630, where "N" is an integer greater than zero. In some embodiments, the IP addresses 630 of a core 505 are pre-assigned by an administrator, an application, or another service or program executing within the multi-core system 545. In other embodiments, a group or range of IP addresses 630 is assigned to each core 505. In still other embodiments, the same IP address 630 is assigned to each core 505. In most of these embodiments, that IP address 630 is the IP address of the multi-core system 545.
In one embodiment, a first core 505 may execute a flow distributor 550, which may be any flow distributor 550 described herein. While in the multi-core system 545 depicted in FIG. 6A the flow distributor 550 executes on the first core 505, each core 505 may execute an instance of the flow distributor 550 dedicated to that core 505. When the flow distributor 550 executes on a single core 505, that core may be considered the control core or master core. In other embodiments, the flow distributor 550 may execute on at least one NIC 552 within the multi-core system 545. In some embodiments where an RSS module 560 is included in the multi-core system 545, the system 545 may not include a flow distributor 550.
Illustrated in FIG. 6B is a detailed view of at least one of the cores 505 of the multi-core system 545. The core 505N may be any one of the "N" cores in the multi-core system 545, where "N" is an integer greater than zero. The core 505N may comprise a flow distributor 550, a packet engine 548N, one or more port allocation tables 604 and one or more IP addresses 630. The packet engine 548N may execute a fragmentation module 650, which may further access a fragmentation table 655 accessible by both the packet engine 548N and the fragmentation module 650. Each port allocation table 604 may store or track one or more ports 632.
Referring further to FIG. 6B, and in more detail, in one embodiment the multi-core system 545 may be any multi-core system 545 described above. Similarly, the cores 505 may be any core 505 described above. In one embodiment, each of the cores 505 within the multi-core system 545 includes the elements of the core 505 illustrated in FIG. 6B. In other embodiments, the cores 505 of the multi-core system 545 include some combination of the elements of the core 505 illustrated in FIG. 6B.
In one embodiment, the core 505 may execute a flow distributor 550 or an instance of a flow distributor 550. In some embodiments, the core 505 may execute multiple instances of the flow distributor 550. The flow distributor 550 may be any flow distributor 550 described herein. In other embodiments, the core 505 does not execute or include a flow distributor 550 or an instance thereof. In these embodiments, the core 505 may communicate, via the packet engine 548N or another program or module executing on the core 505, with a flow distributor 550 executing on another core 505 within the multi-core system 545 or on another device.
The core 505, or the packet engine 548 executing on the core 505, may access, or otherwise be associated with, multiple port allocation tables 604 as described above. In one embodiment, the core 505 may access a single port allocation table, while in other embodiments the core 505 may access "N" port allocation tables, where "N" is an integer greater than zero. The port allocation tables 604 may be any port allocation table 604 described herein. While FIGs. 6A-6B depict port allocation tables, in other embodiments each core 505 may access a port listing that includes available and unavailable ports. In still other embodiments, each core 505 may access a repository storing availability information about each port 632 of the core 505.
In most embodiments, the port allocation table 604 tracks the characteristics or status of the ports 632 of the core 505, or of the ports used by the core 505. The port allocation table 604 may track which ports are available, open or free across all local IP addresses of the multi-core system 545 or of the core 505. In many embodiments, the ports 632 may be any port described herein, and may be any port at all. In some embodiments, a port 632 is associated with a particular port allocation table 604. For example, port allocation table A 604A tracks ports 1-N 632A-N, and port allocation table B 604B also tracks ports 1-N 632A-N. In each case, the ports 632 tracked by a port allocation table are dedicated to that port allocation table. Thus, although the ports 632 may carry the same numbers, the ports 632 tracked by port allocation table A 604A are dedicated to port allocation table A 604A, and the ports 632 tracked by port allocation table B 604B are dedicated to port allocation table B 604B. Which table a port is selected from is determined by the characteristics of the tuple of the data packet to which the port is assigned. For example, a first data packet has a first tuple with a first client IP address and a first destination address, while a second data packet has a second, different tuple comprising one or both of a different client IP address and a different destination address, i.e., a second client IP address and a second destination address. While the first data packet and the second data packet may be assigned the same port number, the first data packet is associated with the port allocation table 604 corresponding to the first client IP address and/or the first destination address, and the second data packet is associated with the port allocation table 604 corresponding to the second client IP address and/or the second destination address.
In some embodiments, the port allocation tables 604, or a portion of a port allocation table, may be stored on a computing machine or repository remote from the multi-core system 545. The port allocation tables 604 may be stored on an appliance, computer or repository located outside of the multi-core system 545. When a port allocation table 604 is located outside of the multi-core system 545, the computer or device, or a program or agent executing on that computer, device or repository, may communicate with the multi-core system 545. Once communication between the remote port allocation table 604 and the multi-core system 545 is established, the packet engines 548 within the multi-core system 545 may query and update the remote port allocation table 604 in substantially the same manner in which the packet engines 548 query and update a local port allocation table 604.
In some embodiments, each core 505 in the multi-core system 545 has one or more IP addresses 630A-N (referred to collectively as IP addresses 630). An IP address 630 can be any IP address or address range, and can be any IP address 630 described herein. In one embodiment, each port allocation table 604 can be associated with a particular IP address 630. In some embodiments, this IP address 630 can be a proxy or dummy IP address, such as 0.0.0.1. Similarly, in some embodiments, each core 505 of the multi-core system 545 can be associated with a particular IP address 630 or IP address range.
In some embodiments, the packet engine 548N executes or includes a segmentation module 650. In some embodiments, the segmentation module 650 can be a hardware element included in the multi-core system 545. In other embodiments, the segmentation module 650 can be a software module executing on the core 505. In still other embodiments, the packet engine 548N executes a segmentation module 650 that comprises any combination of hardware and software. In some embodiments, the segmentation module 650 is included in the packet engine 548, so that the packet engine 548 carries out the instructions otherwise carried out by the segmentation module 650. Further, in some embodiments, the packet engine 548N can access a segmentation table 655 stored in a memory of the multi-core system 545. In some embodiments, the segmentation module 650 takes segmented data packets as input and applies a segmentation action. In embodiments where the segmentation action is "assemble," the segmentation module 650 assembles the packet segments to regenerate or re-create the data packet. In other embodiments where the segmentation action is "bridge," the segmentation module 650 sends the packet segments to a different core 505, where they are reassembled into the original data packet. In some embodiments, whether the segmentation action is "assemble" or "bridge," the segmentation module 650 assembles the packet segments to regenerate or re-create the data packet. In some embodiments, the segmentation action can specify any of the following: assemble a portion of the packet segments and bridge the remaining segments; mark the packet segments before bridging them; assemble only those packet segments that have a pre-configured set of characteristics; or assemble the header of the data packet and send the remainder of the packet segments to a different core 505 for reassembly.
In one embodiment, the segmentation module 650 determines the segmentation action based in part on whether a protocol control block (PCB) or network address translation protocol control block (NATPCB) has been created. When a PCB or NATPCB exists, the packet engine 548 or flow distributor 550 that receives the segmented data packet first determines the destination core for the packet segments. The segmentation action applied to the packet segments can also be determined based in part on the type of connection between the multi-core system 545 and the computer that originated the segmented packet. In some embodiments, determining the segmentation action includes performing PCB, NATPCB, segmentation-rule, reverse NAT (RNAT), and service lookups. In one embodiment, the packet engine or flow distributor that receives the packet segments forwards the determined segmentation action to the packet engine or flow distributor executing on the destination core, so that the segmentation action can be applied to the packet segments when they are sent to the destination core.
In other embodiments, when a PCB or NATPCB exists, the packet engine 548 or flow distributor 550 that receives the segmented data packet first assembles the packet segments into a reassembled packet until a complete packet header is available, and then determines the destination core for the packet segments. If the core that received the packet segments is not the destination core, the packet engine or flow distributor on the receiving core performs NATPCB/PCB lookups until the segmentation action is determined. In embodiments where the receiving core is the destination core, the packet engine on the receiving core performs service and RNAT lookups to determine the segmentation action.
In many embodiments, when the receiving core is not the destination core, the receiving core can determine the segmentation action and send a message indicating the correct segmentation action to the destination core. In one embodiment, the packet engine on the receiving core sends the segmentation action together with the following values: source IP address; destination IP address; source port; and destination port. The determined segmentation action can be stored in the segmentation table 655. In some embodiments, when the destination core receives the segmentation action, the packet engine or flow distributor on the destination core stores the segmentation action in the segmentation table 655. The segmentation action can be stored together with any of the following identifying information: client IP address; source IP address; destination IP address; source port; client port; or destination port.
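One possible way the segmentation table 655 could be keyed is sketched below; the dictionary layout, the field names, and the fallback action are assumptions made only for illustration.

# Segmentation table 655: maps identifying values of a segmented packet to the
# segmentation action ("assemble" or "bridge") decided for that flow.
segmentation_table = {}

def store_segmentation_action(src_ip, dst_ip, src_port, dst_port, action):
    segmentation_table[(src_ip, dst_ip, src_port, dst_port)] = action

def lookup_segmentation_action(src_ip, dst_ip, src_port, dst_port, default="assemble"):
    # Fall back to reassembly when no action has been recorded for this flow
    # (the default is an assumption, not specified above).
    return segmentation_table.get((src_ip, dst_ip, src_port, dst_port), default)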
In some embodiments, when the received packet segments are UDP fragments, each packet is hashed on a two-tuple. The two-tuple can include any of the following values: client IP address; source IP address; destination IP address; source port; client port; or destination port. The segmentation action and the destination core can be determined according to any of the methods described above.
The distribution of data packets, network traffic, or requests and responses can be implemented by any of the parallel computing schemes described herein. In one embodiment, network traffic can be distributed based on symmetric flow distribution. Symmetric flow distribution can be implemented using a Toeplitz hash, or any comparable hash, to determine a destination core for each data packet received by the multi-core system 545. In some embodiments, the symmetric flow distribution hash, or symmetric hash distribution (SHD), is substantially the same as the hash used by the RSS module 560. The hash operates on an input byte stream, such as a sequence of tuple values, together with a key supplied to the RSS driver inside the RSS module 560 for the hash computation. When an array of "N" bytes is input into the hash function, the byte stream can be identified as input[0] input[1] input[2] ... input[N-1], where the leftmost byte is input[0] and its leftmost bit is the most significant bit of input[0], and where the rightmost byte is input[N-1] and its rightmost bit is the least significant bit of input[N-1]. In some embodiments, the hash operates according to the following relation:
For each bit "B" of the input, taken from left to right over all "N" input bytes: if "B" equals 1, then XOR the leftmost 32 bits of K into "result"; then shift K left by one bit. After all bits have been processed, return "result".
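A minimal Python sketch of the relation above follows; the function name toeplitz_hash and the 40-byte placeholder key in the usage note are illustrative assumptions.

def toeplitz_hash(key: bytes, data: bytes) -> int:
    """For every input bit that is 1, XOR the leftmost 32 bits of K into the
    result, then shift K left by one bit, as described in the relation above."""
    key_int = int.from_bytes(key, "big")
    key_bits = len(key) * 8
    mask = (1 << key_bits) - 1
    result = 0
    for byte in data:
        for bit in range(7, -1, -1):              # most significant bit first
            if (byte >> bit) & 1:
                result ^= (key_int >> (key_bits - 32)) & 0xFFFFFFFF
            key_int = (key_int << 1) & mask       # shift K left by 1
    return result

# Example usage with a placeholder 40-byte key:
# key = bytes(range(40))
# hash_result = toeplitz_hash(key, input_bytes)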
In some embodiments, the hash distributes over the XOR operation according to the following equation or relation: Hash(A xor B) = Hash(A) xor Hash(B). In other embodiments, the hash can distribute over any logical operation, such as NAND, NOR, OR, AND, or any other logical function in the methods and systems described herein.
The sequence of tuple values input into the hash can be a concatenation of any of the following values: client IP address; source IP address; destination IP address; local IP address; dummy IP address; assigned IP address; appliance IP address; client port; source port; destination port; local port; dummy port; assigned port; appliance port; or any other IP address or port. In some embodiments, the order of the tuple is maintained, i.e. the tuple is the concatenation of the client IP address, the client port, the destination IP address, and the destination port. The tuple can include two, four, six, or any number of values. Further, the tuple can include values of any type, e.g. numeric, binary, ternary, alphabetic, or alphanumeric.
The following examples show how the hash is applied for different versions of the Internet Protocol when TCP or UDP is used. These examples are intended to illustrate application of the hash and are not meant to limit its scope.
Example 1 - IPv4: TCP/UDP
In this example, the tuple is the concatenation of the following values: source address; destination address; source port; and destination port. The tuple, or input string, can therefore be characterized by the following relation: INPUT[12] = @12-15, @16-19, @20-21, @22-23, where the notation @n-m identifies a byte range, e.g. n=12, m=15 for @12-15. Applying the hash to this input string can be characterized by the following formula:
Hash Result=ComputeHash(Input,12)
Example 2 - IPv4: Other
In this example, the tuple is the concatenation of the following values: source address; and destination address. The tuple, or input string, can therefore be characterized by the following relation: INPUT[8] = @12-15, @16-19, where the notation @n-m identifies a byte range, e.g. n=12, m=15 for @12-15. Applying the hash to this input string can be characterized by the following formula:
Hash Result=ComputeHash(Input,8)
Example 3 - IPv6: TCP/UDP
In this example, the tuple is the concatenation of the following values: source address; destination address; source port; and destination port. The tuple, or input string, can therefore be characterized by the following relation: INPUT[36] = @8-23, @24-39, @40-41, @42-43, where the notation @n-m identifies a byte range, e.g. n=8, m=23 for @8-23. Applying the hash to this input string can be characterized by the following formula:
Hash Result=ComputeHash(Input,36)
Example 4 - IPv6: Other
In this example, the tuple is the concatenation of the following values: source address; and destination address. The tuple, or input string, can therefore be characterized by the following relation: INPUT[32] = @8-23, @24-39, where the notation @n-m identifies a byte range, e.g. n=8, m=23 for @8-23. Applying the hash to this input string can be characterized by the following formula:
Hash Result=ComputeHash(Input,32)
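Assuming the byte ranges in the four examples are offsets from the start of the IP header (and, for IPv4, that the header carries no options so the transport header begins at byte 20), the input strings could be assembled as in the sketch below; the function names are illustrative and the toeplitz_hash helper is the earlier sketch.

def rss_input_ipv4_tcp_udp(pkt: bytes) -> bytes:
    # Example 1: @12-15 src addr, @16-19 dst addr, @20-21 src port, @22-23 dst port
    return pkt[12:16] + pkt[16:20] + pkt[20:22] + pkt[22:24]

def rss_input_ipv4_other(pkt: bytes) -> bytes:
    # Example 2: @12-15 src addr, @16-19 dst addr
    return pkt[12:16] + pkt[16:20]

def rss_input_ipv6_tcp_udp(pkt: bytes) -> bytes:
    # Example 3: @8-23 src addr, @24-39 dst addr, @40-41 src port, @42-43 dst port
    return pkt[8:24] + pkt[24:40] + pkt[40:42] + pkt[42:44]

def rss_input_ipv6_other(pkt: bytes) -> bytes:
    # Example 4: @8-23 src addr, @24-39 dst addr
    return pkt[8:24] + pkt[24:40]

# hash_result = toeplitz_hash(key, rss_input_ipv4_tcp_udp(pkt))   # Example 1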
In some embodiments, when the multi-core system 545 intercepts or otherwise processes data packets and/or network traffic that do not use the Internet Protocol, no hash is computed. In these embodiments, non-IP packets or traffic can be routed to a default core 505. That core 505 can be dedicated to processing non-IP packets, or can have some of its resources allocated to processing non-IP network traffic.
2. Methods and systems for providing symmetric request and response processing
Distributing network traffic across one or more cores in a multi-core system can include: obtaining a data packet or request, identifying a tuple of the packet, applying a hash to the tuple, and forwarding the packet to the core identified by the hash result. The hash can be any of the hashes described above, or any hash having the characteristics of those hashes. In particular, the hash can be any hash that, when applied to a tuple, produces a result identifying at least one core in the multi-core system. The tuple can include any number of characteristics of the data packet. In some embodiments, the tuple includes the source IP address, the destination IP address, the source port, and the destination port.
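A condensed sketch of these four steps follows, under the assumption that each core exposes a simple receive queue and reusing the hypothetical toeplitz_hash helper from the earlier sketch; the modulo step stands in for whatever rule maps the result value to a core.

import socket
import struct

def four_tuple(src_ip, dst_ip, src_port, dst_port) -> bytes:
    # Concatenate source IP, destination IP, source port, destination port.
    return (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
            + struct.pack("!HH", src_port, dst_port))

def distribute(packet, cores, key):
    tup = four_tuple(packet["src_ip"], packet["dst_ip"],
                     packet["src_port"], packet["dst_port"])
    result = toeplitz_hash(key, tup)       # apply the hash to the tuple
    core = cores[result % len(cores)]      # the result identifies a core
    core.receive_queue.append(packet)      # forward to that core's packet engine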
To ensure that responses related to a previously processed data packet, or other related packets, are forwarded or otherwise distributed to the same core, the packet engine selects an IP address of the packet engine or core, and a port number of the packet engine or core, that together with the destination IP address and destination port form a second tuple. Applying the hash described above to this second tuple produces a hash result that identifies the first core. By modifying the request with this second tuple, the packet engine ensures that any response to the request will contain the second tuple. Accordingly, when the flow distributor applies the hash to the tuple of the response, the result identifies the first core, and the flow distributor distributes the response to the same first core to which the request was distributed.
Each tuple is sufficiently unique that the hash results are distinct enough to achieve symmetric request and response processing among the one or more packet engines executing on the cores of the multi-core system 545. The hash is symmetric in the sense that there can exist another tuple for which the hash generates the same result as it generates for the first tuple. To ensure that response packets return to the same core, the packet engine selects the elements of the second tuple so that they generate substantially the same hash result as the first hash, where the first hash is the result of applying the hash to the first tuple.
Fig. 7A is a flow diagram depicting an embodiment of a method 700 for distributing network traffic across one or more cores 505 of a multi-core system 545 using the hash described above. First, the flow distributor 550 or RSS module 560 of the multi-core system 545 receives a data packet from a client, server, or other computer (step 704) and computes a hash value by applying the hash to a first tuple of the received packet (step 706). The first tuple can include the client IP address, the destination IP address, the client port, and the destination port. In some embodiments, the value resulting from applying the hash to the first tuple is sometimes simply referred to as the hash. A core 505 in the multi-core system 545 is selected based on the hash result value (step 708), and the received packet is forwarded to the selected core (step 710). At this point, the first tuple still contains the following values: client IP address; destination IP address; client port; and destination port. The packet engine 548 on the selected core 505 receives the packet and updates the tuple with an IP address of the multi-core system 545, of the appliance 200, or of the selected core 505 (step 712). The first tuple now contains the following values: selected IP address; destination IP address; client port; and destination port. The packet engine 548 can then identify a port that, when substituted for the client port in the first tuple, causes return packets to be directed back to the selected core 505. Once that port is identified, the packet engine 548 updates the first tuple with the selected port (step 714). The elements of the first tuple are now: selected IP address; destination address; selected port; and destination port. The data packet and its modified tuple are then sent to a server, client, or other computer (step 716). Any response to this packet is forwarded to, and received by, the multi-core system 545 (step 704), and the method 700 then repeats.
With further reference to Fig. 7A, in one embodiment the client IP address and client port can refer to a source IP address and source port. The source IP address identifies the computer or device from which the data packet originated; in some embodiments, that source computer or device generated the packet. In one embodiment, the client IP address can refer to a client, while in other embodiments it can refer to a server or another computer or device. Similarly, the destination IP address identifies the destination computer or device to which the packet is sent. In some embodiments, the destination computer or device is a server, while in other embodiments it is a client or another computer or device.
In some embodiments, the steps of the method 700 are performed by the flow distributor 550. In other embodiments, the steps can be performed by the RSS module 560, or by a combination of the RSS module 560 and the flow distributor 550. In still other embodiments, the flow distributor 550 is used when the NIC 552 is an RSS-unaware NIC 552, i.e. when the NIC 552 does not include an RSS module 560. In other embodiments, another distribution module executing in the multi-core system 545 or on a client can perform any of the actions or steps performed by the flow distributor.
In some embodiments, the data packet received from a client is a request; in other embodiments, it is a notification, a response, an update, or any other type of information or communication. In some embodiments, the data packet received from a server is a response; in other embodiments, it is a notification, a request, an update, or any other type of information or communication.
In many embodiments, the multi-core system 545 receives data packets (step 704) from clients and/or servers on the network 104. In most embodiments, the multi-core system 545 is installed in front of one or more servers, clients, and other computers and devices, so that any data packet traveling to or from those servers, clients, and other machines must pass through the multi-core system 545. Thus, in some embodiments, a NIC 552 in the multi-core system 545 receives all data packets, while in other embodiments one or more NICs 552 in the multi-core system 545 receive each packet traveling to or from the servers, clients, and computers. The flow distributor 550 of the multi-core system 545 drains, or otherwise obtains, the received packets from a receive queue of the NIC 552. When a packet is obtained from the NIC 552 receive queue, the flow distributor 550 determines which core 505 in the multi-core system 545 the packet should be sent to.
When the flow distributor 550 obtains a data packet from the NIC 552 receive queue, the packet carries a series of values that together form a tuple. In some embodiments, this tuple or series of values includes a client IP address, a destination IP address, a client port, and a destination port. The client IP address is the IP address of the source of the packet, which in some cases is a client and in other cases a server or another computer. The destination IP address is the IP address of the computer or device to which the packet is sent; thus, in some cases the destination IP address is the address of a server, and in other cases it is the address of a client. The client port and destination port are ports associated with the source machine and the destination machine, respectively. These ports are usually configured before the packet is transmitted, but in some embodiments the client port and/or destination port is a dummy port or proxy port, and in other embodiments the client port and/or destination port is a default port.
Once the multi-core system 545 receives the data packet, the flow distributor 550, or any other module or program executing in the multi-core system 545, can apply the hash described above to the first tuple (step 706). In some embodiments, the first tuple is created before the hash is applied. The first tuple can be created by concatenating the client IP address, the destination IP address, the client port, and the destination port. In some embodiments, these values are stored in a header of the packet; in other embodiments, they are stored in metadata associated with the packet; in still other embodiments, they are stored in the payload of the packet and must be extracted from the packet before the tuple can be created. In some embodiments, the concatenation can be performed by any of the RSS module 560, the flow distributor 550, or a concatenation program or module executing in the multi-core system 545. In other embodiments, the concatenation occurs as part of the hash. In some embodiments, the hash can be applied according to any of the methods described above. In many cases, applying the hash produces an output, such as a result value, a hash value, or any other value representing the result of applying the hash to the first tuple.
In some embodiments, the hash can be computed by the flow distributor 550 or another module executing in the multi-core system 545, while in other embodiments it can be computed by a computer or device outside the multi-core system 545. In one embodiment, a remote router located outside the multi-core system 545 can intercept data packets before they are received by the multi-core system 545. In this embodiment, the router can apply the hash to each packet to determine which core 505 in the multi-core system 545 should receive it. After determining which core 505 a particular packet should be sent to, the router can send the packet to an address of the multi-core system 545 in a manner that allows the multi-core system 545 to forward the packet to the appropriate core 505. In other embodiments, the hash can be applied by a different computer or device.
In some embodiments, the flow distributor 550 or RSS module 560 can select a core 505 of the multi-core system 545 (step 708) based on the result value of applying the hash to the first tuple. In some embodiments, the value generated by the hash points to or identifies a core 505 in the multi-core system 545. This property of the hash can be used to distribute network traffic substantially evenly among the cores 505 of the multi-core system 545. In one embodiment, a table listing the possible hash values and their corresponding cores can be stored in a memory element or repository of the multi-core system 545. After the hash is applied and a result value obtained, the flow distributor 550 or RSS module 560 can look up in this table the core corresponding to the resulting hash value. The entries of this table can be designed to ensure an even distribution of network traffic across the cores 505.
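One way such a table could be realized is an indirection table indexed by the low bits of the hash result, as sketched below; the table size and the round-robin fill are assumptions chosen only to illustrate an even spread of traffic.

NUM_CORES = 8
TABLE_SIZE = 128                     # power of two, so low bits of the hash index it

# Round-robin entries give a substantially even spread of traffic across cores.
indirection_table = [i % NUM_CORES for i in range(TABLE_SIZE)]

def select_core(hash_result: int) -> int:
    """Look up the core corresponding to a 32-bit hash result (step 708)."""
    return indirection_table[hash_result & (TABLE_SIZE - 1)]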
After a core 505 has been selected, the data packet is forwarded to the selected core 505 (step 710). The packet can be forwarded by any of the flow distributor 550, the RSS module 560, or an inter-core communicator (not shown). In some embodiments, forwarding the packet includes copying it into a memory element, repository, or cache accessible by each core 505 in the multi-core system 545, and sending a message to the core 505 selected to receive the packet, the message indicating that the packet is stored in memory and can be downloaded by, or delivered to, the selected core 505. The packet engine 548, or another module executing on the selected core 505, can then access the shared memory element to download the packet. In other embodiments, the packet can be forwarded to the selected core 505 by a core-to-core messaging system that uses an internal network comprising each core 505 in the multi-core system 545. The core-to-core messaging system can use the internal network of the multi-core system 545 together with an address dedicated to each core 505 or packet engine 548 in the multi-core system 545. In some embodiments, the core-to-core messaging system sends the packet to the destination address of the core corresponding to the selected core 505.
When the data packet is forwarded or sent to the selected core 505 (step 710), the packet can be received by the packet engine 548 executing on the selected core 505. In some embodiments, the packet engine 548 manages the reception and transmission of data packets forwarded to the core 505. Upon receiving a packet, the packet engine 548 can make any number of determinations about the packet and can perform any number of operations on it. In one embodiment, the packet engine 548 can determine that the source IP address and source port of the first tuple need not be maintained. Based on this determination, the packet engine 548 can alter the first tuple so that it contains a different source IP address and/or a different source port.
When the determination is that the packet engine 548 may modify either or both of the client IP address and the client port, the packet engine 548 can replace the client IP address with an IP address of the selected core 505 (step 712). In some embodiments, this IP address can be an IP address of the multi-core system 545. In other embodiments, this IP address can be an IP address of the appliance 200. In still other embodiments, this IP address can be any of the IP addresses of the selected core 505. In some embodiments, the selected core 505 can have one or more IP addresses 630. In one embodiment, the packet engine 548 can select one of the IP addresses 630 and replace the client IP address with the selected IP address 630. Once the tuple has been modified with the selected IP address 630, the first tuple contains the selected IP address 630, the client port, the destination IP address, and the destination port.
In some embodiments, the packet engine 548 selects a port from among the ports 632 of the selected core 505. In one embodiment, the packet engine 548 selects the port by repeatedly applying the hash described above to combinations of each possible IP address 630 and port 632. The packet engine 548 selects a port 632 such that, when that port 632 is included in the first tuple and the hash is applied to the first tuple, the result identifies the selected core 505. For example, the packet engine 548 can select an IP address 630 and then modify the first tuple with each available port 632 of the selected core 505 in turn, until the output of the hash identifies the selected core 505. In some embodiments, the packet engine 548 modifies the tuple with the selected port. Modifying the tuple can include inserting the selected port into the tuple (step 714), or replacing the client port with the selected port. Once the tuple has been modified with the selected port, it contains the following values: selected IP address; destination IP address; selected port; and destination port.
In most embodiments, the packet engine 548 sends the data packet with its modified tuple to a client or server (step 716). If the packet originated from a server, then in many embodiments the packet engine 548 sends it to a client, and vice versa. In some embodiments, the packet engine 548 sends the packet to the computer or device corresponding to the destination IP address. In other embodiments, the packet engine 548 first sends the packet to an intermediary or proxy server or appliance, which then sends the packet to the destination computer or device.
Once the data packet has been sent to the destination computer or device, the multi-core system 545 can receive another data packet (step 704). In some embodiments, the method 700 continues for as long as the multi-core system 545 receives and sends data packets and network traffic. Although Fig. 7A illustrates a single instance of the method 700 in which each step occurs by itself, in other embodiments multiple steps of the method 700 can occur concurrently. For example, the multi-core system 545 can receive a packet from a client or server (step 704) while, at substantially the same time, a packet engine 548 receives a forwarded packet (step 710). In another example, the packet engine 548B on a second core 505B receives a forwarded packet (step 710) while, at substantially the same time, the packet engine 548A on a first core 505A receives a forwarded packet (step 710). Thus, any number of steps, including the same step, can occur substantially simultaneously.
Fig. 8 illustrates an embodiment of a method 800 for distributing data packets across the cores 505 of a multi-core system 545. In one embodiment, the multi-core system 545 receives a data packet (step 802), and the flow distributor 550 or RSS module 560 identifies a tuple of the packet (step 804). After the tuple is identified, the hash described above is applied to it to generate a result value (step 806). In most embodiments, this result value identifies a core 505 in the multi-core system 545. The RSS module 560 or flow distributor 550 sends the packet to the core 505 identified by the resulting hash value (step 808). In some embodiments, the packet engine 548 on the selected core 505 receives the packet and selects an IP address and a port of the selected core 505 (step 810). The packet engine 548 can then determine whether a hash of the selected IP address, the selected port, and a portion of the tuple generates a value identifying the selected core 505. When it is determined that the value generated by applying the hash to that tuple identifies the selected core 505, the packet engine 548 modifies the tuple with the selected IP address and port (step 814). After the tuple has been modified, the packet engine 548, or another module executing on the selected core 505, forwards the modified packet to a remote computing machine (step 816).
Continuing with reference to Fig. 8, in greater detail, in one embodiment a NIC 552 in the multi-core system 545 receives one or more data packets sent to the multi-core system 545 over the network 104 (step 802). In one embodiment, the flow distributor obtains the packets from the NIC 552. In other embodiments, the RSS module 560, a packet engine 548, or another distribution module or program drains or otherwise obtains the packets from the NIC 552. The flow distributor can drain or obtain the packets from a receive queue on the NIC 552.
In some embodiments, the received data packet is a client request, while in other embodiments it is a server response. To ensure that the server response is processed by the same core 505 that processed the client request, the packet engine 548 executing on the first core 505 selects an IP address and a port number that will cause the server response to be distributed to the first core 505. The IP address and port number are selected so that, when combined with the destination IP address and destination port number, the resulting tuple identifies the first core 505. This resulting tuple (i.e. the second tuple) identifies the first core 505 in the sense that, when the hash function described above is applied to it, the hash result identifies the first core 505. When the server generates a response, the response contains the selected IP address, the selected port number, the destination IP address, and the destination port number. Thus, when the flow distributor 550 applies the hash to the tuple of the server response, the hash result identifies the first core 505, and the server response is forwarded or assigned to the first core 505.
When the flow distributor 550 has received a data packet, the flow distributor or distribution module can identify a tuple of the packet (step 804). In some embodiments, the tuple can include any combination of the following values: client IP address; destination IP address; client port; destination port; or any other IP address, port, or other source or destination identifying value. In some embodiments, the client IP address can be a source IP address; similarly, in some embodiments, the client port can be a source port. In some embodiments, identifying the tuple of the packet can include generating the tuple by concatenating any of the above values to create a string. In some embodiments, the tuple is a string or array of values.
In some embodiments, a hash or hash value is computed by applying the hash described above to the identified tuple (step 806). The hash value may be referred to by any of the following terms: hash; hash value; result value; result; or value. The hash can be applied by the flow distributor 550, or by any other distribution module executing in the multi-core system 545.
After the hash is applied, it is determined whether the result value identifies a core 505 in the multi-core system 545. When the hash result identifies a particular core 505, the flow distributor 550, or any other flow distribution module, forwards the packet to the identified core 505 (step 808). In one embodiment, the flow distributor 550 forwards the packet to the packet engine 548 executing on the identified core 505. The identified core 505 may be referred to as the first core 505. In some embodiments, the hash result may not identify a core 505 in the multi-core system 545; in those embodiments, the packet can be forwarded to a default core 505 in the multi-core system 545. In other embodiments, the packet may have no associated tuple; in those embodiments as well, the packet can be forwarded to a default core 505 in the multi-core system 545.
When the data packet is forwarded to the identified core 505, the packet engine 548, or another module or engine executing on the identified core 505, can receive the forwarded packet. In some embodiments, a communication module executing on the identified core 505 receives the packet and forwards it to the packet engine 548 on that core 505. When the packet engine 548 receives the forwarded packet, it can select an IP address of the core 505 and a port of the core (step 810). In some embodiments, this IP address can be an IP address of the multi-core system 545 or of the appliance 200; in other embodiments, it can be an IP address of the core 505. A core 505 can have one or more IP addresses, so in some embodiments the packet engine 548 can select the IP address based on a determination of whether that IP address, combined with the selected port and a portion of the first tuple, identifies the identified core 505.
Selecting a port of the core 505 can include searching the ports associated with the selected core 505 to identify a port that, when included in the first tuple, identifies the selected core 505. In some embodiments, the packet engine 548 can cycle through each IP address of the core 505 and each port of the core 505 to identify an IP address/port combination that identifies the selected core 505. For example, the selected core 505 may be a first core 505 associated with a tuple containing the client IP address, client port, destination IP address, and destination port. The packet engine 548 can modify this tuple to contain a selected IP address, a selected port, the destination IP address, and the destination port. Before permanently modifying the packet, the packet engine 548 first applies the hash described above to the modified tuple (step 812). If the resulting hash value identifies the first core 505, the packet engine 548 permanently modifies the packet, replacing or modifying the client IP address with the selected IP address and replacing or modifying the client port with the selected port. If the resulting hash value does not identify the first core 505, the packet engine 548 modifies one or both of the selected port and the selected IP address and applies the hash again.
In some embodiments, selecting a port number or IP address can include: selecting an IP address from one or more IP addresses of the first core 505, or of the packet engine 548 executing on the first core 505, and selecting a port number from a port table associated with the first core 505, or from one or more port numbers associated with the first core 505 or the packet engine 548. The packet engine 548 can select an IP address and a first port number. Upon determining that the first port number is unavailable, the packet engine 548 can select a second port number and determine whether the second port number is available. Upon determining that the second port number is available, the packet engine 548 can apply the hash described above to a tuple containing the selected IP address, the second port number, the destination IP address, and the destination port number. Upon determining that the resulting hash value identifies the first core 505, the packet engine 548 modifies the client request to contain the selected IP address, the second port number, the destination IP address, and the destination port number. In other embodiments, the packet engine 548 can determine that the first port is unavailable, select a second IP address from the available IP addresses, and select a second port number from the ports associated with the first core 505. The packet engine 548 can then apply the hash described above to a tuple containing the second IP address, the second port number, the destination IP address, and the destination port. Upon determining that the resulting hash identifies the first core 505, the packet engine 548 can update the client request with this tuple so that the client request identifies the second IP address and the second port number.
After applying the hash described above (step 812) to verify that the selected port and selected IP address, when combined with the destination IP address and destination port, identify the selected core 505, the packet engine can modify the data packet so that its tuple contains: the selected IP address; the destination IP address; the selected port; and the destination port (step 814). In this embodiment, the client IP address and client port are no longer included in the tuple; in other words, those values are replaced by the selected IP address and the selected port.
In many embodiments, after modifying the data packet and tuple, the packet engine 548 sends the updated packet and tuple to a remote computing device (step 816). In some embodiments, the remote computing device can be a client, a server, or another computer or device remote from the multi-core system 545. In other embodiments, the packet engine 548 can send the modified packet to an intermediary device, which forwards the packet to its destination. In some embodiments, the destination can be identified by the destination IP address and/or the destination port.
In some embodiments, the method 800 further includes the flow distributor 550 receiving a response to the client request that was assigned to the first core 505. The response can be generated by a server and can contain a tuple (i.e. the second tuple) that includes the selected IP address, the selected port number, the destination IP address, and the destination port. The flow distributor 550 applies the hash to this tuple of the response, and the resulting hash value identifies the first core 505. Upon making this determination, the flow distributor distributes the server response to the first core 505, or to the packet engine 548 executing on the first core 505.
In other embodiments, the method 800 can further include the packet engine 548 executing on the first core 505 updating a port allocation table associated with the first core 505 and/or the packet engine 548. The packet engine 548 can update the port allocation table with a record or information indicating that the selected port number, which is included in the second tuple of the server response and in the modified client request, has been assigned to a data packet. As a result, no subsequent data packet or request processed by the first packet engine 548 on the first core 505 receives this selected port number, because the port allocation table identifies the port number as unavailable.
An example of the method 800 applied to a client request and the corresponding server response is as follows. The flow distributor 550 receives a client request generated by a client communicating with the multi-core system 545. The flow distributor 550 identifies a first tuple of the client request, the first tuple containing the client IP address, the destination IP address, the client port, and the destination port. After identifying the first tuple, the flow distributor 550 applies the hash function described above to the first tuple to produce a hash result that identifies a first core 505 in the multi-core system 545. The flow distributor 550 then forwards the client request to the first core 505, where it is received by a first packet engine executing on the first core 505. The first packet engine 548 receives the client request and selects an IP address of the first core 505 or first packet engine 548, and a port number of the first core 505 or first packet engine 548. The IP address and port are selected so that applying the hash to a second tuple containing the selected IP address, the destination IP address, the selected port, and the destination port will produce a hash result that identifies the first core 505. Doing so causes any response to the client request to be distributed to the first core 505, rather than to another core 505 in the multi-core system 545. Ensuring that the same core 505 processes both the request and the response reduces the need for the multi-core system 545 to generate unnecessary copies of the data packets being processed, and ensures symmetric handling of the request and response. Once the first packet engine 548 has selected the IP address and port number, the first packet engine 548 sends the client request to a server. The flow distributor 550 then receives the server's response to the client request, the response having a second tuple containing the selected IP address, the selected port number, the destination IP address, and the destination port. The flow distributor 550 applies the hash to this second tuple, and the hash result identifies the first core 505.
Accordingly, the flow distributor 550 forwards the server response to the first packet engine 548 on the first core 505 for processing.
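The example above can be condensed into the following check, reusing the hypothetical toeplitz_hash, four_tuple, and select_core helpers sketched earlier: the first tuple of the request and the second tuple carried back in the server response both hash to the first core.

def verify_symmetric_distribution(key, client_ip, client_port, dst_ip, dst_port,
                                  selected_ip, selected_port):
    # First tuple: as received in the client request.
    first = four_tuple(client_ip, dst_ip, client_port, dst_port)
    first_core = select_core(toeplitz_hash(key, first))

    # Second tuple: the values the packet engine wrote into the request, which
    # the server response therefore also contains.
    second = four_tuple(selected_ip, dst_ip, selected_port, dst_port)
    assert select_core(toeplitz_hash(key, second)) == first_core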
Fig. 9 shows an embodiment of a method 900 for distributing network traffic across the cores 505 of a multi-core system 545. The method 900 depicted in Fig. 9 shows how the packet engine 548 on a core 505 processes a received data packet. The packet engine 548 receives the assigned data packet (step 902) and selects an IP address of the core 505 on which the packet engine 548 executes (step 904). The packet engine 548 also selects a port number from the port numbers assigned to, or associated with, the core 505 or packet engine 548 (step 906). Once an IP address and port number have been selected, the packet engine 548 determines whether a hash of the selected IP address, the selected port number, the destination IP address, and the destination port number identifies the current core 505. Specifically, the packet engine 548 determines whether the selected port number identifies the current core 505 (step 908). Upon determining that the selected port number does not identify the current core 505, the packet engine 548 selects the next port number from the port numbers associated with the core 505 (step 906). Upon determining that the selected port number does identify the current core 505, the packet engine 548 then determines whether the selected port number is open or otherwise available (step 910). Upon determining that the selected port number is not open, the packet engine 548 selects the next port number from the port numbers associated with the core 505 (step 906). Upon determining that the selected port number is open or otherwise available, the packet engine 548 modifies the data packet with the selected IP address and the selected port number (step 912) and forwards the packet and its modified tuple to a remote computing machine (step 914).
Continuing with reference to Fig. 9, in greater detail, in one embodiment the method 900 can be performed by the packet engine 548 executing on a core 505. In another embodiment, the method 900 can be performed by the flow distributor 550, or by an instance of the flow distributor 550 executing on the core 505. In other embodiments, the method 900 can be performed by any flow distribution module or agent able to execute on a core 505. Although Fig. 9 shows the packet being processed and modified in part on a particular core 505, in some embodiments the modification of the data packet can be handled by a control core in the multi-core system 545.
The packet engine 548 performing the steps of the method 900 described in Fig. 9 can execute on a particular core 505. In most embodiments, that core 505 has already been selected by the method 800 shown in Fig. 8. Thus, in most cases, the data packet received by the packet engine 548 has already been assigned to the core 505 based on applying the hash described above to a tuple of the packet. In most cases, this tuple includes at least the client IP address, destination IP address, client port, and destination port. In some embodiments, the tuple can be any of the tuples described above and can include any number of source or destination identifying values. In other embodiments, the client IP address can be a source IP address identifying the source machine of the data packet; similarly, the client port can be a source port.
In one embodiment, the packet engine 548 executing on a particular core 505 of the multi-core system 545 receives the data packet assigned to that core 505 (step 902). The packet engine 548 can receive the packet directly, or in some embodiments a communication module executing on the core 505 can receive and transmit data packets. In other embodiments, a virtual NIC (not shown) executing on the core 505 can receive and transmit data packets. In some embodiments, receiving the packet can also include draining the packet from a logical receive queue on the core 505. The logical receive queue can store data packets sent to the core 505. The packet engine 548 can access packets in the logical receive queue by draining or otherwise obtaining them from the queue according to a first-in-first-out access method; another possible access method is first-in-last-out.
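A small sketch of the two queue access methods mentioned above follows; the use of a deque and the helper names are illustrative only.

from collections import deque

receive_queue: deque = deque()       # logical receive queue for one core

def drain_fifo(queue: deque):
    # First-in, first-out: the oldest packet is processed first.
    while queue:
        yield queue.popleft()

def drain_filo(queue: deque):
    # First-in, last-out alternative mentioned above.
    while queue:
        yield queue.pop()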
When the packet engine 548 obtains a data packet, in some embodiments the packet engine 548 can determine whether the packet should be modified. After determining which portions of the packet may be modified, the packet engine 548 can modify the packet. In some embodiments, the multi-core system 545 can be configured to indicate to the packet engines 548 executing on the multi-core system 545 which portions of a data packet they may modify.
In some embodiments, the packet engine 548 can select an IP address of the core 505 (step 904) from one or more IP addresses associated with the core 505. The core 505 can have multiple IP addresses and, in some embodiments, an IP address range; in other embodiments, the core 505 can have a single IP address. In some embodiments, the packet engine 548 selects an IP address of the core 505, while in other embodiments the packet engine 548 can select an IP address of the multi-core system 545 or of an appliance 200 in the multi-core system 545.
Having selected an IP address, the packet engine 548 can select a port from the ports of the core 505 (step 906). The core 505 can have one or more ports, and in some embodiments a list of the ports of each core 505 of the multi-core system 545 can be stored in a port allocation table. Selecting a port can include cycling through the entries of a port allocation table that lists each port of the core 505 and selecting a port number. The ports can be cycled through numerically based on port number, or in the order in which they are listed in the port allocation table. In other embodiments, the packet engine 548 can select a port by cycling through the range of numbers or values corresponding to the possible port numbers on the core 505.
In some embodiments, the packet engine 548 can select a first port (step 906) and then determine whether that port is a correct port (step 908) and whether that port is available or open (step 910). If the first selected port is not a correct port, or is unavailable or not open, the packet engine 548 can select the next port, i.e. a second port of the core 505, and again determine whether that port is a correct port (step 908) and whether that port is available or open (step 910). In most embodiments, the packet engine 548 cycles through all possible ports until the packet engine 548 identifies a port that is both correct and open.
After the packet engine 548 has selected a port, the packet engine first determines whether the selected port is a correct port, i.e. whether the selected port will cause response packets to return to the selected core (step 908). This determination can be made by applying the hash described above to a tuple comprising the concatenation of the following values: selected IP address; destination address; selected port; and destination port. Applying the hash to this tuple generates a resulting hash value that may or may not identify the current core 505 on which the packet engine 548 executes. Concatenating the values to generate this tuple can be performed by the packet engine 548, or by an instance of the flow distributor 550 executing on the core 505; similarly, applying the hash to the tuple can be performed by the packet engine 548 or an instance of the flow distributor. When the resulting hash value identifies the current or selected core 505, the selected port is a correct port, because it causes response packets to return to the current core 505. When the resulting hash value does not identify the current or selected core 505, the selected port is not a correct port, because it does not cause response packets to return to the current core 505. In that case, the packet engine 548 selects another port (step 906) and repeats the process of determining whether that port is a correct port (step 908).
Upon determining that the selected port number is a correct port number (step 908), it is determined whether the port number is available or open (step 910). In most embodiments, a port number is open or available when any of the following is true: the port number is not in use; or the port number is usable. Conversely, a port number is not open, or is unavailable, when any of the following is true: the port number has already been assigned to another transaction, service, or data packet; or the port number has been closed by a network administrator or by the multi-core system 545. In many embodiments, whether a port number is available or open is a characteristic tracked by a port number allocation table. The port number allocation table can be any port number allocation table described above, and can be stored in any of the locations described above where a port number table can be stored. In some embodiments, after the packet engine 548 determines that the port number is a correct port number, the packet engine 548 can determine whether the port number is available by querying the port number allocation table for the details, attributes, or characteristics of that particular port number. When the response indicates that the port number is open and has not been assigned to any other data packet, transaction, or server, the packet engine 548 modifies the tuple with the selected IP address and the selected port number. However, when the response indicates that the port number is unavailable or not open, the packet engine 548 selects another port number (step 906) and repeats the process of determining whether that port number is a correct port number (step 908) and whether that port number is open and available (step 910).
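Putting steps 906-910 together, the selection loop might look like the sketch below; it reuses the hypothetical toeplitz_hash, four_tuple, select_core, and PortAllocationTable helpers from the earlier sketches, and returns None when no port is both correct and open.

def choose_port(key, core_id, selected_ip, dst_ip, dst_port, port_table):
    for port in sorted(port_table.available):                  # step 906: next port
        tup = four_tuple(selected_ip, dst_ip, port, dst_port)
        if select_core(toeplitz_hash(key, tup)) != core_id:    # step 908: correct?
            continue                                           # responses would miss this core
        if not port_table.is_open(port):                       # step 910: open/available?
            continue                                           # assigned or closed
        port_table.reserve(port)        # record the assignment in the allocation table
        return port                                            # correct and open
    return None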
When the packet engine 548 has selected a port that is correct, open, and available, the packet engine 548 updates the data packet so that the tuple of the packet contains the selected IP address and the selected port (step 912). Modifying or updating the tuple can include making any changes necessary for the tuple to contain: the selected IP address; the destination IP address; the selected port; and the destination port. In this way, the client IP address and client port information can be replaced by the selected IP address and selected port.
After modifying the data packet, the packet engine 548 can send the modified packet to a remote computing machine (step 914). Sending the modified packet to a remote machine can include sending it to the client, server, appliance, or computer identified by the destination IP address and/or destination port. In some embodiments, the modified packet is sent to a proxy server or appliance before being sent to its destination computer or device. In other embodiments, the modified packet is stored in a memory element in the multi-core system 545 before being sent to its destination computer or device. In some embodiments, this memory element can be a global cache or another memory element shared by all of the cores and devices in the multi-core system 545; in other embodiments, it can be a cache or another repository accessible to the current core 505.
3. Systems and methods for maintaining the source IP address and proxying the source port in a load-balanced multi-core environment
While Figs. 7A, 8, and 9 describe methods in which the packet engine 548 on a particular core 505 replaces or modifies the client IP address and client port with a selected IP address and port, Figs. 7B, 12A, and 12B describe systems in which the client IP address is maintained. A proxy port, however, can still be selected by the packet engine and inserted into the tuple of the packet in place of the client port or source port. In some systems, the owner of the server farm, or the administrator of the network in which the multi-core system 545 operates, may want each data packet to at least retain its original source IP address. An administrator may have many reasons for wanting this, including security, marketing, tracking network access, restricting network access, or any other reason. By allowing each packet to keep its source IP address, each packet can be tracked and controlled. For example, knowing the source of a packet can allow the system to block a particular IP address or domain from accessing the network. Similarly, knowing the source of a packet can allow the system to track the geographic location of users accessing the network or domain. In most cases, knowing the source IP address allows the system to identify where a packet originated, and further allows the system to control whether it processes a particular packet.
Given that only the client port number may be modified, in some embodiments the number of port numbers that, in combination with the maintained client IP address, identify the current core may become small. Accordingly, each core 505 can be associated with multiple port allocation tables, where each port allocation table stores a list of available port numbers. Allowing each core 505 to be associated with one or more port allocation tables, in addition to one or more port numbers, adds another layer of uniqueness: each request can now be associated with a port number within a particular port allocation table. This extra layer of uniqueness can overcome the reduction in usable port numbers caused by maintaining the client IP address.
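A sketch of this arrangement follows, building on the hypothetical PortAllocationTable above. Keying the tables by a characteristic of the tuple (here the destination IP address and port, an illustrative choice) lets the same port number be reused once per table, restoring the uniqueness lost by keeping the client IP address.

class CorePortTables:
    """One core's collection of port allocation tables, selected per tuple."""

    def __init__(self, core_id, ports):
        self.core_id = core_id
        self.ports = ports
        self.tables = {}                 # (dst_ip, dst_port) -> PortAllocationTable

    def table_for(self, dst_ip, dst_port) -> "PortAllocationTable":
        key = (dst_ip, dst_port)
        if key not in self.tables:
            self.tables[key] = PortAllocationTable(self.ports)
        return self.tables[key]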
Fig. 7B is a flow diagram depicting an embodiment of a method 780 for distributing network traffic across one or more cores 505 of a multi-core system 545 using the hash described above. The method 780 is substantially similar to the method 700 shown in Fig. 7A; however, in the method 780 shown in Fig. 7B, the packet engine 548 maintains the client IP address. As in the method 700 of Fig. 7A, the flow distributor 550 or RSS module 560 receives a data packet from a client, server, or other computer (step 782) and computes a hash value by applying the hash to a first tuple of the received packet (step 784). The first tuple can include the client IP address, the destination IP address, the client port, and the destination port. In some embodiments, applying the hash to the first tuple produces a value sometimes referred to as the hash. A core 505 in the multi-core system 545 is selected based on the hash result value (step 786), and the received packet is forwarded to the selected core (step 788). At this point, the first tuple still contains the following values: client IP address; destination IP address; client port; and destination port. The packet engine 548 on the selected core receives the packet and maintains the client IP address (step 790), but updates the first tuple with a selected port (step 792). The first tuple now contains the following values: client IP address; destination IP address; selected port; and destination port. The packet and its modified tuple are then sent to a server, client, or other computer (step 794). Any response to this packet generated by the server, client, or other computer is forwarded to, and received by, the multi-core system 545 (step 782), at which point the method 780 repeats.
Still referring to Fig. 7B, and in greater detail, in one embodiment the difference between the method 780 of Fig. 7B and the method 700 of Fig. 7A is that the method 780 maintains the client, or source, IP address. The remaining steps are substantially the same as those described for the method 700 of Fig. 7A. For example, as in the method 700, the multi-core system 545 can receive a packet from a client, server or other computing machine (step 782); in some embodiments, step 782 can be any embodiment of step 704 of Fig. 7A. As in the method 700, the hash is applied to a first tuple of the packet (step 784) and a core is selected based on the result of the hash (step 786); step 784 can be any embodiment of step 706, and step 786 any embodiment of step 708, of Fig. 7A. Once a core 505 is selected, the packet can be forwarded to the selected core 505 (step 788); step 788 can be any embodiment of step 710. After the tuple associated with the packet is modified, the modified packet is transmitted to a server, client or other computing machine (step 794); step 794 can be any embodiment of step 716.
In some embodiments, once the packet has been forwarded to the selected core 505 (step 788), the packet engine 548, or another engine or module executing on the selected core 505, can receive the packet and determine whether to modify it. Determining whether to modify the packet can comprise any of the following determinations: whether a portion of the packet can be modified; whether the tuple of the packet can be modified; whether any portion of the tuple can be modified; which portions of the packet and/or tuple can be modified; and any other determination affecting whether the packet engine 548 can modify the packet or its tuple. In one embodiment, the packet engine 548 determines that a portion of the packet, and in particular a portion of the tuple, can be modified. This determination can include determining that the client IP address, or source IP address, of the packet should be maintained and not changed. Based on this determination, the packet engine 548 can adjust its processing of the packet accordingly. In some embodiments, the determination can be made by analyzing the packet, the packet header, or any other attribute of the packet. In other embodiments, the multi-core system 545 can be configured to maintain the client IP address; in these embodiments, no determination of whether the packet or its tuple can be modified is made, because the operation of the system 545 is configured accordingly.
When a determination is made to maintain the client IP address, or when the system 545 dictates that the client IP address should be maintained, the packet engine 548 maintains the client IP address (step 790) rather than modifying the tuple to include an IP address of the core 505 or of the system 545. After this step, the tuple comprises the client IP address, the destination IP address, the client port and the destination port.
Maintaining the client IP address can cause responses to the packet to be routed to a core other than the selected core 505. Therefore, the packet engine 548 should identify and select, from the ports 632 of the selected core 505, a port 632 such that, when that port 632 replaces the client port in the tuple, a hash of the tuple identifies the selected core 505. To do so, the packet engine 548 cycles through the ports 632 of the core 505 until it identifies and selects such a port 632. After selecting the port 632, the packet engine 548 updates the tuple of the packet to include the selected port 632 (step 792). After this step, the tuple comprises the client IP address, the destination IP address, the selected port and the destination port.
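A sketch of this port search is shown below. It is illustrative only: the CRC32 stand-in hash, the number of cores, and the port range assigned to the core are assumptions, not values used by the appliance.

```python
import zlib

NUM_CORES = 4  # assumed

def select_core(client_ip, port, dest_ip, dest_port):
    # CRC32 stands in for the appliance's tuple hash
    key = f"{client_ip}:{port}:{dest_ip}:{dest_port}".encode()
    return zlib.crc32(key) % NUM_CORES

def pick_proxy_port(core_id, core_ports, client_ip, dest_ip, dest_port):
    """Cycle through the core's ports until substituting one for the client
    port makes the tuple hash back to the selected core (step 792)."""
    for port in core_ports:
        if select_core(client_ip, port, dest_ip, dest_port) == core_id:
            return port
    return None  # no port on this core keeps responses local

proxy_port = pick_proxy_port(2, range(20000, 20256),
                             "198.51.100.7", "203.0.113.10", 80)
```

The search terminates quickly in practice because, for a uniform hash, roughly one port in NUM_CORES maps back to the selected core.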
The updated packet and tuple are then transmitted to a server, client or other computing machine (step 794). When the packet is transmitted, its tuple comprises the client IP address, the destination IP address, the selected port and the destination port.
Fig. 12A illustrates an embodiment of a method 1200 for distributing packets within the multi-core system 545. In this method, the flow distributor 550 or RSS module 560 receives a packet (step 1202) and identifies a tuple of the packet (step 1204). After identifying the tuple, the hash is applied to the tuple to generate a result (step 1206), and the packet is transmitted to the core identified by the hash result (step 1208). In some embodiments, the packet can be received by the packet engine 548 on that core. The packet engine 548 can maintain the client IP address included in the tuple (step 1210), but can select a port from the ports of the core (step 1212) and modify the tuple with the determined port (step 1214). Once the tuple has been modified, the packet and its modified tuple can be transmitted to a remote computing machine (step 1216).
Still referring to Fig. 12A, and in greater detail, in one embodiment the method 1200 is substantially the same as the method shown in Fig. 8. Thus, step 1202 can be any embodiment of step 802 of Fig. 8, and step 1204 any embodiment of step 804. Step 1206 can be any embodiment of step 806, step 1208 any embodiment of step 808, and step 1216 any embodiment of step 816 of Fig. 8. In some embodiments, the method 1200 of Fig. 12A differs from the method 800 of Fig. 8 in that the method 1200 maintains the client IP address.
The packet engine 548 that carries out the steps of the method 1200 of Fig. 12A can execute on a particular core 505. Thus, in most cases, the packet received by the packet engine 548 has already been assigned to the core 505 based on applying the above-described hash to the tuple of the packet. In most cases, this tuple comprises at least the client IP address, the destination IP address, the client port and the destination port. In some embodiments, the tuple can be any of the above-described tuples and can include any number of source or destination identifying values. In other embodiments, the client IP address can be a source IP address identifying the machine from which the packet originated; similarly, the client port can be a source port.
In one embodiment, a packet engine 548 executing on a particular core 505 of the multi-core system 545 (i.e. a first core 505) receives a packet assigned to that core 505 (step 1208). The packet engine 548 can receive the packet directly, or, in some embodiments, a communication module executing on the core 505 can receive and transmit packets. In other embodiments, a virtual NIC (not shown) executing on the core 505 can receive and transmit packets. In some embodiments, receiving the packet can further comprise de-queuing the packet from a logical receive queue on the core 505. The logical receive queue can store packets transmitted to the core 505, and the packet engine 548 can access it by de-queuing or otherwise obtaining packets on a first-in-first-out basis; another possible access scheme is last-in-first-out. In some embodiments, the packet engine 548 can receive client requests or server responses.
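As a simple illustration of the receive-queue access just described, the sketch below models the logical receive queue as a FIFO; the queue class and the packet dictionary are assumptions made for the example, not the appliance's data structures.

```python
from collections import deque

class LogicalReceiveQueue:
    """Toy model of a core's logical receive queue."""
    def __init__(self):
        self._packets = deque()

    def enqueue(self, packet):
        self._packets.append(packet)      # written by the flow distributor

    def dequeue(self):
        return self._packets.popleft()    # read by the packet engine (FIFO)

queue = LogicalReceiveQueue()
queue.enqueue({"client_ip": "198.51.100.7", "client_port": 51320,
               "dest_ip": "203.0.113.10", "dest_port": 80})
packet = queue.dequeue()
```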
When the packet engine 548 obtains the packet, in some embodiments the packet engine 548 can determine whether to modify the packet. After determining which portions of the packet can be modified, the packet engine 548 can modify the packet. In some embodiments, the multi-core system 545 can be configured to instruct the packet engines 548 executing within the multi-core system 545 to modify certain portions of packets.
In some embodiments, the packet engine 548 can determine to modify the packet. In other embodiments, the multi-core system 545 can be configured so that the packet is not modified, or more precisely, so that each element of the packet's tuple except the client port is maintained. Thus, when the packet engine 548 receives the packet, the packet engine 548 maintains the client IP address, i.e. the source IP address (step 1210).
In some embodiments, before computing a second hash over the client IP address, the destination IP address, the selected port number and the destination port, the packet engine 548 determines to proxy the client port and to maintain the client IP address. In some embodiments, determining to proxy the client port can comprise determining to select a port from the ports of the first core 505 and to replace the client port with the selected port.
In one embodiment, the packet engine 548 selects a port from the ports of the core 505 (step 1212). In some embodiments, the selected port is a proxy port that can replace the client port in the first tuple. The proxy port can be chosen such that a hash of the modified first tuple identifies the current core 505. This determination can be made by applying the above-described hash to a second tuple comprising the client IP address, the destination IP address, the selected port number and the destination port. When the result of the hash identifies the first core 505, it can be determined that assigning the selected port will cause responses to the packet to be directed to the current core 505. This determination can include determining whether the port is available. When the port is unavailable or already assigned to a packet, the packet engine 548 can select a second port and determine whether that port number will cause response packets to be routed or distributed to the first core 505. Once a port is determined, the first tuple is modified with the identified port (step 1214), and the modified packet and tuple are forwarded to a remote computing machine (step 1216). When transmitted, the packet carries a tuple comprising the client IP address, the destination IP address, the selected port and the destination port.
In some embodiments, selecting a port further comprises selecting a port number from a port allocation table associated with the first core 505. The port allocation table can be one of multiple port allocation tables associated with the first core 505, and can reside on a proxy IP address of the first core 505. In one embodiment, the packet engine 548 selects a first port number from the port numbers and determines that a hash of a second tuple (comprising the client IP address, the destination IP address, the first port number and the destination port) does not identify the first core 505. After making this determination, the packet engine 548 selects a second port number from the same port allocation table and determines whether a hash of a third tuple (comprising the client IP address, the destination IP address, the second port number and the destination port) identifies the first core 505. In some embodiments, the packet engine 548 selects the second port number based on determining that the first port number is unavailable. In another embodiment, the packet engine 548 selects the first port number from the port numbers in a port allocation table, where the port allocation table is itself selected based on the result of applying the above-described hash to a tuple comprising the client IP address and the destination IP address.
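The two-level selection just described (pick an allocation table from a hash of the IP pair, then search that table for an open port whose substituted tuple hashes back to the core) can be sketched as follows. The table contents, the availability flags and the CRC32 stand-in hash are assumptions for illustration only.

```python
import zlib

NUM_CORES = 4  # assumed

def select_core(client_ip, port, dest_ip, dest_port):
    key = f"{client_ip}:{port}:{dest_ip}:{dest_port}".encode()
    return zlib.crc32(key) % NUM_CORES   # stand-in for the appliance's hash

def select_allocation_table(tables, client_ip, dest_ip):
    """Choose one of the core's port allocation tables from a hash of the
    (client IP, destination IP) two-tuple."""
    return tables[zlib.crc32(f"{client_ip}:{dest_ip}".encode()) % len(tables)]

def select_port(core_id, table, client_ip, dest_ip, dest_port):
    """Find an open port in the table whose substituted tuple hashes to core_id."""
    for port, is_open in sorted(table.items()):
        if is_open and select_core(client_ip, port, dest_ip, dest_port) == core_id:
            return port
    return None  # caller may fall back to another allocation table

tables = [{p: True for p in range(30000, 30064)},   # assumed table contents
          {p: True for p in range(30064, 30128)}]
table = select_allocation_table(tables, "198.51.100.7", "203.0.113.10")
port = select_port(1, table, "198.51.100.7", "203.0.113.10", 80)
```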
Transmitting the modified packet to a remote computing machine can comprise transmitting the modified packet to a client, server, appliance or computing machine identified by the destination IP address and/or the destination port. In some embodiments, the modified packet is transmitted to a proxy server or appliance before being transmitted to its destination machine or appliance. In other embodiments, the modified packet is stored in a memory element within the multi-core system 545 before being transmitted to its destination machine or appliance. In some embodiments, the memory element can be a global cache or other memory element shared by all of the cores and devices in the multi-core system 545. In other embodiments, the memory element can be a cache or other repository accessible by the current core 505.
Fig. 12B illustrates an embodiment of a method 1250 for selecting a port from a port allocation table of the selected core 505. The packet engine 548 on the selected core 505 computes a hash of the client IP address and the destination IP address (step 1252), and the hash identifies a port allocation table on the selected core 505 (step 1254). Once a port allocation table is selected, a port is selected from that table (step 1256) and a determination is made as to whether the port is open (step 1258). The tuple of the packet is then modified with the determined port (step 1260), and the modified packet and tuple are forwarded to a remote computing machine (step 1262).
Still referring to Fig. 12B, and in greater detail, in one embodiment the packet engine 548 executing on the selected core 505 computes a hash value of the client IP address and the destination IP address (step 1252). Computing the hash value can comprise concatenating the client IP address and the destination IP address to create a string or two-tuple. The packet engine 548 then applies the above-described hash function to this two-tuple to generate a result value or hash value. In many embodiments, this hash value identifies a port allocation table on the selected core 505 (step 1254). In some embodiments, multiple port allocation tables can be associated with a particular core 505; determining which port allocation table to select a port from can comprise generating the hash value and using it to select the corresponding port allocation table.
In most embodiments, once the packet engine 548 has selected a port allocation table, the packet engine 548 can then select a port from that table (step 1256). When a port is selected, a determination is made as to whether the port is a correct, open port (step 1258); this determination can be made by the method 900 shown in Fig. 9. When the port is determined to be incorrect and/or closed and therefore unavailable, the packet engine 548 can select a different port from the selected port allocation table. When a new port is selected, it must again be determined whether that port is correct and open. In some embodiments, no port in the port allocation table is both correct and available; in those embodiments, a different port allocation table can be selected, a port selected from the newly selected table, and a new determination made as to whether the selected port is both correct and available.
When the selected port is both correct and open, the tuple of the packet can be modified with the selected port (step 1260). After the tuple is modified with the selected port, the modified packet can be transmitted to a remote computing machine (step 1262).
Transmitting the modified packet to a remote computing machine can comprise transmitting the modified packet to a client, server, appliance or computing machine identified by the destination IP address and/or the destination port. In some embodiments, the modified packet is transmitted to a proxy server or appliance before being transmitted to its destination machine or appliance. In other embodiments, the modified packet is stored in a memory element of the multi-core system 545 before being transmitted to its destination machine or appliance. In some embodiments, the memory element can be a global cache or other memory element shared by all of the cores and devices in the multi-core system 545. In other embodiments, the memory element can be a cache or other repository accessible by the current core 505.
4. Systems and Methods for Maintaining Source IP and Source Port in a Load-Balanced Multi-Core Environment
Figs. 7A, 8 and 9 describe methods in which the packet engine 548 on a particular core 505 replaces or modifies the client IP address and the client port with a selected IP address and port, and Figs. 7B, 12A and 12B describe systems in which the client IP address is maintained; Figs. 7C, 10A and 10B describe systems in which both the client IP address and the client port are maintained. In some systems, the owner of a server farm, or the administrator of a network in which the multi-core system 545 executes, may want each packet to retain both its original source IP address and its original source port. An administrator may have many reasons for doing so, including security, marketing, tracking network access, restricting network access, or any other reason. By allowing each packet to retain its source IP address and source port, each packet can be tracked and controlled. For example, knowing the source of a packet can allow the system to block a particular IP address or domain from accessing the network. Similarly, knowing the source of a packet can allow the system to track the geographic location of users accessing a network or domain. In most cases, knowing the source IP address and source port allows the system to identify where a packet originated, and further allows the system to control whether a particular packet is processed.
Fig. 7C depicts a flow diagram of an embodiment of a method 750 for distributing network traffic over one or more cores 505 of the multi-core system 545 using the above-described hash. The method 750 is substantially similar to the method 700 shown in Fig. 7A, except that in the method 750 the packet engine 548 maintains both the client IP address and the client port. As in the method 700 of Fig. 7A, the flow distributor 550 or RSS module 560 receives a packet from a client, server or other computing machine (step 766) and computes a hash value by applying the hash to a first tuple of the received packet (step 756). The first tuple can comprise the client IP address, the destination IP address, the client port and the destination port. In some embodiments, applying the hash to the first tuple produces a value sometimes referred to as a hash. A first core 505A of the multi-core system 545 is selected according to the resulting hash value (step 758), and the received packet is forwarded to the selected first core 505A (step 760). At this point the first tuple still comprises the client IP address, the destination IP address, the client port and the destination port. Once the selected core receives the forwarded packet, the selected first core 505A determines whether it is the correct core (step 772). When the determination is that the selected core 505A is the correct core, the method proceeds to step 762. When the determination is that the selected core 505A is not the correct core, the packet is forwarded to the correct core (step 774) before step 762 is carried out. The packet engine 548, whether on the first core 505A or on a correct core other than the first core 505A, maintains the client IP address and the client port (step 762), and the packet is then transmitted to a server, client or other computing machine (step 764). Any response to the packet generated by the server, client or other computing machine is forwarded to, and received by, the multi-core system 545 (step 766). At this point, the method 750 repeats.
Still referring to Fig. 7C, and in greater detail, in one embodiment the difference between the method 750 of Fig. 7C and the method 700 of Fig. 7A is that the method 750 maintains the client IP address and the client port. The remaining steps are substantially the same as those described for the method 700 of Fig. 7A. For example, as in the method 700, the multi-core system 545 can receive a packet from a client, server or other computing machine (step 766); in some embodiments, step 766 can be any embodiment of step 704 of Fig. 7A. As in the method 700, the hash is applied to a first tuple of the packet (step 756) and a core is selected based on the result of the hash (step 758); step 756 can be any embodiment of step 706, and step 758 any embodiment of step 708, of Fig. 7A. Once a core 505 is selected, the packet can be forwarded to the selected core 505 (step 760); step 760 can be any embodiment of step 710. The packet is then transmitted to a server, client or other computing machine (step 764); step 764 can be any embodiment of step 716.
In one embodiment, when the packet engine 548 on the selected core 505 receives the forwarded packet (step 760), the packet engine 548 determines whether the packet was previously processed by the current core. If the current core is not the correct core (step 772), the packet is forwarded to the correct core (step 774). The correct core can be determined by applying the above-described hash to the tuple of the packet. Forwarding the packet to the correct core can be accomplished by a core-to-core messaging system and/or by copying the packet into a global cache accessible by both the current core and the correct core.
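The check-and-forward logic can be sketched as below. The CRC32 stand-in hash, the packet dictionary and the `forward_to_core` callback are assumptions for the example; the actual core-to-core transfer is whichever of the two mechanisms described above the system uses.

```python
import zlib

NUM_CORES = 4  # assumed

def select_core(client_ip, client_port, dest_ip, dest_port):
    key = f"{client_ip}:{client_port}:{dest_ip}:{dest_port}".encode()
    return zlib.crc32(key) % NUM_CORES   # stand-in for the appliance's hash

def steer(current_core, packet, forward_to_core):
    """Re-hash the tuple: process locally if this is the correct core
    (step 772 -> 762), otherwise forward to the correct core (step 774)."""
    correct = select_core(packet["client_ip"], packet["client_port"],
                          packet["dest_ip"], packet["dest_port"])
    if correct == current_core:
        return "process locally"
    forward_to_core(correct, packet)      # core-to-core message or shared cache
    return f"forwarded to core {correct}"

pkt = {"client_ip": "198.51.100.7", "client_port": 51320,
       "dest_ip": "203.0.113.10", "dest_port": 80}
print(steer(0, pkt, forward_to_core=lambda core, p: None))
```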
When the packet is forwarded to the correct core, the first tuple comprises the client IP address, the destination IP address, the client port and the destination port. In embodiments where the current core is the correct core, the current core maintains the client IP address and the client port (step 762). Similarly, when the correct core receives the packet, the correct core maintains the client IP address and the client port (step 762). Because the core maintains the client IP address and the client port, the tuple continues to comprise the client IP address, the destination IP address, the client port and the destination port. Once the client IP address and client port are maintained, the packet is transmitted to a server, client, or other computing device or appliance.
Fig. 10A illustrates a method 1000 for distributing a packet to a particular core 505 of the multi-core system 545. The method 1000 comprises receiving a packet (step 1002), identifying a tuple of the packet (step 1004), and applying the hash to that tuple (step 1006). The packet is then forwarded to a core 505 of the multi-core system 545 (step 1008), where the core 505 is identified by the result value produced by applying any of the above-described hashes to the tuple of the packet. The packet engine 548 executing on the selected core 505 maintains the client IP address and the client port of the tuple (step 1010) and forwards the packet, with its tuple unmodified, to a remote computing machine (step 1012).
Still referring to Fig. 10A, and in greater detail, in one embodiment the method 1000 is substantially the same as the method shown in Fig. 8. Thus, step 1002 can be any embodiment of step 802 of Fig. 8, and step 1004 any embodiment of step 804. Step 1006 can be any embodiment of step 806, step 1008 any embodiment of step 808, and step 1012 any embodiment of step 816 of Fig. 8. In some embodiments, the method 1000 of Fig. 10A differs from the method 800 of Fig. 8 in that the method 1000 maintains both the client IP address and the client port.
The packet engine 548 that carries out the steps of the method 1000 of Fig. 10A can execute on a particular core 505. In most embodiments, the core 505 is selected in advance by the hash applied in the method 1000 of Fig. 10A. Thus, in most cases, the packet received by the packet engine 548 has already been assigned to the core 505 based on applying the above-described hash to the tuple of the packet. In most cases, this tuple comprises at least the client IP address, the destination IP address, the client port and the destination port. In some embodiments, the tuple can be any of the above-described tuples and can include any number of source or destination identifying values. In other embodiments, the client IP address can be a source IP address identifying the machine from which the packet originated; similarly, the client port can be a source port.
In one embodiment, a packet engine 548 executing on a particular core 505 of the multi-core system 545 receives a packet assigned to that core 505 (step 1008). The packet engine 548 can receive the packet directly, or, in some embodiments, a communication module executing on the core 505 can receive and transmit packets. In other embodiments, a virtual NIC (not shown) executing on the core 505 can receive and transmit packets. In some embodiments, receiving the packet can further comprise de-queuing the packet from a logical receive queue on the core 505. The logical receive queue can store packets transmitted to the core 505, and the packet engine 548 can access it by de-queuing or otherwise obtaining packets on a first-in-first-out basis; another possible access scheme is last-in-first-out. In some embodiments, the packet engine 548 executes on the first core 505 and receives the packet from the flow distributor based on a hash of a first tuple of the packet, the first tuple comprising the client IP address, the destination IP address, the client port and the destination port. In some embodiments, the packet can be a client request or a server response.
When the packet engine 548 obtains the packet, in some embodiments the packet engine 548 can determine whether to modify the packet. After determining which portions of the packet can be modified, the packet engine 548 can modify the packet. In some embodiments, the multi-core system 545 can be configured to instruct the packet engines 548 executing on the multi-core system 545 to modify certain portions of packets.
In some embodiments, the packet engine 548 can determine to modify the packet. In other embodiments, the multi-core system 545 can be configured so that the packet is not modified, or more precisely, so that each element of the packet's tuple is maintained. In still other embodiments, the packet engine 548 is configured to respond to a security policy of the first core 505 or of the multi-core system 545, where the security policy dictates that the client port and the client IP address be maintained. Thus, when the packet engine 548 receives the packet, the packet engine 548 maintains both the client IP address and the client port, i.e. the source IP address and the source port (step 1010). The packet engine 548 then forwards or otherwise transmits the packet to a remote computing machine or appliance (step 1012). When transmitted, the packet retains a tuple comprising the client IP address, the destination IP address, the client port and the destination port.
Transmitting the packet to a remote computing machine can comprise transmitting the packet to a client, server, appliance or computing machine identified by the destination IP address and/or the destination port. In some embodiments, the packet is transmitted to a proxy server or appliance before being transmitted to its destination machine or appliance. In other embodiments, the packet is stored in a memory element within the multi-core system 545 before being transmitted to its destination machine or appliance. In some embodiments, the memory element can be a global cache or other memory element shared by all of the cores and devices in the multi-core system 545. In other embodiments, the memory element can be a cache or other repository accessible by the current core 505.
Fig. 10B illustrates a more detailed embodiment of at least a portion of the method 1000 shown in Fig. 10A. The method 1050 shown in Fig. 10B illustrates an embodiment of the process carried out by the packet engine 548 on the selected core 505 when it receives a forwarded packet. After receiving the packet (step 1052), the packet engine 548 can identify the tuple of the packet and apply the above-described hash to the identified tuple (step 1054). After applying the hash, the packet engine determines whether the packet was previously processed by this core (step 1058). When the determination is that the packet was previously processed by the core 505, the packet engine 548 begins processing the packet (step 1060). When the determination is that the packet was previously processed by another core 505, the correct target core 505 is identified by the hash result (step 1062) and the packet is forwarded to that correct target core (step 1064).
Referring to Fig. 10B in further detail, in one embodiment the method 1050 can be carried out by the packet engine 548 on the selected core 505. In other embodiments, the method 1050 can be carried out by an instance of the flow distributor 550, or by another flow distribution module, executing on the selected core 505. In some embodiments, the selected core 505 is the core selected by the flow distributor 550 or RSS module 560 executing on the multi-core system 545, based on a hash of the tuple of the packet. Thus, when the multi-core system 545 first receives a packet, the flow distributor 550 or RSS module 560 applies any of the above-described hashes to the tuple of the packet; the result of the hash identifies a core 505 of the multi-core system 545, and the flow distributor 550 or RSS module 560 forwards the packet to the selected core 505. In most embodiments, any reference to the selected core 505 or the current core 505 refers to the core 505 selected by the flow distributor 550 or RSS module 560 based on the first tuple associated with the packet.
In one embodiment, the packet engine 548 receives a packet forwarded to the selected core 505 by the flow distributor 550, the RSS module 560, or any other flow distribution module (step 1052). The packet engine 548 can receive the packet directly, or, in some embodiments, a communication module executing on the core 505 can receive and transmit packets. In other embodiments, a virtual NIC (not shown) executing on the core 505 can receive and transmit packets. In some embodiments, receiving the packet can further comprise de-queuing the packet from a logical receive queue on the core 505; the logical receive queue can store packets transmitted to the core 505, and the packet engine 548 can access it by de-queuing or otherwise obtaining packets on a first-in-first-out basis, or alternatively on a last-in-first-out basis. In some embodiments, a packet engine 548 executing on a second core 505 can receive the packet from the flow distributor 550 based on a hash of a second tuple of the packet, the second tuple comprising the client IP address, the client port, the destination IP address and the destination port. In some embodiments, the packet can be a server response to a client request previously processed by a first core 505 of the multi-core system 545; in some embodiments, Fig. 10A illustrates the processing of that client request by the first core 505.
In some embodiments, the packet engine 548 applies a hash, such as any of the above-described hashes, to the tuple associated with the received packet (step 1054). Applying the hash can further comprise first identifying the tuple of the packet. Determining the tuple of the packet can comprise identifying and concatenating the following values: the client IP address; the destination IP address; the client port; and the destination port. In one embodiment, the tuple comprises the concatenation of these values. In some embodiments, the packet engine 548 performs the concatenation; in other embodiments, the tuple is included in the received packet.
In some embodiments, the result of the hash identifies a target core 505. In some embodiments, the result identifies the current or selected core 505; in other embodiments, it identifies a core 505 other than the current or selected core 505. Although the method 1050 shown in Fig. 10B includes step 1054, in some embodiments the method 1050 omits step 1054. In those embodiments, the determination of whether the packet was previously processed by the current core 505 can be made by comparing attributes of the packet against a table or list accessible by the packet engine 548 on the current core. These attributes can be the client IP address, the client port, the destination IP address, the destination port, a flag stored in metadata, a flag identifying the core that previously processed the packet, or any other attribute that can be stored in a table or list and used to identify whether a particular core 505 has processed the packet. The table or list can be updated by the packet engine 548 each time the core 505 processes a packet; the update can comprise a record indicating that a packet having certain characteristics was processed by the core 505.
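A minimal sketch of such a tracking table follows; the class, its key fields and the packet dictionary are assumptions made for illustration, not the appliance's actual bookkeeping structures.

```python
class FlowTracker:
    """Per-core record of previously processed flows, keyed on the packet
    attributes listed above (addresses and ports)."""
    def __init__(self):
        self._seen = set()

    def _key(self, packet):
        return (packet["client_ip"], packet["client_port"],
                packet["dest_ip"], packet["dest_port"])

    def record(self, packet):
        self._seen.add(self._key(packet))   # updated on every processed packet

    def previously_processed(self, packet):
        return self._key(packet) in self._seen

tracker = FlowTracker()
pkt = {"client_ip": "198.51.100.7", "client_port": 51320,
       "dest_ip": "203.0.113.10", "dest_port": 80}
tracker.record(pkt)
assert tracker.previously_processed(pkt)
```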
The packet engine 548 can check the resulting hash value, or the table tracking packet attributes, to determine whether the current core 505 previously processed the current packet. When the packet engine 548 determines that the packet was previously processed by the current core 505, the packet engine 548 continues processing the packet (step 1060). When the packet engine 548 determines that the packet was not previously processed by the current core 505, the packet engine 548 identifies the correct core 505 (step 1062) and forwards the packet to the correct core (step 1064).
In some embodiments, determining the correct core 505 (step 1062) comprises checking the result of applying the hash to the tuple of the packet (step 1054). The hash result can be stored in a cache, or in another memory element or location, accessible by the first, second or third core 505, so that it can later be determined where to send a packet that was delivered to the wrong destination. In some embodiments, the packet can be stored in a cache, other memory element or location, or shared buffer accessible by each core of the multi-core system 545, including the first core and the second core. In embodiments where the hash was not previously applied, the packet engine 548 can apply the above-described hash to the tuple of the packet to obtain a resulting hash value. This hash value identifies a core 505 of the multi-core system 545 other than the current or selected core 505. The hash applied to the second tuple can be the same hash function previously applied to the client request.
Determining that this different core, or first core 505, is the correct core can comprise determining that the received response corresponds to a client request that was not processed by the second packet engine 548 on the second core 505. The packet engine 548 can obtain information about the identified core 505 and forward the packet to the correct target core 505 identified by the hash result (step 1064). Looking up information about the correct core, or first core 505, can comprise looking up a port in a port allocation table to identify the first core 505. In some embodiments, the packet engine 548 executing on the second core 505 can transmit a message to the first core 505, or to the identified core 505, indicating that the packet (i.e. the server response) is to be processed by the packet engine 548 on the first core 505.
Forwarding the packet to the correct target core 505 (step 1064) can occur in one of two ways: the packet can be copied to a common cache or memory element accessible by both the current core 505 and the correct core 505, from which the correct core 505 can download it; or the packet can be transmitted to the correct core 505 over an internal network through which the cores 505 communicate with one another. In embodiments where the packet is stored in a common memory element, the packet engine 548 copies the packet into the common cache or memory element and sends a message to the packet engine on the correct core instructing it to download the copied packet. The packet engine 548 of the current core 505 can use a core-to-core messaging system or an intra-system communication network to send the message instructing the packet engine 548 of the other core 505 to download the copied packet from the shared cache or memory element. In embodiments where the packet is transmitted to the correct core 505 over an internal network, the packet engine 548 of the current core 505 obtains the address of the packet engine 548 of the correct core and forwards the packet to that address over the internal network of the multi-core system 545. In some embodiments, the packet engine of the current core forwards the packet to a control core of the multi-core system 545, which then forwards the packet to the correct core. In other embodiments, the packet engine of the current core forwards the packet to a neighboring core, which determines that it is not the correct core and forwards the packet to its own neighboring core; this process continues until the correct core receives the packet.
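The two forwarding paths can be sketched side by side as follows. The shared cache, the per-core mailboxes and the `send` callback are illustrative assumptions; they are not the appliance's actual messaging primitives.

```python
from collections import defaultdict

shared_cache = {}                       # global cache visible to every core
mailboxes = defaultdict(list)           # assumed core-to-core message queues

def forward_via_cache(packet_id, packet, target_core):
    """Copy the packet into the shared cache, then message the target core's
    packet engine so it downloads the copy."""
    shared_cache[packet_id] = packet
    mailboxes[target_core].append({"type": "fetch", "packet_id": packet_id})

def forward_via_internal_network(send, packet, target_core):
    """Alternative path: transmit the packet directly to the target core's
    packet engine over the system's internal network."""
    send(target_core, packet)

forward_via_cache("pkt-1", {"dest_ip": "203.0.113.10"}, target_core=3)
forward_via_internal_network(lambda core, p: None,
                             {"dest_ip": "203.0.113.10"}, target_core=3)
```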
5. Systems and Methods for Steering and Reassembling Packet Segments in a Multi-Core Environment
In some embodiments, a client request, server response, or other type of packet can be segmented. Reassembling segmented packets in the multi-core system 545 adds another layer of complexity because, in some embodiments, the segmented packet is received by a packet engine 548 or flow distributor 550 executing on a core 505 that is not the final destination core 505 of the request, response or packet. The packet engine 548 or flow distributor 550 must therefore forward the reassembled packet, or the packet segments, to the target core 505. The target core 505 cannot be determined until at least a portion of the packet header has been reassembled, so that the following values can be obtained: the source IP address, the destination IP address, the source port and the destination port. Once these values are obtained, the packet engine 548 or flow distributor 550 can forward the reassembled packet, or the packet segments, to the core 505 identified by applying the hash to those values.
Fig. 11A depicts an embodiment of a method 1100 for distributing segmented network traffic over one or more cores 505 of the multi-core system 545. The multi-core system 545 receives packet segments (step 1102), and the flow distributor 550 or RSS module 560 executing on the multi-core system 545 assembles the packet segments into a packet until the packet header is obtained (step 1104). After the packet header is obtained, a tuple comprising the source IP address, the destination IP address, the source port and the destination port is identified in the header. The flow distributor 550 or RSS module 560 applies the hash to this tuple, and the result value identifies at least one core 505 of the multi-core system 545. After the core 505 is identified, the packet segments are transmitted to the selected core 505 (step 1106). The packet engine 548 on the selected core 505 receives the packet segments and forwards them to a segmentation module 650 executing on the selected core 505 (step 1108). When the segmentation module 650 receives the packet segments, the segmentation module 650 reassembles the packet from the packet segments (step 1110).
Referring to Fig. 11A in further detail, in one embodiment the method 1100 can be carried out by a packet engine 548 executing on a core 505. In another embodiment, the method 1100 can be carried out by the flow distributor 550, or by an instance of the flow distributor, executing on a core 505. In other embodiments, the method 1100 can be carried out by any flow distribution module or agent that can execute on a core 505. Although Fig. 11A shows the packet being reassembled from the packet segments, in some embodiments the reassembly of packets can be handled by a control core of the multi-core system 545.
The packet engine 548 that carries out at least some of the steps of the method 1100 of Fig. 11A can execute on a particular core 505. In most embodiments, the core 505 is selected in advance by applying the hash to the tuple of the packet segments. In most cases, this tuple comprises at least the client IP address, the destination IP address, the client port and the destination port. In some embodiments, the tuple can be any of the above-described tuples and can include any number of source or destination identifying values. In other embodiments, the client IP address can be a source IP address identifying the machine from which the packet originated; similarly, the client port can be a source port.
In one embodiment, the flow distributor 550 executing on the multi-core system 545 receives packet segments from a computing machine or appliance remote from and outside of the multi-core system 545 (step 1102). The flow distributor 550 can receive the packet segments directly, or, in some embodiments, a communication module can receive and transmit packets or packet segments. In other embodiments, the NIC 552 can receive and transmit packets and packet segments. In some embodiments, receiving packets and packet segments can further comprise de-queuing them from a receive queue on the NIC 552. The receive queue can store packets and packet segments transmitted to the multi-core system 545, and the flow distributor 550 can access it by de-queuing or otherwise obtaining packets and segments on a first-in-first-out basis; another possible access scheme is last-in-first-out.
In some embodiments, the packet engine 548 can receive a client request identifying a first tuple comprising a client internet protocol address, a client port, a server internet protocol address and a server port. In these embodiments, the packet engine 548 can execute on a core 505 selected by the flow distributor 550 based on a hash of the first tuple. The flow distributor 550 can then receive a plurality of segments of a response to the client request received by the packet engine 548 (step 1102), the server transmitting the segments of the response in response to receiving the client request forwarded by the packet engine 548 executing on the core 505.
Once the flow distributor 550 has received one or more packet segments (step 1102), the flow distributor 550 can begin reassembling the packet from the segments until the packet header is obtained (step 1104). In some embodiments, the entire packet is reassembled from the segments received by the flow distributor 550; in other embodiments, only the portions of the packet that make up the header are reassembled by the flow distributor 550. In still other embodiments, the flow distributor 550 can reassemble the packet from the segments until the flow distributor 550 can extract the following information from the partially assembled packet: the source IP address; the destination IP address; the source port; and the destination port. In many embodiments, this information is stored in the packet header. Thus, when the flow distributor 550 determines that the partially reassembled packet includes at least a portion of the packet header, the flow distributor 550 stops reassembling the packet from the segments. Determining that the partially reassembled packet includes at least a portion of the packet header can comprise assembling a portion of the plurality of segments, and/or assembling that portion of the segments until the header of the response is assembled.
Once the header is identified, the flow distributor 550 can identify a tuple of the packet (i.e. a second tuple, a third tuple, or the first tuple), where the tuple can be any tuple described herein. In some embodiments, the tuple comprises a concatenation or string of the following values extracted from the packet header: the source IP address; the destination IP address; the source port; and the destination port. In other embodiments, the tuple can comprise at least the source IP address and the destination IP address identified by the plurality of segments. Identifying the tuple can further comprise extracting the contents of the tuple (i.e. the source IP address and the destination IP address) from the packet header or the response header. Once the tuple is identified, the flow distributor 550 applies the above-described hash to the identified tuple to generate a second, third or first hash. The result of this hash identifies a core 505 of the multi-core system 545 (i.e. a second core), which can be referred to as the target core 505 or the second core 505. The flow distributor 550, or any other communication module of the multi-core system 545, transmits the packet segments to the target core 505 (step 1106).
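The partial-reassembly-then-hash step can be sketched as follows. The CRC32 stand-in hash, the assumed header length, the `parse_header` helper and the in-order arrival of segments are all assumptions made for the example.

```python
import zlib

NUM_CORES = 4        # assumed
HEADER_LEN = 28      # assumed number of bytes needed to recover the tuple

def core_for_tuple(src_ip, src_port, dst_ip, dst_port):
    key = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % NUM_CORES   # stand-in for the appliance's hash

def route_segments(segments, parse_header):
    """Assemble segments only until the header is available (step 1104),
    then hash the recovered tuple to pick the target core (step 1106)."""
    assembled = b""
    for segment in segments:             # segments assumed to arrive in order
        assembled += segment
        if len(assembled) >= HEADER_LEN:
            hdr = parse_header(assembled)
            return core_for_tuple(hdr["src_ip"], hdr["src_port"],
                                  hdr["dst_ip"], hdr["dst_port"])
    return None                          # header not yet complete

fake_parse = lambda raw: {"src_ip": "203.0.113.10", "src_port": 80,
                          "dst_ip": "198.51.100.7", "dst_port": 51320}
target = route_segments([b"\x00" * 20, b"\x00" * 20], fake_parse)
```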
The packet engine 548 executing on the target core 505 can receive the packet segments and forward them to a segmentation module 650 executing on the target core 505 (step 1108). In some embodiments, the packet engine 548 can store the segments as they are received, in a memory element or cache accessible both by the core 505 that initially received the segments and by the target core 505. When it receives the packet segments, the segmentation module 650 reassembles the packet from the received segments (step 1110). In some embodiments, rather than having the segmentation module 650 reassemble the packet, the packet engine 548 performs a segmentation action on the segments, using the rules of the flow distributor executing on the target core 505 to determine that the segments received by the target core 505 should be steered to the first core 505, i.e. the core that initially received the request. In these embodiments, the segmentation action can be an assemble action instructing the packet engine 548 or segmentation module 650 to reassemble the packet, or a bridge action instructing the packet engine 548 or segmentation module 650 to transmit or steer the packet to the first core 505 or to another core 505 (i.e. a second core 505 or a third core 505). In some embodiments, determining to send the plurality of segments from the target core 505 to the first core 505 can comprise first determining that the first core 505 processed the client request or otherwise established the connection between the client and the server. Steering the plurality of segments from the target core 505 to the first core 505 can further comprise the packet engine 548 on the target core 505 sending a message to the packet engine 548 on the first core 505 instructing it to process the assembled plurality of segments.
When the above-described method 1100 is performed in part by the flow distributor 550, the steps otherwise performed by the flow distributor 550 can instead be performed by the packet engine 548 executing on the first core 505A. In some embodiments, packet segments can be forwarded to a default core dedicated to processing packet segments. Rather than handling packet segments with the flow distributor 550 or RSS module 560, the system can be configured to forward all packet segments to a first core 505A that has, or on which executes an instance of, the segmentation module 650. The segmentation module 650 can reassemble the packet until the relevant portions of the packet can be extracted by the flow distributor instance 550 executing on the default core.
When the packet engine 548 executing on the default core or first core 505A receives a segmented packet, the packet engine 548 can transmit the packet segments to the target core by a core-to-core messaging system or over a communication network within the multi-core system. In some embodiments, transmitting the packet segments (step 1106) can comprise copying the packet segments into a global cache or memory element and sending a message to the target core, or to the packet engine executing on the target core, instructing that packet engine to download the packet segments from the global cache. In other embodiments, the packet segments can be encapsulated within another packet header indicating that the segments should be transmitted to the packet engine 548 of the target core 505; these packet segments can be transmitted to the target packet engine over the internal network of the multi-core system 545.
In other embodiments, the above-described method 1100 can be performed by a flow distributor 550 or RSS module 560 that also executes or includes a segmentation module. That segmentation module can process all packet segments intercepted or received by the flow distributor 550 or RSS module 560.
Fig. 11B illustrates another embodiment of a method 1150 for distributing packet segments to a core 505 of the multi-core system 545. The flow distributor 550 or RSS module 560 receives packet segments (step 1152) and assembles a packet from the segments until the packet header is obtained (step 1154). Once the header is reassembled, the flow distributor 550 or RSS module 560 can extract the following values to create a tuple or string of those values: the source IP address; the destination IP address; the source port; and the destination port. After creating or identifying the tuple of the reassembled header, the hash is applied to the tuple. In most embodiments, the result of the hash identifies a core of the multi-core system 545 (step 1156), which can be referred to as the target core. Once the target core 505 is identified, a segmentation action can be determined (step 1158). If the segmentation action is "assemble" (step 1160), the packet is reassembled from the packet segments (step 1164) and the reassembled packet can be transmitted to the packet engine on the target core (step 1166). When the segmentation action is not "assemble", the packet segments can be steered to the target packet engine executing on the target core 505 (step 1162).
Referring to Fig. 11B in further detail, in one embodiment the method 1150 can be carried out by the packet engine 548 executing on a core 505. In another embodiment, the method 1150 can be carried out by the flow distributor 550, or by an instance of the flow distributor, executing on a core 505. In other embodiments, the method 1150 can be carried out by any flow distribution module or agent that can execute on a core 505. Although Fig. 11B shows the packet being reassembled from the packet segments, in some embodiments the reassembly of packets can be handled by a control core of the multi-core system 545.
The packet engine 548 that carries out at least some of the steps of the method 1150 of Fig. 11B can execute on a particular core 505. In most embodiments, the core 505 is selected in advance by applying the hash to the tuple of the packet segments. In most cases, this tuple comprises at least the client IP address, the destination IP address, the client port and the destination port. In some embodiments, the tuple can be any of the above-described tuples and can include any number of source or destination identifying values. In other embodiments, the client IP address can be a source IP address identifying the machine from which the packet originated; similarly, the client port can be a source port.
In one embodiment, the flow distributor 550 executing on the multi-core system 545 receives packet segments from a computing machine or appliance remote from and outside of the multi-core system 545 (step 1152). The flow distributor 550 can receive the packet segments directly, or, in some embodiments, a communication module can receive and transmit packets or packet segments. In other embodiments, the NIC 552 can receive and transmit packets and packet segments. In some embodiments, receiving packets and packet segments can further comprise de-queuing them from a receive queue on the NIC 552. The receive queue can store packets and packet segments transmitted to the multi-core system 545, and the flow distributor 550 can access it by de-queuing or otherwise obtaining packets and segments on a first-in-first-out basis; another possible access scheme is last-in-first-out.
In some embodiments, the packet engine 548 can receive a client request identifying a first tuple comprising a client internet protocol address, a client port, a server internet protocol address and a server port. In these embodiments, the packet engine 548 can execute on a core 505 selected by the flow distributor 550 based on a hash of the first tuple. The flow distributor 550 can then receive a plurality of segments of a response to the client request received by the packet engine 548 (step 1152), the segments of the response being transmitted by the server in response to receiving the client request forwarded by the packet engine 548 executing on the core 505.
Once the flow distributor 550 has received one or more packet segments (step 1152), the flow distributor 550 can begin reassembling the packet from the segments until the packet header is obtained (step 1154). In some embodiments, the entire packet is reassembled from the segments received by the flow distributor 550; in other embodiments, only the portions of the packet that make up the header are assembled by the flow distributor 550. In still other embodiments, the flow distributor 550 can reassemble the packet from the segments until the flow distributor 550 can extract the following information from the partially assembled packet: the source IP address; the destination IP address; the source port; and the destination port. In many embodiments, this information is stored in the packet header. Thus, when the flow distributor 550 determines that the partially reassembled packet includes at least a portion of the packet header, the flow distributor 550 stops reassembling the packet from the segments. Determining that the partially reassembled packet includes at least a portion of the packet header can comprise assembling a portion of the plurality of segments, and/or assembling that portion of the segments until the header of the response is assembled.
Once the header is identified, the flow distributor 550 can identify a tuple of the packet (i.e. a first tuple, a second tuple, or a third tuple), where the tuple can be any tuple described herein. In some embodiments, the tuple comprises a concatenation or string of the following values extracted from the packet header: the source IP address; the destination IP address; the source port; and the destination port. Once the tuple is identified, the flow distributor 550 applies the above-described hash to the identified tuple to generate a second, third or first hash. The result of this hash identifies a core 505 of the multi-core system 545 (i.e. a second core), which can be referred to as the target core 505 or the second core. The flow distributor 550, or any other communication module of the multi-core system 545, transmits the packet segments to the target core 505 (step 1156).
The flow distributor 550 can then determine a segmentation action associated with the packet segments (step 1158). In some embodiments, the segmentation action is specified by the multi-core system 545. An administrator can configure the multi-core system 545 to "bridge" packet segments to the target core, i.e. to transmit each packet segment to the target core, where reassembly occurs. In other embodiments, an administrator can configure the multi-core system 545 to "assemble" the packet segments into a packet before the packet is transmitted to the target core. In still other embodiments, the segmentation action is identified in the packet header or in metadata associated with each packet. In other embodiments, the choice between "assemble" and "bridge" can be based on any combination of the following: the number of packet segments; the type of data in the packet payload; the size of each packet segment; the size of the packet; the source IP address; the destination IP address; the amount of processing resources available in the multi-core system 545; or any other factor. In embodiments where the flow distributor 550 considers the packet size, the flow distributor 550 can "assemble" the packet when it determines that the packet is too large to be sent segment by segment under the "bridge" action. When the flow distributor 550 considers the amount of available processing resources, the flow distributor 550 can analyze the load on the target core and determine whether the target core has sufficient resources available to assemble the packet. In some embodiments, the choice between "assembling" and "bridging" the packet segments can be based on determining whether the target core has a segmentation module 650: in embodiments where the target core has a segmentation module 650, the packet segments are "bridged"; in embodiments where the target core does not have a segmentation module 650, the packet segments are "assembled".
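A policy combining several of the factors listed above might look like the sketch below. The thresholds and the fields describing the target core are assumptions for illustration; an actual deployment would use whichever criteria the administrator configures.

```python
def choose_segmentation_action(num_segments, packet_size, target_core):
    """Illustrative assemble/bridge decision; not a configured default."""
    if not target_core.get("has_segmentation_module", False):
        return "assemble"        # target cannot reassemble the segments itself
    if packet_size > 65535:
        return "assemble"        # too large to bridge segment by segment
    if target_core.get("free_resources", 0) < num_segments:
        return "assemble"        # target lacks capacity to do the reassembly
    return "bridge"

action = choose_segmentation_action(
    num_segments=6, packet_size=9000,
    target_core={"has_segmentation_module": True, "free_resources": 32})
```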
In some embodiments, when the segmentation action is "assemble" (step 1160), the stream distributor 550, or a segmentation module executing within the stream distributor, reassembles the segments into a data packet (step 1164). After the data packet has been assembled from the segments, the data packet is transmitted to the destination core, where it is received by the Packet engine executing on the destination core (step 1166). In some embodiments, the data packet segments are stored in a segmentation table 655 before the reassembled data packet is transmitted to the destination core.
In some embodiments, when the segmentation action is "bridge" (step 1160), the data packet segments are steered to the destination core, where they are reassembled (step 1162). In some embodiments, the Packet engine executing on the destination core receives the segments and either assembles them or forwards them to a segmentation module 650 that reassembles them. In some embodiments, each data packet segment is stored in a segmentation table 655 before it is transmitted to the destination core. In other embodiments, the Packet engine 548 can store the segments as they are received. The Packet engine can store the segments in a memory element or cache that is accessible both by the core 505 that initially receives the segments and by the destination core 505. In some embodiments, the data packet segments are transmitted or steered to the destination core in the order in which they were received from the client, server or other computing machine or appliance.
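For illustration, the following sketch contrasts the two dispatch paths selected at step 1160: under "assemble" the segments are reassembled first and one whole packet is handed to the Packet engine on the destination core; under "bridge" each segment is recorded (for example in a segmentation table 655) and forwarded in arrival order for the destination core's segmentation module 650 to reassemble. The helper functions are assumed primitives of the multi-core system, not published APIs.

    #include <stddef.h>

    typedef struct segment segment;     /* one received data packet segment */
    typedef struct packet  packet;      /* a fully reassembled data packet  */

    /* Assumed primitives (hypothetical names). */
    extern packet *reassemble_segments(const segment **segs, size_t n);
    extern void    seg_table_store(int core, const segment *s);  /* table 655 */
    extern void    send_packet_to_core(int core, const packet *p);
    extern void    send_segment_to_core(int core, const segment *s);

    typedef enum { SEG_ACTION_BRIDGE, SEG_ACTION_ASSEMBLE } seg_action_t;

    static void dispatch_segments(seg_action_t action, int destination_core,
                                  const segment **segs, size_t n)
    {
        if (action == SEG_ACTION_ASSEMBLE) {
            /* Reassemble in the stream distributor, forward one packet. */
            packet *p = reassemble_segments(segs, n);
            send_packet_to_core(destination_core, p);
        } else {
            /* Bridge: store and forward each segment in the order received. */
            for (size_t i = 0; i < n; i++) {
                seg_table_store(destination_core, segs[i]);
                send_segment_to_core(destination_core, segs[i]);
            }
        }
    }
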
In embodiments where the data packet has a TCP header, the segmentation action is "assemble" if any of the following occurs: the flow hits a PCB; the flow hits a NAT PCB on which the "assemble packets" flag is set; the flow hits a configured service or a Packet engine of a non-UDP type; or the flow is any RNAT flow. If none of these occurs, the segmentation action is "bridge". In embodiments where the data packet has a UDP header, the segmentation action is "assemble" if either of the following occurs: the flow hits a NAT PCB on which the "assemble packets" flag is set; or the flow hits a configured service or a Packet engine of a non-UDP type. If neither of these occurs, the segmentation action is "bridge".
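The rule set above can be summarized as follows; the flags simply mirror the listed conditions and their names are illustrative.

    #include <stdbool.h>

    typedef enum { PROTO_TCP, PROTO_UDP } proto_t;
    typedef enum { RULE_BRIDGE, RULE_ASSEMBLE } rule_action_t;

    typedef struct {
        bool hits_pcb;                 /* flow matches an existing PCB          */
        bool hits_natpcb_with_flag;    /* NAT PCB hit, "assemble packets" set   */
        bool hits_non_udp_service;     /* configured service or non-UDP engine  */
        bool is_rnat;                  /* RNAT traffic                          */
    } flow_match_t;

    static rule_action_t action_for_flow(proto_t proto, const flow_match_t *m)
    {
        if (proto == PROTO_TCP) {
            if (m->hits_pcb || m->hits_natpcb_with_flag ||
                m->hits_non_udp_service || m->is_rnat)
                return RULE_ASSEMBLE;
        } else {                       /* UDP header */
            if (m->hits_natpcb_with_flag || m->hits_non_udp_service)
                return RULE_ASSEMBLE;
        }
        return RULE_BRIDGE;            /* none of the conditions occurred */
    }
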
In some embodiments, the segmentation action can be determined by performing service, RNAT, PCB and NAT PCB lookups. In some embodiments, the service and RNAT lookups can be carried out in any Packet engine. However, the PCB or NAT PCB that manages the connection may not reside in the same Packet engine as the Packet engine that received the segments.
While the method 1150 described above is performed in part by the stream distributor 550, the steps performed by the stream distributor 550 can instead be performed by the Packet engine 548 executing on the first core 505A. In some embodiments, data packet segments can be forwarded to a default core that is dedicated to handling data packet segmentation. Rather than handling data packet segmentation with the stream distributor 550 or the RSS module 560, the system can be configured to forward all data packet segments to a first core 505A that has, or executes an instance of, the segmentation module 650. The Packet engine 548, together with the segmentation module 650, can reassemble the data packet from the segments or steer the segments to the destination core.
When the Packet engine 548 executing on the default core or first core 505A receives a segmented data packet, the Packet engine 548 can transmit the segments or the reassembled data packet to the destination core via a core-to-core messaging system or over a communication network within the multi-core system. In some embodiments, transmitting the segments or the data packet can comprise copying the segments or the data packet to a global cache or memory element, and sending a message to the destination core, or to the Packet engine executing on the destination core, instructing that Packet engine to download the segments or the data packet from the global cache. In other embodiments, the data packet or segments can be encapsulated within an additional packet header indicating that the segments should be transmitted to the Packet engine 548 of the destination core 505. These segments can be transmitted to the destination Packet engine over an internal network within the multi-core system 545.
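A minimal sketch of the two transfer mechanisms described above is shown below: one path copies the data into a global cache and posts a short message telling the destination Packet engine to fetch it, the other encapsulates the data under an additional header naming the destination core and uses the internal network. The cache, messaging and network primitives are assumed and hypothetical, not part of the published description.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>
    #include <stdlib.h>

    /* Assumed core-to-core primitives (hypothetical names). */
    extern void *global_cache_put(const void *data, size_t len); /* returns handle */
    extern void  core_msg_send(int core, const void *msg, size_t len);
    extern void  internal_net_send(int core, const void *frame, size_t len);

    struct fetch_msg { void *handle; size_t len; };   /* "download from cache" */
    struct encap_hdr { uint16_t target_core; uint16_t payload_len; };

    /* Option 1: shared global cache plus a notification message. */
    static void transfer_via_cache(int dest_core, const void *pkt, size_t len)
    {
        struct fetch_msg m = { global_cache_put(pkt, len), len };
        core_msg_send(dest_core, &m, sizeof m);
    }

    /* Option 2: encapsulate under an extra header, send on internal network. */
    static void transfer_via_encapsulation(int dest_core, const void *pkt, size_t len)
    {
        struct encap_hdr h = { (uint16_t)dest_core, (uint16_t)len };
        uint8_t *frame = malloc(sizeof h + len);
        if (!frame) return;
        memcpy(frame, &h, sizeof h);
        memcpy(frame + sizeof h, pkt, len);
        internal_net_send(dest_core, frame, sizeof h + len);
        free(frame);
    }
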
In still other embodiments, the method 1150 described above can be performed by a stream distributor 550 or RSS module 560 that also executes or includes a segmentation module. The segmentation module can process all data packet segments intercepted or received by the stream distributor 550 or the RSS module 560.

Claims (22)

1. A method for providing symmetrical request and response processing while maintaining a client Internet Protocol address and proxying a client port in one Packet engine of a plurality of Packet engines, each of the plurality of Packet engines executing on a core of a plurality of cores of a multi-core system intermediary to a client and a server, the method comprising:
a) receiving, by a Packet engine on a first core of the multi-core system intermediary to the client and the server, a client request from a stream distributor, the client request identifying a first tuple comprising a client Internet Protocol address, a client port, a server Internet Protocol address and a server port, the stream distributor forwarding the request to the first core based on a value generated by applying a first hash to the first tuple, the value identifying the first core;
b) determining, by the Packet engine, to proxy the client port of the request and to maintain the client Internet Protocol address;
c) computing, by the Packet engine, a hash value by applying a second hash to the client Internet Protocol address and a destination Internet Protocol address, the hash value identifying a port allocation table of a plurality of port allocation tables;
d) determining, by the Packet engine, that a hash value obtained by applying a third hash to a second tuple identifies the first core, the second tuple comprising at least a first available port from the identified port allocation table and the client Internet Protocol address; and
e) modifying, by the Packet engine, the client port of the client request to be the first port.
2. The method of claim 1, wherein step (e) further comprises transmitting, by the Packet engine, the modified client request to the server.
3. The method of claim 1, wherein step (d) further comprises determining, by the Packet engine, that a first port of the selected port allocation table is unavailable.
4. The method of claim 3, further comprising:
selecting, by the Packet engine, a second port of the selected port allocation table; and
determining, by the Packet engine, that the second port is available.
5. The method of claim 3, wherein determining that the first port is unavailable further comprises determining that the first port is in use.
6. The method of claim 1, further comprising storing a plurality of port allocation tables on each core of the multi-core system.
7. The method of claim 6, wherein each port allocation table is associated with a proxy Internet Protocol address of the core storing that port allocation table.
8. The method of claim 7, wherein the Packet engine selects a port allocation table based on a hash of a client Internet Protocol address and a destination address of a first data packet.
9. The method of claim 1, further comprising:
receiving, by the stream distributor, a first data packet and a second data packet;
forwarding, by the stream distributor, the first data packet to a first core of the multi-core system based on a hash of a first tuple comprising at least a first client Internet Protocol address and a first destination address of the first data packet; and
forwarding, by the stream distributor, the second data packet to a second core of the multi-core system based on a hash of a second tuple comprising at least a second client Internet Protocol address and a second destination address of the second data packet.
10. The method of claim 2, wherein transmitting further comprises transmitting to a server located at the destination Internet Protocol address.
11. The method of claim 1, further comprising updating the selected port allocation table to mark the first port as unavailable.
12. A system for providing symmetrical request and response processing while maintaining a client Internet Protocol address and proxying a client port in one Packet engine of a plurality of Packet engines, each of the plurality of Packet engines executing on a core of a plurality of cores of a multi-core system intermediary to a client and a server, the system comprising:
a multi-core system intermediary to the client and the server;
a stream distributor that receives a request from the client to the server and selects a first core based on a value identifying the first core, the value generated by applying a hash to a first tuple identified in the client request, the first tuple comprising a client Internet Protocol address, a client port, a server Internet Protocol address and a server port; and
a Packet engine executing on the first core of the multi-core system, the Packet engine:
receiving the client request,
determining to proxy the client port of the request and to maintain the client Internet Protocol address,
computing a hash value by applying a second hash to the client Internet Protocol address and a destination Internet Protocol address, the hash value identifying a port allocation table of a plurality of port allocation tables,
determining that a hash value obtained by applying a third hash to a second tuple identifies the first core, the second tuple comprising at least a first available port from the identified port allocation table and the client Internet Protocol address, and
modifying the client port of the client request to be the first port.
13. The system of claim 12, wherein the Packet engine transmits the modified client request to the server.
14. The system of claim 12, wherein the Packet engine determines that a first port of the selected port allocation table is unavailable.
15. The system of claim 14, wherein the Packet engine:
selects a second port of the selected port allocation table; and
determines that the second port is available.
16. The system of claim 14, wherein the Packet engine determines that the first port is in use.
17. The system of claim 12, wherein each core of the multi-core system stores a plurality of port allocation tables.
18. The system of claim 17, wherein each port allocation table is associated with a proxy Internet Protocol address of the core storing that port allocation table.
19. The system of claim 18, wherein the Packet engine selects a port allocation table based on a hash of a client Internet Protocol address and a destination address of a first data packet.
20. The system of claim 12, wherein the stream distributor:
receives a first data packet and a second data packet;
forwards the first data packet to a first core of the multi-core system based on a hash of a first tuple comprising at least a first client Internet Protocol address and a first destination address of the first data packet; and
forwards the second data packet to a second core of the multi-core system based on a hash of a second tuple comprising at least a second client Internet Protocol address and a second destination address of the second data packet.
21. The system of claim 13, wherein the Packet engine transmits the modified client request to a server located at the destination Internet Protocol address.
22. The system of claim 12, wherein the Packet engine updates the port allocation table to mark the first port as unavailable.
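For illustration only, the sketch below outlines the proxy-port selection recited in claim 1: the client IP is kept, a hash of the client IP and the destination IP selects one of the port allocation tables, and that table is walked for a free port whose hash together with the client IP maps back to the core owning the connection, so that return traffic is steered to the same core. The hash functions, table sizes and port range are placeholders, not the hashes actually used by the appliance.

    #include <stdint.h>
    #include <stdbool.h>

    #define NUM_CORES        8
    #define NUM_TABLES       8
    #define PORTS_PER_TABLE  1024
    #define PORT_BASE        10000

    static bool port_in_use[NUM_TABLES][PORTS_PER_TABLE]; /* port allocation tables */

    static uint32_t mix(uint32_t h, uint32_t v)            /* toy hash step */
    {
        h ^= v; h *= 16777619u; return h;
    }

    /* Second hash of claim 1: selects one of the port allocation tables. */
    static unsigned table_for(uint32_t client_ip, uint32_t dest_ip)
    {
        return mix(mix(2166136261u, client_ip), dest_ip) % NUM_TABLES;
    }

    /* Third hash of claim 1: maps (proxy port, client IP) to a core. */
    static unsigned core_for(uint16_t port, uint32_t client_ip)
    {
        return mix(mix(2166136261u, port), client_ip) % NUM_CORES;
    }

    /* Returns a free proxy port that hashes back to owning_core, or 0. */
    static uint16_t select_proxy_port(unsigned owning_core,
                                      uint32_t client_ip, uint32_t dest_ip)
    {
        unsigned t = table_for(client_ip, dest_ip);
        for (unsigned i = 0; i < PORTS_PER_TABLE; i++) {
            uint16_t port = (uint16_t)(PORT_BASE + t * PORTS_PER_TABLE + i);
            if (port_in_use[t][i])
                continue;                  /* claims 3-5: port unavailable, try next */
            if (core_for(port, client_ip) == owning_core) {
                port_in_use[t][i] = true;  /* claim 11: mark the port unavailable    */
                return port;               /* claim 1(e): rewrite client port to this */
            }
        }
        return 0;                          /* no suitable port found */
    }
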
CN201080036985.3A 2009-06-22 2010-06-03 The system and method for source IP is kept in the multi-core environment of load balance Active CN102483707B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US12/489,165 US8788570B2 (en) 2009-06-22 2009-06-22 Systems and methods for retaining source IP in a load balancing multi-core environment
US12/489165 2009-06-22
PCT/US2010/037221 WO2011005390A2 (en) 2009-06-22 2010-06-03 Systems and methods for retaining source ip in a load balancing mutli-core environment

Publications (2)

Publication Number Publication Date
CN102483707A CN102483707A (en) 2012-05-30
CN102483707B true CN102483707B (en) 2015-08-19

Family

ID=42537639

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201080036985.3A Active CN102483707B (en) 2009-06-22 2010-06-03 The system and method for source IP is kept in the multi-core environment of load balance

Country Status (6)

Country Link
US (2) US8788570B2 (en)
EP (1) EP2446358A1 (en)
CN (1) CN102483707B (en)
CA (1) CA2766286C (en)
IL (1) IL217123A0 (en)
WO (1) WO2011005390A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI816593B (en) * 2022-09-30 2023-09-21 大陸商達發科技(蘇州)有限公司 Method and computer program product and apparatus for load-balance of network processing unit

Families Citing this family (60)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9116734B1 (en) * 2011-01-14 2015-08-25 Dispersive Networks Inc. Dispersive storage area networks
US8990431B2 (en) * 2009-05-05 2015-03-24 Citrix Systems, Inc. Systems and methods for identifying a processor from a plurality of processors to provide symmetrical request and response processing
US8788570B2 (en) * 2009-06-22 2014-07-22 Citrix Systems, Inc. Systems and methods for retaining source IP in a load balancing multi-core environment
US8560604B2 (en) 2009-10-08 2013-10-15 Hola Networks Ltd. System and method for providing faster and more efficient data communication
US8533285B2 (en) * 2010-12-01 2013-09-10 Cisco Technology, Inc. Directing data flows in data centers with clustering services
US8996614B2 (en) * 2011-02-09 2015-03-31 Citrix Systems, Inc. Systems and methods for nTier cache redirection
US8446910B2 (en) * 2011-04-14 2013-05-21 Cisco Technology, Inc. Methods for even hash distribution for port channel with a large number of ports
US8812727B1 (en) * 2011-06-23 2014-08-19 Amazon Technologies, Inc. System and method for distributed load balancing with distributed direct server return
US9264432B1 (en) 2011-09-22 2016-02-16 F5 Networks, Inc. Automatic proxy device configuration
CN103188231A (en) * 2011-12-30 2013-07-03 北京锐安科技有限公司 Multi-core printed circuit board access control list (ACL) rule matching method
US8861401B2 (en) 2012-04-03 2014-10-14 International Business Machines Corporation Layer 2 packet switching without look-up table for ethernet switches
US8902896B2 (en) * 2012-04-16 2014-12-02 International Business Machines Corporation Packet switching without look-up table for ethernet switches
CN102811169B (en) * 2012-07-24 2015-05-27 成都卫士通信息产业股份有限公司 Virtual private network (VPN) implementation method and system for performing multi-core parallel processing by using Hash algorithm
US9178815B2 (en) 2013-03-05 2015-11-03 Intel Corporation NIC flow switching
US9516102B2 (en) * 2013-03-07 2016-12-06 F5 Networks, Inc. Server to client reverse persistence
US9800542B2 (en) * 2013-03-14 2017-10-24 International Business Machines Corporation Identifying network flows under network address translation
US9241044B2 (en) 2013-08-28 2016-01-19 Hola Networks, Ltd. System and method for improving internet communication by using intermediate nodes
US10044612B2 (en) * 2013-11-06 2018-08-07 Citrix Systems, Inc. Systems and methods for port allocation
CN103714511B (en) * 2013-12-17 2017-01-18 华为技术有限公司 GPU-based branch processing method and device
EP3087709A4 (en) * 2013-12-24 2017-03-22 Telefonaktiebolaget LM Ericsson (publ) Methods and apparatus for load balancing in a network
CN104821924B (en) * 2014-01-30 2018-11-27 西门子公司 A kind of processing method of network data packets, device and network processing device
US9424059B1 (en) * 2014-03-12 2016-08-23 Nutanix, Inc. System and methods for implementing quality of service in a networked virtualization environment for storage management
KR101945886B1 (en) * 2014-06-27 2019-02-11 노키아 솔루션스 앤드 네트웍스 오와이 Ultra high-speed mobile network based on layer-2 switching
US9667543B2 (en) 2014-08-08 2017-05-30 Microsoft Technology Licensing, Llc Routing requests with varied protocols to the same endpoint within a cluster
CN104281493A (en) * 2014-09-28 2015-01-14 般固(北京)科技股份有限公司 Method for improving performance of multiprocess programs of application delivery communication platforms
CN104572259A (en) * 2014-10-17 2015-04-29 新浪网技术(中国)有限公司 Method and device for data processing
US9553853B2 (en) * 2014-12-23 2017-01-24 Intel Corporation Techniques for load balancing in a packet distribution system
CN104539528B (en) * 2014-12-31 2017-12-22 迈普通信技术股份有限公司 Multi-core communication equipment and its message interaction method between radius server
US9641458B2 (en) * 2015-02-19 2017-05-02 Accedian Network Inc. Providing efficient routing of an operations, administration and maintenance (OAM) frame received at a port of an ethernet switch
DE102015206196A1 (en) * 2015-04-08 2016-10-13 Robert Bosch Gmbh Management of interfaces in a distributed system
US11057446B2 (en) 2015-05-14 2021-07-06 Bright Data Ltd. System and method for streaming content from multiple servers
US9998567B2 (en) 2015-08-31 2018-06-12 Keyssa Systems, Inc. Contactless communication interface systems and methods
US9954817B2 (en) * 2015-10-31 2018-04-24 Nicira, Inc. Software receive side scaling for packet re-dispatching
US10057168B2 (en) * 2015-10-31 2018-08-21 Nicira, Inc. Receive side scaling for overlay flow dispatching
US9948559B2 (en) * 2015-10-31 2018-04-17 Nicira, Inc. Software receive side scaling for overlay flow re-dispatching
US20170318082A1 (en) * 2016-04-29 2017-11-02 Qualcomm Incorporated Method and system for providing efficient receive network traffic distribution that balances the load in multi-core processor systems
CN105915462B (en) * 2016-06-03 2018-08-31 中国航天科技集团公司第九研究院第七七一研究所 A kind of symmetry RSS circuits towards TCP sessions
CN108377671B (en) * 2016-11-28 2020-11-20 华为技术有限公司 Method and computer equipment for processing message
US10873565B2 (en) * 2016-12-22 2020-12-22 Nicira, Inc. Micro-segmentation of virtual computing elements
US10530747B2 (en) 2017-01-13 2020-01-07 Citrix Systems, Inc. Systems and methods to run user space network stack inside docker container while bypassing container Linux network stack
US10681189B2 (en) 2017-05-18 2020-06-09 At&T Intellectual Property I, L.P. Terabit-scale network packet processing via flow-level parallelization
US10545921B2 (en) * 2017-08-07 2020-01-28 Weka.IO Ltd. Metadata control in a load-balanced distributed storage system
EP3767495B1 (en) 2017-08-28 2023-04-19 Bright Data Ltd. Method for improving content fetching by selecting tunnel devices
CN107832149B (en) * 2017-11-01 2020-05-12 西安微电子技术研究所 Receive-side Scaling circuit for multi-core processor dynamic grouping management
CN108111530B (en) * 2017-12-30 2020-11-13 世纪网通成都科技有限公司 Computer readable storage medium for detecting VOIP call state and detection system using the same
DE112018007217B4 (en) * 2018-04-10 2022-03-17 Mitsubishi Electric Corporation Security device with an attack detection device and a security risk state determination device and embedded device therefor
CN110390516B (en) * 2018-04-20 2023-06-06 伊姆西Ip控股有限责任公司 Method, apparatus and computer storage medium for data processing
EP4236263A3 (en) 2019-02-25 2023-09-06 Bright Data Ltd. System and method for url fetching retry mechanism
CN109936635B (en) * 2019-03-12 2021-09-28 北京百度网讯科技有限公司 Load balancing method and device
EP4030318A1 (en) 2019-04-02 2022-07-20 Bright Data Ltd. System and method for managing non-direct url fetching service
CN110034978B (en) * 2019-04-19 2021-03-23 杭州迪普科技股份有限公司 Method and device for testing performance of network equipment
US11351669B2 (en) * 2019-10-29 2022-06-07 Kyndryl, Inc. Robotic management for optimizing a number of robots
US11470010B2 (en) 2020-02-06 2022-10-11 Mellanox Technologies, Ltd. Head-of-queue blocking for multiple lossless queues
US11627185B1 (en) * 2020-09-21 2023-04-11 Amazon Technologies, Inc. Wireless data protocol
CN112929460A (en) * 2021-01-20 2021-06-08 苏州长风航空电子有限公司 IP address configuration method and configuration device based on Linux system
US11843540B2 (en) * 2021-03-05 2023-12-12 Fastly, Inc. System and method for deterministic hash addressing
US11917041B1 (en) * 2021-06-15 2024-02-27 Amazon Technologies, Inc. Symmetric communication for asymmetric environments
CN113783973B (en) * 2021-08-31 2023-09-15 上海弘积信息科技有限公司 Implementation method for NAT port allocation lock-free data flow under multi-core
CN113992758B (en) * 2021-12-27 2022-04-19 杭州金线连科技有限公司 Dynamic scheduling method and device for system data resources and electronic equipment
CN114422445B (en) * 2022-02-24 2023-05-30 成都北中网芯科技有限公司 Method for realizing load balancing and disordered recombination

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6070191A (en) * 1997-10-17 2000-05-30 Lucent Technologies Inc. Data distribution techniques for load-balanced fault-tolerant web access
US6748414B1 (en) * 1999-11-15 2004-06-08 International Business Machines Corporation Method and apparatus for the load balancing of non-identical servers in a network environment
EP1212680B1 (en) * 1999-08-13 2007-07-04 Sun Microsystems, Inc. Graceful distribution in application server load balancing
CN101256515A (en) * 2008-03-11 2008-09-03 浙江大学 Method for implementing load equalization of multicore processor operating system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7102996B1 (en) * 2001-05-24 2006-09-05 F5 Networks, Inc. Method and system for scaling network traffic managers
US8464265B2 (en) * 2006-04-03 2013-06-11 Secure64 Software Method and system for reallocating computational resources using resource reallocation enabling information
US8332925B2 (en) * 2006-08-08 2012-12-11 A10 Networks, Inc. System and method for distributed multi-processing security gateway
US8661160B2 (en) * 2006-08-30 2014-02-25 Intel Corporation Bidirectional receive side scaling
US8339991B2 (en) * 2007-03-01 2012-12-25 Meraki, Inc. Node self-configuration and operation in a wireless network
US8553537B2 (en) * 2007-11-09 2013-10-08 International Business Machines Corporation Session-less load balancing of client traffic across servers in a server group
US8990431B2 (en) * 2009-05-05 2015-03-24 Citrix Systems, Inc. Systems and methods for identifying a processor from a plurality of processors to provide symmetrical request and response processing
US8788570B2 (en) * 2009-06-22 2014-07-22 Citrix Systems, Inc. Systems and methods for retaining source IP in a load balancing multi-core environment

Also Published As

Publication number Publication date
CA2766286A1 (en) 2011-01-13
US20140321469A1 (en) 2014-10-30
IL217123A0 (en) 2012-02-29
US9756151B2 (en) 2017-09-05
CN102483707A (en) 2012-05-30
US20100322076A1 (en) 2010-12-23
EP2446358A1 (en) 2012-05-02
US8788570B2 (en) 2014-07-22
WO2011005390A2 (en) 2011-01-13
CA2766286C (en) 2019-01-15

Similar Documents

Publication Publication Date Title
CN102483707B (en) The system and method for source IP is kept in the multi-core environment of load balance
CN102549984B (en) Systems and methods for packet steering in a multi-core architecture
CN102460394B (en) Systems and methods for a distributed hash table in a multi-core system
CN102783090B (en) Systems and methods for object rate limiting in a multi-core system
CN102771085B (en) Systems and methods for maintaining transparent end to end cache redirection
CN102771089B (en) For the system and method by virtual server mixed mode process IP v6 and IPv4 flow
CN102907055B (en) Systems and methods for link load balancing on multi-core device
CN102771083B (en) Systems and methods for mixed mode of IPv6 and IPv4 DNS of global server load balancing
CN102714657B (en) Systems and methods for client IP address insertion via TCP options
CN102763374B (en) Systems and methods for policy based integration to horizontally deployed wan optimization appliances
CN102763375B (en) Systems and methods for gslb spillover
CN102714618B (en) Systems and methods for platform rate limiting
CN103202002B (en) For the system and method from load balance access gateway
CN102217273B (en) Systems and methods for application fluency policies
CN103583022B (en) For via the congested system and method for NIC aware application process NIC
CN102771084B (en) For managing the system and method for static proximity in multinuclear GSLB equipment
CN103155496B (en) For the system and method for the connection that management server in multiple nucleus system is initiated
CN103392314B (en) For the system and method that the N nuclear statistics information that can expand is polymerized
CN103155520B (en) The system and method for half virtualization driver in multi-core virtual Packet engine device
CN104365067A (en) Systems and methods for reassembly of packets distributed across a cluster
CN104364761A (en) Systems and methods for forwarding traffic in a cluster network
CN104380693A (en) Systems and methods for dynamic routing in a cluster
CN103392320A (en) Systems and methods for multi-level tagging of encrypted items for additional security and efficient encrypted item determination
CN103765851A (en) Systems and methods for transparent layer 2 redirection to any service
CN103154895A (en) Systems and methods for cookie proxy management across cores in a multi-core system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant