WO2000021231A2 - Method and system for data communication - Google Patents

Method and system for data communication

Info

Publication number
WO2000021231A2
Authority
WO
WIPO (PCT)
Prior art keywords
host
data packets
network entity
receiving
sending
Prior art date
Application number
PCT/SE1999/001479
Other languages
French (fr)
Other versions
WO2000021231A3 (en)
Inventor
Jan Kullander
Christofer Kanljung
Anders Svensson
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ)
Priority to EP99946532A, published as EP1119954A2
Priority to JP2000575248A, published as JP2002527935A
Priority to AU58929/99A, published as AU751285B2
Priority to CA002346715A, published as CA2346715A1
Publication of WO2000021231A2
Publication of WO2000021231A3

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/22Traffic shaping
    • H04L47/225Determination of shaping rate, e.g. using a moving window
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/26Flow control; Congestion control using explicit feedback to the source, e.g. choke packets
    • H04L47/267Flow control; Congestion control using explicit feedback to the source, e.g. choke packets sent by the destination endpoint
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/16Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/16Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H04L69/163In-band adaptation of TCP data exchange; In-band control procedures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W28/00Network traffic management; Network resource management
    • H04W28/02Traffic management, e.g. flow control or congestion control
    • H04W28/0273Traffic management, e.g. flow control or congestion control adapting protocols for flow control or congestion control to wireless environment, e.g. adapting transmission control protocol [TCP]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W80/00Wireless network protocols or protocol adaptations to wireless operation
    • H04W80/06Transport layer protocols, e.g. TCP [Transport Control Protocol] over wireless

Definitions

  • the present invention relates generally to data communication between computers. More particularly the invention relates to the problem of efficiently communicating data packets over a network including both a sub-network, which is substantially error-immune and has a low latency, and an access link, which is comparatively error-prone and has a substantial latency.
  • TCP Transmission Control Protocol
  • This protocol is optimised for wired connections, which have a high transmission quality and a low latency.
  • TCP is not very efficient for transmitting data over links that are error-prone, have long delays and/or a high latency.
  • Wireless links constitute typical examples of such non-optimal links.
  • Mobile communication typically imposes a wireless link.
  • two computers, of which at least one is mobile, cannot communicate efficiently via a standard TCP connection, since the transmission algorithms in TCP postulate a much higher link quality than a wireless connection normally can offer. Therefore, the comparatively poor quality of the wireless connection in most cases severely degrades the performance of the connection. This is particularly true if the wireless link is a high-speed link with a considerable latency.
  • the protocol prescribes that a piece of information indicating the status of received data packets must be fed back from the receiving host to the sending host.
  • a simple positive acknowledgement protocol awaits an acknowledgement for each particular data packet before sending another data packet.
  • An example of a more efficient protocol is the so-called sliding window protocol.
  • Figure 1 illustrates a known method of using a sliding window protocol to make it possible for a sending host to transmit multiple data packets DP1 - DP3 before obtaining feedback information Ack1 - Ack3 on the status of the transmitted data packets from a receiving host.
  • the sending host is represented to the left and the receiving host to the right.
  • a time scale is symbolised vertically, with increasing time downwards.
  • a congestion_window has the size W of three data packets. This means that three data packets DP1 - DP3 may leave the sending host before a first status message Ack1 from the receiving host arrives. Once such a message Ack1 has arrived, the congestion_window slides one data packet and a fourth data packet DP4 may be sent.
  • each data packet is associated with a retransmission timer.
  • the retransmission timer is started when a data packet leaves the sending host. At the expiry of the retransmission timer the sending host retransmits the data packet.
  • the protocol may also be defined such that the receiving host returns a negative acknowledgement message for a data packet, if the data packet has been received, however incorrectly.
  • the data packet is, of course, retransmitted also when such a negative acknowledgement message arrives at the sending host.
  • a data packet will be retransmitted either at reception of a negative acknowledgement message or when the retransmission timer expires, whichever happens first.
  • the procedure is then repeated for all data packets in the message until the sending host has obtained positive acknowledgement messages for each data packet in the message.
  • the size W of the congestion_window thus corresponds to the number of data packets that may be sent out unacknowledged into the network between the sending and the receiving host.
  • the sending host can thus transmit data packets as fast as the network can transfer them. Consequently, a well-tuned sliding window protocol keeps the network completely saturated with data packets and obtains substantially higher throughput than a simple positive acknowledgement protocol (which is also known under the name stop-and-wait protocol).
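The sliding window principle described above can be sketched in a few lines of Python. This is a simplified illustration, not the patent's implementation; `transmit` and `wait_for_ack` are hypothetical channel callbacks, and no retransmission timers are modelled:

```python
from collections import deque

def sliding_window_send(packets, window_size, transmit, wait_for_ack):
    """Send every packet, keeping at most window_size of them unacknowledged."""
    in_flight = deque()   # packets sent but not yet acknowledged
    next_index = 0
    while next_index < len(packets) or in_flight:
        # Fill the window: keep sending until window_size packets are in flight.
        while next_index < len(packets) and len(in_flight) < window_size:
            transmit(packets[next_index])
            in_flight.append(packets[next_index])
            next_index += 1
        # Block for one acknowledgement; the window then slides by one packet.
        acked = wait_for_ack()
        if acked in in_flight:
            in_flight.remove(acked)
```

With `window_size=3` this reproduces the behaviour of figure 1: DP1 - DP3 leave before Ack1 arrives, and each acknowledgement admits one more packet.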
  • Slow Start the congestion_window is gradually increased as described above.
  • Slow Start is applied whenever a new connection is set up or when packet loss has been detected by the retransmission timer, e.g. after a period of congestion.
  • the congestion_window is initially set to one data packet.
  • the congestion_window is then increased to two data packets at reception of the first acknowledgement message.
  • the sending host then sends two more data packets and awaits the corresponding acknowledgement messages. When those arrive they each increase the congestion_window by one, so that four data packets may be sent unacknowledged, and so on.
  • Slow Start may sometimes be a misnomer, because under ideal conditions, the transmission rate is ramped up exponentially.
  • the receiving host always has a window limit, a so-called advertised_window, which ultimately restricts the transmission rate. Once this limit has been reached the congestion_window cannot be increased any further.
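Under ideal conditions the ramp-up described above works out as follows. A minimal Python sketch of the per-round-trip window growth, assuming every packet sent in a round is acknowledged and the advertised_window is the only cap (function name is illustrative):

```python
def slow_start_window_per_round(advertised_window, rounds):
    """Return the congestion_window size at the start of each round trip."""
    congestion_window = 1   # Slow Start begins with a single data packet
    sizes = []
    for _ in range(rounds):
        sizes.append(congestion_window)
        # Every acknowledged packet enlarges the window by one, so a full
        # round of acknowledgements doubles it, up to the advertised_window.
        congestion_window = min(congestion_window * 2, advertised_window)
    return sizes
```

For an advertised_window of 16 packets the sequence is 1, 2, 4, 8, 16, 16, …: exponential growth until the receiver's limit is reached, which is why "Slow Start" can be a misnomer.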
  • TCP includes one additional restriction.
  • a third window is applied for this purpose.
  • the size of the allowed_window is determined by the following expression:
  • allowed_window = min(advertised_window, congestion_window).
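Expressed as code, the rule is a one-liner (a trivial sketch; the parameter names mirror the window variables above):

```python
def allowed_window(advertised_window, congestion_window):
    # The sender may have at most this many unacknowledged data packets
    # in flight at any time: whichever limit is currently smaller wins.
    return min(advertised_window, congestion_window)
```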
  • the congestion_window in its turn is set according to the following strategy.
  • the congestion_window and the advertised_window are of equal size.
  • the congestion_window is reduced by half (down to a minimum of one data packet) upon loss of data packets. For those data packets that remain in the allowed_window, the retransmission timer is backed off exponentially.
  • the congestion_window will be increased until a packet is lost due to congestion.
  • the congestion_window will then be reduced radically to decrease the total load on the network. After that, the congestion_window will once more be increased until a data packet loss occurs and so on. In this case, a steady state will never be reached.
  • Congestion Avoidance is a second algorithm included in TCP, which is applied after Slow Start .
  • the congestion_window is increased until either (i) a steady state is reached or (ii) a data packet is lost.
  • the communicating hosts may be informed of the loss of a data packet in one of two alternative ways: either because a retransmission timer expires or because a third algorithm called Fast Retransmit is activated. This algorithm will be described after discussing the different methods applied upon data packet loss detection. If a data packet loss is discovered through expiration of a retransmission timer, the congestion_window is immediately reduced to one data packet.
  • the congestion_window is thereafter increased under the Slow Start algorithm until it reaches one half of the size it had before the retransmission timer expired. Then, the Congestion Avoidance algorithm is activated. During Congestion Avoidance the congestion_window is increased by one data packet only when all the packets in one window have been positively acknowledged.
  • the congestion_window is instantaneously decreased to half the size it had before the data packet was lost.
  • the Congestion Avoidance algorithm is then activated and applied as described above.
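The two loss reactions described above can be condensed into a small sketch (a simplified illustration counting in whole packets; not the patent's code):

```python
def on_loss(congestion_window, detected_by_timer):
    """Return (new_window, slow_start_threshold) after a detected loss."""
    threshold = max(congestion_window // 2, 1)
    if detected_by_timer:
        # Retransmission timer expired: collapse to one data packet and
        # Slow Start back up to half the old window size, after which
        # Congestion Avoidance takes over.
        return 1, threshold
    # Loss detected via duplicate acknowledgements (Fast Retransmit):
    # halve the window and continue directly in Congestion Avoidance.
    return threshold, threshold
```

The threshold marks where the exponential Slow Start growth hands over to the linear, one-packet-per-window growth of Congestion Avoidance.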
  • the Fast Retransmit algorithm will be illustrated by means of an example.
  • a sending host transmits 10 data packets to a receiving host. All these data packets arrive correctly.
  • the receiving host returns a positive acknowledgement message for data packet number 10 to the sending host.
  • This message indicates to the sending host that all 10 data packets have been received correctly.
  • the sending host transmits data packet number 11. This data packet is however lost somewhere in the network. Later, the sending host transmits data packet number 12, which reaches the receiving host correctly. Since the positive acknowledgement messages are cumulative, the receiving host cannot now return a positive acknowledgement message for data packet number 12. Instead, another positive acknowledgement message for data packet number 10 is sent.
  • the sending host transmits data packet number 13. This data packet also reaches the receiving host correctly.
  • the receiving host feeds back yet another positive acknowledgement message for data packet number 10.
  • When the sending host thus has received a third positive acknowledgement message for the 10th data packet, this is interpreted as a loss of data packet number 11.
  • This data packet is therefore retransmitted and if it reaches the receiving host correctly, a positive acknowledgement message for data packet number 13 will be returned.
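The duplicate-acknowledgement rule from this example can be sketched as follows (an illustrative simplification: the function counts repeated acknowledgements for the same packet number, and the threshold of three matches the example above):

```python
def fast_retransmit_check(ack_numbers, duplicate_threshold=3):
    """Return the packet number to retransmit, or None.

    Per the example in the text, receiving a third acknowledgement for
    the same packet number n is interpreted as loss of packet n + 1.
    """
    counts = {}
    for ack in ack_numbers:
        counts[ack] = counts.get(ack, 0) + 1
        if counts[ack] == duplicate_threshold:
            return ack + 1
    return None
```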
  • a general and more detailed description of the Fast Retransmit algorithm can be found in W. Stevens, "TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms", Internet RFC 2001, Network Working Group, NOAO, January 1997.
  • FACK Forward Acknowledgement
  • Snoop is a protocol that improves TCP in wireless networks.
  • the protocol modifies network-layer software mainly at a base station, while preserving the end-to-end TCP semantics.
  • the general idea of the protocol is to cache data packets at the base station and perform local retransmissions across the wireless link.
  • S. Biaz et al "TCP over Wireless Networks Using Multiple Acknowledgements", Department of Computer Science at Texas A&M University, Technical Report 97-001, January 1997.
  • Unnecessary retransmissions are here avoided in the network by feeding back a partial acknowledgement for a data packet that has reached the base station, if it experiences difficulties on the wireless link.
  • the base station is responsible for retransmissions on the wireless link, while it delays timeout of the retransmission timer via the partial acknowledgement.
  • the Slow Start, Congestion Avoidance, Fast Retransmit and Fast Recovery algorithms are all applied in most modern implementations of TCP.
  • the FACK and Snoop algorithms are still not very frequently used.
  • the patent document EP, A2, 0 695 053 discloses an asymmetric protocol for wireless data communications with mobile terminals, according to which the terminals only transmit acknowledgement messages and requests for retransmission upon inquiry or when all data packets within a data block have been received.
  • base stations store channel information for the wireless links and status information of received and transmitted data packets.
  • a base station may also combine acknowledgements for multiple data packets into a single acknowledgement code, in order to reduce the power consumption in the mobile terminals.
  • wireless connections cause longer round-trip delays than wired connections.
  • the transmission rate increase for a wireless connection must, according to the prior art transport protocols, be much slower than for a corresponding wired connection. This is particularly true for wireless connections where the bandwidth-delay product is comparatively high.
  • the prior art protocols always seek to increase the throughput of a connection as far as the interconnecting networks permit.
  • the possible transmission rate increase for a particular connection is inversely correlated to the round-trip delay for the connection.
  • the shorter the round-trip delay, the faster the increase rate.
  • the combination of characteristics typical for a wireless connection of high bandwidth-delay product and long round-trip delay consequently also constitutes a problem.
  • the present invention relates generally to data communication between host computers, and more particularly to the problems discussed above.
  • the means of solving these problems according to the present invention are summarised below.
  • Another object of the invention is to minimise the influence of data packet losses in relatively error-prone access links, in a substantially less error-prone network.
  • One further object of the present invention is to increase the efficiency of the less error-prone links of a data communication system, and thereby admit transmission of a larger amount of data through the total system.
  • the proposed method for communicating data packets over a packet switched network via at least one wireless access link includes the following assumptions and steps.
  • the packet switched network is presumed to offer a connectionless delivery of data packets.
  • the protocol used by the sending host and the receiving host is a reliable sliding window transport protocol, through which data packets, whenever necessary, may be retransmitted from the sending to the receiving host.
  • the communicating hosts also take measures to protect the packet switched network from congestion.
  • the receiving host generates status messages indicating the condition of received data packets to the sending host. In response to the status messages the sending host takes appropriate data flow control measures.
  • a buffering network entity interfaces both the wireless access link and the packet switched network. During communication of data packets the buffering network entity performs the following steps. First, receiving data packets from the sending host.
  • a method of communicating data packets between two hosts according to the invention is hereby characterised by what is apparent from claim 1.
  • a proposed system includes a buffering network entity, which interfaces both a wireless access link and a packet switched network and thus directly or indirectly brings a sending host in contact with a receiving host.
  • the sending and the receiving hosts are assumed to operate according to a reliable sliding window transport protocol through which lost or erroneously received data packets may be retransmitted from the sending host to the receiving host.
  • the packet switched network is further assumed to offer a connectionless delivery of data packets.
  • the buffering network entity includes a receiving means for receiving data packets. This means also generates and returns to the sending host a first status message, which (i) indicates whether a particular data packet must be retransmitted or not and (ii) indicates whether a data packet has been lost or erroneously received in the packet switched network.
  • the buffering network entity also includes a means for storing correctly received data packets and a means for retrieving the stored data packets and transmitting them to the receiving host. Moreover, the buffering network entity includes a processing means for receiving the first status message from the receiving means and for receiving a second status message from the receiving host. The processing means generates a third status message in response to the first and second status messages and returns the third status message to the sending host.
  • the stored data packets are retransmitted to the receiving host whenever that proves to be necessary. Typically, such retransmission is initiated after the expiry of a retransmission timer or at reception of an explicit or implicit notification of loss.
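The behaviour of the buffering network entity summarised above can be sketched as follows. All class, method and message names are illustrative assumptions, not taken from the patent; the two callbacks stand for the interfaces toward the packet switched network and the access link:

```python
class BufferingNetworkEntity:
    """Sketch of the IWU: acknowledge, store, forward, retransmit locally."""

    def __init__(self, send_to_host, send_to_receiver):
        self.store = {}                           # correctly received packets
        self.send_to_host = send_to_host          # toward the sending host
        self.send_to_receiver = send_to_receiver  # toward the receiving host

    def on_packet_from_sender(self, seq, payload, received_ok):
        # First status message: tells the sender whether the packet was lost
        # or degenerated in the packet switched network (retransmit needed).
        self.send_to_host(("status_network", seq, received_ok))
        if received_ok:
            self.store[seq] = payload            # cache for local retransmission
            self.send_to_receiver(seq, payload)  # forward over the access link

    def on_status_from_receiver(self, seq, received_ok):
        # Second status message arrives from the receiving host.
        if received_ok:
            self.store.pop(seq, None)  # delivered end to end; drop the copy
            # Third status message: confirms delivery over both legs.
            self.send_to_host(("status_end_to_end", seq, True))
        else:
            # Lost on the access link: retransmit locally, so the sending
            # host does not mistake this for network congestion.
            self.send_to_receiver(seq, self.store[seq])
```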
  • the present invention thus prevents congestion control algorithms from being activated in a substantially error-immune network when data packets are lost, for reasons other than congestion, in the comparatively error-prone access links connected to it. This, of course, increases the efficiency not only in the substantially error-immune network per se, but also in systems which include both substantially error-immune links and comparatively error-prone access links.
  • Figure 1 illustrates the known method of using a sliding window protocol being described above
  • Figure 2 illustrates the general method according to the invention by means of a sequence diagram
  • Figure 3 depicts a block diagram over a proposed system
  • Figures 4a-d illustrate embodiments of the proposed method for communicating data packets and generating status information
  • Figure 5 shows a flow diagram over an embodiment of the proposed method being performed by a buffering network entity
  • Figure 6 depicts a block diagram over a system according to the invention.
  • a sequence diagram in figure 2 gives a general illustration of the method according to the invention.
  • a sending host is here represented to the left in the diagram and a receiving host is represented to the right.
  • the part of the connection between the sending host and a buffering network entity IWU is referred to as a first leg A and the part of the connection between the buffering network entity and the receiving host is identified as a second leg B.
  • a time scale is symbolised vertically, with increasing time downwards. Data packets DPn up to a number n are assumed to have reached the receiving host correctly.
  • the receiving host therefore returns a positive acknowledgement message Ack n indicating this to the buffering network entity IWU.
  • the buffering network entity IWU receives the positive acknowledgement message Ack n.
  • a data packet DPm with number m arrives correctly at the buffering network entity IWU a few moments later.
  • the buffering network entity IWU subsequently feeds back to the sending host a first status information message S(An, Bn) indicating that the receiving host has received all data packets DPn up to number n correctly, i.e. no errors or losses have occurred for any of those n data packets, neither in the first leg A nor in the second leg B.
  • the buffering network entity IWU may also, simultaneously with this or at any other time after reception of the data packet DPm, send back a second status information message Ack m, which indicates to the sending host that all data packets DPm up to number m have reached the buffering network entity IWU successfully.
  • the first and second data packet status information messages S(An, Bn); Ack m are preferably effectuated according to a selective acknowledgement algorithm, such as SACK.
  • SACK selective acknowledgement algorithm
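The two cumulative indications fed back to the sending host might be sketched like this (the dictionary message shapes are assumptions for illustration; a real implementation would encode them as selective acknowledgement options):

```python
def status_messages(n, m):
    """Return the first and second status messages fed back to the sender.

    n: highest packet number confirmed delivered end to end (legs A and B).
    m: highest packet number that has reached the IWU (leg A only).
    """
    assert m >= n, "packets reach the IWU before they reach the receiver"
    first = {"type": "S(An, Bn)", "delivered_up_to": n}   # both legs OK up to n
    second = {"type": "Ack", "reached_iwu_up_to": m}      # leg A OK up to m
    return first, second
```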
  • the buffering network entity IWU feeds back data packet status information messages indicating this fact to the sending host.
  • the buffering network entity IWU could already have fed back a positive acknowledgement message Ack+, through which correct reception of the data packet at the buffering network entity IWU was announced.
  • a block diagram over a proposed system is depicted in figure 3.
  • a first mobile host 305 is here connected to a first base station 315 via a first access link 310.
  • the access link 310 is typically constituted by one or more wireless radio links in a cellular system. However, it may be an arbitrary kind of connection, which is suitable for the specific application.
  • the access link 310 may, for instance, be a satellite link, an optical link, a sonic link or a hydrophonic link.
  • the first base station 315 is further connected to a first buffering network entity 320, generally termed IWU (InterWorking Unit) .
  • IWU InterWorking Unit
  • the first base station 315 can either be separated from the first buffering network entity 320 (as shown in figure 3) or be co-located with it, whichever is technically and/or economically the most appropriate.
  • the first buffering network entity 320 also interfaces a packet switched network 325.
  • the packet switched network 325 is presupposed to offer a connectionless delivery of data packets on a best-effort basis. This means briefly, that every data packet that is technically feasible to deliver will be delivered as soon as possible.
  • the Internet is a well-known example where many networks together provide a connectionless, best-effort datagram delivery. Further details as to the definition of the best-effort datagram can be found in the Internet RFC 1504.
  • At least one fixed host 330, at least one second buffering network entity 335 and a third base station 365 may be connected to the packet switched network 325.
  • the second buffering network entity 335 interfaces a second base station 340 and a second mobile host 350 via a second access link 345 in a manner corresponding to what has been described in connection with the first buffering network entity 320 above.
  • the third base station 365, which communicates with a third mobile host 355 over a third access link 360, may either be directly connected to the packet switched network 325 or be connected via a unit which is not a buffering network entity.
  • a sending host may be an arbitrary host 305, 330, 350; 355 and the receiving host(s) may be one or more of the other hosts 305, 330, 350; 355.
  • the reliable sliding window transport protocol used by the sending and the receiving hosts is of a TCP type (specified in the Internet RFC 793) or of a type specified in the standard document ISO 8073. Yet, any alternative sliding window transport protocol is naturally workable.
  • the sending host 305, 330, 350; 355 is notified of a data packet status for each of its transmitted data packets via a specific status message fed back from the buffering network unit 320; 335.
  • the status message (which e.g. may be TCP acknowledgement) is generated at the buffering network unit being closest to the sending host, i.e. the buffering network unit 320 or 335 respectively.
  • When the first mobile host 305 sends data packets, the first buffering network unit 320 generates the status information.
  • When the fixed host 330 sends data packets to one of the mobile hosts 305 or 350, either the first (320) or the second (335) buffering network unit generates the status information, depending on which host 305 or 350 is the receiving host.
  • If the fixed host 330 should send data packets to the third mobile host 355, no such status information would be generated.
  • When the host 350 sends data packets, the buffering network unit 335 generates the status information. If the mobile host 355 sends data packets, the status information in question will only be generated if the receiving host is connected to the packet switched network 325 via a buffering network unit such as 320 or 335.
  • the data packet status information may e.g. be communicated to the sending host according to a selective acknowledgement algorithm (SACK) or according to a so-called TP4 algorithm. Further description of the TP4 algorithm can be found in the standard specification ISO 8073.
  • the fixed host FH is assumed to be the sending host and the second mobile host MH2 is assumed to be the receiving host.
  • Data packets DP thus pass from the fixed host FH through the packet switched network to the second buffering network entity IWU2.
  • This part of the connection will be referred to as a first leg B.
  • the second buffering network entity IWU2 then forwards the data packets DP to the second mobile host MH2 via the second base station and the second access link.
  • This part of the connection will be referred to as a second leg C.
  • the second buffering network entity IWU2 here functions as the end receiver of the data packets DP from the packet switched network's point-of-view. This means that once a data packet DP has succeeded in reaching the second buffering network entity IWU2 correctly it will not be retransmitted from the fixed host FH.
  • If a data packet DP is lost on the second access link between the second base station and the second mobile host MH2, that data packet DP will be retransmitted from the second buffering network entity IWU2 until the data packet DP has been received correctly at the second mobile host MH2.
  • a first part of the status message S(B, C), here corresponding to the first leg B, indicates if a data packet DP has been lost or degenerated in the packet switched network.
  • a second part of the status message S(B, C), here corresponding to the second leg C, indicates if a data packet DP has been lost or degenerated over the second access link.
  • If the status message indicates a loss in the packet switched network, the data packet rate from the fixed host FH will be reduced via at least one data flow control algorithm, otherwise not.
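The leg-dependent reaction described above can be sketched as follows (a simplified illustration; the halving mirrors the Congestion Avoidance behaviour described earlier, and the function name is an assumption):

```python
def react_to_status(congestion_window, lost_in_network, lost_on_access_link):
    """Return the sender's new congestion_window after a status message."""
    if lost_in_network:
        # Loss in the packet switched network (leg B): treat as congestion
        # and reduce the rate, here by halving the window.
        return max(congestion_window // 2, 1)
    # Loss on the access link (leg C) is repaired locally by the buffering
    # network entity, so the sender's window is deliberately left unchanged,
    # even when lost_on_access_link is True.
    return congestion_window
```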
  • the first mobile host MH1 is the sending host and the fixed host FH is the receiving host.
  • Data packets DP hence leave the first mobile host MH1 over the first access link and pass via the first base station to the first buffering network entity IWU1.
  • This part of the connection will be referred to as a first leg A.
  • the first buffering network entity IWU1 then forwards the data packets DP to the fixed host FH through the packet switched network, which corresponds to a second leg B of the connection.
  • Such degenerated or lost packets DP must naturally be retransmitted from the first mobile host MH1 to the first buffering network entity IWU1.
  • the first buffering network entity IWU1 may also have to retransmit data packets DP to the fixed host FH. This is the case when, for instance, congestion in the packet switched network has caused the retransmission timer to expire.
  • a status message S(A, B) related to each data packet DP is generated in the first buffering network entity IWU1 and returned to the first mobile host MH1.
  • a first part of the status message S(A, B), here corresponding to the second leg B, indicates if a data packet DP has been lost or degenerated in the packet switched network, while a second part, here corresponding to the first leg A, indicates if a data packet DP has been lost or degenerated over the first access link.
  • If a data packet DP has been lost or degenerated in the packet switched network, the data packet rate from the first mobile host MH1 will be reduced via at least one data flow control algorithm. The loss or degradation of a data packet DP over the first access link will, however, not influence the data packet rate from the first mobile host MH1 in this way.
  • the first mobile host MH1 is the sending host and the second mobile host MH2 is the receiving host.
  • the first mobile host MH1 now transmits data packets DP over the first access link, via the first base station, to the first buffering network entity IWU1.
  • This part of the connection will be referred to as a first leg A.
  • the first buffering network entity IWU1 subsequently passes the data packets DP via the packet switched network to the second buffering network entity IWU2.
  • This part of the connection will be referred to as a second leg B.
  • the second buffering network entity IWU2 sends the data packets DP to the second mobile host MH2 through the second base station and the second access link.
  • This last part of the connection constitutes a third leg C.
  • data packets DP may, if necessary, be retransmitted over any of the legs A, B and C respectively.
  • a data packet DP can either be retransmitted from the first mobile host MH1 to the first buffering network entity IWU1, from the first buffering network entity IWU1 to the second buffering network entity IWU2 or from the second buffering network entity IWU2 to the second mobile host MH2. Regardless of what caused the loss or degrading of a particular data packet DP, retransmission of the data packet will always only be performed over the leg A, B; C where the data packet DP was either lost or degenerated.
  • Status messages S(A, B); S(B, C) indicating the status of each communicated data packet DP are generated in both the buffering network entities IWU1; IWU2.
  • the status messages S(A, B); S(B, C) indicate in which leg A, B; C a loss or degrading of a particular data packet DP has occurred. If the status messages S(A, B); S(B, C) announce that a data packet DP has been lost or degenerated in the packet switched network, here leg B, at least one data flow control algorithm will be triggered.
  • the data packet rate from the first mobile host MHl will, as a result of such an algorithm, be decreased.
  • a loss or degradation of a data packet DP in any of the other legs A or C will, on the other hand, not lead to a reduction of the data transmission rate from the first mobile host MHl.
  • The first mobile host MH1 is here the sending host and the third mobile host MH3 is the receiving host.
  • The first mobile host MH1 this time transmits data packets DP over the first access link, via the first base station, to the first buffering network entity IWU1.
  • This part of the connection will be referred to as a first leg A.
  • The first buffering network entity IWU1 then passes the data packets DP via the packet switched network to the third mobile host MH3, via the third base station and the third access link.
  • This part of the connection will be referred to as a second leg B.
  • A retransmission of a lost or degenerated data packet DP can here be carried out either over the first leg A or over the second leg B.
  • A status message S(A, B) indicating the status of each communicated data packet DP is generated in the first buffering network entity IWU1 and returned to the first mobile host MH1.
  • The status message S(A, B) thus indicates whether a certain data packet DP has been lost or degenerated over the first access link, i.e. leg A, or somewhere between the first buffering network entity IWU1 and the third mobile host MH3, i.e. leg B.
  • Only the loss or degeneration of a data packet DP in leg B will trigger data flow control algorithms and thus decrease the data packet rate from the first mobile host MH1.
  • Such loss or degrading is most likely due to poor quality of the third access link between the third base station and the third mobile host MH3, but since this fact is impossible to verify, data flow control algorithms will nevertheless be activated.
  • If a buffering network entity over a certain period of time receives more data packets via one of its interfaces than can be delivered over its other interface, the superfluous data packets will be discarded by the buffering network entity.
  • The buffering network entity will then feed back status messages S(X, Y) to the sending host indicating that the superfluous data packets were lost in the packet switched network. This in turn triggers at least one data flow control algorithm, which directs the sending host to reduce its data packet rate. The rate will thus be gradually reduced until it meets the transmission capacity of the limiting interface at the buffering network entity.
  • Figure 5 illustrates an embodiment of the inventive method being carried out in a buffering network entity, when data packets are transmitted from a sending host to a receiving host via the buffering network entity.
  • The figure illustrates the possible fate of a particular data packet DP or a certain group of data packets DPs when passing through the buffering network entity. It is nevertheless important to bear in mind that, since the reliable sliding window transport protocol allows many data packets to be outstanding in the network between the sending and the receiving host, the buffering network entity will at any given moment be carrying out many of the following steps simultaneously and in parallel. The procedure will thus be in different steps with regard to different data packets DPs.
  • In a first step 500 the buffering network entity receives one or more data packets DP(s) from the sending host.
  • A following step 505 checks whether the data packet(s) DP(s) is (are) correct. If so, the procedure continues to a step 520. Otherwise, a request is made in a step 510 for retransmission of the incorrectly received data packet(s) DP(s).
  • A status message S(X,-) indicating the erroneous reception of the data packet(s) DP(s) is fed back to the sending host in a subsequent step 515. The procedure then returns to the step 505 in order to determine whether the retransmitted data packet(s) DP(s) arrive(s) correctly.
  • The steps 510 and 515 are most efficiently carried out as one joint step, where the status message S(X,-) per se is interpreted as a request for retransmission.
  • The steps 510 and 515 may, of course, also be carried out in reverse order or in parallel. Their relative order is nevertheless irrelevant for the result.
  • If the buffering network entity in the steps 500 and 505 receives data packets DPs in an out-of-sequence order, so that a loss of earlier data packet(s) DP(s) is likely to have occurred, a retransmission of the assumedly lost data packet(s) is requested in the steps 510 and 515.
  • The step 520 checks whether the connection at the buffering network entity's output interface has enough bandwidth BW, i.e. can transport data packets DPs at least as fast as data packets DPs arrive at the buffering network entity's input interface. In case of insufficient bandwidth BW, one or more data packets DP(s) are discarded in a step 525. The procedure then returns to the step 510, where retransmission is requested for the discarded data packet(s). If, on the other hand, it is found in the step 520 that the output interface has sufficient bandwidth BW, the procedure continues to a step 530.
  • In the step 530 a status message S(X,-) indicating correct reception of the data packet(s) DP(s) may be fed back to the sending host. If there is a packet switched network between the sending host and the buffering network entity, a status message S(X,-) regarding the result of the transmission is regularly fed back from the buffering network entity to the sending host. If, however, there is no packet switched network between the sending host and the buffering network entity (but e.g. an access link), the step 530 may be empty. The step 530 namely provides information to the sending host which is necessary to control the data flow from the sending host, and such information need only be communicated if the data packet(s) DP(s) has/have been transmitted over a packet switched network.
  • In a step 535 the correctly received data packet(s) DP(s) is/are stored in the buffering network entity.
  • A following step 540 forwards the data packet(s) DP(s) to the receiving host.
  • A step 545 checks whether a status message relating to the sent data packet(s) DP(s) has been returned from the receiving host. If such a status message reaches the buffering network entity before the expiration of a retransmission timer, a step 555 checks whether the status message indicates correct or incorrect reception of the data packet(s) DP(s).
  • A step 550 determines whether the retransmission timer has expired, and in case the timer is still running the procedure is looped back to the step 545.
  • If, however, no status message has been received when the retransmission timer expires, or if it is found in the step 555 that one or more data packets DP(s) have been received incorrectly, the data packet(s) DP(s) in question are retransmitted in a step 560.
  • A following step 565 returns a status message S(X, Y) to the sending host indicating the fact that retransmission of the data packet(s) DP(s) was necessary. If there is a packet switched network between the buffering network entity and the receiving host, flow control algorithms triggered by the status message S(X, Y) will cause the sending host to reduce its data flow; otherwise the data flow from the sending host will not be affected.
  • The steps 560 and 565 may be carried out in reverse order or in parallel. Their relative order is irrelevant for the effect of the steps.
  • The procedure then returns to the step 545, which checks whether a status message for the retransmitted data packet(s) DP(s) has been received. As soon as a status message indicating correct reception of the data packet(s) has reached the buffering network entity, the procedure continues for this/these data packet(s) DP(s) from the step 545, via the step 555, to a final step 570.
  • This final step 570 feeds back a status message S(X, Y) to the sending host, which indicates that the data packet(s) has/have been received correctly by the receiving host.
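The per-packet procedure of steps 500 through 570 can be condensed into a runnable sketch. The toy entity below is an illustration only: its class and method names are assumptions, a real entity runs many such procedures concurrently under the sliding window, and the bandwidth check of steps 520/525 is omitted for brevity.

```python
# Toy sketch of the buffering network entity's handling of one data packet
# (steps 500-570 above). Names are illustrative assumptions; steps 520/525
# (bandwidth check and discard) are omitted to keep the sketch short.
class ToyEntity:
    def __init__(self, receiver_nack_first=False):
        self.receiver_nack_first = receiver_nack_first
        self.statuses = []   # status messages fed back to the sending host
        self.stored = []     # temporarily stored data packets (step 535)
        self.forwards = 0    # transmissions towards the receiving host

    def receive(self, dp):                 # step 500
        return dp

    def is_correct(self, dp):              # step 505 (always correct in this toy)
        return True

    def send_status(self, dp, ok, end_to_end=False):  # steps 510/515, 530, 565, 570
        self.statuses.append((dp, ok, end_to_end))

    def forward(self, dp):                 # steps 540 and 560
        self.forwards += 1

    def receiver_ack(self, dp):            # steps 545-555 (toy receiving host)
        if self.receiver_nack_first:
            self.receiver_nack_first = False
            return False                   # first attempt fails on leg towards receiver
        return True

def handle_packet(entity, dp):
    dp = entity.receive(dp)                           # step 500
    while not entity.is_correct(dp):                  # step 505
        entity.send_status(dp, ok=False)              # steps 510/515: the status
        dp = entity.receive(dp)                       #   message doubles as a request
    entity.send_status(dp, ok=True)                   # step 530
    entity.stored.append(dp)                          # step 535
    entity.forward(dp)                                # step 540
    while not entity.receiver_ack(dp):                # steps 545-555
        entity.forward(dp)                            # step 560: local retransmission
        entity.send_status(dp, ok=False)              # step 565
    entity.send_status(dp, ok=True, end_to_end=True)  # step 570

e = ToyEntity(receiver_nack_first=True)
handle_packet(e, "DP1")
# One local retransmission was needed; the sending host learns of it
# only through the status messages in e.statuses.
```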
  • FIG. 6 depicts a block diagram over an arrangement according to the invention.
  • A sending host, which may be either fixed or mobile, is here represented by a first general interface 600.
  • A receiving host, which likewise may be either fixed or mobile, is correspondingly represented by a second general interface 650.
  • A buffering network entity IWU interfaces both the first and the second interface 600; 650.
  • The buffering network entity IWU in its turn includes a means 610 for receiving data packets DP from the first interface 600, a storage means 620 for storing data packets DP, a means 630 for transmitting data packets DP over the second interface 650 and a processing means 640 for generating status messages S(X, Y) and controlling the overall operation of the buffering network entity IWU in accordance with the method described in connection with figure 5 above.
  • Apart from receiving data packets DP, the receiving means 610 also determines whether the data packets DP are received correctly and generates, in response to the condition of the received data packets DP, a first status message S(X,-).
  • The receiving means 610 may have to discard data packets DP on instruction from the processing means 640. This happens if the processing means has found that the data rate over the interface 600 exceeds the capacity over the interface 650.
  • The discarded data packets DP are regarded as lost data packets.
  • A first status message S(X,-) indicating such a loss is therefore generated after the data packets have been discarded in the receiving means 610.
  • This first status message S(X,-) is returned to the sending host.
  • The first status message S(X,-) is also forwarded to the processing means 640.
  • The receiving means 610 also generates requests for retransmission of data packets DP whenever that becomes necessary.
  • The status message S(X,-) itself may, of course, be interpreted as a request for retransmission at the sending host.
  • Data packets DP having been received correctly by the means 610 are passed on to the storage means 620 for temporary storage.
  • The transmitting means 630 retrieves data packets DP from the storage means 620 and sends them over the interface 650 to the receiving host.
  • The receiving host returns a second status message, e.g. a positive or negative acknowledgement (Ack+ or Ack-).
  • The processing means 640 receives the second status message and generates a third, combined status message S(X, Y), which is determined from the content of the first status message S(X,-) and the second status message.
  • The third status message S(X, Y) thus gives a total representation of how successfully a certain data packet or a certain group of data packets was passed over the respective communication legs before and after the buffering network entity.
  • The third status message S(X, Y) is transmitted from the processing means 640 back to the sending host over the interface 600.
  • A particular data packet DP that has been temporarily stored in the storage means 620 may be deleted as soon as a second status message Ack+ has been received for the data packet DP, indicating that the data packet has been received correctly by the receiving host.
  • The data packet DP may, of course, also be deleted at any later, and perhaps more suitable, moment.
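How the processing means 640 might combine the first status message S(X,-) with the receiver's acknowledgement into the third status message S(X, Y) can be sketched as follows; the tuple encoding and the function name are assumptions made purely for illustration.

```python
# Sketch: combining the per-leg statuses into the third status message
# S(X, Y). The ("ok"/"lost", "ok"/"lost") encoding is an assumption.
def combine_status(first_leg_ok: bool, second_leg_ok: bool):
    """Return a third status message S(X, Y) describing both legs."""
    return ("ok" if first_leg_ok else "lost",
            "ok" if second_leg_ok else "lost")

# A loss reported by the receiving host shows up only in the Y field,
# so the sending host can tell the two legs apart:
assert combine_status(True, True) == ("ok", "ok")
assert combine_status(True, False) == ("ok", "lost")
```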

Abstract

The present invention relates to a method and a system for communicating data packets over a packet switched network (325) and at least one access link (310, 345). A buffering network entity (320, 335) acts as the end-receiver of the transmitted data packets from the sending host's point-of-view. The buffering network entity (320, 335) feeds back to the sending host a status message, which in case of a lost or erroneously received data packet indicates where this degeneration of data has occurred, i.e. in the packet switched network or over the access link. The sending host regularly checks the returned status messages and data flow algorithms are only triggered if a data packet has been lost or degenerated in the packet switched network (325). The invention thus prevents data flow algorithms such as slow start and congestion avoidance algorithms from being activated in the substantially error-immune packet switched network when data packets are lost in comparatively error-prone access links. This significantly increases the efficiency of the overall system.

Description

METHOD AND SYSTEM FOR DATA COMMUNICATION
FIELD OF INVENTION
The present invention relates generally to data communication between computers. More particularly the invention relates to the problem of efficiently communicating data packets over a network including both a sub-network, which is substantially error-immune and has a low latency, and an access link, which is comparatively error-prone and has a substantial latency.
DESCRIPTION OF THE PRIOR ART
TCP (Transmission Control Protocol) is today the most commonly used transport layer protocol for communicating data over an internet. This protocol is optimised for wired connections, which have a high transmission quality and a low latency. However, TCP is not very efficient for transmitting data over links that are error prone, have long delays and/or a high latency. Wireless links constitute typical examples of such non-optimal links. Mobile communication typically imposes a wireless link. Thus, two computers, of which at least one is mobile, cannot communicate efficiently via a standard TCP connection, since the transmission algorithms in TCP postulate a much higher link quality than what a wireless connection normally can offer. Therefore, the comparatively poor quality of the wireless connection, in most cases, severely degenerates the performance of the connection. This is particularly true if the wireless link is a high-speed link with a considerable latency.
To ensure a safe transmission of data packets from a sending to a receiving host, the protocol prescribes that a piece of information indicating the status of received data packets must be fed back from the receiving host to the sending host. A simple positive acknowledgement protocol awaits an acknowledgement for each particular data packet before sending another data packet. Naturally, such a protocol wastes a substantial amount of network bandwidth while the sending host waits for acknowledgements. An example of a more efficient protocol is the so-called sliding window protocol.
Figure 1 illustrates a known method of using a sliding window protocol to make it possible for a sending host to transmit multiple data packets DP1 - DP3 before obtaining feedback information Ack1 - Ack3 on the status of the transmitted data packets from a receiving host. In figure 1 the sending host is represented to the left and the receiving host to the right. A time scale is symbolised vertically, with increasing time downwards. In this example a congestion_window has the size W of three data packets. This means that three data packets DP1 - DP3 may leave the sending host before a first status message Ack1 from the receiving host arrives. Once such a message Ack1 has come, the congestion_window slides one data packet and a fourth data packet DP4 may be sent.
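The window behaviour of figure 1 can be reproduced with a short runnable sketch. The function name and the event encoding are illustrative assumptions; the sketch assumes a loss-free exchange in which acknowledgements arrive in order.

```python
# Sketch of the sliding window in figure 1: with congestion_window W = 3,
# three data packets may be outstanding, and each acknowledgement slides
# the window so one more packet may be sent. Names are illustrative.
from collections import deque

def sliding_window_send(num_packets: int, window: int):
    """Yield ('send', n) and ('ack', n) events in figure-1 order (no losses)."""
    events = []
    outstanding = deque()
    next_packet = 1
    while next_packet <= num_packets or outstanding:
        # Fill the window: send as many packets as the window allows.
        while len(outstanding) < window and next_packet <= num_packets:
            events.append(("send", next_packet))
            outstanding.append(next_packet)
            next_packet += 1
        # The oldest outstanding packet is acknowledged, sliding the window.
        events.append(("ack", outstanding.popleft()))
    return events

events = sliding_window_send(num_packets=4, window=3)
# DP1..DP3 leave before Ack1 arrives; Ack1 lets DP4 go out.
```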
In order to assure delivery, each data packet is associated with a retransmission timer. The retransmission timer is started when a data packet leaves the sending host. At the expiry of the retransmission timer the sending host retransmits the data packet. The protocol may also be defined such that the receiving host returns a negative acknowledgement message for a data packet, if the data packet has been received, however incorrectly. The data packet is, of course, retransmitted also when such a negative acknowledgement message arrives at the sending host. Thus a data packet will be retransmitted either at reception of a negative acknowledgement message or when the retransmission timer expires, whichever happens first.
The procedure is then repeated for all data packets in the message until the sending host has obtained positive acknowledgement messages for each data packet in the message. The size W of the congestion_window thus corresponds to the number of data packets that may be sent out unacknowledged into the network between the sending and the receiving host.
By gradually increasing the congestion_window, it is possible to eliminate the idle time in the network completely. In the steady state, the sending host can thus transmit data packets as fast as the network can transfer them. Consequently, a well-tuned sliding window protocol keeps the network completely saturated with data packets and obtains substantially higher throughput than a simple positive acknowledgement protocol (which is also known under the name stop-and-wait protocol).
Modern TCP applies four different algorithms for controlling the transmission of data packets over the Internet. According to a first algorithm, termed Slow Start, the congestion_window is gradually increased as described above. Slow Start is applied whenever a new connection is set up or when packet loss has been detected by the retransmission timer, e.g. after a period of congestion. The congestion_window is initially set to one data packet. The congestion_window is then increased to two data packets at reception of the first acknowledgement message. The sending host then sends two more data packets and awaits the corresponding acknowledgement messages. When those arrive they each increase the congestion_window by one, so that four data packets may be sent unacknowledged, and so on. The term Slow Start may sometimes be a misnomer, because under ideal conditions, the transmission rate is ramped up exponentially.
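The Slow Start growth just described can be sketched as follows; the function name is an illustrative assumption, and the sketch models the ideal case where every packet in a round trip is acknowledged.

```python
# Sketch of Slow Start: the congestion_window starts at one data packet
# and each acknowledgement increases it by one, which doubles the window
# every round trip under ideal conditions. The name is an assumption.
def slow_start_windows(round_trips: int):
    """Return the congestion_window size at the start of each round trip."""
    cwnd = 1
    sizes = []
    for _ in range(round_trips):
        sizes.append(cwnd)
        cwnd += cwnd  # one increment per acknowledgement received this round
    return sizes

# Under ideal conditions the growth is exponential: 1, 2, 4, 8, ...
```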
Even if capacity limitations of the network do not stop this exponential increase, the receiving host always has a window limit, a so-called advertised_window, which ultimately restricts the transmission rate. Once this limit has been reached, the congestion_window cannot be increased any further.
To avoid increasing the congestion_window too quickly and thereby causing congestion, TCP includes one additional restriction. A third window, usually referred to as the allowed_window, is applied for this purpose. The size of the allowed_window is determined by the following expression:
allowed_window = min(advertised_window, congestion_window).
The congestion_window in its turn is set according to the following strategy. In steady state, on a non-congested connection, the congestion_window and the advertised_window are of equal size. The congestion_window is reduced by half (down to a minimum of one data packet) upon loss of data packets. For those data packets that remain in the allowed_window, the retransmission time is reduced exponentially.
If the advertised_window does not restrain the packet rate to a level that the network is able to sustain, i.e. if the advertised_window is larger than a window size corresponding to the entire available capacity in the network, the congestion_window will be increased until a packet is lost due to congestion. The congestion_window will then be reduced radically to decrease the total load on the network. After that, the congestion_window will once more be increased until a data packet loss occurs, and so on. In this case, a steady state will never be reached.
Congestion Avoidance is a second algorithm included in TCP, which is applied after Slow Start. Whenever a new connection is set up between two hosts, the congestion_window is increased until either (i) a steady state is reached or (ii) a data packet is lost. The communicating hosts may be informed of the loss of a data packet in one of two alternative ways: either because a retransmission timer expires or because a third algorithm called Fast Retransmit is activated. This algorithm will be described after discussing the different methods applied upon data packet loss detection. If a data packet loss is discovered through expiration of a retransmission timer, the congestion_window is immediately reduced to one data packet. The congestion_window is thereafter increased under the Slow Start algorithm until it reaches one half of the size it had before the retransmission timer expired. Then, the Congestion Avoidance algorithm is activated. During Congestion Avoidance the congestion_window is increased by one data packet only when all the packets in one window have been positively acknowledged.
If the loss of a data packet is discovered through the Fast Retransmit algorithm, the congestion_window is instantaneously decreased to half the size it had before the data packet was lost. The Congestion Avoidance algorithm is then activated and applied as described above.
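The two loss-reaction paths described above can be condensed into one sketch. The variable name `ssthresh` follows common TCP terminology and, like the function name, is an assumption made for this illustration rather than a term used in this document.

```python
# Sketch of the two reactions to data packet loss described above.
# `ssthresh` (the window size at which Slow Start hands over to
# Congestion Avoidance) is common TCP terminology, assumed here.
def on_loss(cwnd: int, detected_by_timer: bool):
    """Return (new_cwnd, ssthresh) after the loss reaction."""
    ssthresh = max(1, cwnd // 2)  # half the pre-loss window
    if detected_by_timer:
        # Retransmission timer expired: restart Slow Start from one packet,
        # growing until ssthresh, then switch to Congestion Avoidance.
        return 1, ssthresh
    # Fast Retransmit detected the loss: halve the window and go
    # straight to Congestion Avoidance.
    return ssthresh, ssthresh
```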
The Fast Retransmit algorithm will be illustrated by means of an example. Suppose a sending host transmits 10 data packets to a receiving host. All these data packets arrive correctly. As a consequence of this, the receiving host returns a positive acknowledgement message for data packet number 10 to the sending host. This message indicates to the sending host that all 10 data packets have been received correctly. The sending host then transmits data packet number 11. This data packet is however lost somewhere in the network. Later, the sending host transmits data packet number 12, which reaches the receiving host correctly. Since the positive acknowledgement messages are cumulative, the receiving host cannot now return a positive acknowledgement message for data packet number 12. Instead, another positive acknowledgement message for data packet number 10 is sent. The sending host then transmits data packet number 13. This data packet also reaches the receiving host correctly. As a response, the receiving host feeds back yet another positive acknowledgement message for data packet number 10. When the sending host thus has received a third positive acknowledgement message for the 10th data packet, this is interpreted as a loss of data packet number 11. This data packet is therefore retransmitted and if it reaches the receiving host correctly, a positive acknowledgement message for data packet number 13 will be returned. A general and more detailed description of the Fast Retransmit algorithm can be found in W. Stevens, "TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms", Internet RFC 2001, Network Working Group, NOAO, January 1997.
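The detection rule in the example above can be sketched as a small runnable function; its name and the return convention are illustrative assumptions.

```python
# Sketch of the Fast Retransmit detection in the example above: a third
# identical positive acknowledgement for the same data packet is
# interpreted as the loss of the following packet. Names are illustrative.
def detect_loss(acks):
    """Return the number of the first packet presumed lost, or None."""
    dup_count = 0
    last_ack = None
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == 2:   # third identical acknowledgement in total
                return ack + 1   # the packet after the acked one is presumed lost
        else:
            last_ack, dup_count = ack, 0
    return None

# Three acks for packet 10 (packets 12 and 13 arrived, 11 did not):
assert detect_loss([10, 10, 10]) == 11
```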
To sum up the three algorithms described above: Slow Start opens up the congestion_window exponentially; Congestion Avoidance opens up the congestion_window linearly; Fast Retransmit is an algorithm for detecting the loss of data packets.
Forward Acknowledgement (FACK) is a fourth control mechanism included in TCP, which is actually a developed version of the Congestion Avoidance and Fast Retransmit algorithms. Through FACK, however, the outstanding data packets in the network may be more accurately controlled. FACK is also less bursty and can recover better from periods of heavy loss. Further details regarding FACK can be found in M. Mathis et al, "Forward Acknowledgement: Refining TCP Congestion Control", Proceedings of ACM Sigcomm '96, Stanford, USA, August 1996.
H. Balakrishnan et al describe a fifth mechanism under TCP, which is called Snoop, in the document "Improving TCP/IP Performance over Wireless Networks", Proceedings of ACM Mobicom '95, November 1995. Snoop is a protocol that improves TCP in wireless networks. The protocol modifies network-layer software mainly at a base station, while preserving the end-to-end TCP semantics. The general idea of the protocol is to cache data packets at the base station and perform local retransmissions across the wireless link. A further adaptation of TCP to wireless connections is presented in S. Biaz et al, "TCP over Wireless Networks Using Multiple Acknowledgements", Department of Computer Science at Texas A&M University, Technical Report 97-001, January 1997. Unnecessary retransmissions are here avoided in the network by feeding back a partial acknowledgement for a data packet that has reached the base station, if it experiences difficulties on the wireless link. The base station is responsible for retransmissions on the wireless link, while it delays timeout of the retransmission timer via the partial acknowledgement.
The Slow Start, Congestion Avoidance, Fast Retransmit and Fast Recovery algorithms are all applied in most modern implementations of TCP. The FACK and Snoop algorithms, however, are still not very frequently used.
The patent document EP, A2, 0 695 053 discloses an asymmetric protocol for wireless data communications with mobile terminals, according to which the terminals only transmit acknowledgement messages and requests for retransmission upon inquiry or when all data packets within a data block have been received. According to the protocol, base stations store channel information for the wireless links and status information of received and transmitted data packets. A base station may also combine acknowledgements for multiple data packets into a single acknowledgement code, in order to reduce the power consumption in the mobile terminals.
Generally speaking, wireless connections cause longer round-trip delays than wired connections. As a result of the longer round-trip delays, the transmission rate increase for a wireless connection must, according to the prior art transport protocols, be much slower than for a corresponding wired connection. This is particularly true for wireless connections where the bandwidth-delay product is comparatively high. The prior art protocols always seek to increase the throughput of a connection as far as the interconnecting networks permit. In addition to this, the possible transmission rate increase for a particular connection is inversely correlated to the round-trip delay for the connection. Hence, the shorter the round-trip delay, the faster the increase rate. The combination of characteristics typical for a wireless connection, high bandwidth-delay product and long round-trip delay, consequently also constitutes a problem.
SUMMARY OF THE INVENTION
The present invention relates generally to data communication between host computers, and more particularly to the problems discussed above. The means of solving these problems according to the present invention are summarised below.
As indicated earlier, problems occur when the networks, which connect a sending host with a receiving host, suffer from both a high random loss of data packets and a high bandwidth-delay product.
Accordingly, it is an object of the present invention to solve the above-mentioned problem.
Particularly, it is an object of the invention to increase the efficiency of a network with a high bandwidth-delay product, being connected to an access link which causes a high random loss of data.
Another object of the invention is to minimise the influence of data packet losses in relatively error-prone access links, in a substantially less error-prone network.
One further object of the present invention is to increase the efficiency of the less error-prone links of a data communication system, and thereby admit transmission of a larger amount of data through the total system.
The proposed method for communicating data packets over a packet switched network via at least one wireless access link includes the following assumptions and steps. The packet switched network is presumed to offer a connectionless delivery of data packets. The protocol used by the sending host and the receiving host is a reliable sliding window transport protocol, through which data packets, whenever necessary, may be retransmitted from the sending to the receiving host. The communicating hosts also take measures to protect the packet switched network from congestion. The receiving host generates status messages indicating the condition of received data packets to the sending host. In response to the status messages the sending host takes appropriate data flow control measures. A buffering network entity interfaces both the wireless access link and the packet switched network. During communication of data packets the buffering network entity performs the following steps. First, receiving data packets from the sending host. Second, either explicitly or implicitly, notifying the sending host of which data packets have been received correctly by the buffering network entity, and in case of a lost or erroneously received data packet, indicating whether the data packet was lost or erroneously received over the access link or in the packet switched network. Third, storing the correctly received data packets. Fourth, forwarding the stored data packets to the receiving host. Finally, the buffering network entity performs local retransmissions of the stored data packets to the receiving host, whenever that becomes necessary. Typically such retransmissions occur after the expiry of a retransmission timer or at reception of an explicit or implicit notification of loss from the receiving host.
A method of communicating data packets between two hosts according to the invention is hereby characterised by what is apparent from claim 1.
A proposed system includes a buffering network entity, which interfaces both a wireless access link and a packet switched network and thus directly or indirectly brings a sending host in contact with a receiving host. The sending and the receiving hosts are assumed to operate according to a reliable sliding window transport protocol through which lost or erroneously received data packets may be retransmitted from the sending host to the receiving host. The packet switched network is further assumed to offer a connectionless delivery of data packets. The buffering network entity includes a receiving means for receiving data packets. This means also generates and returns to the sending host a first status message, which (i) indicates whether a particular data packet must be retransmitted or not and (ii) indicates whether a data packet has been lost or erroneously received in the packet switched network. The buffering network entity also includes a means for storing correctly received data packets and a means for retrieving the stored data packets and transmitting them to the receiving host. Moreover, the buffering network entity includes a processing means for receiving the first status message from the receiving means and for receiving a second status message from the receiving host. The processing means generates a third status message in response to the first and second status messages and returns the third status message to the sending host. The stored data packets are retransmitted to the receiving host whenever that proves to be necessary. Typically, such retransmission is initiated after the expiry of a retransmission timer or at reception of an explicit or implicit notification of loss.
The system according to the invention is hereby characterised by the features set forth in the characterising clause of claim 9.
The present invention thus prevents congestion control algorithms from being activated in a substantially error-immune network when data packets are lost, for reasons other than congestion, in comparatively error-prone access links connected thereto. This, of course, increases the efficiency not only in the substantially error-immune network per se, but also in systems which include both substantially error-immune links and comparatively error-prone access links.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 illustrates the known method of using a sliding window protocol being described above;
Figure 2 illustrates the general method according to the invention by means of a sequence diagram;
Figure 3 depicts a block diagram over a proposed system;
Figures 4a-d illustrate embodiments of the proposed method for communicating data packets and generating status information;
Figure 5 shows a flow diagram over an embodiment of the proposed method being performed by a buffering network entity;
Figure 6 depicts a block diagram over a system according to the invention.
The invention will now be described in more detail with reference to preferred exemplifying embodiments thereof and also with reference to the accompanying drawings.
DESCRIPTION OF PREFERRED EMBODIMENTS
A sequence diagram in figure 2 gives a general illustration of the method according to the invention. A sending host is here represented to the left in the diagram and a receiving host is represented to the right. The part of the connection between the sending host and a buffering network entity IWU is referred to as a first leg A and the part of the connection between the buffering network entity and the receiving host is identified as a second leg B. A time scale is symbolised vertically, with increasing time downwards. Data packets DPn up to a number n are assumed to have reached the receiving host correctly. The receiving host therefore returns a positive acknowledgement message Ack n indicating this to the buffering network entity IWU. The buffering network entity IWU receives the positive acknowledgement message Ack n. In this example, a data packet DPm with number m arrives correctly at the buffering network entity IWU a few moments later. The buffering network entity IWU subsequently feeds back to the sending host a first status information message S(An, Bn) indicating that the receiving host has received all data packets DPn up to number n correctly, i.e. no errors or losses have occurred for any of those n data packets, neither in the first leg A nor the second leg B. The buffering network entity IWU may also, simultaneously with this or at any other time after reception of the data packet DPm, send back a second status information message Ack m, which indicates to the sending host that all data packets DPm up to number m have reached the buffering network entity IWU successfully. The first and second data packet status information messages S(An, Bn); Ack m are preferably effectuated according to a selective acknowledgement algorithm, such as SACK. A detailed description of this algorithm can be found in M. Mathis et al., "TCP Selective Acknowledgement Options", Internet RFC 2018, Network Working Group, October 1996.
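For illustration only, the bookkeeping behind such selective acknowledgements can be sketched in a few lines. The sketch below illustrates the principle of RFC 2018 (grouping correctly received sequence numbers into contiguous blocks, so that the sender can retransmit only the gaps) rather than the wire format, and all names are illustrative:

```python
def sack_blocks(received):
    """Group the set of correctly received sequence numbers into
    contiguous (first, last) blocks, as a SACK report would list them."""
    blocks = []
    for seq in sorted(received):
        if blocks and seq == blocks[-1][1] + 1:
            blocks[-1] = (blocks[-1][0], seq)   # extend the current block
        else:
            blocks.append((seq, seq))           # start a new block
    return blocks

# Packets 1-3 and 5-6 arrived; packet 4 is missing and must be retransmitted.
print(sack_blocks({1, 2, 3, 5, 6}))  # [(1, 3), (5, 6)]
```

The hole between the reported blocks tells the sending host exactly which data packet to retransmit, without resending the packets that already arrived.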
If a data packet is received erroneously or if a data packet is lost over any of the legs A or B, the buffering network entity IWU feeds back data packet status information messages indicating this fact to the sending host. The buffering network entity IWU would, in response to a lost or degraded data packet in the first leg A, return to the sending host a data packet status information message S(A, B) = [1,-], which indicates the transmission error on this particular leg A. In case of a loss or degeneration of a data packet over the second leg B, the buffering network entity IWU would return at least a data packet status information message S(A, B) = [0, 1] indicating the transmission error on this leg B. Optionally, the buffering network entity IWU could already have fed back a positive acknowledgement message Ack+, through which correct reception of the data packet at the buffering network entity IWU was announced.
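As a sketch (with illustrative names, and with None standing for the undetermined '-' position), the mapping from the leg on which an error occurred to the fed-back two-part status could look like:

```python
def leg_status(error_leg=None):
    """Two-part status [leg A, leg B] fed back by the IWU:
    1 = loss/degradation, 0 = correct, None = not (yet) determined ('-')."""
    if error_leg == "A":       # packet never correctly reached the IWU
        return [1, None]       # S(A, B) = [1,-]
    if error_leg == "B":       # lost between the IWU and the receiving host
        return [0, 1]          # S(A, B) = [0, 1]
    return [0, None]           # correct so far; the leg B outcome is pending
```

Because the two positions are reported separately, the sending host can distinguish an access-link error (position A) from a network error (position B) and react differently to each.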
A block diagram of a proposed system is depicted in figure 3. A first mobile host 305 is here connected to a first base station 315 via a first access link 310. The access link 310 is typically constituted by one or more wireless radio links in a cellular system. However, it may be an arbitrary kind of connection which is suitable for the specific application. The access link 310 may, for instance, be a satellite link, an optical link, a sonic link or a hydrophonic link. The first base station 315 is further connected to a first buffering network entity 320, generally termed IWU (InterWorking Unit). The first base station 315 can either be separated from the first buffering network entity 320 (as shown in figure 3) or be co-located with it, whichever is technically and/or economically the most appropriate. The first buffering network entity 320 also interfaces a packet switched network 325. The packet switched network 325 is presupposed to offer a connectionless delivery of data packets on a best-effort basis. This means, briefly, that every data packet that is technically feasible to deliver will be delivered as soon as possible. The Internet is a well-known example where many networks together provide a connectionless, best-effort datagram delivery. Further details as to the definition of the best-effort datagram can be found in the Internet RFC 1504.
In addition, at least one fixed host 330, at least one second buffering network entity 335 and a third base station 365 may be connected to the packet switched network 325. The second buffering network entity 335 interfaces a second base station 340 and a second mobile host 350 via a second access link 345 in a manner corresponding to what has been described in connection with the first buffering network entity 320 above. The third base station 365, which communicates with a third mobile host 355 over a third access link 360, may either be directly connected to the packet switched network 325 or be connected via a unit, which is not a buffering network entity.
The above-described system makes it possible to exchange data packets in any direction between any of the hosts 305, 330, 350 and 355 according to a reliable sliding window transport protocol. Consequently, a sending host may be an arbitrary host 305, 330, 350; 355 and receiving host(s) may be one or more of the other hosts 305, 330, 350; 355. Preferably, the reliable sliding window transport protocol used by the sending and the receiving hosts is of a TCP-type (specified in the Internet RFC 793) or of a type specified in the standard document ISO8073. Yet, any alternative sliding window transport protocol is naturally workable.
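For orientation, the essential behaviour of such a reliable sliding window sender can be sketched as follows. This is a generic illustration, not the TCP or ISO 8073 state machine; the window size and all names are illustrative:

```python
class SlidingWindowSender:
    """Minimal sliding-window sender: at most `window` packets may be
    outstanding (sent but not yet cumulatively acknowledged)."""
    def __init__(self, window=4):
        self.window = window
        self.base = 0        # oldest unacknowledged sequence number
        self.next_seq = 0    # sequence number of the next new packet

    def can_send(self):
        return self.next_seq < self.base + self.window

    def send(self):
        seq = self.next_seq  # transmit packet `seq` (transmission elided)
        self.next_seq += 1
        return seq

    def ack(self, seq):
        # Cumulative acknowledgement: everything up to `seq` is confirmed,
        # so the window slides forward and new packets may be sent.
        self.base = max(self.base, seq + 1)

sender = SlidingWindowSender(window=2)
sender.send(); sender.send()
print(sender.can_send())   # False: window full, sender must wait for an ack
sender.ack(0)
print(sender.can_send())   # True: the window has slid one packet forward
```

The key property exploited by the invention is that many packets are outstanding at once, so feedback about *where* a packet was lost arrives while the window is still open.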
The sending host 305, 330, 350; 355 is notified of a data packet status for each of its transmitted data packets via a specific status message fed back from the buffering network unit 320; 335. The status message (which may e.g. be a TCP acknowledgement) is generated at the buffering network unit being closest to the sending host, i.e. the buffering network unit 320 or 335 respectively. Thus, when the first mobile host 305 sends data packets, the first buffering network unit 320 generates the status information. When the fixed host 330 sends data packets to one of the mobile hosts 305 or 350, either the first 320 or the second 335 buffering network unit generates the status information, depending on which host 305 or 350 is the receiving host. If, however, the fixed host 330 should send data packets to the third mobile host 355, no such status information would be generated. When the host 350 sends data packets, the buffering network unit 335 generates the status information. If the mobile host 355 sends data packets, the status information in question will only be generated if the receiving host is connected to the packet switched network 325 via a buffering network unit such as 320 or 335. The data packet status information may e.g. be communicated to the sending host according to a selective acknowledgement algorithm (SACK) or according to a so-called TP4 algorithm. Further description of the TP4 algorithm can be found in the standard specification ISO 8073.
In order to furthermore illustrate the invention, four different data communication examples will now be described with reference to figures 4a through 4d.
In a first data communication example, illustrated in figure 4a, the fixed host FH is assumed to be the sending host and the second mobile host MH2 is assumed to be the receiving host. Data packets DP thus pass from the fixed host FH through the packet switched network to the second buffering network entity IWU2. This part of the connection will be referred to as a first leg B. The second buffering network entity IWU2 then forwards the data packets DP to the second mobile host MH2 via the second base station and the second access link. This part of the connection will be referred to as a second leg C.
The second buffering network entity IWU2 here functions as the end receiver of the data packets DP from the packet switched network's point-of-view. This means that once a data packet DP has succeeded in reaching the second buffering network entity IWU2 correctly it will not be retransmitted from the fixed host FH. The second buffering network entity IWU2 returns a status message S(B, C) = [0,-] indicating this fact to the fixed host FH.
If, on the other hand, a data packet DP is lost on the second access link between the second base station and the second mobile host MH2, that data packet DP will be retransmitted from the second buffering network entity IWU2 until the data packet DP has been received correctly at the second mobile host MH2. Such a loss of a data packet is also indicated to the fixed host FH by the status message S(B, C) = [0, 1] fed back from the second buffering network entity IWU2. Data packets DP that are lost or degenerated after having left the fixed host FH, but before reaching the second buffering network entity IWU2, will be retransmitted from the fixed host FH. If the second buffering network entity IWU2 registers a loss or a degeneration of a data packet DP, it notifies the fixed host FH via the status message S(B, C) = [1,-].
According to the invention, a first part of the status message S(B, C), here corresponding to the first leg B, indicates if a data packet DP has been lost or degenerated in the packet switched network. A second part of the status message S(B, C), here corresponding to the second leg C, indicates if a data packet DP has been lost or degenerated over the second access link. In case the loss or degradation occurred in the packet switched network, the data packet rate from the fixed host FH will be reduced via at least one data flow control algorithm, otherwise not.
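The sender-side decision just described (throttle only on a network-leg loss) can be sketched as follows, assuming a status tuple (network_part, access_part) as in figure 4a and an illustrative halving of the congestion window; both the encoding and the halving are assumptions for the sake of the example:

```python
def on_status(status, cwnd):
    """React to a two-part status message at the sending host.
    status = (network_part, access_part): 1 = loss/degradation, 0 = ok,
    None = undetermined ('-').  Only a loss in the packet switched network
    reduces the sending rate; an access-link loss merely implies a local
    retransmission over that leg."""
    network_part, access_part = status
    if network_part == 1:
        return max(1, cwnd // 2)   # e.g. multiplicative decrease
    return cwnd                    # rate unchanged

print(on_status((1, None), 8))  # 4: network loss, congestion window halved
print(on_status((0, 1), 8))     # 8: access-link loss, rate kept
```

This is precisely the behaviour that distinguishes the invention from plain TCP, where every detected loss would halve the window.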
In a second data communication example illustrated in figure 4b, the first mobile host MH1 is the sending host and the fixed host FH is the receiving host. Data packets DP hence leave the first mobile host MH1 over the first access link and pass via the first base station to the first buffering network entity IWU1. This part of the connection will be referred to as a first leg A. The first buffering network entity IWU1 then forwards the data packets DP to the fixed host FH through the packet switched network, which corresponds to a second leg B of the connection.
The occasionally poor quality of the access link between the first mobile host MH1 and the first buffering network entity IWU1 may lead to degeneration or loss of data packets DP. Such degenerated or lost packets DP must naturally be retransmitted from the first mobile host MH1 to the first buffering network entity IWU1. Nevertheless, once a data packet DP has reached the first buffering network entity IWU1 correctly it never has to be retransmitted from the first mobile host MH1. Although less likely, the first buffering network entity IWU1 may also have to retransmit data packets DP to the fixed host FH. This is the case when, for instance, congestion in the packet switched network has caused the retransmission timer to expire.
A status message S(A, B) related to each data packet DP is generated in the first buffering network entity IWU1 and returned to the first mobile host MH1. A first part of the status message S(A, B), here corresponding to the second leg B, indicates if a data packet DP has been lost or degenerated in the packet switched network, while a second part, here corresponding to the first leg A, indicates if a data packet DP has been lost or degenerated over the first access link. In case the loss or degradation occurred in the packet switched network, the data packet rate from the first mobile host MH1 will be reduced via at least one data flow control algorithm. The loss or degradation of a data packet DP over the first access link will, however, not influence the data packet rate from the first mobile host MH1 in this way.
In a third data communication example, illustrated in figure 4c, the first mobile host MH1 is the sending host and the second mobile host MH2 is the receiving host. The first mobile host MH1 now transmits data packets DP over the first access link, via the first base station to the first buffering network entity IWU1. This part of the connection will be referred to as a first leg A. The first buffering network entity IWU1 subsequently passes the data packets DP via the packet switched network to the second buffering network entity IWU2. This part of the connection will be referred to as a second leg B. Finally, the second buffering network entity IWU2 sends the data packets DP to the second mobile host MH2 through the second base station and the second access link. This last part of the connection constitutes a third leg C.
In this case, data packets DP may, if necessary, be retransmitted over any of the legs A, B and C respectively. A data packet DP can either be retransmitted from the first mobile host MH1 to the first buffering network entity IWU1, from the first buffering network entity IWU1 to the second buffering network entity IWU2 or from the second buffering network entity IWU2 to the second mobile host MH2. Regardless of what caused the loss or degrading of a particular data packet DP, retransmission of the data packet will always only be performed over the leg A, B; C where the data packet DP was either lost or degenerated. Status messages S(A, B); S(B, C) indicating the status of each communicated data packet DP are generated in both the buffering network entities IWU1; IWU2. The status messages S(A, B); S(B, C) indicate in which leg A, B; C a loss or degrading of a particular data packet DP has occurred. If the status messages S(A, B); S(B, C) announce that a data packet DP has been lost or degenerated in the packet switched network, here leg B, at least one data flow control algorithm will be triggered. The data packet rate from the first mobile host MH1 will, as a result of such an algorithm, be decreased. A loss or degradation of a data packet DP in any of the other legs A or C will, on the other hand, not lead to a reduction of the data transmission rate from the first mobile host MH1.
A fourth data communication example will now be discussed with reference to figure 4d. The first mobile host MH1 is here the sending host and the third mobile host MH3 is the receiving host. The first mobile host MH1 this time transmits data packets DP over the first access link, via the first base station to the first buffering network entity IWU1. This part of the connection will be referred to as a first leg A. The first buffering network entity IWU1 then passes the data packets DP via the packet switched network to the third mobile host MH3, via the third base station and the third access link. This part of the connection will be referred to as a second leg B. A retransmission of a lost or degenerated data packet DP can here either be carried out over the first leg A or over the second leg B. A status message S(A, B) indicating the status of each communicated data packet DP is generated in the first buffering network entity IWU1 and returned to the first mobile host MH1. The status message S(A, B) thus indicates if a certain data packet DP has been lost or degenerated over the first access link, i.e. leg A, or somewhere between the first buffering network entity IWU1 and the third mobile host MH3, i.e. leg B. Only the loss or degeneration of a data packet DP in leg B will trigger data flow control algorithms and thus decrease the data packet rate from the first mobile host MH1. Such loss or degrading is most likely to depend on poor quality of the third access link between the third base station and the third mobile host MH3, but since this fact is impossible to verify, data flow control algorithms will nevertheless be activated.
If, regardless of the communication case, a buffering network entity over a certain period of time receives more data packets via one of its interfaces than can be delivered over its other interface, the superfluous data packets will be discarded by the buffering network entity. The buffering network entity will then feed back status messages S(X, Y) to the sending host indicating that the superfluous data packets were lost in the packet switched network. This will in turn trigger at least one data flow control algorithm, which directs the sending host to reduce its data packet rate. The rate will thus be gradually reduced until it meets the transmission capacity of the limiting interface at the buffering network entity.
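The discard-and-report behaviour above can be sketched per delivery interval as follows; the function name, the list-based packet model and the textual loss report are all illustrative assumptions:

```python
def shape_interval(arrived, out_capacity):
    """Per interval: forward at most `out_capacity` packets; the surplus is
    discarded, and each discarded packet is reported to the sender as lost
    in the packet switched network, so that the sender's flow control
    gradually reduces the sending rate."""
    kept = arrived[:out_capacity]
    discarded = arrived[out_capacity:]
    status = [(seq, "lost in packet switched network") for seq in discarded]
    return kept, status

kept, status = shape_interval([10, 11, 12, 13, 14], out_capacity=3)
print(kept)     # [10, 11, 12]
print(status)   # the sender is told packets 13 and 14 were lost
```

Reporting the discards as network losses is the mechanism that recruits the sender's ordinary congestion control to match the rate of the slower interface.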
Figure 5 illustrates an embodiment of the inventive method being carried out in a buffering network entity, when data packets are transmitted from a sending host to a receiving host via the buffering network entity. The figure illustrates the possible fate of a particular data packet DP or a certain group of data packets DPs when passing through the buffering network entity. It is nevertheless important to bear in mind that since the reliable sliding window transport protocol allows many data packets to be outstanding in the network between the sending and the receiving host, the buffering network unit will at any given moment be carrying out many of the following steps simultaneously and in parallel. The procedure will thus be at different steps with regard to different data packets DPs.
In a first step 500 the buffering network entity receives one or more data packets DP(s) from the sending host. A following step 505 checks whether the data packet(s) DP(s) is (are) correct. If so, the procedure continues to step 520. Otherwise, a request is made in a step 510 for retransmission of the incorrectly received data packet(s) DP(s). A status message S(X,-) indicating the erroneous reception of the data packet(s) DP(s) is fed back to the sending host in a subsequent step 515. The procedure then returns to step 505 in order to determine whether the retransmitted data packet(s) DP(s) arrive(s) correctly.
In practice the steps 510 and 515 are most efficiently carried out as one joint step, where the status message S(X,-) per se is interpreted as a request for retransmission. The steps 510 and 515 may, of course, also be carried out in reverse order or in parallel. Their relative order is nevertheless irrelevant for the result.
In case the buffering network entity in the steps 500 and 505 receives data packets DP(s) in an out-of-sequence order, so that a loss of earlier data packet(s) DP(s) is likely to have occurred, a retransmission of the assumedly lost data packet(s) is requested in the steps 510 and 515.
The step 520 checks whether the connection at the buffering network entity's output interface has enough bandwidth BW, i.e. can transport data packets DPs at least as fast as data packets DPs arrive at the buffering network entity's input interface. In case of insufficient bandwidth BW one or more data packets DP(s) are discarded in a step 525. The procedure then returns to the step 510, where retransmission is requested for the discarded data packet(s). If, on the other hand, it is found in the step 520 that the output interface has sufficient bandwidth BW the procedure continues to a step 530.
In the step 530 a status message S(X,-) indicating correct reception of the data packet(s) DP(s) may be fed back to the sending host. If there is a packet switched network between the sending host and the buffering network entity, a status message S(X,-) regarding the result of the transmission is regularly fed back from the buffering network entity to the sending host. If, however, there is no packet switched network between the sending host and the buffering network entity (but e.g. an access link) the step 530 may be empty. This is because step 530 provides information to the sending host which is necessary to control the data flow from the sending host, and such information need only be communicated if the data packet(s) DP(s) has/have been transmitted over a packet switched network.
In an ensuing step 535 the correctly received data packet(s) DP(s) is/are stored in the buffering network entity. A subsequent step 540 forwards the data packet(s) DP(s) to the receiving host. A step 545 checks whether a status message relating to the sent data packet(s) DP(s) has been returned from the receiving host. If such a status message reaches the buffering network entity before the expiration of a retransmission timer a step 555 checks whether the status message indicates correct or incorrect reception of the data packet(s) DP(s). A step 550 determines if the retransmission timer has expired, and in case the timer is still running the procedure is looped back to the step 545. If, however, no status message has been received when the retransmission timer expires, or if it is found in step 555 that one or more data packets DP(s) have been received incorrectly, the data packet(s) DP(s) in question are retransmitted in a step 560. A following step 565 returns a status message S(X, Y) to the sending host indicating the fact that retransmission of data packet(s) DP(s) was necessary. If there is a packet switched network between the buffering network entity and the receiving host, flow control algorithms triggered by the status message S(X, Y) will cause the sending host to reduce its data flow; otherwise the data flow from the sending host will not be affected.
In alternative embodiments of the inventive method, the steps 560 and 565 are carried out in reverse order or in parallel. Their relative order is irrelevant for the effect of the steps.
After the step 565 the procedure returns to the step 545, which checks whether a status message for the retransmitted data packet(s) DP(s) has been received. As soon as a status message indicating correct reception of the data packet(s) has reached the buffering network entity the procedure continues for this/these data packet(s) DP(s) from the step 545, via the step 555, to a final step 570. This final step 570 feeds back a status message S(X, Y) to the sending host, which indicates that the data packet(s) has/have been received correctly by the receiving host.
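Taken together, the per-packet flow of figure 5 can be sketched as a single decision function. The predicates and event strings below are illustrative placeholders for the checks of steps 505, 520 and 545/555, and the retry loops are collapsed into single decisions:

```python
def process_packet(link_ok, bandwidth_ok, receiver_acks):
    """One pass of a data packet through the buffering network entity,
    following steps 500-570 of figure 5 (retry loops collapsed)."""
    events = ["receive DP"]                                     # step 500
    if not link_ok:                                             # step 505 failed
        events.append("S(X,-): error, request retransmission")  # steps 510/515
        return events
    if not bandwidth_ok:                                        # step 520 failed
        events.append("discard DP, request retransmission")     # steps 525/510
        return events
    events.append("S(X,-): correctly received")                 # step 530
    events.append("store DP")                                   # step 535
    events.append("forward DP to receiver")                     # step 540
    if not receiver_acks:                                       # steps 545-565
        events.append("retransmit DP, S(X, Y): second-leg loss")
    events.append("S(X, Y): delivered correctly")               # step 570
    return events

print(process_packet(True, True, True)[-1])  # S(X, Y): delivered correctly
```

In the real entity, many such passes run concurrently, one per outstanding data packet, as the text above points out.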
Figure 6 depicts a block diagram of an arrangement according to the invention. A sending host, which may be either fixed or mobile, is here represented by a first general interface 600. A receiving host, which likewise may be either fixed or mobile, is correspondingly represented by a second general interface 650. A buffering network entity IWU interfaces both the first and the second interface 600; 650. The buffering network entity IWU in its turn includes a means 610 for receiving data packets DP from the first interface 600, a storage means 620 for storing data packets DP, a means 630 for transmitting data packets DP over the second interface 650 and a processing means 640 for generating status messages S(X, Y) and controlling the overall operation of the buffering network entity IWU in accordance with the method described in connection with figure 5 above.
Apart from receiving data packets DP, the receiving means 610 also determines whether the data packets DP are received correctly and generates, in response to the condition of the received data packets DP, a first status message S(X,-).
Furthermore, the receiving means 610 may have to discard data packets DP on instruction from the processing means 640. This happens if the processing means has found that the bandwidth/capacity over the interface 600 exceeds the capacity over the interface 650. The discarded data packets DP are regarded as lost data packets. A first status message S(X,-) indicating such a loss is therefore generated in the receiving means 610 and returned after the data packets have been discarded.
This first status message S(X,-) is returned to the sending host. The first status message S(X,-) is also forwarded to the processing means 640. Furthermore, the means 610 generates requests for retransmission of data packets DP whenever that becomes necessary. As mentioned earlier, the status message S(X,-) itself may, of course, be interpreted as a request for retransmission at the sending host. Data packets DP having been received correctly by the means 610 are passed on to the storage means 620 for temporary storage. The means for transmitting 630 retrieves data packets DP from the storage means 620 and sends them over the interface 650 to the receiving host. In response to the sent data packets DP the receiving host returns a second status message, e.g. in the form of a positive or a negative acknowledgement message Ack±, indicating the condition of the data packets DP at the receiving host. The processing means 640 receives the second status message Ack± and generates a third and combined status message S(X, Y), which is determined from the content of the first status message S(X,-) and the second status message Ack±. The third status message S(X, Y) thus gives a total representation of how successfully a certain data packet or a certain group of data packets was passed over the respective communication legs before and after the buffering network entity. The third status message S(X, Y) is transmitted from the processing means back to the sending host over the interface 600.
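The combination performed by the processing means 640 can be sketched as follows, under the illustrative encoding 0 = correct and 1 = lost/erroneous for each leg:

```python
def third_status(first_status, ack_positive):
    """Combine the IWU-local first status S(X,-) with the receiving host's
    acknowledgement Ack+/Ack- into the combined status S(X, Y)."""
    x = first_status               # outcome over the leg before the IWU
    y = 0 if ack_positive else 1   # outcome over the leg after the IWU
    return (x, y)

print(third_status(0, True))   # (0, 0): correct over both legs
print(third_status(0, False))  # (0, 1): lost after the IWU
```

The sending host thus receives, in one message, the outcome on each leg and can decide whether the loss, if any, warrants congestion control.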
A particular data packet DP having been temporarily stored in the storage means 620 may be deleted as soon as a second status message Ack+ has been received for the data packet DP indicating that the data packet has been received correctly by the receiving host. The data packet DP may of course also be deleted at any later, and perhaps more suitable, moment.
The invention being thus described, it will be obvious that the same may be varied in many ways. Such variations are not to be regarded as a departure from the scope of the invention, and all such modifications as would be obvious to one skilled in the art are intended to be included within the scope of the following claims.

Claims

1. Method for communicating data packets (DP) between a first host (305) and a second host (330) over at least one access link (310) and at least one packet switched network (325), wherein a buffering network entity (320) interfaces both the access link (310) and the packet switched network (325), the packet switched network (325) offers a connectionless delivery of data packets (DP), and the first host (305) and second host (330) communicate data packets (DP) according to a reliable sliding window transport protocol, through which a receiving host (650) feeds back data packet status information (Ack±) indicating the condition of received data packets (DP), and lost or erroneously received data packets (DP) are retransmitted from a sending host (600), the method comprising the steps of: receiving (500) data packets (DP) from the sending host (600) in the buffering network entity (IWU), notifying (520, 525) the sending host (600) of the condition of the received data packets (DP), storing (530) in the buffering network entity (IWU) each correctly received data packet (DP), forwarding (535) each stored data packet (DP) from the buffering network entity (IWU) to the receiving host (650), and performing retransmission (555) of a stored data packet (DP) from the buffering network entity (IWU) to the receiving host (650), in case the data packet (DP) is lost before reaching the receiving host (650) or if the data packet (DP) is received erroneously by the receiving host (650), c h a r a c t e r i s e d in that the method further comprises the step of: returning (560, 565) a status message (S(X,Y)) to the sending host (600), which for each lost or erroneously received data packet
(DP) was lost or erroneously received over the access link (310) or in the packet switched network (325).
2. Method according to claim 1, c h a r a c t e r i s e d in further comprising the steps of: checking in the sending host (600) the status message (S(X,Y)), and performing congestion control actions in the sending host (600) only if the data packet (DP) was lost or erroneously received in the packet switched network (325).
3. Method according to claim 1 or 2, c h a r a c t e r i s e d in that: the sending host (600) is a fixed host (330), the receiving host (650) is a mobile host (305), the reliable sliding window protocol is of TCP-type, and the buffering network entity (320) receives data packets (DP) from the sending host (600) via the packet switched network (325), notifies the sending host (600) of which data packets
(DP) have been received correctly at the buffering network entity (320) according to a selective acknowledgement algorithm, and forwards correctly received data packets (DP) to the receiving host (650) according to a local retransmission algorithm.
4. Method according to claim 1 or 2, c h a r a c t e r i s e d in that: the sending host (600) is a fixed host (330), the receiving host (650) is a mobile host (305), the reliable sliding window protocol is of ISO8073-type, the buffering network entity (320): receives data packets (DP) from the sending host (600) via the packet switched network (325), notifies the sending host (600) of which data packets (DP) have been received correctly by the buffering network entity (320) according to a TP4 algorithm, and forwards correctly received data packets (DP) to the receiving host (650) according to a local retransmission algorithm.
5. Method according to claim 1 or 2, c h a r a c t e r i s e d in that the sending host (600) is a mobile host (305), the receiving host (650) is a fixed host (330), the buffering network entity (320): receives data packets (DP) from the sending host (600), notifies the sending host (600) of which data packets
(DP) have been received correctly by the buffering network entity (320), and forwards correctly received data packets (DP) to the receiving host (650) according to a selective acknowledgement algorithm.
6. Method according to claim 1 or 2, c h a r a c t e r i s e d in that the sending host (600) is a mobile host (305), the receiving host (650) is a mobile host (350), a first buffering network entity (320): receives data packets (DP) from the sending host (600), notifies the sending host (600) of which data packets (DP) have been received correctly, forwards correctly received data packets (DP) to the packet switched network (325), and a second buffering network entity (335): receives data packets (DP) from the packet switched network (325), and forwards the received data packets (DP) to the receiving host (650) according to a local retransmission algorithm.
7. Method according to any of the claims 1 - 6, c h a r a c t e r i s e d in that, given the status message (S(X,Y)), the buffering network entity (IWU) performs the steps of: estimating a first rate at which data packets (DP) are being communicated through the at least one access link (310), estimating a second rate at which data packets (DP) may be communicated through the at least one packet switched network (325), and discarding superfluous data packets (DP) if the first rate exceeds the second rate.
8. Method according to any of the claims 1 - 7, c h a r a c t e r i s e d in that, given the status message (S(X,Y)), the sending host (600) performs the step of: estimating a number of data packets (DP) being outstanding in the at least one packet switched network (325).
9. A system for communicating data packets (DP) between a first host and second host according to a reliable sliding window transport protocol, through which a receiving host (650) generates status information indicating the condition of received data packets (DP), and lost or erroneously received data packets (DP) are retransmitted from a sending host (600), the system comprising: at least one access link to which the first host is connected, at least one packet switched network offering a connectionless delivery of data packets (DP) to which the second host is connected, and at least one buffering network entity (IWU), which interfaces both the access link and the packet switched network, c h a r a c t e r i s e d in that the buffering network entity (IWU) comprises: a means (610) for receiving data packets (DP), generating a first status message (S(X,-)) indicating whether the received or missing data packets (DP) must be retransmitted or not and returning the first status message (S(X,-)) to the sending host (600), a means (620) for storing data packets (DP) being received correctly by the means (610) for receiving, a means (630) for retrieving data packets from the means (620) for storing and for transmitting the data packets (DP) to the receiving host (650), and a processing means (640) for receiving the first status message (S(X,-)) from the means (610) for receiving and a second status message (Ack±) from the receiving host (650), generating a third status message (S(X, Y)) in response to the first (S(X,-)) and the second (Ack±) status messages, and transmitting the third status message (S(X, Y)) to the sending host (600).
10. A system according to claim 9, c h a r a c t e r i s e d in that the third status message (S(X,Y)) indicates whether a particular data packet (DP) has been lost or erroneously received over the access link or in the packet switched network.
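The processing means (640) of claims 9 and 10 can be sketched as follows. This is an illustrative assumption only: the tuple encoding of S(X,Y), the meaning of X (highest sequence number correctly received by the IWU) and Y (highest sequence number acknowledged end to end by the receiving host), and the classification rule are inferred for the sketch and are not taken verbatim from the patent text.

```python
# Hypothetical sketch of claims 9-10: merge the local status from the IWU's
# receiving means with the end-to-end acknowledgement from the receiving host
# into one status message S(X, Y), which lets the sending host tell whether a
# packet was lost on the access link or in the packet switched network.

def combine_status(x_local, y_remote):
    """Build the third status message S(X, Y) for the sending host."""
    return (x_local, y_remote)


def classify_loss(status, seq):
    """Locate where packet `seq` was lost, given S(X, Y) (claim 10)."""
    x, y = status
    if seq <= y:
        return "delivered"                        # acknowledged end to end
    if seq <= x:
        return "lost in packet switched network"  # IWU had it, receiver did not
    return "lost on access link"                  # never reached the IWU correctly
```

For example, with S(10, 7), packet 9 reached the buffering entity but not the receiving host, so the sending host can attribute its loss to the packet switched network rather than the access link.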
PCT/SE1999/001479 1998-10-07 1999-08-27 Method and system for data communication WO2000021231A2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP99946532A EP1119954A2 (en) 1998-10-07 1999-08-27 Method and system for data communication
JP2000575248A JP2002527935A (en) 1998-10-07 1999-08-27 Data communication methods and systems
AU58929/99A AU751285B2 (en) 1998-10-07 1999-08-27 Method and system for data communication
CA002346715A CA2346715A1 (en) 1998-10-07 1999-08-27 Method and system for data communication

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SE9803423A SE513327C2 (en) 1998-10-07 1998-10-07 Systems and method of data communication
SE9803423-4 1998-10-07

Publications (2)

Publication Number Publication Date
WO2000021231A2 true WO2000021231A2 (en) 2000-04-13
WO2000021231A3 WO2000021231A3 (en) 2000-07-27

Family ID=20412869

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE1999/001479 WO2000021231A2 (en) 1998-10-07 1999-08-27 Method and system for data communication

Country Status (6)

Country Link
EP (1) EP1119954A2 (en)
JP (1) JP2002527935A (en)
AU (1) AU751285B2 (en)
CA (1) CA2346715A1 (en)
SE (1) SE513327C2 (en)
WO (1) WO2000021231A2 (en)


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
BALAKRISHNAN, H. et al.: "A comparison of mechanisms for improving TCP performance over wireless links", IEEE/ACM Transactions on Networking, vol. 5, no. 6, December 1997 (1997-12), pages 756-769, XP000734405 *
BALAKRISHNAN, H. et al.: "Improving reliable transport and handoff performance in cellular wireless networks", Wireless Networks, vol. 1, no. 4, 1 December 1995 (1995-12-01), pages 469-481, XP000543510 *
FIEGER, A. et al.: "Transport protocols over wireless links", Proceedings IEEE Symposium on Computers and Communications, 1 July 1997 (1997-07-01), pages 456-460, XP000764778 *
PARK, S.-S. et al.: "Performance improvements of TCP protocol for mobile data service", IEEE Global Telecommunications Conference, Phoenix, Arizona, 3-8 November 1997, vol. 3, 3 November 1997 (1997-11-03), pages 1871-1875, XP000737842, Institute of Electrical and Electronics Engineers *

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2358776A (en) * 1999-12-18 2001-08-01 Roke Manor Research Applying TCP congestion control to a wired portion of a communication path, but not to a wireless portion of the path
WO2002009389A2 (en) * 2000-07-24 2002-01-31 Nokia Corporation Tcp flow control
WO2002009389A3 (en) * 2000-07-24 2002-06-27 Nokia Corp Tcp flow control
WO2002041603A1 (en) 2000-11-14 2002-05-23 Mitsubishi Denki Kabushiki Kaisha Data distribution control device and data distribution control method
EP1249987A1 (en) * 2000-11-14 2002-10-16 Mitsubishi Denki Kabushiki Kaisha Data distribution control device and data distribution control method
EP1249987A4 (en) * 2000-11-14 2010-06-02 Mitsubishi Electric Corp Data distribution control device and data distribution control method
GB2375001A (en) * 2001-04-06 2002-10-30 Motorola Inc Re-transmission protocol
US6937570B2 (en) 2001-11-07 2005-08-30 Tektronix, Inc. Resource aware session adaptation system and method for enhancing network throughput
US8004969B2 (en) 2001-11-07 2011-08-23 Tektronix, Inc. Cell level congestion policy management
US7855998B2 (en) 2001-11-07 2010-12-21 Tektronix, Inc. Gb parameter based radio priority
EP1449334B1 (en) * 2001-11-15 2011-01-05 TELEFONAKTIEBOLAGET LM ERICSSON (publ) Method and System of Transmitting Data
WO2003065663A1 (en) * 2002-01-25 2003-08-07 Cyneta Networks, Inc. Packet retransmission in wireless packet data networks
WO2003069870A3 (en) * 2002-02-15 2003-11-13 Koninkl Philips Electronics Nv Modifications to tcp/ip for broadcast or wireless networks
WO2003069870A2 (en) * 2002-02-15 2003-08-21 Koninklijke Philips Electronics N.V. Modifications to tcp/ip for broadcast or wireless networks
US8533307B2 (en) 2002-07-26 2013-09-10 Robert Bosch Gmbh Method and device for monitoring a data transmission
WO2004017556A1 (en) * 2002-07-26 2004-02-26 Robert Bosch Gmbh Method and device for monitoring a data transmission
US7385926B2 (en) 2002-11-25 2008-06-10 Intel Corporation Apparatus to speculatively identify packets for transmission and method therefor
WO2004049639A1 (en) * 2002-11-25 2004-06-10 Intel Corporation Apparatus to speculatively identify packets for transmission and method therefor
CN100450052C (en) * 2002-11-25 2009-01-07 英特尔公司 Apparatus to speculatively identify packets for transmission and method therefor
WO2004051954A2 (en) * 2002-12-03 2004-06-17 Hewlett-Packard Development Company, L.P. A method for enhancing transmission quality of streaming media
WO2004051954A3 (en) * 2002-12-03 2004-08-19 Hewlett Packard Development Co A method for enhancing transmission quality of streaming media
US7693058B2 (en) 2002-12-03 2010-04-06 Hewlett-Packard Development Company, L.P. Method for enhancing transmission quality of streaming media
WO2005002149A1 (en) * 2003-06-30 2005-01-06 Nokia Corporation Data transfer optimization in packet data networks
CN1305281C (en) * 2003-12-02 2007-03-14 三星电子株式会社 Inter connected network protocol packet error processing equipment and its method and computer readable medium
EP1681792A1 (en) * 2005-01-17 2006-07-19 Siemens Aktiengesellschaft Secure data transmission in a multi-hop system
WO2007085949A2 (en) * 2006-01-26 2007-08-02 Nokia Siemens Networks Oy Apparatus, method and computer program product providing radio network controller internal dynamic hsdpa flow control using one of fixed or calculated scaling factors
WO2007085949A3 (en) * 2006-01-26 2007-11-29 Nokia Siemens Networks Oy Apparatus, method and computer program product providing radio network controller internal dynamic hsdpa flow control using one of fixed or calculated scaling factors
US8238242B2 (en) 2006-02-27 2012-08-07 Telefonaktiebolaget Lm Ericsson (Publ) Flow control mechanism using local and global acknowledgements
WO2007095967A1 (en) * 2006-02-27 2007-08-30 Telefonaktiebolaget Lm Ericsson (Publ) Flow control mechanism using local and global acknowledgements
US8577728B2 (en) 2008-07-11 2013-11-05 Zbd Displays Limited Display system
EP2296313A4 (en) * 2008-07-16 2011-08-10 Huawei Tech Co Ltd Control method and device for wireless multi-hopping network congestion
EP2296313A1 (en) * 2008-07-16 2011-03-16 Huawei Technologies Co., Ltd. Control method and device for wireless multi-hopping network congestion
US8593954B2 (en) 2008-07-16 2013-11-26 Huawei Technologies Co., Ltd. Method and apparatus for controlling congestion of wireless multi-hop network
WO2016070919A1 (en) * 2014-11-06 2016-05-12 Nokia Solutions And Networks Oy Improving communication efficiency
US10374757B2 (en) 2014-11-06 2019-08-06 Nokia Solutions And Networks Oy Improving communication efficiency

Also Published As

Publication number Publication date
AU751285B2 (en) 2002-08-08
EP1119954A2 (en) 2001-08-01
JP2002527935A (en) 2002-08-27
SE9803423D0 (en) 1998-10-07
AU5892999A (en) 2000-04-26
SE513327C2 (en) 2000-08-28
SE9803423L (en) 2000-04-08
WO2000021231A3 (en) 2000-07-27
CA2346715A1 (en) 2000-04-13

Similar Documents

Publication Publication Date Title
AU751285B2 (en) Method and system for data communication
US7013346B1 (en) Connectionless protocol
US7277390B2 (en) TCP processing apparatus of base transceiver subsystem in wired/wireless integrated network and method thereof
EP1195966B1 (en) Communication method
KR100785293B1 (en) System and Method for TCP Congestion Control Using Multiple TCP ACKs
EP1671424B1 (en) Fec-based reliability control protocols
US20040052234A1 (en) Method and system for dispatching multiple TCP packets from communication systems
US20030023746A1 (en) Method for reliable and efficient support of congestion control in nack-based protocols
EP1708400B1 (en) Loss tolerant transmission control protocol
JPH03165139A (en) Data communication method and data communication system
WO2000055640A1 (en) Dynamic wait acknowledge for network protocol
US8018846B2 (en) Transport control method in wireless communication system
JP2006506866A (en) Data unit transmitter and control method of the transmitter
WO2006027695A1 (en) Signaling a state of a transmission link via a transport control protocol
CN111193577A (en) Network system communication method and communication device using transmission timeout
KR100392169B1 (en) Method and apparatus for conveying data packets in a communication system
WO2018155406A1 (en) Communication system, communication device, method, and program
EP3939191B1 (en) Device and method for delivering acknowledgment in network transport protocols
KR100913897B1 (en) Method for controlling congestion of TCP for reducing the number of retransmission timeout
JP4531302B2 (en) Packet relay apparatus and method thereof
CN116566920A (en) Data transmission control method and related device
JP2006237968A (en) System and method for communication
JP2003198612A (en) File transferring method in packet communication network
JPH10200556A (en) Packet communication system

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW SD SL SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
AK Designated states

Kind code of ref document: A3

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW SD SL SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

WWE Wipo information: entry into national phase

Ref document number: 1999946532

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 58929/99

Country of ref document: AU

ENP Entry into the national phase

Ref document number: 2346715

Country of ref document: CA

Ref country code: CA

Ref document number: 2346715

Kind code of ref document: A

Format of ref document f/p: F

ENP Entry into the national phase

Ref country code: JP

Ref document number: 2000 575248

Kind code of ref document: A

Format of ref document f/p: F

WWP Wipo information: published in national office

Ref document number: 1999946532

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWG Wipo information: grant in national office

Ref document number: 58929/99

Country of ref document: AU

WWW Wipo information: withdrawn in national office

Ref document number: 1999946532

Country of ref document: EP