WO2015104054A1 - Mapping of data into data containers - Google Patents

Mapping of data into data containers

Info

Publication number
WO2015104054A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
mapping
transmission
throughput
acknowledgement
Prior art date
Application number
PCT/EP2014/050279
Other languages
French (fr)
Inventor
Orazio Toscano
Sergio Lanzone
Original Assignee
Telefonaktiebolaget L M Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget L M Ericsson (Publ) filed Critical Telefonaktiebolaget L M Ericsson (Publ)
Priority to PCT/EP2014/050279 priority Critical patent/WO2015104054A1/en
Publication of WO2015104054A1 publication Critical patent/WO2015104054A1/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04J MULTIPLEX COMMUNICATION
    • H04J3/00 Time-division multiplex systems
    • H04J3/16 Time-division multiplex systems in which the time allocation to individual channels within a transmission cycle is variable, e.g. to accommodate varying complexity of signals, to vary number of channels transmitted
    • H04J3/1605 Fixed allocated frame structures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]

Abstract

A mapping is selected for use in mapping of data at a transmitting side (20) into data containers for transmission across a time division multiplexing (TDM) physical layer (70, 80, 90) of a communications network, to provide lossless transmission to a receiving side (30), by returning an acknowledgement to enable flow control. The selecting involves determining (110, 112, 114, 116, 420) whether transmission throughput of the data across the TDM physical layer is limited by any waiting for the return of the acknowledgement. Then the selecting (120, 122, 124, 450) of the mapping is made according to whether the transmission throughput is so limited. By such selecting according to whether the throughput is limited, wastage of bandwidth can be reduced. This mapping selection can be applied when mapping fibre channel line rates to OTN data containers.

Description

MAPPING OF DATA INTO DATA CONTAINERS
Technical Field
The present invention relates to methods for mapping data into data containers for transmission across a time division multiplexing physical layer of a communications network, and to corresponding apparatus and computer programs.
Background
It is known to provide lossless data connections between storage area networks using the well-known fibre channel (FC) protocol over Optical Transport Networks (OTN). The FC protocol does not allow any packet losses during the connections: i.e. every packet transmitted must be received. A buffer-to-buffer (B2B) flow control mechanism is implemented to avoid congestion and resulting loss of packets as follows:
The transmitter can transmit only when the receiver has at least one free buffer.
The availability of buffers at the receiver side is communicated to the transmitter via an acknowledgement in the form of an R-RDY signal.
Transmitting causes the transmitter to decrease its credit buffer by one for each transmitted frame.
The transmitter increases its credit buffer by one each time it receives the R-RDY signal from the remote receiver, and if its credit buffer runs down to zero then it waits for another R-RDY signal before sending another frame of data.
This can prevent any buffer overflow at the receiver and thus avoid loss of frames, and can avoid any need for retransmissions.
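By way of illustration only (an editorial sketch, not part of the original disclosure), the transmitter side of such a buffer-to-buffer credit scheme can be modelled as follows; the class and method names are assumptions chosen for readability rather than taken from the FC standard.

```python
class B2BTransmitter:
    """Illustrative model of the transmitter side of FC buffer-to-buffer credit flow control."""

    def __init__(self, initial_credit):
        # The initial credit equals the number of receive buffers agreed with the far end.
        self.credit = initial_credit

    def on_r_rdy(self):
        # Each R-RDY acknowledgement indicates a freed receive buffer: restore one credit.
        self.credit += 1

    def try_send(self, frame, link):
        # Send only while credit remains; with zero credit the transmitter must wait.
        if self.credit == 0:
            return False
        link.send(frame)          # `link` is any object with a send() method (illustrative)
        self.credit -= 1          # one credit consumed per transmitted frame
        return True
```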
As Fibre Channel has a number of different line rates for the data, and as OTN provides a number of different data containers suitable for different data rates, a standard mapping has been agreed in ITU-T G.709 (see in particular chapter 17 of G.709/Y.1331 (02/2012)) to set out which size of data container to use for each of the FC line rates.
Summary
Embodiments of the invention provide improved methods and apparatus. According to a first aspect of the invention, there is provided a method for mapping of data at a transmitting side into data containers for transmission across a time division multiplexing (TDM) physical layer of a communications network, to provide lossless transmission to a receiving side, the receiving side being arranged to return an acknowledgement to enable flow control of the data at the transmitting side. The method has steps of determining whether transmission throughput of the data across the TDM physical layer is limited by any waiting for the return of the acknowledgement across the TDM physical layer, and selecting a mapping for the data into the data containers for transmission across the TDM physical layer. The selecting is made according to the determination of whether the transmission throughput is so limited. A benefit of selecting the mapping according to whether the throughput is limited is that a more efficient mapping can be found, with less wastage of bandwidth than the standard mapping. Such limitation of throughput can be caused by any waiting caused by delays in the transmission or other reasons. The selecting of the mapping can in principle be carried out centrally or distributed at ingress nodes, and in principle can be carried out before sending, or updated periodically during sending, if consistent with the protocols being used; see figs 1, 3, 4 and 5 for example.
Any other features or steps can be added to the method, or specifically disclaimed from the method, and some are set out in dependent claims and described in more detail.
One such additional feature of some embodiments is a step of estimating a transmission throughput according to an amount of delay in the transmissions between the flow control at the transmitting side and the receiving side. A benefit is that this is likely to provide a good estimation as this amount is likely to be a major factor in the reduced throughput, and in any variations of the reduced throughput. See fig 6 or fig 15 for example. This delay can be measured or estimated, see fig 7 for example. Another such additional feature is a step of measuring the transmission throughput and comparing it to an expected throughput if no waiting were to occur, to determine whether the transmission throughput is limited. A benefit is that this can provide a better basis for the mapping but may be harder or more costly to measure. See fig 8 for example.
Another such additional feature is the flow control comprising sending a predetermined amount of data without acknowledgement, and the method having the step of estimating the transmission throughput based on the predetermined amount of data sent without acknowledgement. A benefit is that this can be another substantial factor which can affect the waiting and thus affect the estimate of reduction in throughput. See fig 8 or fig 15 for example. Another such additional feature is the selecting of the mapping comprising selecting a size of the data containers. A benefit of this is that bandwidth wastage can be reduced if the size is adapted to the reduction in overall throughput. See fig 10 for example.
Another such additional feature is the selecting of the mapping comprising selecting multiplexing of the data with other data into the data containers. A benefit of this is that the containers can be filled more efficiently and thus bandwidth wastage reduced. This is also useful to help enable lower rate signals to be carried efficiently. See fig 11 for example.
Another such additional feature is a step of mapping the data into the data containers using the selected mapping. By covering the use as well as the selection of the mapping, the coverage can correspond better to time and location of the benefits, for example where the use may occur at different times, or may repeat, or may occur at different locations to the selection of the mapping.
Another such additional feature is a step of smoothing a flow rate of the data after any waiting, to provide more even filling of the data containers. A benefit of this is that it can help ensure the data containers are filled more efficiently. See fig 12 for example.
Another such additional feature is the data comprising data in fibre channel frames. This is a particularly valuable example of a type of client layer data flow which can provide lossless transmission. See figs 13 to 15 for example.
Another aspect provides a computer program on a non-transitory computer readable medium and having instructions which when executed by a computer, cause the computer to carry out the method as set out above.
Another aspect provides apparatus for use in mapping of a data flow at a transmitting side into data containers for transmission across a TDM physical layer of a communications network, to provide lossless transmission to a receiving side, the receiving side being arranged to return an acknowledgement to enable flow control at the transmitting side. The apparatus has a mapping selector operative to determine whether transmission throughput of the data across the TDM layer is limited by any waiting for the return of the acknowledgement across the TDM physical layer, and to select a mapping for the data into the data containers for transmission across the TDM physical layer according to whether the transmission throughput is so limited.
Another such additional feature is the apparatus being operative also to measure the transmission throughput and compare it to an expected throughput if no waiting were to occur, to determine whether the transmission throughput is limited.
Another such additional feature is the flow control comprising sending a predetermined amount of data without acknowledgement, and the apparatus being operative also to estimate the transmission throughput based on the predetermined amount of data sent without acknowledgement.
Another such additional feature is the selecting of the mapping comprising selecting a size of the data containers.
Another such additional feature is the selecting of the mapping comprising selecting multiplexing of the data with other data into the data containers.
Another such additional feature is a mapper arranged to map the data into the data containers using the selected mapping.
Another such additional feature is a shaper for smoothing a flow rate of the data after any waiting, to provide more even filling of the data containers.
Another such additional feature is the data to be transmitted comprising fibre channel frames and the apparatus having an interface for receiving the fibre channel frames.
Any of the additional features can be combined together and combined with any of the aspects. Other effects and consequences will be apparent to those skilled in the art, especially compared to other prior art. Numerous variations and modifications can be made without departing from the claims of the present invention. Therefore, it should be clearly understood that the form of the present invention is illustrative only and is not intended to limit the scope of the present invention.
Brief Description of the Drawings:
How the present invention may be put into effect will now be described by way of example with reference to the appended drawings, in which:
Fig 1 shows a schematic view of apparatus according to a first embodiment,
Fig 2 shows a schematic view of a flow control mechanism,
Fig 3 shows some steps in the operation of the mapping selector according to a first embodiment,
Figs 4 and 5 show timing diagrams of flow control and throughput,
Figs 6, 7, 8 and 9 show steps according to embodiments showing different ways of determining whether transmission throughput is limited,
Figs 10 and 11 show steps according to embodiments showing different ways of selecting the mapping,
Fig 12 shows steps according to an embodiment having mapping with smoothing,
Fig 13 shows a schematic view of SANs coupled by an FC link carried over an OTN according to an embodiment,
Fig 14 shows a schematic view of an access node according to an embodiment and
Fig 15 shows steps for selecting a mapping for FC data to be transmitted over an OTN according to another embodiment.
Detailed Description:
The present invention will be described with respect to particular embodiments and with reference to certain drawings but the invention is not limited thereto but only by the claims. The drawings described are only schematic and are non-limiting. In the drawings, the size of some of the elements may be exaggerated and not drawn to scale for illustrative purposes. It will be appreciated by those skilled in the art that block diagrams can represent conceptual views of illustrative circuitry embodying the functionality. Similarly, it will be appreciated that any flow charts, state transition diagrams, pseudocode, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
Definitions:
Where the term "comprising" is used in the present description and claims, it does not exclude other elements or steps and should not be interpreted as being restricted to the means listed thereafter. Where an indefinite or definite article is used when referring to a singular noun e.g. "a" or "an", "the", this includes a plural of that noun unless something else is specifically stated.
Elements or parts of the described nodes or networks may comprise logic encoded in media for performing any kind of information processing. Logic may comprise software encoded in a disk or other computer-readable medium and/or instructions encoded in an application specific integrated circuit (ASIC), field programmable gate array (FPGA), or other processor or hardware.
References to software can encompass any type of programs in any language executable directly or indirectly on processing hardware.
References to processors, hardware, processing hardware or circuitry can encompass any kind of logic or analog circuitry, integrated to any degree, and not limited to general purpose processors, digital signal processors, ASICs, FPGAs, discrete components or logic and so on. References to a processor are intended to encompass implementations using multiple processors which may be integrated together, or co-located in the same node or distributed at different locations for example. References to acknowledgement are intended to encompass any kind of acknowledgement including for example acknowledgement of receipt of the data at the receiving side, whether buffered there or not, and to include acknowledgement of buffer space being available at the receiving side, or acknowledgement that no buffer space is available, and to include acknowledgement that the receiving side is ready, or is not ready for any reason, even if there is no receive buffer. The acknowledgement can be a signal in any form, to encompass a signal transition or a signal level, or a digital value for example.
References to determining whether transmission throughput is limited are intended to encompass determining in any manner, including for example by estimating, calculating, looking up, comparing to a threshold and so on.
References to transmission throughput are intended to encompass throughput averaged over time. The time should be long enough to include periodic fluctuations in transmission rate caused by waiting, if any.
References to waiting are intended to encompass the transmitting side reducing the transmission rate to zero, or to some intermediate level for example.
References to transmission throughput being limited are intended to encompass any amount of limiting below an expected transmission throughput for a case of no waiting.
Abbreviations
ASIC Application Specific Integrated Circuit
FC Fibre Channel
FIFO First In First Out
FPGA Field Programmable Gate Array
GFP Generic Framing Procedure
GMP Generic Mapping Procedure
ODU Optical Data Unit
OTN Optical Transport Network
OTU Optical Transport Unit
R-RDY Receive Ready
SAN Storage Area Network
SDH Synchronous Digital Hierarchy
TDM Time Division Multiplexed
Trib card Tributary card
Introduction
By way of introduction to the embodiments, how they address some issues with conventional designs will be explained. In ITU-T G.709 the following mappings have been defined, and will be referred to as the standard mapping.
Table 1 (not reproduced here): mappings of FC line rates to different data containers for OTN, as defined in ITU-T G.709
To enable a lossless connection, the transmitter needs a storage capacity to keep the packets or frames waiting for the acknowledgement to be returned from the receiver. The higher the transmission rate of the FC links, the higher the storage capacity needed on the node. Currently most SAN equipment does not allow buffers with a capacity of more than 256 frames.
Furthermore the throughput of the connection is directly related to the dimension of the buffers in the nodes, the bitrate of the transmission and the latency due to the distance between the two nodes. For example, for an FC-1200 connection between two nodes with a buffer capacity of 256 frames, latency between the nodes causes the following effects:
a) At the beginning of the transmission the transmitter will send out all the frames while tokens are present in the credit buffer (i.e. 256 frames are transmitted without waiting for acknowledgement). This transmission takes place faster for FC links having higher bit rates.
b) Due to the latency the transmitter must wait for the acknowledgement from the receiver before it can transmit a next group of frames.
This means the transmission is not constant but is 'bursty' with the effect that the real transmission throughput of the connection will be lower than expected, increasing the time and therefore the costs. This problem might be addressed by:
1. Increasing the buffer dimension of the nodes
2. Reducing the latency between the nodes
But point 1 is not practical with the present installed base of SANs and would in any case mean a cost in the design of the nodes, while for point 2, even more careful planning of the path of the FC signals inside the OTN cannot completely avoid the latency due to the time needed to cross the fibres. A more detailed example showing the problem will now be described.
Known Example
This assumes a customer needs the transport of ten FC-100 streams over OTN. According to G.709 chapter 17 each FC-100 is mapped into an ODU0. Suppose the ODU0s are multiplexed into ODU1/OTU1 (i.e. 2xODU0 → ODU1 → OTU1). 5xOTU1 are therefore needed for the transport of the 10 FC-100 signals.
Suppose that the customer needs a daily backup that will imply one hour of traffic for each OTU1. Overall the customer will pay the equivalent of 1xOTU1 for 5 hours on a daily basis. Suppose that due to the latency for the distance between the two SAN nodes, the FC-100 ports will work at 20% of the nominal speed (as a mean value), and therefore the customer will have to pay the equivalent of 1xOTU1 for 25 hours on a daily basis (each upload will take five hours instead of one). Furthermore the ODU0s will be only partly filled all the time. This is because the standard mapping in ITU-T G.709 has been designed to fulfil the transport of the FC-100 full rate. Therefore there is a need to improve transport of lossless data protocols having flow control based on acknowledgements, such as SAN clients, over OTN or similar physical networks.
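The figures quoted in this example can be checked with a short calculation (an editorial illustration of the arithmetic above, not part of the original disclosure):

```python
streams = 10             # FC-100 clients to be transported
streams_per_otu1 = 2     # standard mapping: 2 x ODU0 multiplexed into one ODU1/OTU1
hours_per_backup = 1     # nominal upload time per stream at the full FC-100 rate

otu1_links = streams / streams_per_otu1        # 5 OTU1 links needed
nominal_hours = otu1_links * hours_per_backup  # equivalent of 1 x OTU1 for 5 hours per day

effective_speed = 0.2                          # ports working at 20% of nominal speed
actual_hours = nominal_hours / effective_speed # equivalent of 1 x OTU1 for 25 hours per day
print(otu1_links, nominal_hours, actual_hours) # 5.0 5.0 25.0
```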
Introduction to features of embodiments
To address such issues, embodiments can involve identifying when the delay of the OTN network does not allow the use of the FC links at full speed, and then selecting which type of mapping to use, rather than the standard mapping specified by G.709. Thus the situation where the data containers specified by G.709 are partly empty, owing to the above-described waiting, can be avoided or ameliorated.
In G.709 it is already specified how to measure the delay of the ODUk path (i.e. Delay Measurement). In some embodiments, the likely waiting can be estimated before the set-up of the real FC traffic, by measuring the delay of the ODUk path. In other embodiments the transmission throughput can be measured directly rather than estimating it. Either way can lead to a selection, based on the estimates, of whether to use either the standard G.709 mapping or a more optimized one. This could allow the customer to pay less for the same service, and leave more bandwidth available for use by other data or other customers.
Fig 1, first embodiment
Figure 1 shows a schematic view of apparatus according to a first embodiment, having a transmitting side 20 and a receiving side 30 coupled by a time division multiplexed TDM physical layer of a communications network. The TDM physical layer can be implemented by an OTN, or an SDH network or any other kind of TDM based network. Parts of the TDM physical layer which are shown are a TDM transmitter tx 90, an intermediate TDM node 70 and a TDM receive part 80, though many other nodes and parts can be included. The transmitting side may be implemented as part of an access node for taking signals of any kind of lossless protocol having flow control depending on an acknowledgement returned from the receiving side and converting such signals into signals suitable for transmission across the TDM physical layer for example. Fibre channel is one example of such a lossless protocol, but others can be envisaged.
The transmitting side has a flow control part 40 depending on an acknowledgement received from the receiving side, so as to ensure lossless transmission. The transmitting side also has a mapper 50 for mapping the data into the TDM data containers for transmission over the TDM physical layer. The mapping used by the mapper is notably selected by a mapping selector 10. This makes a selection based on whether the transmission throughput across the TDM physical layer is limited by waiting for the acknowledgement to be returned. If so, then a more efficient mapping may be chosen rather than the standard mapping. The mapping selector is optionally co-located with the mapper, though it can be located elsewhere in principle, or distributed between the receiving side and the sending side, or located centrally. The flow control can use any mechanism which provides lossless transmission based on an acknowledgement. The acknowledgement can be any kind of acknowledgement including acknowledgement of receipt (if there is no receive buffer for example) or acknowledgement of space being available in a receive buffer, for example. It can be in any format including a signal transition, a signal level, a digital value and so on. An example of a suitable flow control mechanism, as used in the known fibre channel protocol, will now be described with reference to figure 2.
Fig 2, known flow control mechanism
Figure 2 shows a schematic view of a flow control mechanism between a transmitting side 20 and a receiving side 30 having receive buffers 60. The flow control part in the transmitting side has a credit counter 42 which is incremented for each acknowledgement returned in the form of an R-RDY signal from the receive buffers. This signal indicates that one of the receive buffers has become free, as its contents have been passed on. The flow control part also has a function coupled to the credit counter 42, for sending 44 a frame only if the credit counter shows credit is available, indicating space in the receive buffers, in the sense that the counter shows a value greater than zero for example, or a value less than the capacity of the receive buffers, depending on the scheme. For example the credit counter value can represent a number of frames sent without acknowledgement, or can represent a number of the free receive buffers, where each buffer takes a frame.
In this kind of buffer-credit flow control, the source and destination agree beforehand on a number of unacknowledged frames allowed to be sent before the transmitting side stops and waits. This agreed number depends on the buffer sizes, and the larger it is, the more transmission delay in returning acknowledgements can be tolerated without causing waiting. In typical FC networks it is currently rare to have more than 256 buffer credits because of the cost of large buffers.
In other alternative examples, it is possible to envisage other types of flow control and other types of counting. For example the counter could be reversed, and count down for each acknowledgement received and count up for each frame sent. Each acknowledgement could count for multiple frames or for any fraction of a frame.
Fig 3, steps in selecting a mapping according to an embodiment
Figure 3 shows some steps in the operation of the mapping selector according to a first embodiment. At step 100, an indication of data to be transmitted across the TDM physical layer is received, which triggers step 110. This involves determining whether transmission throughput of that data across the TDM physical layer is limited by waiting for return of acknowledgements from the receiving side buffers. As explained above, this waiting can be caused by transmission delays across the TDM physical layer. Whether the throughput is limited can be determined by measuring throughput or by estimating throughput for example. The estimate can be based on known or measured transmission delays, and based on how much data can be sent without acknowledgement, which in turn depends on buffer sizes.
At step 120, the selection of mapping is made according to whether the transmission throughput is limited. This can be implemented in various ways, for example by selecting a smaller size of data container, or by choosing to multiplex other data into the standard size of container. Either way, the wastage of bandwidth can be reduced. The throughput value may be used directly, or may be compared to one or more thresholds and the selection based on the result or results of the comparison for example. At step 130, the selected mapping can be used in the mapper as the data comes in for transmission by the transmitting side.
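A minimal sketch of this selection flow (steps 110 to 130 of figure 3) is shown below; the single threshold and the two-way choice of container are illustrative assumptions, since the description above allows the throughput value to be used directly or compared to several thresholds.

```python
def select_mapping(transmission_throughput, expected_throughput,
                   standard_container, smaller_container, limit_ratio=0.9):
    """Return (container, multiplex_other_data) following the steps of figure 3.

    The 0.9 threshold is illustrative only; the container objects are whatever the
    mapper understands (e.g. identifiers of ODU sizes).
    """
    # Step 110: determine whether the throughput is limited by waiting for acknowledgements.
    limited = transmission_throughput < limit_ratio * expected_throughput
    if not limited:
        # Step 120 (first branch): keep the standard mapping, no multiplexing of other data.
        return standard_container, False
    # Step 120 (second branch): use a smaller container, or multiplex other data into it.
    return smaller_container, True
```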
Figs 4, 5, timing diagrams of flow control and throughput
Figure 4 shows a time chart of flow control actions with time flowing down the figure. In a left-hand column are actions at the transmitting side, a central column indicates when TDM transmission is occurring or waiting, and a right-hand column indicates actions at the receive buffers. At step 200, the transmitting side starts sending, as shown by the start of the black line in the TDM column which indicates transmission is occurring. At step 210, later in time (by the amount of data transmission delay as shown), the receive buffers receive the start of the data. At step 220, the transmitting side has sent the agreed amount of data without receiving any acknowledgement and so stops sending and waits. The black line in the TDM column stops. At step 230, the receive buffers, which are now all full, begin to empty as data is passed on. This starts after a time shown as the receive buffer dwell time. This might be dependent on other circuitry or signalling downstream, such as response time of storage facilities in a node of a SAN, or intermediate buffers for example. At this point the receive buffers begin to send acknowledgements (acks) back across the TDM network to the transmitting side. Here at step 240 these acks are detected, after a transmission delay, and the transmitting side starts sending again. At step 250, the receive buffers are full once more, and the acks are no longer sent. This again is detected at the transmitting side which at step 260 stops sending data and waits. This process repeats at step 270 where the receive buffers restart returning acks, and at step 280 where these acks are detected at the transmitting side. Hence there is a stop-start process of transmitting which results in a limited throughput corresponding to a mark-space ratio of the line representing the TDM transmission. This is shown in another way in figure 5.
Figure 5 shows a timing graph with time flowing from left to right and the y-axis representing a transmission rate. There are three pulses of transmission activity shown, with the transmission rate corresponding to a maximum TDM transmission rate. In between are wait periods where the transmission rate is zero. The mark-space ratio is approximately 2:1, and this means the limited transmission throughput is about 65% of the maximum or expected throughput if there were no waiting, as shown by the dotted line at 65% of the height of the pulses. In principle, the buffer dwell time may be negligible in some embodiments. In principle the buffering at the receiver is not essential; embodiments can be envisaged where there is no receive buffer, and the acknowledgement indicates whether the receiver is ready, or is busy or not ready to receive for any other reason.
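The approximately 65% figure follows from a simple duty-cycle relation over one burst/wait cycle; the short sketch below (an editorial illustration, not part of the original disclosure) makes the arithmetic explicit.

```python
def effective_throughput(burst_time, wait_time, max_rate):
    # Throughput averaged over one cycle of the stop-start transmission of figure 5.
    return max_rate * burst_time / (burst_time + wait_time)

# A 2:1 mark-space ratio gives roughly two thirds of the maximum rate.
print(effective_throughput(burst_time=2.0, wait_time=1.0, max_rate=1.0))  # ~0.67
```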
Figs 6, 7, 8 and 9, ways of determining whether transmission throughput is limited
Figure 6 shows steps according to an embodiment similar to that of figure 3 and corresponding reference numerals are used as appropriate. In this case, within box 110 is box 112 to show one way of implementing the step of box 110. This involves estimating transmission throughput for the data according to delay in transmissions between flow control at the transmitting side and the receive buffers. The delays can be measured for OTN for example or can be estimated based on information about the path taken, and information about delays of each link along the path. This may be enough to estimate waiting in some cases, for example if there is little or no buffering at the receiver, which might make dwell time, buffer size and how much data can be sent unacknowledged into negligible or much less significant factors. From an estimate of waiting, the determination of whether the transmission throughput is limited by the waiting can be made.
Various ways of determining whether the transmission throughput is limited by waiting can be envisaged. This can depend on the type of the acknowledgement and the type of flow control being used, which can be the known example for FC as described above, or other types of flow control. One way is to determine how much delay in returning the acknowledgement can be tolerated without causing waiting, and compare that to the estimated or measured delay. How much such delay can be tolerated can be calculated in some cases from a known value for a maximum transmission rate and from a quantity of data to be transmitted in one go. In some cases the quantity of data can be one frame, and if the acknowledgement is of a type which can be returned while the transmitting side is still transmitting that frame, then waiting can be avoided. In some cases the quantity of data can be a number of frames agreed in advance. One such case is described in more detail in relation to figure 15, but other embodiments using other types of flow control can be envisaged.
Figure 7 shows steps according to an embodiment similar to that of figure 6 and corresponding reference numerals are used as appropriate. In this case, within box 112 are boxes 111 and 113 showing two different ways of obtaining the delay value for use in step 112 in estimating the transmission throughput. These two different ways can be used as alternatives or can both be used together or in sequence. In step 111, the delay is estimated, whereas in step 113, the delay is measured. In step 111, the delay can be estimated based on knowledge of path lengths of the links used through the TDM network (from which delay values can be derived) and knowledge of delays of each of the nodes which are traversed. One way is to take minimum and maximum values for these delays, and calculate a sum of the minimum values and a separate sum of the maximum values. These can be used in step 112 to calculate minimum and maximum estimates of transmission throughput.
In step 113 the delay is measured, for example by timing signals sent out and back across the TDM network, preferably along the same path as used by the data. For the OTN case, this can be achieved using already existing ODU fields for carrying timestamp information. An empty ODU can be sent with a timestamp, and at the receiving side it can be looped back with the same timestamp to be returned to the transmitting side. On its return, its timestamp can be compared with a current time at the transmitting side to determine a total time for transmitting and returning. This can be done periodically if the delay is likely to change over time or if a new path is taken for any reason.
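A minimal sketch of such a loopback measurement is given below; send_test_odu and receive_looped_odu are placeholders standing in for the ODUk delay-measurement mechanism of G.709, not a real API, and a symmetric path is assumed.

```python
import time

def measure_one_way_delay(send_test_odu, receive_looped_odu):
    """Estimate the one-way path delay from a timestamped loopback (illustrative only).

    send_test_odu(ts)     -- placeholder: sends an otherwise empty ODU carrying timestamp ts
    receive_looped_odu()  -- placeholder: blocks until the looped-back ODU returns and
                             gives back the timestamp it carries
    """
    t_sent = time.monotonic()
    send_test_odu(t_sent)
    t_echoed = receive_looped_odu()            # same timestamp, looped back by the far end
    round_trip = time.monotonic() - t_echoed   # total time out and back
    return round_trip / 2.0                    # assume the two directions are symmetric
```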
Figure 8 shows steps according to an embodiment similar to that of figure 3 and corresponding reference numerals are used as appropriate. In this case, within box 110 is box 114 to show another way of implementing the step of box 110. This involves measuring the transmission throughput for the data. This can then be compared to the expected throughput if no waiting were to occur, to deduce whether the transmission throughput is limited by waiting. The transmission throughput can be measured in various ways such as measuring wait periods or counting frames received at the receiver or transmitted from the transmitting side.
Figure 9 shows steps according to an embodiment similar to that of figure 3 and corresponding reference numerals are used as appropriate. In this case, within box 110 is box 116 to show another way of implementing the step of box 110. This involves estimating transmission throughput for the data according to transmission delay (measured or estimated based on path information for example) and according to the agreed amount of data that can be sent without acknowledgement, if this is significant. Assumptions or measurements of dwell time may be provided if they are significant in estimating waiting. The amount of data that can be sent without acknowledgement can be set in various ways. It may be set according to how much can be buffered at the receive side, so as to ensure there is no buffer overflow, or according to other considerations.
Figs 10, 11, ways of selecting mapping according to embodiments
Figure 10 shows steps according to an embodiment similar to that of figure 3 and corresponding reference numerals are used as appropriate. In this case, within box 120 is box 122 to show one way of implementing the step of box 120. This involves selecting a size of data container to use. If the transmission throughput is limited, then a smaller size of data container may be more efficient than a standard size suitable for an expected throughput.
Figure 11 shows steps according to an embodiment similar to that of figure 10 and corresponding reference numerals are used as appropriate. In this case, instead of step 122 there is a step 124 as one way of implementing step 120. In step 124 there is selection of multiplexing of the data with other data into the data containers. This is another way of avoiding or reducing wastage of bandwidth in the data containers for the TDM physical layer.
Fig 12, embodiment having mapping with smoothing
Figure 12 shows steps according to an embodiment similar to that of figure 3 and corresponding reference numerals are used as appropriate. In this case, the selected mapping is used at step 132 in a mapper with smoothing of a flow rate of the data after the waiting has caused disruption of the flow rate as shown in figure 5 for example. By smoothing the flow rate, using a shaper circuit or function for example, such as a first-in-first-out (FIFO) memory, the gaps in the flow of the data as shown by figure 5 for example can be filled in to some extent. This means the data containers can be filled more evenly, which can help avoid wastage of bandwidth.
Figs 13, 14, embodiment in SAN
Figure 13 shows a schematic view of storage area networks SAN 310 at different locations coupled by an FC link 320 carried over a TDM physical layer in the form of OTN 300 according to an embodiment. In this case the transmitting side is implemented in an access node 330 at one side of the OTN. The receiving side is implemented in an access node 340 at the other side of the OTN. The mapping can be implemented in the transmitting side according to a mapping selection generated by a mapping selector located there or located elsewhere in this view.
Figure 14 shows a schematic view of an access node according to an embodiment for use in the network of figure 13 or in other embodiments. The access node has a switching fabric 334 controlled by controller 332. The data containers in the form of ODUs are output from the switching fabric to the rest of the OTN by OTN output line card 337. ODUs are fed into the switching fabric by a trib card 336. This trib card has one or more FC interfaces 333 arranged to receive data in the form of FC frames over the FC link. A flow control mechanism is implemented by a wait function 342 in the data path from the FC interface, before a shaper 335 and a mapper 338. The wait function is controlled by a flow control function typically implemented in software by a processor and memory 331 having instructions for implementing the flow control as described above. The shaper is for smoothing the data flow rate and can be implemented by a FIFO as described above in relation to figure 12. The mapper carries out the mapping into data containers according to a mapping selected by a mapping selector 10. This mapping selector can also be implemented in software by a processor and memory 331, to have a function as described above in more detail in relation to figures 3 to 8 for example, or as described below in relation to figure 15. After the mapping, the ODUs are completed by adding ODU overhead shown by part 339 before the ODUs are output to the switch fabric.
Note that the view of the access node shows paths and functions related to data passing in one direction, from FC to OTN; in practice there are typically paths and functions related to the other direction, from OTN to FC, not shown in figure 14 for the sake of clarity.
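A much simplified, editorial sketch of this one-direction data path (wait function, shaper, mapper) is given below; the class names, interfaces and division of responsibilities are illustrative assumptions and omit the ODU overhead and switching stages.

```python
from collections import deque

class FifoShaper:
    """Simplified shaper (cf. 335): buffers the bursty flow and releases it at a steady rate."""
    def __init__(self, frames_per_tick):
        self.queue = deque()
        self.frames_per_tick = frames_per_tick
    def push(self, frame):
        self.queue.append(frame)               # bursts from the flow-controlled client queue up
    def pull(self):
        out = []
        while self.queue and len(out) < self.frames_per_tick:
            out.append(self.queue.popleft())   # emit at most a fixed number of frames per tick
        return out

class TribCardTxPath:
    """Simplified one-direction path of figure 14: wait function -> shaper -> mapper."""
    def __init__(self, flow_control, shaper, mapper):
        self.flow_control = flow_control       # decides whether a frame may proceed (cf. 342)
        self.shaper = shaper                   # smooths the flow rate (cf. 335)
        self.mapper = mapper                   # fills containers per the selected mapping (cf. 338)
    def on_fc_frame(self, frame):
        if self.flow_control.may_send():       # illustrative interface of the wait function
            self.shaper.push(frame)
    def on_tick(self):
        for frame in self.shaper.pull():
            self.mapper.map_into_container(frame)  # illustrative interface of the mapper
```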
Fig 15, FC-OTN mapping select embodiment using estimated throughput
Figure 15 shows steps for selecting a mapping for FC data to be transmitted over an OTN according to another embodiment, for use in the access node of figure 14 or in other embodiments. This embodiment involves estimating transmission throughput rather than measuring the throughput, and basing the estimate on transmission delay and credit size. At step 400 the delay D of the ODUk path is measured. As discussed above in relation to figure 7, an alternative is to estimate the delay rather than measure it. At step 410 there is a step of verifying the buffer credit size B of the SAN nodes (which is effectively the amount which can be sent without acknowledgement). At step 420, there is a calculation of whether the measured values of D and B allow the FC-X data to be transmitted at its full specified line rate across the OTN. If yes, then at step 430 the standard mapping for the respective FC-X is selected and used. If not, then the transmission throughput is regarded as limited, and at step 440 there is a calculation of the transmission throughput in the form of the effective working speed of the FC-X over the OTN.
Based on this estimated value of transmission throughput, at step 450, a more efficient mapping can be selected, typically using a smaller than standard ODUy data container. At step 460 there is a step of mapping the FC-X into the ODUy data containers according to the selected mapping.
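The selection of figure 15 (steps 400 to 460) can be summarised in the following sketch; the function and parameter names, the candidate container list and the full-rate check are editorial assumptions, and the worked numbers given in the next two paragraphs can be used to sanity-check it.

```python
def select_fc_over_otn_mapping(delay_each_way_s, buffer_credits, fc_line_rate_bps,
                               frame_bits, standard_container, smaller_containers):
    """Sketch of steps 400-450 of figure 15 (illustrative only).

    delay_each_way_s   -- D, the measured or estimated one-way ODUk path delay
    buffer_credits     -- B, the number of frames that may be sent without acknowledgement
    smaller_containers -- candidate (capacity_bps, container) pairs, smallest capacity first
    """
    frame_time = frame_bits / fc_line_rate_bps
    burst_time = buffer_credits * frame_time       # time to send B unacknowledged frames
    round_trip = 2 * delay_each_way_s              # earliest time for the first ack to return

    if burst_time >= round_trip:                   # step 420: full line rate is sustainable
        return standard_container                  # step 430: keep the standard G.709 mapping

    effective_rate = fc_line_rate_bps * burst_time / round_trip   # step 440: working speed
    for capacity, container in smaller_containers: # step 450: smallest container that still fits
        if capacity >= effective_rate:
            return container
    return standard_container                      # fall back if no smaller container fits
```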
Note that where the mapping selector involves measuring delay or measuring transmission throughput, these steps may be implemented in software at the transmitting side or the receiving side in principle. Thus the mapping selector can be split or distributed between transmitting and receiving sides for example. Note that in an alternative embodiment, step 440 can be replaced by a step of measuring the effective transmission throughput while using the standard mapping initially, in which case the steps of measuring the delay and obtaining the buffer credit size can be omitted. Then, if the throughput is limited, a more efficient mapping can be selected.
Note that steps 420 and 440 can be implemented in various ways. One way of calculating step 420 of whether the values of D and B allow full speed working is to consider that a frame is approximately 2 kbytes, and at a maximum transmission rate of 1 Gbps it may occupy about 4 km of fibre in the OTN. To allow full working speed, there need to be enough buffer credits B to enable the full length of the path to be filled. Also, if an acknowledgement is sent back immediately the first of the data is received, there need to be enough buffer credits to continue transmitting long enough to enable the acknowledgement to be returned, which is effectively the same as filling the entire path again. Thus, if the path length is 20 km, and the propagation speed is 5 microseconds per km, this means a delay D of 100 microseconds each way, and 5 credits are needed to fill that path (5 credits x 4 km = 20 km), and another 5 credits are needed to fill it again. Thus for a delay D of 100 microseconds each way, approximately 10 credits are needed, B=10, to achieve full speed working with no waiting (if effects of node delays and other factors are negligible).
One way of calculating step 440 to obtain the effective working speed is to determine what proportion of the full working speed can be achieved. An example for a maximum transmission rate of 1 Gbps where D=150 microseconds each way and B=10 credits gives a wait of 100 microseconds (a round trip of 300 microseconds minus 200 microseconds of sending, the time for 10 buffer credits) after sending for 200 microseconds. This means an effective transmission throughput of approximately 66% of the maximum transmission throughput. The mapping selection can be based on this value to select a suitable size of data container to reduce the bandwidth wastage which would occur if the maximum transmission throughput were assumed.
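Both figures quoted above can be reproduced with a short calculation (illustrative only; consistently with the example, it assumes frames of about 2 kbytes, i.e. roughly 20 microseconds each at 1 Gbit/s):

```python
rate_bps = 1e9                 # maximum transmission rate of 1 Gbit/s
frame_bits = 20_000            # ~2 kbytes per frame, about 20 microseconds at 1 Gbit/s
frame_time = frame_bits / rate_bps
prop_s_per_km = 5e-6           # propagation delay of 5 microseconds per km of fibre

# Step 420 check for a 20 km path (D = 100 microseconds each way):
credits_needed = (2 * 20 * prop_s_per_km) / frame_time  # fill the path out and back
print(round(credits_needed))                            # ~10 credits, i.e. B = 10

# Step 440 for D = 150 microseconds each way and B = 10 credits:
burst = 10 * frame_time        # 200 microseconds of sending per cycle
round_trip = 2 * 150e-6        # 300 microseconds until the first acknowledgement returns
print(burst / round_trip)      # ~0.66, i.e. about 66% of the full working speed
```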
An example selection based on the known example described above will now be set out. This relates to transporting n x FC-100 streams over a network with a latency that causes them to work at half speed if the standard mapping currently specified by G.709 (i.e. GMP into ODU0) is used. Assuming it is not possible to reduce the round trip delay (due to the physical constraints of the network), an optimized way to map the FC-100 streams over OTN is:
• to reduce, via policing, each FC-100 stream, for instance down to 200 Mbit/s (i.e. the same throughput it would have using the standard GMP mapping into ODU0), and
• to use a GFP mapping to multiplex more than one FC-100 client into one ODU1/OTU1 (e.g. up to 5 in this case).
In this example the customer will pay for a single OTU1 link for 5 hours on a daily basis, which matches the initial target. (In addition, the customer may optimize the transport costs by using fewer links.) The same optimization can be obtained for all the types of FC-x signals when they cannot work at full speed due to the transport over OTN. As can be seen, such embodiments can help improve the efficiency of the FC-x over OTN transport by finding a more effective mapping of FC-x over ODUk when the latency of the network causes waiting and thus prevents the FC-x from being transmitted at full speed. This can enable reduced costs for the customer and reduced wastage of the OTN bandwidth.
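The capacity arithmetic behind this choice can be illustrated as follows; the ODU1 payload figure of roughly 2.5 Gbit/s is an approximate value quoted for illustration only and should be checked against G.709.

```python
policed_rate_mbps = 200           # each FC-100 stream policed down to 200 Mbit/s
clients = 5                       # FC-100 clients GFP-multiplexed into one ODU1/OTU1
odu1_payload_mbps = 2500          # approximate ODU1 payload capacity (illustrative figure)

aggregate = clients * policed_rate_mbps            # 1000 Mbit/s of policed client traffic
assert aggregate <= odu1_payload_mbps              # the five policed streams fit in one ODU1
print(aggregate, odu1_payload_mbps - aggregate)    # used capacity vs. spare capacity
```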
Other variations can be envisaged within the claims.

Claims

Claims:
1. A method for mapping of data at a transmitting side into data containers for transmission across a time division multiplexing (TDM) physical layer of a communications network, to provide lossless transmission to a receiving side, the receiving side being arranged to return an acknowledgement to enable flow control of the data at the transmitting side,
the method having steps of:
determining whether transmission throughput of the data across the TDM physical layer is limited by any waiting for the return of the acknowledgement across the TDM physical layer, and
selecting a mapping for the data into the data containers for transmission across the TDM physical layer, the selecting being made according to the determination of whether the transmission throughput is so limited.
2. The method of claim 1, having the step of estimating a transmission throughput according to an amount of delay in the transmissions between the flow control at the transmitting side and the receiving side.
3. The method of claim 1 or 2, having a step of measuring the transmission throughput and comparing it to an expected throughput if no waiting were to occur, to determine whether the transmission throughput is limited.
4. The method of any preceding claim, the flow control comprising sending a predetermined amount of data without acknowledgement, and the method having the step of estimating the transmission throughput based on the predetermined amount of data sent without acknowledgement.
5. The method of any preceding claim, the selecting of the mapping comprising selecting a size of the data containers.
6. The method of any preceding claim, the selecting of the mapping comprising selecting multiplexing of the data with other data into the data containers.
7. The method of any preceding claim, having the step of mapping the data into the data containers using the selected mapping.
8. The method of claim 7, having the step of smoothing a flow rate of the data after any waiting, to provide more even filling of the data containers.
9. The method of any preceding claim, the data comprising data in fibre channel frames.
10. A computer program on a non-transitory computer readable medium and having instructions which when executed by a computer, cause the computer to carry out the method of any of claims 1 to 9.
11. Apparatus for use in mapping of a data flow at a transmitting side into data containers for transmission across a time division multiplexing (TDM) physical layer of a communications network, to provide lossless transmission to a receiving side, the receiving side being arranged to return an acknowledgement to enable flow control at the transmitting side, the apparatus comprising a mapping selector operative to:
determine whether transmission throughput of the data across the TDM layer is limited by any waiting for the return of the acknowledgement across the TDM physical layer, and
select a mapping for the data into the data containers for transmission across the TDM physical layer according to whether the transmission throughput is so limited.
12. The apparatus of claim 11, being operative also to estimate a transmission throughput according to an amount of delay in the transmissions between the flow control at the transmitting side and the receiving side.
13. The apparatus of claim 11 or 12, being operative also to measure the transmission throughput and compare it to an expected throughput if no waiting were to occur, to determine whether the transmission throughput is limited.
14. The apparatus of any of claims 11 to 13, the flow control comprising sending a predetermined amount of data without acknowledgement, and the apparatus being operative also to estimate the transmission throughput based on the predetermined amount of data sent without acknowledgement.
15. The apparatus of any of claims 11 to 14, the selecting of the mapping comprising selecting a size of the data containers.
16. The apparatus of any of claims 11 to 15, the selecting of the mapping comprising selecting multiplexing of the data with other data into the data containers.
17. The apparatus of any of claims 11 to 16, the apparatus also comprising a mapper arranged to map the data into the data containers using the selected mapping.
18. The apparatus of claim 17, the apparatus having a shaper for smoothing a flow rate of the data after any waiting, to provide more even filling of the data containers.
19. The apparatus of any of claims 11 to 18, the data to be transmitted comprising fibre channel frames and the apparatus having an interface for receiving the fibre channel frames.
PCT/EP2014/050279 2014-01-09 2014-01-09 Mapping of data into data containers WO2015104054A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2014/050279 WO2015104054A1 (en) 2014-01-09 2014-01-09 Mapping of data into data containers

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2014/050279 WO2015104054A1 (en) 2014-01-09 2014-01-09 Mapping of data into data containers

Publications (1)

Publication Number Publication Date
WO2015104054A1 (en) 2015-07-16

Family

ID=49956166

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2014/050279 WO2015104054A1 (en) 2014-01-09 2014-01-09 Mapping of data into data containers

Country Status (1)

Country Link
WO (1) WO2015104054A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107864099A (en) * 2017-10-23 2018-03-30 中国科学院空间应用工程与技术中心 A kind of flow control methods and system of isomery FC networks
CN109962802A (en) * 2017-12-26 2019-07-02 中兴通讯股份有限公司 Bandwidth adjusting method, device, system, transport plane node and storage medium
CN111201728A (en) * 2017-10-09 2020-05-26 华为技术有限公司 Data transmission method in optical network and optical network equipment
CN112929765A (en) * 2021-01-19 2021-06-08 赵晋玲 Multi-service transmission method, system and storage medium based on optical transmission network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030074449A1 (en) * 2001-10-12 2003-04-17 Rory Smith Bandwidth allocation in a synchronous transmission network for packet oriented signals
US20040085902A1 (en) * 2002-11-05 2004-05-06 Pierre Miller Method and system for extending the reach of a data communication channel using a flow control interception device
WO2005065161A2 (en) * 2003-12-30 2005-07-21 Cisco Technology, Inc. Apparatus and method for improved fibre channel oversubscription over transport
WO2006076652A2 (en) * 2005-01-14 2006-07-20 Cisco Technology Inc. Dynamic and intelligent buffer management for san extension
US20070260728A1 (en) * 2006-05-08 2007-11-08 Finisar Corporation Systems and methods for generating network diagnostic statistics

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030074449A1 (en) * 2001-10-12 2003-04-17 Rory Smith Bandwidth allocation in a synchronous transmission network for packet oriented signals
US20040085902A1 (en) * 2002-11-05 2004-05-06 Pierre Miller Method and system for extending the reach of a data communication channel using a flow control interception device
WO2005065161A2 (en) * 2003-12-30 2005-07-21 Cisco Technology, Inc. Apparatus and method for improved fibre channel oversubscription over transport
WO2006076652A2 (en) * 2005-01-14 2006-07-20 Cisco Technology Inc. Dynamic and intelligent buffer management for san extension
US20070260728A1 (en) * 2006-05-08 2007-11-08 Finisar Corporation Systems and methods for generating network diagnostic statistics

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SI YIN ET AL: "Storage area network extension over passive optical networks (S-PONS)", IEEE COMMUNICATIONS MAGAZINE, IEEE SERVICE CENTER, PISCATAWAY, US, vol. 46, no. 1, 1 January 2008 (2008-01-01), pages 44 - 52, XP011224533, ISSN: 0163-6804, DOI: 10.1109/MCOM.2008.4427229 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111201728A (en) * 2017-10-09 2020-05-26 华为技术有限公司 Data transmission method in optical network and optical network equipment
US11082199B2 (en) 2017-10-09 2021-08-03 Huawei Technologies Co., Ltd. Data transmission method in optical network and optical network device
CN107864099A (en) * 2017-10-23 2018-03-30 中国科学院空间应用工程与技术中心 A kind of flow control methods and system of isomery FC networks
CN109962802A (en) * 2017-12-26 2019-07-02 中兴通讯股份有限公司 Bandwidth adjusting method, device, system, transport plane node and storage medium
CN112929765A (en) * 2021-01-19 2021-06-08 赵晋玲 Multi-service transmission method, system and storage medium based on optical transmission network
CN112929765B (en) * 2021-01-19 2023-05-12 赵晋玲 Multi-service transmission method, system and storage medium based on optical transmission network

Similar Documents

Publication Publication Date Title
CN101499957B (en) Multipath load balance implementing method and data forwarding apparatus
CN103155488B (en) Delay measurements system and delay measuring method and delay measurements equipment and delay measurements program
CN101193060B (en) Method for reliable E1 transmission based on forward error correction mechanism in packet network
US7876785B2 (en) Transport of aggregated client packets
US20050276223A1 (en) Bandwidth optimization in transport of Ethernet frames
JP5833253B2 (en) Method and related network element for providing delay measurement of an optical transmission network
EP2978237B1 (en) Device unit, node device, and method and system for adjusting tunnel bandwidth
US8284691B2 (en) Estimating data throughput
ES2953738T3 (en) Link Aggregation with Data Segment Fragmentation
JP2004274766A (en) Clock synchronization on packet network
WO2015104054A1 (en) Mapping of data into data containers
US20090003235A1 (en) Method and Apparatus For Data Frame Transmission
US20120051227A1 (en) Method for signalling of data transmission path properties
US9054824B2 (en) Inter-frame gap controller, traffic transmitter, transmission apparatus and inter-frame gap control method
EP2630752B1 (en) Layer one path delay compensation
JP5690938B2 (en) Method and apparatus for transmitting traffic in a communication network
US9800509B2 (en) System and method for efficient transport of large data files
EP1339181B1 (en) Method and device for providing a minimum congestion flow of Ethernet traffic transported over a SDH/SONET network
JP2011114380A (en) Communication system, transmitter, receiver, communication device, transmission line quality estimating method, and program
US20180035319A1 (en) Method and apparatus for monitoring a performance of an ethernet data stream
CN109716683B (en) Time synchronization in real-time content distribution systems
KR20120072204A (en) Apparatus for measuring available bandwith of satellite networks and method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14700344

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14700344

Country of ref document: EP

Kind code of ref document: A1