Title of Invention

A DATA UNIT BASED COMMUNICATION SYSTEM AND SAID COMMUNICATION SYSTEM

Abstract A method of processing a data unit in a data unit based communication system, the data unit being of a first protocol layer (L3) and intended for transmission in a data unit based communication system with a buffer (101, 104), an embedder (103) and a controller (102) connected to the buffer (101, 104) and the embedder (103), comprising the steps of: passing to a second protocol layer (L2) a given data unit of said first protocol layer (L3) that is to be transmitted, said second protocol layer (L2) lying below said first protocol layer (L3), under the control of the controller (102), the controller (102) being arranged to control the buffer (101, 104); determining one or more numeric values, said one or more numeric values belonging to at least one numerically quantifiable parameter associated with said given data unit of said first protocol layer (L3);
Full Text FORM 2
THE PATENTS ACT 1970
[39 OF 1970]
COMPLETE SPECIFICATION
[See Section 10; rule 13]
"A DATA UNIT BASED COMMUNICATION SYSTEM AND SAID COMMUNICATION

SYSTEM"


TELEFONAKTIEBOLAGET LM ERICSSON [PUBL], a Swedish company, of S-126 25 Stockholm, Sweden,
GRANTED
24-01-2005
The following specification particularly describes the nature of the invention and the manner in which it is to be performed :-


ORIGINAL
1083-MUMNP-2003

The present invention relates to a method of processing a data unit in a data unit based communication system, and said communication system.
A well-known principle for data exchange in networks is that of data unit exchange. This means that data to be sent is broken down into individual units. Rules for sending and receiving such units, as well as rules for the structure of the units themselves, are determined by so-called protocols. Protocols are sets of rules that allow communication between a sending end and a receiving end, as the rules specify how and in what form data to be sent has to be prepared such that the receiving end may interpret the data and react in accordance with protocol-defined rules to which both partners in the communication adhere. The two ends of a communication adhering to a specific protocol are also referred to as peers.
Such data units are sometimes referred to by different names, depending on the type of protocol involved, such as packets, frames, segments, datagrams, etc. For the purpose of clarity the present description uses the term "data unit" generically for any type of data unit associated with any type of protocol.
An important concept in communications using data unit exchange is that of protocol layering. This means that a number of protocols (sometimes also referred to as a suite) is organised in a hierarchy of layers, where each layer has specific functions and responsibilities. The concept of


layering is well known in the art and described in many textbooks, for example "TCP/IP Illustrated, Volume 1: The Protocols" by W. Richard Stevens, Addison-Wesley, 1994, such that a detailed description is not necessary here.
The TCP/IP protocol suite is an example of a layered protocol hierarchy. A basic structure of a protocol hierarchy is defined by the OSI (Open System Interconnection) layer model. At a lowest layer, which is also referred to as the physical layer or L1, the functions of directly transporting data over a physical connection are handled. Above the physical layer, a second layer L2, which is also referred to as the link layer, is provided. The link layer L2 fulfils the function of handling the transport of data units over links between communication nodes. Above the link layer L2 a third layer L3 is provided, which is also referred to as the network layer. The network layer handles the routing of data units in a given network. An example of a network layer protocol is the internet protocol (IP). Above the network layer, a fourth layer L4 is provided, which is also referred to as the transport layer. Examples of a transport layer protocol are the transmission control protocol (TCP) or the user datagram protocol (UDP).
In a data unit based communication system using a hierarchy of protocol layers, a communication comprises passing a given data unit downwards through the protocol hierarchy on the sending side, and passing a data unit upwards through the protocol hierarchy on the receiving side. When a data unit is passed downwards, each protocol will typically perform a certain function with respect thereto, e.g. add further information and change or adapt the structure to specific rules of that protocol layer. Typically each protocol layer will add its own header to a data unit received from a higher protocol layer and may also add delimiters. When a specific protocol layer receives a data


unit from a higher protocol layer, it will embed the higher layer data unit into a data unit adhering to the rules of the given protocol layer. The term "embedding" shall refer to both encapsulation in which one data unit of a higher layer is placed into one data unit of a given layer, and to segmentation, where one data unit of a higher layer is segmented into a plurality of data units of the given protocol layer.
An important aspect of the layering scheme is that the different layers are "transparent". This means that the peers in a layer are oblivious to what happens in another layer.
Typically, each protocol layer will perform some type of transmission control for its data units. Such transmission control can e.g. comprise the performing of a certain type of forward error correction, the setting of parameters associated with an automatic repeat request (ARQ) function, the scheduling of data units, or the performing of comparable operations.
It is known to implement protocol layers in such a way that they can be operated in a specific mode with respect to the transmission control. As an example, a so-called numbered mode (or I-mode) and a so-called unnumbered mode (UI-mode) are known. In the numbered mode, if it is determined that a sent data unit was not correctly received by the receiving peer, then the sending peer performs retransmission of said data unit. In this way it can be assured that all packets are correctly transmitted, although this may increase the delay, depending on how many packets have to be transmitted. On the other hand, in the unnumbered mode, no retransmissions are provided. This has the advantage of less delay, but the transmission reliability depends on the quality of the physical connection.

The possibility of setting a given protocol layer implementation into a specific transmission control mode has the advantage that the selection of the mode can e.g. be performed by a control procedure from a higher protocol layer, in order to optimise the sending of data units from said higher protocol layer. However, it does not provide very much flexibility, as a given protocol layer will typically handle a variety of different types of data units that require different control settings with respect to the optimisation of the sending of the given type of data unit. As an example, if an application layer is sending a computer file, it desires to ensure a reliable transmission, and may therefore want to set a lower layer protocol implementation into the numbered mode, or the application layer may want to send data that requires real-time transmission, such as a video stream belonging to a video telephone call, in which case transmission speed is more important than reliability, such that the application layer may want to set a lower layer protocol implementation into the unnumbered mode. However, if both computer file data and video data are being sent, then the setting of the lower layer protocol implementation into a given transmission control mode will not provide an optimum solution.
EP-0 973 302 A1 addresses this problem and proposes a system in which a given protocol layer that receives higher layer data units and embeds these higher layer data units into data units of said given layer is arranged to discriminate the higher layer data units by reading the header information and determining the type of the higher layer data unit. Then, a classification is performed in accordance with the identified type. In this way, the given protocol layer can flexibly set the transmission reliability of its data units depending on the type of the higher layer data unit that is embedded in its own data units. As an example, if the system of EP-0 973 302 A1 is

applied to a link layer in the TCP/IP suite, then the link layer can identify if the network layer IP data unit that it receives carries a TCP data unit, in which case the link layer data units are sent in the numbered mode, or if the network layer IP data unit carries a UDP data unit, in which case the link layer data units are sent in the unnumbered mode.
However, the system of EP-0 973 302 A1 is not always practical, as it requires the parsing of higher layer data units in order to identify type information, which e.g. does not work when the higher layer data unit has an encrypted header and/or payload.
[Object of the present invention]
It is desirable to provide an improved method and system of processing data units of a higher protocol layer at a given protocol layer, which is simple to implement, but flexibly provides improved transmission properties.
[Summary of the invention]
This object is achieved by the subject-matter of the independent claims. Advantageous embodiments are described in the dependent claims.
In accordance with an embodiment of the present invention, at a given protocol layer, e.g. a link layer L2, one or more numeric values of one or more numerically quantifiable parameters associated with a received given data unit of a higher protocol layer, e.g. the network layer L3, are determined. In other words, one value of one numerically quantifiable parameter can be determined, or several values of one numerically quantifiable parameter, or one or more respective values for each of a plurality of numerically quantifiable parameters. The at least one numeric value is

not derived from information contained in the given data unit of the higher protocol layer. In other words, instead of analysing the content of the higher layer data unit, one or more simple physical properties that are numerically evaluatable are measured, and the embedding and/or transmission control is performed in accordance with the determined value. Consequently, no parsing of higher layer data units or other similar complicated processing is necessary.

As an example, a numerically quantifiable parameter can be the size of the higher layer data unit. In other words, at a given protocol layer e.g. L2, a higher layer data unit, e.g. from the L3 layer, is received, and the size of said L3 data unit is measured. Then the embedding operation for embedding said L3 data unit into one or more L2 data units, or the transmission control operation for transmitting the one or more L2 data units into which said L3 data unit has been embedded, is performed in accordance with the result of said size measurement.



Preferably, the size measurement is used as a basis for adjusting the transmission control to optimise predetermined target properties. More specifically, the L2 data units are transmitted with parameters set for optimising throughput if the L3 data unit falls into a predetermined size range, and otherwise the L2 data units are transmitted with optimised delay. Namely, if the L3 data unit is found to have a size indicative of a maximum size, e.g. falls into a range around, or is equal to, the TCP maximum segment size if the L3 data units carry TCP data units, then the transmission of the L2 data units, into which said L3 data unit has been embedded, is optimised for throughput, as it may be assumed that the maximum-size L3 data unit belongs to a larger amount of data being sent from above the L3 layer. On the other hand, if the L3 data unit is smaller in size, then the transmission control is


optimised for delay, as it can be assumed that the smaller L3 data units are associated with control operations, such as synchronisation or acknowledgment messages, where delay optimisation is more suitable than throughput optimisation.
Other examples of a numerically quantifiable parameter that can be used in the context of the present invention are a buffer fill level of a buffer holding data units of the upper protocol layer (e.g. L3), or of a buffer holding data units of the protocol layer (e.g. L2) receiving the upper layer data units. Another numerically quantifiable parameter is the inter-arrival time of the upper layer data units, i.e. the time that passes between the arrival of two consecutive upper layer data units.
Each of the given examples of numerically quantifiable parameters can be used alone as a basis for providing one or more values to be used in performing the embedding and/or transmission control, or can be used together with one or more of the named numerically quantifiable parameters for providing such values to be used in performing the embedding and/or transmission control.
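Purely by way of illustration, and not as part of the claimed subject-matter, the following Python sketch shows how such numeric values (size, buffer fill level, inter-arrival time) might be obtained without inspecting the content of the higher layer data unit; all names and data structures are illustrative assumptions.

import time
from collections import deque

class ParameterObserver:
    """Illustrative observer deriving numeric values for a received higher
    layer (e.g. L3) data unit without parsing its content."""

    def __init__(self):
        self.l3_buffer = deque()   # buffer holding received L3 data units
        self.last_arrival = None   # arrival time of the previous L3 data unit

    def observe(self, l3_data_unit: bytes) -> dict:
        now = time.monotonic()
        inter_arrival = None if self.last_arrival is None else now - self.last_arrival
        self.last_arrival = now
        self.l3_buffer.append(l3_data_unit)
        return {
            "size": len(l3_data_unit),           # size of the L3 data unit
            "buffer_fill": len(self.l3_buffer),  # fill level of the buffer holding L3 data units
            "inter_arrival": inter_arrival,      # time since the previous L3 data unit
        }

observer = ParameterObserver()
print(observer.observe(b"\x00" * 1500))
print(observer.observe(b"\x00" * 40))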
According to a preferred embodiment, the method of the present application also comprises a step of discriminating a group of data units of the higher protocol layer to which the higher layer data unit belongs. This discrimination can occur on the basis of source information and/or destination information in the higher layer data unit and/or a protocol identifier in the higher layer data unit. Especially, if the present preferred embodiment is applied in the context of TCP/IP, the discrimination can consist in determining the flow to which the higher layer data unit belongs. A flow is defined by the source and destination IP address, the source and destination port number and a protocol identifier.

The result of said discrimination can be used in the transmission control procedure for the data units into which the higher layer data unit is embedded. More specifically, referring to the above described example in which the size of the higher layer data unit was compared with a reference range or a reference value indicative of a maximum data unit size, it is possible to enhance such a comparison by taking into consideration the type of the higher layer data unit. Namely, the type of the higher layer data unit (or the type of a data unit contained in the higher layer data unit) can be used as a means for determining a reference range or reference value with which to compare the size of the higher layer data unit. As an example, if the higher layer data unit is identified as being of a first type (i.e. belonging to a first predetermined group), then the size of said higher layer data unit is compared to a first size reference value (or to a first set of size reference values, i.e. a discrete range of values, or to a first continuous range of values), whereas if the higher layer data unit is discriminated as being of a second type (i.e. belonging to a second predetermined group), then the size of said higher layer data unit is compared to a second size reference value (or to a second set of size reference values, or to a second continuous range of values).
On the other hand, the result of said discriminating step can also be used as a basis for determining the numerically quantifiable parameter of which a numeric value is determined. Namely, the numerically quantifiable parameter can be associated with a buffer fill level of a buffer holding data units of the lower protocol layer, where the numeric value is the number of data units of the lower protocol layer (e.g. L2) in the buffer that embed data units of the higher protocol layer belonging to the discriminated group, e.g. belonging to the discriminated flow. Equally, when using the inter-arrival time of the higher layer data unit as the numerically quantifiable parameter, then the numeric value can be chosen as the inter-arrival time value for those higher layer data units belonging to the discriminated group. In the latter case,
i.e. when determining the inter-arrival time of data units belonging to a group, it is possible to add a further condition, e.g. to determine the inter-arrival time of such data units of the group that fall into a given size category. For example, if the group is a flow and the size
category is selected to relate to data units that contain an acknowledgement (i.e. have a minimum size), then it is possible to use the inter-arrival time of the acknowledgment data units in the flow as a basis for the transmission control.
The transmission control performed in accordance with the numeric value of the numerically quantifiable parameter can consist in any suitable measures such as the adjusting of forward error correction, the adjusting of ARQ settings, or the adjusting of scheduling. Furthermore, if the embedding operation comprises a segmentation of the higher layer data unit into several lower layer data units, then this segmentation operation can be performed in accordance with the numeric value, namely by adjusting the size of the lower layer segments in accordance with the determined value. The transmission control can consist in making adjustments for the data units at the layer receiving the higher layer data units, or at a layer below. For example, if the method of the invention is applied to a link layer L2, then adjustments can be made for the L2 data units, but also for L1 data units.
As already mentioned, the transmission control can be performed to optimise certain target parameters, depending on the determined numeric value. Examples of such target parameters are the throughput (the amount of payload data



transported per unit of time) or the (average) transmission delay.
According to another preferred embodiment, the transmission control for transmitting the lower layer data units, into which a higher layer data unit has been embedded, comprises a discrimination of the lower layer data units, such that each of the one or more lower layer data units is classified into one of a plurality of predetermined transmission categories on the basis of the discrimination result. This discrimination of lower layer data units can advantageously be combined with the discrimination of higher layer data units in such a way that the lower layer data units embedding a higher layer data unit belonging to a predetermined group are themselves divided into sub-groups associated with the group of the higher layer data unit. For example, if the group to which the higher layer data unit (e.g. L3 data unit) belongs is a flow, then such a flow can be divided into sub-flows at the lower protocol layer (e.g. L2) that embeds the data units of the flow.
According to another embodiment of the invention, one or more numeric values of a numerically quantifiable parameter are used in a congestion alleviation procedure, such that a decision step for deciding whether to perform a congestion alleviation measure with respect to a data unit or not depends on said numeric value or values. The congestion alleviation procedure can e.g. comprise a data unit dropping procedure and/or a data unit marking procedure. Therefore, the congestion alleviation measure can consist in dropping a data unit from the buffer, or in adding a marking to a data unit, said marking informing the communication end-points that congestion is taking place. An example of such a marking is the Explicit Congestion Notification (ECN) known from TCP/IP.
For example, L3 data units received at layer 2 are buffered before processing. Furthermore, a congestion alleviation procedure for performing a congestion alleviation measure with respect to one or more buffered L3 data units is provided. The congestion alleviation procedure can be triggered by a number of predetermined conditions, e.g. when a link overload of the link over which the L2 data units are to be sent is detected, or a buffer overflow of the buffer storing the received L3 data units is detected. According to the embodiment, a decision step for deciding whether to perform a congestion alleviation measure or not depends on the described numeric value, e.g. on the size of the L3 data unit, or on the inter-arrival time of L3 data units. Although the above example related to the buffering of L3 data units received at L2 (sometimes also referred to as service data units SDUs), the concept of making a congestion alleviation decision dependent on a numeric value associated with a given L3 data unit is also applicable to one or more L2 data units (L2 PDUs) in which the given L3 data unit is embedded. However, especially if the congestion alleviation procedure consists in a data unit dropping procedure, it is preferable to perform any congestion alleviation at the highest sub-layer, i.e. at the SDU level, in order to avoid unnecessary embedding operations for data that is possibly dropped.
The method and system of the present invention can be put to practice in any suitable or appropriate way by hardware, by software or by any suitable combination of hardware and software.
[Brief description of figures]
The present invention shall now be described by referring to detailed embodiments, which are only intended to be illustrative, but not limiting, with reference to the appended drawings, in which:


Fig. 1 shows a flow chart for explaining an embodiment of the present invention;
Fig. 2 shows a flow chart for explaining another embodiment of the present invention, in which the numeric value is determined based on data unit size;
Fig. 3 shows a flow chart for explaining another embodiment of the present invention, in which higher layer data units are additionally discriminated;
Fig. 4 shows a flow chart for explaining another embodiment of the present invention, in which the numeric value is determined based on the discrimination result;
Fig. 5 shows an overview of a protocol structure to which the present invention can be applied;
Fig. 6 schematically illustrates the concept of flow-splitting in the context of the example shown in Fig. 5;
Figs. 7a-7c show routines of an embodiment of the invention in which data units are buffered and a data unit dropping procedure is provided;
Fig. 8 shows an example of a data unit dropping procedure;
Fig. 9 shows a further example of a data unit dropping procedure; and


Fig. 10 shows a schematic block diagram of a system that is arranged to perform the routines and procedures of Figs. 7 to 9.
[Detailed description of embodiments]
In the following, detailed embodiments of the present invention shall be described in order to give the skilled person a full and complete understanding. However, these embodiments are illustrative and not intended to be limiting, as the scope of the invention is defined by the appended claims.
The following examples shall be described in the context of TCP/IP. However, it may be noted that the present invention is applicable to any protocol hierarchy in which higher layer data units are embedded into lower layer data units, and where a transmission control is performed for the lower layer data units.
The following embodiments shall be described in the context of applying the invention to a link layer L2, said link layer embedding L3 data units from the network layer. However, this is only an example, and the present invention can be applied at any level of a protocol hierarchy i.e. also below or above the link layer.
Fig. 1 shows a flow chart describing a first embodiment of the present invention. As can be seen, in a first step S1 an L3 data unit is passed to the link layer L2 below the layer L3. Then, in step S2, a numeric value of a numerically quantifiable parameter associated with the L3 data unit is determined. It may be noted that the term "associated with the L3 data unit" is to be understood broadly as relating to any numerically quantifiable parameter derivable from the L3 data unit, where said parameter can relate to the L3 data unit as such, or to a data unit from a higher layer

than L3 embedded in the L3 data unit. Then, in step S3, the received L3 data unit is embedded into one or more L2 data units. Finally, in step S4, transmission control for the one or more L2 data units and/or L1 data units that embed the received L3 data unit is performed according to the numeric value determined in step S2.
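Merely as an illustrative rendering of the sequence of steps S1 to S4, and under the assumption of hypothetical helper callables, such a processing flow could be sketched as follows in Python:

def process_l3_data_unit(l3_data_unit, measure, embed, transmit):
    """Schematic rendering of steps S1 to S4 of Fig. 1.

    measure:  callable returning a numeric value for the L3 data unit (step S2)
    embed:    callable embedding the L3 data unit into one or more L2 data units (step S3)
    transmit: callable performing transmission control according to the value (step S4)
    """
    value = measure(l3_data_unit)         # S2: determine numeric value
    l2_data_units = embed(l3_data_unit)   # S3: embed into L2 data units
    transmit(l2_data_units, value)        # S4: transmission control per value

# usage with trivial stand-ins for the measuring, embedding and transmitting functions
process_l3_data_unit(
    b"\x00" * 576,
    measure=len,
    embed=lambda sdu: [sdu[i:i + 100] for i in range(0, len(sdu), 100)],
    transmit=lambda pdus, v: print(f"{len(pdus)} L2 data units, value={v}"),
)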
Although the example of Fig. 1 shows steps S1 to S4 in a specific sequence, this is only an example and other arrangements of steps S1 to S4 are possible.
The adjustments performed in step S4 will depend on the specific circumstances and the system under consideration. For example, the transmission control may comprise adjusting a forward error correction for the data units of the L2 protocol layer or for the L1 data units below the L2 protocol layer. The type of forward error correction can be chosen as is desirable or appropriate. For example, the transmission control may comprise adjusting a transmission power and/or a data rate (e.g. by selecting a spreading factor in a CDMA system) over a given link and/or a degree of interleaving. The mentioned link can e.g. be a wireless link.
If the L2 protocol layer comprises a function of providing automatic retransmission of L2 data units under predetermined conditions, then the transmission control may comprise adjusting said retransmission function.
If the L2 protocol layer comprises a function for the scheduling of the L2 data units, then the transmission control may comprise adjusting said scheduling.
In the event that the embedding in step S3 is a segmentation operation, then the embodiment of Fig. 1 can be arranged such that the segmentation operation is performed according to said numeric value. In other words,

the segmentation operation in dependence on the numeric value can be an alternative to making the transmission control in step S4 dependent on the numeric value, or the segmentation operation in dependence on the numeric value can be a supplement to making the transmission control in step S4 dependent on the numeric value, i.e. can be provided together with such transmission control.
The adjusting of said segmentation operation may e.g. comprise adjusting the size of the L2 data units into which a given L3 data unit is segmented.
Fig. 2 shows a flowchart of another embodiment of the present invention. The same reference signs as in Fig. 1 refer to the same steps in Fig. 2. Namely, first an L3 data unit is passed to the L2 layer in step S1. Then, in step S21, which represents an example of the more general step S2 shown in Fig. 1, the size of the received L3 data unit is measured. In other words, it is determined how much data is contained in the received L3 data unit, in any suitable dimension, such as bits or bytes.
The L3 data unit is embedded into one or more L2 data units in step S3. Then, the transmission control is performed in accordance with steps S41, S42 and S43, which represent a specific example of the general procedure represented as S4 in Fig. 1. More specifically, step S41 compares the measured size of the L3 data unit with at least one reference size, where said reference size preferably represents a maximum data unit size. Then, if the size of the L3 data unit is equal to the reference size, the transmission control is optimised for throughput (step S42), and if the size of the L3 data unit is different from the reference size, then transmission control is optimised with respect to delay (step S43).
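As an illustration only, the decision of steps S41 to S43 might be sketched as follows, here assuming the discrete reference sizes that are mentioned further below in the description:

def select_transmission_mode(l3_size: int, reference_sizes=(296, 552, 576, 1500)) -> str:
    """Steps S41 to S43 of Fig. 2 in schematic form: compare the measured
    L3 data unit size with one or more reference sizes and choose the
    optimisation target accordingly."""
    if l3_size in reference_sizes:     # S41: size equals a reference (maximum) size
        return "optimise-throughput"   # S42
    return "optimise-delay"            # S43

print(select_transmission_mode(1500))  # full-size data unit -> throughput optimisation
print(select_transmission_mode(40))    # small data unit (e.g. acknowledgment) -> delay optimisation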


In the above example, step S41 consists in comparing the measured size with a reference value. Naturally, the measured size can also be compared to a range of values, where said range can consist of a discrete number of reference values, or can be continuous, or can be a combination of discrete and continuous values. Also, in the above example, the reference value (or range) is indicative of a maximum data unit size. However, this is only an example, and the reference value (or range) can be chosen in any suitable or desirable way, depending on the given system and circumstances. For example, the reference value can also be indicative of a minimum data unit size that identifies a data unit carrying an acknowledgment.
The optimising of transmission control for throughput can e.g. consist in reducing the amount of forward error correction and enabling ARQ for ensuring transmission reliability. On the other hand, optimising transmission control for reducing delay can consist in increasing
forward error correction, but disabling ARQ. This shall be explained in more detail in the following in the specific context of TCP/IP.
Non-real-time applications often run on top of TCP, whereas real-time applications usually run on top of UDP. The main design requirement for link layer (L2) protocols for non-real-time applications is throughput optimisation. This is for example achieved by tuning the trade-off between forward error correction and ARQ, so that the overall throughput is optimised. It is possible to obtain higher throughputs by weakening forward error correction and enabling ARQ. Throughput is the quantity of transmitted payload per unit of time. The throughput increases with decreasing forward error correction, as forward error correction increases overhead, which leads to a reduced amount of payload being transmitted.

In an encoding based forward error correction, data is divided into blocks, where each block is the result of an encoding operation that combines the payload with redundancy information that allows a decision on the part of the receiver whether a block has been correctly received and decoded. Then a block error rate may be defined as the rate of events in which a received block is judged as having an error. It may be noted that reduced forward error correction leads to an increased block error rate, but it can be shown that throughput can nonetheless be increased. For example, when calculating throughput as the product of data rate multiplied by (1 - block error rate), a data rate of 8 kbit/s and a block error rate of 0.01 leads to a throughput of 7.92 kbit/s, whereas a data rate of 12 kbit/s (achieved by less forward error correction) and a block error rate of 0.1 (due to the reduced forward error correction) lead to a throughput of 10.8 kbit/s. Consequently, although the block error rate is ten times higher, the increased payload data rate leads to an overall increased throughput.
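For illustration, this throughput calculation can be reproduced with the following short sketch; the formula throughput = data rate x (1 - block error rate) is the one used in the example above:

def throughput(data_rate_kbit_s: float, block_error_rate: float) -> float:
    """Throughput approximated as data rate multiplied by (1 - block error rate)."""
    return data_rate_kbit_s * (1.0 - block_error_rate)

print(throughput(8, 0.01))   # 7.92 kbit/s: strong forward error correction, low block error rate
print(throughput(12, 0.1))   # 10.8 kbit/s: weaker forward error correction, higher data rate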
The problem is that even in a flow corresponding to a TCP based bulk data transfer, which should overall be optimised for throughput, there are individual data units or segments which are delay sensitive. Examples are the connection set-up messages, which should be exchanged as quickly as possible to be able to start with data transmission. Another example is the TCP acknowledgement messages in the reverse direction, or HTTP requests, which should be received as soon as possible.
In order to differentiate between L3 data units that should be optimised for throughput and those that should be optimised for delay reduction, the present embodiment determines the size of the L3 data unit, see step S21 in Fig. 2.

Namely, L3 data units for which the throughput should be optimised in general belong to a larger application data amount, which is segmented into transport layer data units, e.g. TCP segments. Therefore, these transport layer data units, which are then embedded in network layer (L3) data units, generally have a maximum transfer unit size, e.g. the TCP maximum segment size (MSS).
Consequently, as already mentioned, step S41 of Fig. 2 can be implemented in such a way that the comparison is conducted with respect to a reference value or reference range indicative of a predetermined maximum transfer unit size. Typically, the value of the maximum transfer unit size is 256 byte, 512 byte, 536 byte or 1460 byte, such that, taking into account the TCP and IP headers of 40 byte in total, suitable reference size values used in step S41 could be 296 byte, 552 byte, 576 byte or 1500 byte.
As already mentioned, step S41 can be implemented in such a way that the measured size of the L3 data unit is compared with a plurality of reference sizes, e.g. the previously mentioned series of discrete values 296, 552, 576 and 1500 bytes, or with a suitable continuous range, or with a combination of discrete values and continuous values. Then, if the measured size of the L3 data unit is equal to any one of the plurality of reference sizes, or falls into the predetermined range or ranges, then step S42 is enabled, to thereby optimise the transmission control for throughput.
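Merely to illustrate the derivation of these reference sizes from the mentioned maximum segment sizes, the following sketch may be considered; the 40 byte overhead is the header allowance used in the example above:

# Typical maximum segment sizes named in the description; the 40 byte addition
# accounts for the header overhead of a full-size segment.
MSS_VALUES = (256, 512, 536, 1460)
HEADER_OVERHEAD = 40

reference_sizes = [mss + HEADER_OVERHEAD for mss in MSS_VALUES]
print(reference_sizes)  # [296, 552, 576, 1500]

def is_maximum_size(l3_size: int) -> bool:
    """True if the measured L3 data unit size matches any reference size."""
    return l3_size in reference_sizes

print(is_maximum_size(1500), is_maximum_size(576), is_maximum_size(40))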
In this connection, it can be mentioned that the reference size values may not only take into account the TCP and IP headers, but also the possibility of header compression. In other words, the plurality of size reference values used in step S41 also contains a set of values that take into account header compression.


In the example of Fig. 2, other data units having different sizes than the reference size are sent with optimised delay (step S43). In other words, they are sent with minimum delay. In the example of TCP/IP this means that all pure TCP control messages (such as synchronisation messages, acknowledgement messages, etc.) are sent with minimum delay. Also, the final data unit belonging to a larger application data amount is transmitted with optimised (e.g. minimum) delay.
As already indicated above, delay optimisation can for example be achieved by choosing the appropriate coding scheme and/or adjusting the transmission power. If one assumes an additional delay for a retransmission to be 100 ms, one can conclude that the transmission delay becomes less and less important for small data units. Therefore, it is better to use stronger forward error correction (e.g. more redundancy) and/or higher transmission power to improve the mean delay.
For example, if one considers a data unit of 100 byte size, and using the previously mentioned example values of a data rate of 8 kbit/s, a block error rate of 0.01 and a block size of 20 byte, then a mean delay of 100 + 5 ms results, or assuming a data rate of 12 kbit/s, a block error rate of 0.1 and a block size of 30 byte, then a mean delay of 80 + 40 ms results. In the first example, one needs five blocks to transmit 100 byte, in the second only four, due to the increased data rate and therefore block size. In the first case, the probability that one of the five blocks is corrupted is approximately 5 %, leading on average to an additional delay of 5 ms. The same calculation provides an additional delay of 40 ms for the second case. Consequently, the increased forward error correction, although the data rate is reduced, leads to an improved mean delay.
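For illustration, the two example calculations can be reproduced as follows; the approximation of the error probability as the number of blocks multiplied by the block error rate follows the reasoning of the example above:

import math

def mean_delay_ms(payload_bytes, data_rate_kbit_s, block_error_rate,
                  block_size_bytes, retransmission_delay_ms=100):
    """Schematic mean delay estimate: transmission time of all blocks plus the
    approximate probability of a block error (number of blocks times block
    error rate) multiplied by the retransmission delay."""
    blocks = math.ceil(payload_bytes / block_size_bytes)
    transmission_ms = blocks * block_size_bytes * 8 / data_rate_kbit_s
    penalty_ms = blocks * block_error_rate * retransmission_delay_ms
    return transmission_ms, penalty_ms

print(mean_delay_ms(100, 8, 0.01, 20))   # (100.0, 5.0)  -> 100 + 5 ms
print(mean_delay_ms(100, 12, 0.1, 30))   # (80.0, 40.0)  -> 80 + 40 ms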


Another example is when the transmission control comprises regulation via the transmission power. This can be achieved e.g. by setting a target signal-to-interference ratio for the desired link, which is maintained by a power control loop. A higher signal-to-interference ratio results in an improved link quality, i.e. the number of transmission errors on the link is reduced, at the price of increased transmission power. Fewer transmission errors reduce the delay of error recovery and therefore the mean transmission delay is reduced.
In the example of Fig. 2, the outcome of decision step S41 is to either optimise the transmission for throughput or for delay. Naturally, this is only an example. The transmission control can also be optimised with regard to other target parameters, as is suitable or desired for a specific application or system. Such target parameters include system resources like e.g. transmission power, spreading codes, or the number of assigned physical channels. Furthermore, the outcome of decision step S41 can also lead to more than the two procedures S42, S43. For example, it is possible that step S41 is implemented in such a way that the L3 data unit size is compared to n reference sizes or n reference ranges, and the outcome leads to a corresponding number of n transmission control procedures, each transmission control procedure corresponding to one size reference value or range, and additionally one further default transmission control procedure for all those L3 data units that do not have the size of one of the n size reference values or do not fall into one of the n ranges.
In the embodiments described above, the transmission control performed in step S4 or steps S41 to S43 was performed on the basis of the numeric value determined in step S2. It should be noted that the transmission control can also be based on further measurements or determinations conducted with respect to a received L3 data unit. Namely,

as shown in Fig. 3, in which steps S1, S21 and S3 are identical to the corresponding steps in Fig. 2, such that a further description is not necessary, a step S5 is performed after step S3, said step S5 discriminating a group to which the L3 data unit belongs. Then, in step S40, which is an example of general step S4 of Fig. 1, the transmission control for the one or more L2 data units that embed the received L3 data unit is performed on the basis of the numeric value determined in step S21 and the discrimination result of the discrimination step S5.
Preferably, the discrimination is performed on the basis of destination information contained in the L3 data unit and/or a protocol identifier contained in the L3 data unit. For example, in the context of TCP/IP, the group into which the L3 data unit is classified can be the flow to which it belongs, where, as already mentioned previously, the flow is defined by a source and destination IP address, a source and destination port number, and a protocol identifier. As another example, the group to which an L3 data unit belongs can also be determined by the type of payload being transmitted, which type can e.g. be determined by checking a protocol identifier in the data unit. This means checking the protocol identifier of the L3 data unit itself, or checking one or more protocol identifiers of data units being transported as payload in the L3 data unit.
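Purely as an illustrative sketch of such a flow discrimination, and assuming that the relevant header fields are available (e.g. from a header compression context), the flow key could be formed as follows; all names are hypothetical:

from typing import NamedTuple

class FlowKey(NamedTuple):
    """Flow identification as used in the description: source and destination
    IP address, source and destination port number, and protocol identifier."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int  # e.g. 6 = TCP, 17 = UDP

def discriminate_flow(header_fields: dict) -> FlowKey:
    # header_fields is assumed to be available without a full parse of the payload
    return FlowKey(header_fields["src_ip"], header_fields["dst_ip"],
                   header_fields["src_port"], header_fields["dst_port"],
                   header_fields["protocol"])

key = discriminate_flow({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
                         "src_port": 1234, "dst_port": 80, "protocol": 6})
print(key)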
The performing of the transmission control in step S40 can be conducted in any suitable or desirable way, depending on the specific circumstances of the application. For example, procedure S40 of Fig. 3 can be arranged in a similar way as steps S41, S42 and S43 of Fig. 2, with the addition that the size reference used in the comparison is also selected in dependence on the group into which the L3 data unit is classified. For example, the group can be determined on the

basis of the type of data unit embedded in the L3 data unit, e.g. the discrimination consists in determining if the IP data unit carries a TCP data unit or a UDP data unit. If it is determined that it carries a TCP data unit, then a first set of size reference values can be used, and if it is determined that the IP data unit carries a UDP data unit, then a second set of size reference values can be used in the comparison.
In Fig. 3, the discrimination result obtained in step S5 is used as an element in the transmission control operation of procedure S40. However, the discrimination of a received L3 data unit into one of a predetermined number of groups can also be used as a basis for determining the numeric value that is then later to be used in the transmission control operation. This is shown in Fig. 4. After having received the L3 data unit in step S1, the discrimination step S5 is performed. Then, in step S22, which is an example of the general step S2 shown in Fig. 1, the numeric value is determined on the basis of the discrimination result. Thereafter steps S3 and S4 are conducted, as already explained in connection with Fig. 1.
For example, the numerically quantifiable parameter can be associated with a buffer fill level of a buffer holding L2 data units, where the numerically quantifiable parameter is the number of L2 data units in the buffer that embed L3 data units belonging to the group discriminated in step S5. The numerically quantifiable parameter can alternatively be the inter-arrival time of L3 data units belonging to the discriminated group. The advantages of determining the numerically quantifiable parameter in this way shall be discussed in more detail with respect to the example shown in Fig. 5.
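By way of illustration only, such group-specific numeric values might be maintained as follows; the class and method names are illustrative assumptions:

import time
from collections import defaultdict

class GroupStatistics:
    """Illustrative bookkeeping for step S22: derive a numeric value that is
    specific to the group (e.g. flow) discriminated in step S5."""

    def __init__(self):
        self.buffered_pdus = defaultdict(int)   # L2 PDUs in the buffer, per group
        self.last_arrival = {}                  # last L3 arrival time, per group

    def on_l3_arrival(self, group) -> float:
        """Return the inter-arrival time of L3 data units of this group."""
        now = time.monotonic()
        inter_arrival = now - self.last_arrival.get(group, now)
        self.last_arrival[group] = now
        return inter_arrival

    def on_pdu_buffered(self, group) -> int:
        """Return the per-group buffer fill level after buffering one more L2 PDU."""
        self.buffered_pdus[group] += 1
        return self.buffered_pdus[group]

stats = GroupStatistics()
print(stats.on_l3_arrival("flow-A"))     # 0.0 for the first data unit of the flow
print(stats.on_pdu_buffered("flow-A"))   # 1 buffered L2 PDU embedding data of this flow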
Fig. 5 shows an embodiment of the present invention implemented in the context of a link layer operating in

accordance with the universal mobile telecommunications system (UMTS) as e.g. described in the Technical Specification 3G TS 25.301 V3.3.0 (1999-12), published by the 3rd Generation Partnership Project (http://www.3gpp.org). This document is herewith fully incorporated by reference into the present disclosure.
It may be noted that Fig. 5 shows a preferred application of the present invention. However, the present invention is
by no means restricted thereto. Much rather, the method and system of the present invention can be applied in the context of any mobile communication device, e.g. also one operating in accordance with the general packet radio service (GPRS) or any other mobile communication standard.
Furthermore, a method and system of the present invention are naturally not restricted to the application in mobile communication systems, but can be applied in any communication system having a protocol hierarchy.
In Fig. 5, a radio resource control (RRC) procedure 201 is provided, which controls the operation of other parts of the general L2 implementation, said L2 protocol implementation consisting of sub-layers L2/PDCP, L2/BMC, L2/RLC and L2/MAC. Fig. 5 schematically shows control connections 204, 205, 206, 207 and 208 to the other procedures, which shall be described in the following. Namely, one or more packet data convergence protocol (PDCP) procedures 202 are implemented for performing the functions of the packet data convergence protocol. Furthermore, a BMC (Broadcast/Multicast Control) procedure 203 is implemented for performing the function of controlling the sending of data units to a plurality of destinations, via multicasting or broadcasting. Reference numeral 209 refers to radio link control (RLC) procedures that implement the radio link control protocol. 211 identifies the medium access control (MAC) part of the L2 layer. Finally, 302 schematically represents the physical layer L1.
The radio resource control procedure 201 receives control information from the higher layer L3 (not shown). The user plane passes down L3 data units that are to be embedded into L2 data units, to the PDCP procedure 202, to the BMC procedure 203 and/or an RLC procedure 209.
Reference numeral 210 represents logical channels in the L2 layer, while reference numeral 301 refers to transport channels between L2 and L1.
The general control for the L2 layer implementation is performed by the RRC procedure 201. More specifically, user data is transmitted on a radio access bearer. Via the radio resource control procedure 201, one or more radio bearers are configured, which determine the layer 1 and layer 2 configuration of the radio protocols. In the PDCP procedure 202, IP header compression is applied and a multiplexing of multiple traffic flows onto one logical channel is used. The RLC procedure applies backward error correction, such as ARQ, for a logical channel, among other things. The MAC procedure 211 performs a scheduling of L2 data units and performs the distribution onto the transport channels 301.
As an embodiment of the present invention, the function of the PDCP procedure 202 is extended by a splitting function, which separates the received traffic flow (e.g. an IP flow) into sub-flows, which are then transmitted via separate RLC connections. Since IP header compression is applied in PDCP, the necessary flow information is already available in PDCP. An example of a corresponding flow splitting is schematically depicted in Fig. 6. A flow of IP data units received at the PDCP procedure 202 is split into four sub-flows, each respectively handled by a separate RLC procedure 209a, 209b, 209c and 209d. The splitting is performed in accordance with the numeric value of the numerically quantifiable parameter, e.g. the L3 data unit size, a buffer fill level, or an inter-arrival time. Additionally, the splitting can also be conducted in dependence on the type of data unit.
For example, the numeric value can be the L3 data unit size, used together with the type of data unit being transported. Then the splitting shown in Fig. 6 can be arranged in such a way that data units having the size of an acknowledgment and being TCP data units can be transmitted with an optimum RLC configuration, i.e. minimum delay, and the other IP data units are separated onto RLC connections which are optimised for the respective data unit size and/or respective data unit type.
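Purely by way of illustration, such a splitting decision might be sketched as follows; the size thresholds, reference sizes and connection labels are assumptions made only for the sake of the example:

def select_rlc_subflow(l3_size: int, is_tcp: bool, ack_size_limit: int = 60) -> str:
    """Illustrative splitting rule for the PDCP splitting function of Fig. 6:
    acknowledgment-sized TCP data units go onto a delay-optimised RLC
    connection, full-size data units onto a throughput-optimised one, and
    everything else onto a default connection."""
    if is_tcp and l3_size <= ack_size_limit:
        return "RLC-209a (minimum delay)"
    if l3_size in (296, 552, 576, 1500):
        return "RLC-209b (throughput optimised)"
    return "RLC-209c (default)"

print(select_rlc_subflow(40, is_tcp=True))    # acknowledgment-sized TCP segment
print(select_rlc_subflow(1500, is_tcp=True))  # full-size data unit
print(select_rlc_subflow(200, is_tcp=False))  # e.g. UDP data unit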
An alternative way of splitting the IP flow into a plurality of RLC sub-flows is to separate different phases of the IP flow, e.g. to separate the TCP slow-start phase from the congestion avoidance phase. If the slow-start phase is configured such that a lower transmission delay is achieved, the end-to-end performance will be greatly enhanced and link resources (e.g. codes) are released more quickly.
Methods for reducing the transmission delays are e.g. a higher signal/interference-ratio target, stronger forward error correction, more aggressive ARQ settings etc. To differentiate the different phases of a TCP flow, the above described methods of employing a buffer fill-level as the numerically quantifiable parameter, or using the inter-arrival time can be used. As already mentioned previously, the determination of an inter-arrival time can also be combined with the determination of a data unit size range, in that the value used as a basis for transmission control is the inter-arrival time of data units falling into a predetermined size range.
For example, if the RLC buffer level is used, then it could be concluded that if the RLC buffer fill level is below a predetermined threshold, the TCP flow is probably in the slow-start phase (or at the end of application data), since no new data is arriving. Above this predetermined threshold, or above a second predetermined threshold, it is very likely that TCP is in the congestion avoidance phase. As a consequence, if the buffer fill level indicates a slow-start phase, then the L2 data units are placed into an RLC connection optimised for reducing delay, and if the buffer fill level indicates a congestion avoidance phase, then the L2 data units are placed into an RLC connection optimised for throughput.
Corresponding considerations apply when using the inter-arrival time of the IP data units as a numerically quantifiable parameter. Namely, if the inter-arrival time is above a predetermined threshold, then this may indicate a slow-start phase, whereas if the inter-arrival time is below this same predetermined threshold, or below a second threshold, then this may indicate a congestion avoidance phase.
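By way of illustration, such a phase decision based on either parameter might be sketched as follows; the threshold values are illustrative assumptions only:

def detect_tcp_phase(buffer_fill: int = None, inter_arrival_s: float = None,
                     fill_threshold: int = 5, time_threshold_s: float = 0.2) -> str:
    """Illustrative phase heuristic: a low RLC buffer fill level or a large
    inter-arrival time suggests the slow-start phase (delay-optimised RLC
    connection), otherwise the congestion avoidance phase (throughput-
    optimised connection)."""
    if buffer_fill is not None:
        return "slow-start" if buffer_fill < fill_threshold else "congestion-avoidance"
    if inter_arrival_s is not None:
        return "slow-start" if inter_arrival_s > time_threshold_s else "congestion-avoidance"
    return "unknown"

print(detect_tcp_phase(buffer_fill=2))          # slow-start -> delay-optimised connection
print(detect_tcp_phase(inter_arrival_s=0.05))   # congestion avoidance -> throughput-optimised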
It may be noted that an alternative to splitting the flow into several RLC sub-flows, as shown in Fig. 6, in which each stream of data units is handled by a separate RLC procedure 209a, 209b, 209c or 209d, is to transmit all data units in one adaptive sub-flow. In this case, all data units are handled by the same RLC procedure; however, each data unit is handled differently (different forward error correction, different segmentation sizes, different ARQ, different transmission power (e.g. different target signal-to-interference ratio), etc.).

In the previous embodiments, a numeric value associated with at least one numerically quantifiable parameter associated with an L3 data unit was used as a parameter for controlling the embedding of the L3 data unit and/or controlling the transmission of L2 data units embedding said L3 data unit. Now, in connection with Figures 7 to 10 embodiments will be described, which relate to applying the concept of employing a numerically quantifiable parameter to the performing of a congestion alleviation procedure for buffered L3 data units and/or buffered L2 data units.
In the following examples, the congestion alleviation procedure will be described in terms of a data unit dropping procedure.
Figure 7a shows a routine for buffering received L3 data units. It may be noted that such received L3 (generally higher layer) data units received at a given layer (e.g. L2) are referred to as service data units (SDUs), whereas the data units into which such an SDU is embedded (encapsulated or segmented) are referred to as protocol data units (PDUs). As a consequence, the buffer into which such received L3 data units are placed can also be referred to as an SDU buffer. This SDU buffer stores the received L3 data units before they are processed (embedded) into L2 data units. Returning to the routine of Fig. 7a, a first step S1 consists in passing an L3 data unit to the L2 layer, after which one or more numeric values belonging to at least one numerically quantifiable parameter associated with the received L3 data unit are determined in step S2. These steps S1 and S2 are just like the steps of the same reference numeral described in connection with Figures 1 to 4 and can be embodied as described in detail with respect to these Figures. Therefore, the numerically quantifiable parameter can e.g. be the data unit size or the inter-arrival time of the data units. Finally, the received L3 data unit or SDU is placed into an SDU buffer in step S6, where it can be buffered until further processing of the SDU takes place.
It may be noted that step S2 does not necessarily have to be performed prior to the buffering, and could also be performed subsequently, as will be explained in connection with the embodiment of Fig. 9.
The buffering of received SDUs is especially advantageous in situations where the immediate processing at the receiving layer is not guaranteed or feasible, e.g. if the receiving layer is a link layer for sending data over a wireless link.
Fig. 7b shows a routine for processing buffered SDUs, namely for embedding the SDUs into L2 PDUs. In step S31 an SDU is taken out of the SDU buffer and embedded into one or more L2 PDUs. In step S7 the one or more resulting L2 PDUs are buffered in a PDU buffer. The buffered L2 PDUs can then be transmitted according to any one of the transmission control procedures S4, S40, S41 to S43 explained in connection with Figures 1 to 4. It may be noted that the processing routine of Fig. 7b could also be embodied by directly passing the L2 PDUs from step S31 to a transmission control procedure S4, S40, S41-S43.
Fig. 7c shows a basic routine for managing the contents of one or both of the SDU buffer and PDU buffer. More specifically, in a step S8 it is determined whether a triggering condition for performing dropping of buffered data units is met or not, and if it is, a data unit dropping procedure S9 is conducted. The triggering conditions can be chosen as is suitable or desirable, e.g. a data unit dropping procedure can be called for if the buffer is in an overflow state (e.g. the amount of data in the buffer exceeds an overflow limit), and/or if the link over which the L2 PDUs are to be transmitted is in an overload



state (e.g. the bandwidth momentarily provided by the link is lower than a predetermined limit percentage of the bandwidth demanded for sending the data that is to be transmitted over said link).
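Merely as an illustration of the triggering step S8, a sketch could look as follows; the 80 % limit fraction is an assumed example value, not a value prescribed by the description:

def dropping_triggered(buffer_bytes: int, overflow_limit: int,
                       link_rate_bit_s: float, demanded_rate_bit_s: float,
                       limit_fraction: float = 0.8) -> bool:
    """Step S8 of Fig. 7c in schematic form: trigger the data unit dropping
    procedure S9 on buffer overflow or link overload."""
    buffer_overflow = buffer_bytes > overflow_limit
    link_overload = link_rate_bit_s < limit_fraction * demanded_rate_bit_s
    return buffer_overflow or link_overload

print(dropping_triggered(12000, 10000, 64000, 64000))  # True: buffer overflow
print(dropping_triggered(2000, 10000, 32000, 64000))   # True: link overload
print(dropping_triggered(2000, 10000, 64000, 64000))   # False: no trigger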
It may be noted that the dropping of data units is not only useful to relieve an overflow or overload state at the layer where the present embodiments are implemented, but also serves as an indication to higher layers that congestion has occurred. The slow start and congestion avoidance algorithms known from TCP/IP are examples of mechanisms with which flow control can respond to data unit loss along a transmission path.
The data unit dropping procedure S9 comprises a decision step for deciding whether a buffered data unit under consideration is to be dropped or not, where said decision step depends on one or more numeric values that belong to at least one numerically quantifiable parameter associated with a given L3 data unit. The given L3 data unit is the buffered data unit under consideration if this data unit under consideration is an L3 data unit (i.e. if the dropping of an SDU is considered, then the numerically quantifiable parameter is associated with said SDU under consideration, e.g. the size of said SDU or the inter-arrival time associated therewith), and said given L3 data unit is the L3 data unit embedded in said data unit under consideration if the data unit under consideration is an L2 data unit (i.e. if the dropping of an L2 PDU is considered, then the numerically quantifiable parameter is associated with the SDU embedded in the L2 PDU under consideration, e.g. the size of said SDU or the inter-arrival time associated therewith).
As indicated above, the data dropping procedure can be conducted in one or both of the SDU buffer and the PDU buffer. However, it is preferable to perform any data unit
dropping at the highest sub-layer, i.e. at the SDU buffer, in order to avoid unnecessary embedding operations for data that is possibly dropped.
Figure 8 shows an example of a data unit dropping procedure S9. In a first step S91, a first data unit is considered. The selection of the data unit to be considered can be done as is suitable or desirable, and can depend on the way the data units are buffered. For example, if the data units are queued, then a data unit at a predetermined position in the queue can be selected (such as the first or last data unit), or any other type of selection routine can be chosen, such as the random selection of a data unit.
Then, in step S92, the above mentioned decision step is performed, i.e. it is determined whether the numeric value
(e.g. data unit size or inter-arrival time) of the L3 data unit or SDU associated with the data unit under consideration (either the SDU itself or an L2 PDU embedding the SDU) fulfils a predetermined condition or not. If the predetermined condition is fulfilled, the procedure branches to step S96 and drops the data unit under consideration. If not, the procedure branches to step S93, where it is determined whether there are further data units present in the buffer, beyond those already considered. If yes, then the procedure branches to step S94, in which a next data unit is selected, and then the procedure loops back to step S92. The selection of a next data unit to be considered can be performed in any desirable or suitable way and will generally be linked to the method used in step S91. For example, S94 can consist in simply selecting the following or preceding data unit in the queue, or in performing another random selection that only excludes previously selected data units.
If step S93 shows that no more data units are left that have not yet been considered for step S92 (which indicates

that none of the data units in the buffer fulfilled the dropping criterion laid out by step S92), then the procedure goes to step S95, in which a default procedure for selecting a data unit is performed, e.g. a data unit at a predetermined position in a queue is selected, or a random selection is performed. Then, the thus selected data unit is dropped in step S96.
The condition checked in step S92 is preferably whether the numeric value (or values) falls above or below a specific threshold. For example, if the numeric value is the size of the L3 data unit or SDU, then it is preferable to drop data units that exceed a certain size. The advantages of doing this will be explained later. If the numeric value is the inter-arrival time, then it is possible to drop data units whose inter-arrival time exceeds a certain time period, or it is possible to drop data units whose inter-arrival time falls short of a certain time period.
In the procedure of Fig. 8, it was assumed that the numeric value is available for checking at S92. This can e.g. be the case by retrieving the value or values determined in step S2 of Fig. 7a. However, it is also possible to include a determination step in the data dropping procedure S9. This is shown in the example of Fig. 9, which relates to the case where the numeric value is the size of the L3 data unit, and the procedure comprises a step S97 for measuring the size, and where the decision step is provided in S920 as a comparison of the measured size with a predetermined threshold Th, where the condition for dropping is met if the measured size exceeds the threshold Th. The remaining steps are the same as in Fig. 8 and will therefore not be described again.
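By way of illustration only, the dropping procedure of Figs. 8 and 9 might be sketched as follows; the front-to-back walk through the buffer and the random default selection are illustrative choices, not requirements of the described procedure:

import random

def drop_one_data_unit(sdu_buffer: list, threshold_bytes: int) -> bytes:
    """Schematic rendering of the dropping procedure of Figs. 8 and 9: consider
    the buffered SDUs in turn, drop the first one whose size exceeds the
    threshold Th, and fall back to a default selection if no SDU qualifies."""
    for index, sdu in enumerate(sdu_buffer):  # S91/S94: consider data units in turn
        if len(sdu) > threshold_bytes:        # S97/S920: measure size, compare with Th
            return sdu_buffer.pop(index)      # S96: drop this data unit
    # S95: default selection when no buffered data unit exceeds Th
    return sdu_buffer.pop(random.randrange(len(sdu_buffer)))

buffer = [b"\x00" * 40, b"\x00" * 1500, b"\x00" * 576]
dropped = drop_one_data_unit(buffer, threshold_bytes=100)
print(len(dropped), [len(s) for s in buffer])  # 1500 byte SDU dropped, small SDUs kept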
The advantage of dropping data units that exceed a certain size (i.e. not dropping data units that fall below said certain size) is that often the dropping of small data
units will impact the data flow at higher layers much more heavily than the dropping of larger data units. Namely, small data units will often not contain payload for a data transmission (i.e. data from yet a higher layer, e.g. an application layer), but rather signalling messages that are important for setting up a connection (such as synchronisation messages), for confirming correct transmission (such as acknowledgment messages) or for releasing a connection (such as finish messages). In TCP, examples of such signalling are SYN, ACK and FIN messages. A sender of such signalling messages can often only recover from the loss of such a signalling message by performing a time-out, i.e. waiting until a predetermined time since sending the lost data unit has expired before undertaking any action to remedy the situation. As time-out periods are usually set conservatively (i.e. long), the dropping of such a signalling message can have a strong impact on the overall transmission of data units, in contrast to dropping a data unit containing payload. Namely, most protocols for transporting data units are specifically equipped and designed to handle the loss of payload, such that the impact is less severe. As a consequence, overall performance can be enhanced by not dropping small data units, i.e. only dropping data units that exceed a certain size threshold.
As an example, SYN messages are sent at the beginning of a connection set-up. If one is lost, the set-up will not continue until a time-out has occurred. Consequently, the start of sending payload is strongly delayed. FIN messages are sent at the end of a connection, indicating a closing of the connection. Dropping such a data unit, and thereby providing the sender of that message with an indirect congestion indication, is of no value, as this sender is at the end of the transmission anyway. In other words, it is much better to drop a data unit from a different source, where the congestion indication will still have an effect.
Dropping ACK messages does not improve the traffic situation as TCP ACKs are cumulative.
It may be added that it is also preferable to drop larger data units in place of smaller ones because the dropping of small data units will generally not improve the buffering delay at the link. In other words, more is done to alleviate the condition that triggered the data unit dropping (e.g. link overload or buffer overflow) if large data units are dropped than if small ones are dropped.
From the above discussion, it can be seen that the setting of the threshold Th is preferably done in dependence on the possible signalling messages sent at higher layers. As an example, if the above embodiment is applied to a layer 2 on top of which TCP is run, then the threshold Th should be set such that SYN, ACK and FIN messages are not dropped, i.e. the threshold Th should be set larger than or equal to the expected size of the SYN, ACK and FIN messages.
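
As a purely illustrative calculation (the specification prescribes no concrete numbers), a payload-free TCP control segment such as a SYN, ACK or FIN consists essentially of an IP header and a TCP header, which suggests an order of magnitude for Th along the following lines; the byte figures assume IPv4 and TCP without unusual extensions.

    # Illustrative figures only; actual header lengths vary with the options in use.
    IPV4_HEADER_MIN = 20       # bytes, IPv4 header without options
    TCP_HEADER_MIN = 20        # bytes, TCP header without options
    TCP_OPTIONS_MAX = 40       # bytes, maximum length of TCP options (e.g. in SYNs)

    # A payload-free TCP control segment (SYN, ACK or FIN) carried in an IP packet:
    smallest_control_segment = IPV4_HEADER_MIN + TCP_HEADER_MIN                    # 40 bytes
    largest_control_segment = IPV4_HEADER_MIN + TCP_HEADER_MIN + TCP_OPTIONS_MAX   # 80 bytes

    # Setting Th at or above the largest expected control segment keeps SYN, ACK
    # and FIN messages safe from dropping, while large payload-carrying data
    # units remain dropping candidates.
    Th = largest_control_segment
    print(smallest_control_segment, largest_control_segment, Th)   # 40 80 80
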
It may be added that the step of discriminating a group into which an L3 data unit belongs (e.g. whether it belongs to a specific flow), described above in connection with step S5 in Figures 3 and 4, can also be used in connection with the data unit dropping procedure. Namely, in addition to making the data unit dropping procedure dependent on the numeric value, the result of such a discrimination step can also be taken into account.
For example, if the discrimination step S5 consists in determining which flow an L3 data unit belongs to, then the decision step S92 or S920 can be amended to also take the discriminated flow into account, e.g. such that no data units from a specific flow or group of flows are dropped, or that only data units from the specific flow or flow group are dropped.
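
A sketch of how such a flow discrimination could be combined with the numeric-value check in the decision step S92 or S920; identifying flows by the usual address/port/protocol 5-tuple and the protected_flows set are assumptions made here for illustration only.

    def flow_id(sdu_headers):
        """Discriminate the flow an L3 data unit belongs to, here by the usual
        5-tuple read from already-parsed header fields (a dict in this sketch)."""
        return (sdu_headers["src_addr"], sdu_headers["dst_addr"],
                sdu_headers["protocol"],
                sdu_headers.get("src_port"), sdu_headers.get("dst_port"))


    def drop_condition(sdu_headers, sdu_size, th, protected_flows):
        """Decision step S92/S920 extended by the discrimination result: a data
        unit is a dropping candidate only if its size exceeds Th AND it does not
        belong to a protected flow.  The test could equally be inverted so that
        only data units of a specific flow or flow group are dropped."""
        return sdu_size > th and flow_id(sdu_headers) not in protected_flows
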
Fig. 10 shows a schematic representation of a system in which the methods described in connection with Figs. 7 to 9 can be applied. The L2 implementation comprises an SDU buffer 101, an embedder 103 and a PDU buffer 104, as well as a controller 102 for controlling the operation of each of the entities 101, 103 and 104. The controller 102 is arranged to execute suitable routines, e.g. the routine shown in Fig. 7a for buffering received SDUs in the SDU buffer 101 and the routine shown in Fig. 7b for operating the embedder 103 and PDU buffer 104. The term "implementation" refers to any hardware, software or combination of hardware and software suitable to execute the described routines and provide the desired functionality. As a consequence, the entities 101-104 can also be provided by any suitable hardware, software or combination of hardware and software.
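
A structural sketch of this arrangement, assuming a simple object model in which the controller drives the SDU buffer 101, the embedder 103 and the PDU buffer 104; all class and method names, as well as the fixed-size segmentation used by the embedder, are illustrative and not taken from the specification.

    from collections import deque


    class Embedder:
        """103: embeds one L3 SDU into one or more L2 PDUs; here a plain
        fixed-size segmentation with a one-byte illustrative L2 header."""

        def __init__(self, pdu_payload_size=100):
            self.pdu_payload_size = pdu_payload_size

        def embed(self, sdu):
            return [b"\x02" + sdu[i:i + self.pdu_payload_size]
                    for i in range(0, len(sdu), self.pdu_payload_size)]


    class Controller:
        """102: executes the routines of Figs. 7a and 7b on the SDU buffer 101,
        the embedder 103 and the PDU buffer 104."""

        def __init__(self):
            self.sdu_buffer = deque()    # 101
            self.pdu_buffer = deque()    # 104
            self.embedder = Embedder()   # 103

        def receive_sdu(self, sdu):
            # Fig. 7a: determine a numeric value (here the SDU size) and buffer the SDU.
            self.sdu_buffer.append((len(sdu), sdu))

        def fill_pdu_buffer(self):
            # Fig. 7b: take SDUs from the SDU buffer, embed them and queue the PDUs.
            while self.sdu_buffer:
                _numeric_value, sdu = self.sdu_buffer.popleft()
                self.pdu_buffer.extend(self.embedder.embed(sdu))
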
In the above examples of Figures 7 to 10, the congestion alleviation procedure was shown as a data unit dropping procedure. Alternatively or additionally, the congestion alleviation procedure can also comprise a data unit marking procedure, in which the congestion alleviation measure does not consist in dropping the data unit, but in adding an indicator or notification at a predetermined position (e.g. in a specified header field) in the data unit, where said indicator or notification informs the communication end-points of the data unit that congestion is taking place. An example of such a concept is the Explicit Congestion Notification (ECN) known in the context of TCP/IP, see e.g. RFC 3168.
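
For the ECN case, RFC 3168 signals congestion by setting the two ECN bits of the IPv4 DSCP/ECN (former TOS) byte to the CE ("Congestion Experienced") codepoint. A minimal marking sketch operating on a raw IPv4 header is given below; checksum handling and IPv6 are omitted, and the function name is illustrative.

    ECN_MASK = 0b11       # the two least-significant bits of the IPv4 TOS / Traffic Class byte
    ECN_CE = 0b11         # "Congestion Experienced" codepoint (RFC 3168)
    ECN_NOT_ECT = 0b00    # the end-points did not negotiate ECN


    def mark_ce(ipv4_header: bytearray) -> bool:
        """Set the CE codepoint in byte 1 of an IPv4 header (DSCP/ECN field).

        Returns False without marking if the packet is Not-ECT, i.e. the
        end-points are not ECN-capable; dropping would then be the appropriate
        congestion alleviation measure."""
        tos = ipv4_header[1]
        if tos & ECN_MASK == ECN_NOT_ECT:
            return False
        ipv4_header[1] = (tos & ~ECN_MASK) | ECN_CE
        # Note: the IPv4 header checksum (bytes 10-11) must be recomputed afterwards.
        return True
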
Data unit marking should preferably be implemented with respect to the SDU buffer, as it is the SDUs that will be marked if the decision step for deciding on the taking of a congestion alleviation measure decides that a measure is to be taken.

If a combination of data unit dropping and data unit marking is used as a congestion alleviation procedure, then it is preferable that the SDUs be discriminated, e.g. according to protocol identifiers, and SDUs belonging to a predetermined category should be dropped and not marked. In other words, data units for which it does not make sense to add a marking, such as User Datagram Protocol (UDP) data units, should not be marked, but rather dropped if it is decided to take a congestion alleviation measure. Namely, as is well known, UDP peers are not responsive to congestion, such that marking with a congestion notification makes no sense. With respect to data units discriminated into a category that is responsive to congestion, such as Transmission Control Protocol (TCP) data units, the congestion alleviation measure can be either dropping or marking, depending on the specific preferences or desires. For example, for congestion responsive SDUs the decision step can be implemented in such a way that data unit marking is performed for a first triggering condition (e.g. a triggering condition indicating that the SDU buffer is in danger of becoming congested, like the exceeding of a first threshold buffer fill level) and data unit dropping is performed for a second triggering condition (e.g. exceeding a second threshold higher than the first threshold, which means that the buffer is congested, such that the buffer load must be reduced).
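
A sketch of such a combined policy, assuming the SDUs have already been discriminated by their IP protocol identifier and that two buffer fill thresholds are configured; the threshold values and names below are assumptions made for illustration and are not taken from the specification.

    PROTO_TCP = 6     # IP protocol identifier for TCP
    PROTO_UDP = 17    # IP protocol identifier for UDP

    MARK_THRESHOLD = 0.7   # first triggering condition: SDU buffer in danger of congestion
    DROP_THRESHOLD = 0.9   # second triggering condition: SDU buffer congested

    def congestion_measure(protocol, buffer_fill):
        """Return 'none', 'mark' or 'drop' for an SDU discriminated by its IP
        protocol identifier, given the relative buffer fill level (0.0 - 1.0)."""
        if buffer_fill < MARK_THRESHOLD:
            return "none"
        if protocol == PROTO_UDP:
            # UDP peers are not congestion responsive: marking makes no sense, so drop.
            return "drop"
        if protocol == PROTO_TCP:
            # Congestion responsive: mark below the second threshold, drop above it.
            return "drop" if buffer_fill >= DROP_THRESHOLD else "mark"
        return "drop"   # other traffic treated as unresponsive in this sketch

    # congestion_measure(PROTO_TCP, 0.75) -> 'mark'
    # congestion_measure(PROTO_UDP, 0.75) -> 'drop'
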
Although the present invention has been described with reference to specific embodiments, these embodiments only serve to provide the skilled person with a full and complete understanding of the invention, but are not intended to limit its scope. The scope of the invention is rather defined by the appended claims. Reference numerals in the claims are intended to make the claims easier to understand and do not restrict the scope.


WE CLAIM :
1. A method of processing a data unit in a data unit based communication system, the data unit being of a first protocol layer (L3) for transmission in a data unit based communication system with a buffer (101, 104), an embedder (103) and a controller (102) connected to the buffer (101, 104) and the embedder (103), comprising the steps of:
passing to a second protocol layer (L2) a given data unit of said first protocol layer (L3) that is to be transmitted, said second protocol layer (L2) lying below said first protocol layer (L3), under the control of the controller (102), the controller (102) being arranged to control the buffer (101, 104);
determining one or more numeric values, said one or more numeric values belonging to at least one numerically quantifiable parameter associated with said given data unit of said first protocol layer (L3);
embedding by the embedder (103) under the control of the controller (102) said given data unit of said first protocol layer (L3) into one or more data units of said second protocol layer (L2),
performing transmission control for said one or more data units of said second protocol layer (L2) that embed said given data unit of said first protocol layer (L3),


of said second protocol layer (L2) being arranged to determine by the controller (102) one or more numeric values belonging to at least one numerically quantifiable parameter associated with said given data unit of said first protocol layer (L3), embed by the embedder (103) under control of the controller (102) said given data unit of said first protocol layer (L3) into one or more data units of said second protocol layer (L2), and perform transmission control for said one or more data units of said second protocol layer (L2) that embed said given data unit of said first protocol layer (L3) in accordance with said one or more numeric values of said at least one numerically quantifiable parameter determined by the controller (102).
19. The data unit based communication system as claimed in claim 18, wherein said at least one numerically quantifiable parameter is the size of said given data unit of said first protocol layer (L3).
20. The data unit based communication system as claimed in claim 18 or 19, wherein said at least one numerically quantifiable parameter is associated with a buffer fill level of a buffer holding data units of said first protocol layer (L3) or said second protocol layer (L2).
21. The data unit based communication system as claimed in any one of claims 18 to 20, wherein said at least one numerically quantifiable parameter is an inter-arrival time of data units of said first protocol layer (L3).
22. The data unit based communication system as claimed in any one of claims 18 to 21, wherein said implementation of said second protocol layer (L2) is arranged to discriminate a group of data units of said first protocol layer (L3) to which said given data unit of said first protocol layer (L3) belongs.
23. The data unit based communication system as claimed in claim 22, wherein said implementation of said second protocol layer (L2) is arranged to discriminate said group on the basis of source information and/or destination information and/or a protocol identifier contained in said data units of said first protocol layer (L3).
24. The data unit based communication system as claimed in claim 22 or 23, wherein said implementation of said second protocol layer (L2) is arranged to also perform said transmission control for said one or more data units of said second protocol layer (L2) that embed said given data unit of said first protocol layer (L3) in accordance with a result of said discriminating.
25. The data unit based communication system of claim 22 or 23, wherein said implementation of said second protocol layer (L2) is arranged to determine said numeric value on the basis of a result of said discriminating.
26. The data unit based communication system as claimed in claim 25, wherein there is a buffer for holding data units of said second protocol layer (L2), said numerically quantifiable parameter being the number of data units of said second protocol layer (L2) in said buffer that embed data units of said first protocol layer (L3) belonging to said group.
27. The data unit based communication system as claimed in claim 25, wherein there is a timer for measuring an inter-arrival time of data units of said first protocol layer (L3) belonging to said group, where said numerically quantifiable parameter is the inter-arrival time of data units of said first protocol layer (L3) belonging to said group.
28. The data unit based communication system of one of claims 18 to 27, wherein said transmission control consists of adjusting a forward error correction for said data units of said second protocol layer (L2) or for data units of a third protocol layer (L1) below said second protocol layer (L2).
29. The data unit based communication system as claimed in claim 28, wherein there is a function (RLC) for controlling the sending of said data units of said second protocol layer (L2) over a link, and said transmission control consists of adjusting a transmission power and/or a data rate over said link and/or a degree of interleaving.
30. The data unit based communication system as claimed in any one of claims 18 to 19, wherein said implementation of said second protocol layer (L2) consists of a function for providing automatic retransmission of data units of said second protocol layer (L2) under predetermined conditions and where said transmission control consists of adjusting said retransmission function.
31. The data unit based communication system as claimed in any one of claims 18 to 30, wherein said implementation of said second protocol layer (L2) consists of a function (MAC) for scheduling of said data units of said second protocol layer (L2), and where said transmission control consists of adjusting said scheduling.
32. The data unit based communication system as claimed in any one of claims 18 to 31, wherein said implementation of said second protocol layer (L2) is arranged to perform a segmentation operation for data units of said first protocol layer (L3), and where said transmission control consists of adjusting said segmentation operation.
33. The data unit based communication system as claimed in claim 32, wherein the adjusting of said segmentation operation consists of adjusting the size of the data units of said second protocol layer (L2).
34. The data unit based communication system as claimed in any one of claims 18 to 33, wherein said transmission control consists of discriminating said one or more data units of said second protocol layer (L2) in which said given data unit of said first protocol layer (L3) is embedded on the basis of said numeric value, and placing each of said one or more data units of said second protocol layer (L2) into one of a plurality of predetermined transmission categories on the basis of said discrimination result.


Dated this 25th day of November, 2003.
OF REMFRY & SAGAR ATTORNEY FOR THE APPLICANTS


Patent Number: 202837
Indian Patent Application Number: 1083/MUMNP/2003
PG Journal Number: 15/2007
Publication Date: 13-Apr-2007
Grant Date: 08-Sep-2006
Date of Filing: 25-Nov-2003
Name of Patentee: TELEFONAKTIEBOLAGET LM ERICSSON [PUBL]
Applicant Address: A SWEDISH COMPANY, OF S-126 25 STOCKHOLM, SWEDEN
Inventors: 1) MICHAEL MEYER, 2) REINER LUDWIG, 3) JOACHIM SACHS, 4) MATS SAGFORS, OF GROSSHEIDSTRASSE 27, D-52080 AACHEN, GERMANY
PCT International Classification Number: N/A
PCT International Application Number: PCT/EQ02/05624
PCT International Filing Date: 2002-05-22
PCT Convention Priority: NA