CA2263444A1 - Decoder utilizing soft information output to minimize error rates - Google Patents
Decoder utilizing soft information output to minimize error rates
- Publication number
- CA2263444A1
- Authority
- CA
- Canada
- Prior art keywords
- codewords
- zero
- vector
- decoder
- bit position
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L25/00—Baseband systems
- H04L25/02—Details ; arrangements for supplying electrical power along data transmission lines
- H04L25/06—Dc level restoring means; Bias distortion correction ; Decision circuits providing symbol by symbol detection
- H04L25/067—Dc level restoring means; Bias distortion correction ; Decision circuits providing symbol by symbol detection providing soft decisions, i.e. decisions together with an estimate of reliability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L1/00—Arrangements for detecting or preventing errors in the information received
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/03—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
- H03M13/05—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
- H03M13/13—Linear codes
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/27—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes using interleaving techniques
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/29—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
- H03M13/2906—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes using block codes
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/29—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
- H03M13/2933—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes using a block and a convolutional code
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/37—Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
- H03M13/3738—Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35 with judging correct decoding
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/37—Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
- H03M13/45—Soft decoding, i.e. using symbol reliability information
- H03M13/451—Soft decoding, i.e. using symbol reliability information using a set of candidate code words, e.g. ordered statistics decoding [OSD]
- H03M13/456—Soft decoding, i.e. using symbol reliability information using a set of candidate code words, e.g. ordered statistics decoding [OSD] wherein all the code words of the code or its dual code are tested, e.g. brute force decoding
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/63—Joint error correction and other techniques
- H03M13/6306—Error control coding in combination with Automatic Repeat reQuest [ARQ] and diversity transmission, e.g. coding schemes for the multiple transmission of the same information or the transmission of incremental redundancy
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L1/00—Arrangements for detecting or preventing errors in the information received
- H04L1/004—Arrangements for detecting or preventing errors in the information received by using forward error control
- H04L1/0056—Systems characterized by the type of code used
- H04L1/0064—Concatenated codes
- H04L1/0065—Serial concatenated codes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L1/00—Arrangements for detecting or preventing errors in the information received
- H04L1/004—Arrangements for detecting or preventing errors in the information received by using forward error control
- H04L1/0056—Systems characterized by the type of code used
- H04L1/0071—Use of interleaving
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/03—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
- H03M13/05—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
- H03M13/09—Error detection only, e.g. using cyclic redundancy check [CRC] codes or single parity bit
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/63—Joint error correction and other techniques
- H03M13/6331—Error control coding in combination with equalisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L1/00—Arrangements for detecting or preventing errors in the information received
- H04L1/12—Arrangements for detecting or preventing errors in the information received by using return channel
- H04L1/16—Arrangements for detecting or preventing errors in the information received by using return channel in which the return channel carries supervisory signals, e.g. repetition request signals
- H04L1/18—Automatic repetition systems, e.g. Van Duuren systems
- H04L1/1809—Selective-repeat protocols
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L1/00—Arrangements for detecting or preventing errors in the information received
- H04L1/12—Arrangements for detecting or preventing errors in the information received by using return channel
- H04L1/16—Arrangements for detecting or preventing errors in the information received by using return channel in which the return channel carries supervisory signals, e.g. repetition request signals
- H04L1/18—Automatic repetition systems, e.g. Van Duuren systems
- H04L1/1812—Hybrid protocols; Hybrid automatic repeat request [HARQ]
- H04L1/1816—Hybrid protocols; Hybrid automatic repeat request [HARQ] with retransmission of the same, encoded, message
Abstract
A coding system is disclosed wherein the receive side includes a decoder capable of producing, in addition to the hard information decoded output, symbol and/or bit soft information values. For a certain information bit position, a value proportional to the joint probability that a received symbol and the set of all hypothesized transmitted codewords led to the estimated or detected hard information output is calculated. The calculated probabilities with respect to plural codewords having a zero in that certain information bit position are compared to the calculated probabilities with respect to the plural codewords having a one in that certain information bit position. The result of the comparison provides an indication of whether the hard information output in that same information bit position is more likely a one or a zero. The output soft information values are further processed in comparison to a preset threshold, with instances exceeding the threshold triggering block rejections and/or retransmissions.
Description
WO 98/07247 PCT/US97/14260
DECODER UTILIZING SOFT INFORMATION OUTPUT TO MINIMIZE ERROR RATES
BACKGROUND OF THE INVENTION
Technical Field of the Invention
The present invention relates to the coding and decoding of digital data for transmission over a communications channel and, in particular, to a system decoder utilizing soft information outputs to minimize error rates in the transmitted digital data.
Description of Related Art
There exist many applications where large volumes of digital data must be transmitted and received in a substantially error free manner. In telecommunications systems, in particular, it is imperative that the reception of digital data be accomplished as reliably as is possible. Reliable communication of digital data is difficult, however, because the communications channels utilized for data transmission are plagued by error introducing factors. For example, such errors may be attributable to transient conditions in the channel (like noise or multi-path fading). The influence of such factors results in instances where the digital data is not transmitted properly or cannot be reliably received.
Considerable attention has been directed toward discovering methods for addressing the instances of errors which typically accompany data transmission activities.
For example, it is well known in the art to employ forward error correction (FEC) codes and other means to locate, counteract, correct and/or eliminate these errors. The data stream is thus encoded in accordance with a plurality of predefined codewords established in a codebook for that particular encoding scheme. Once encoded, the random errors introduced therein during transmission are relatively easily located and corrected during a corresponding decoding process using well known mathematical processes.
The codewords output from the encoder are transmitted over a communications channel and corrupted to some degree by noise to create a vector. During decoding, the received vector comprising the encoded data (perhaps including errors) is compared against each of the plurality of codewords in the codebook for the specific encoding process used. The codeword closest to the received vector is selected, and its corresponding data word is output from the decoder. This output is often referred to as the hard information output.
In many coding systems, a decoder additionally produces soft (or side) information outputs to help another decoder identify, and perhaps correct, introduced errors. For example, in one scenario used in the Global System for Mobile (GSM) communication, an inner decoder comprising an equalizer generates a soft information output derived from path metric differences, and an outer decoder comprising an error control decoder utilizes the output soft information to detect and correct introduced errors. In another scenario used in the PRISM-TDMA
communications system, the inner decoder comprises an improved multiband excitation (IMBE) error control decoder generating estimates of the number of channel errors, and the outer stage decoder comprises a speech decoder which utilizes the output error estimates in determining whether to discard data.
Soft information outputs have historically been generated by the decoder in conjunction with the selection of the closest codeword and its associated hard information output. The reliability information comprising the soft information output is calculated for each individual symbol (bit) within the hard information output. Accordingly, in such decoders the reliability of each symbol (bit) within the hard information output vector is derived without taking into consideration either the remaining symbols within that hard information output vector or any other considered codewords (and associated hard information output vectors). This is achieved by comparing the probability of the received data given that a bit with a logical value of one was transmitted to the probability of the received data given that a bit with a logical value of zero was transmitted.
In the Hoeher reference (EP 0625829 A2), disclosure is made of a method for providing reliability information by comparing the decoded sequence to a small list of alternative sequences which differ from the decoded sequence at least in the symbol for which reliability is being evaluated.
In the Reed, et al. reference (US 4,939,731), disclosure is made of an automatic repeat request system wherein a request for retransmission is issued if a received packet cannot be corrected, and the transmission rate is reduced if the cumulative errors exceed a certain rate.
SUMMARY OF THE INVENTION
A decoder in a coding communications system of the present invention produces, in addition to a hard information output estimate of a received symbol, a soft (or side) information output comprising the relative reliability of the detection of that estimated output.
The relative reliability soft information output for a certain information bit position is determined by calculating a value proportional to the joint probability that the received symbol and the set of all hypothesized transmitted codewords led to the estimated or detected hard information output. A comparison is then made of the calculated probabilities with respect to plural codewords having a zero in that certain information bit position and the calculated probabilities with respect to the plural codewords having a one in that certain information bit position. The result of the comparison provides an
indication of whether the hard information output in that same information bit position is more likely a one or a zero.
In another aspect of the present invention, the soft information outputs are further processed in comparison to a preset threshold. Responsive to instances where the soft information outputs exceed the threshold, data block rejections and/or retransmissions are ordered.
BRIEF DESCRIPTION OF THE DRAWINGS
A more complete understanding of the method and apparatus of the present invention may be acquired by
reference to the following Detailed Description when taken in conjunction with the accompanying Drawings wherein:
FIGURE 1 is a block diagram of a prior art cross-interleaved coding system;
FIGURE 2 is a block diagram of a prior art coding system having soft information output decoding capabilities;
FIGURE 3 is a block diagram of a binary block coding system of the present invention;
FIGURE 4 is a block diagram of a particular implementation of the coding system of FIGURE 3;
FIGURE 5 is a block diagram of a non-binary block coding system of the present invention;
FIGURE 6 is a block diagram of a coding system implementing block failure and retransmission functions responsive to soft information outputs; and
FIGURES 7-12 are functional block diagrams of the decoder of the present invention.
DETAILED DESCRIPTION OF THE DRAWINGS
Reference is now made to FIGURE 1, wherein there is shown a block diagram of a prior art cross-interleaved coding system 10. On the transmit side 12 of the system 10, encoding of a received original data stream is accomplished in three steps. First, the data stream received on line 14 is encoded by a first, outer encoder 16. Next, the encoded data stream output on line 18 is interleaved by an interleaver 20. Finally, the interleaved data stream output on line 22 is encoded by a second, inner encoder 24. The encoded data stream output on line 26 is then transmitted over a communications channel 28. On the receive side 30 of the system 10, decoding of the transmitted data stream received on line 26' to recover the original data stream is accomplished by performing corresponding decoding and de-interleaving steps in a complementary three-step order utilizing an inner decoder 32, a de-interleaver 34, and an outer decoder 36, respectively.
Reference is now made to FIGURE 2, wherein there is shown a block diagram of a prior art coding system 38 having soft information output decoding capabilities. On the transmit side 40, the system 38, like the system 10 of FIGURE 1, includes an outer encoder 42, an interleaver 44, and an inner encoder 46. The data stream output on line 48 from the transmit side 40 is carried by a communications channel 50. On the receive side 52, the system 38, like the system 10 of FIGURE 1, includes an inner decoder 54, a de-interleaver 56, and an outer decoder 58 for processing the channel transmitted data stream to recover the originally input data stream (hard information outputs).
The inner decoder 54 further functions to generate soft (or side) information outputs on line 62 in conjunction with the decoding operation performed on the data stream received on line 48' after transmission over the communications channel 50. For example, the inner decoder 54 may comprise an equalizer, with the soft information outputs derived from path metric differences.
In another example, the inner decoder 54 may comprise an improved multiband excitation (IMBE) decoder, with the soft information outputs estimating the number of introduced channel errors. The inner decoded data stream output on line 64, along with the soft information outputs, output on line 62, are de-interleaved by the de-interleaver 56 and passed on to the outer decoder 58. The received soft information outputs are then used by the outer decoder 58 to assist in the decoding operation and, in particular, to help identify and correct errors in the data stream.
The decoder 54 typically functions to calculate reliability information comprising the soft information output for each individual symbol (bit) within the hard information output. This is achieved by taking the logarithm of the ratio of two probabilities: first, the probability of the received data given that a bit with a logical value of one was transmitted; and second, the probability of the received data given that a bit with a logical value of zero was transmitted. Accordingly, in such decoders 54 the reliability of each symbol (bit) within the hard information output vector is derived without taking into consideration either the remaining symbols within that hard information output vector or any other considered codewords (and associated hard information output vectors).
Reference is now made to FIGURE 3, wherein there is shown a block diagram of a binary block coding system 100 of the present invention. The system 100 includes a transmit side 102 and a receive side 104. On the transmit side 102, the system 100 includes an (n,k) binary block encoder 106 wherein a block of "k" data digits received on line 108 is encoded into a codeword of "n" code digits in length (wherein n>k) for output on line 110. The data digits (d1,d2,d3,...,dk) in each block comprise a k-dimensional vector d. Similarly, the code digits (c1,c2,c3,...,cn) in each codeword comprise an n-dimensional vector c. Each codeword includes m check digits (wherein m=n-k) among the code digits, with the check digits being useful in later detecting (and perhaps correcting) t channel errors introduced into the (n,k) code, where t depends on the particular code being used. The transmit side 102 of the system 100 may further include a number of other encoding and/or interleaving elements as generally referred to at 112. Although illustrated as being positioned before the encoder 106, it will be understood that other encoding and/or interleaving elements may be positioned after as well.
The coded output vector c of the (n,k) binary block encoder 106 is transmitted through a communications channel 114 which may introduce a number of errors, producing an n-dimensional received codeword vector r=(r1,r2,r3,...,rn) on line 116. On the receive side 104, the system 100 includes a maximum likelihood (n,k) binary block decoder 118 which decodes a received codeword r of n data digits in length to output a block x of k data digits, wherein x=(x1,x2,x3,...,xk) is not necessarily equal to d due to the introduced channel errors. The decoding operation performed by the decoder 118 computes the Euclidean distance D(r,y_i) between r and every codeword y_i=(y_i1,y_i2,y_i3,...,y_in) in the appropriate codebook, wherein each y_ij is in ±1 form, for i=1 to 2^k. In the standard maximum likelihood decoder 118, the codeword y closest to r is chosen, and its corresponding hard information bits x=(x1,x2,x3,...,xk) are produced as the hard output on line 120. The receive side 104 of the system 100 may further include a number of other encoding and/or interleaving elements as generally referred to at 122, positioned before and/or after the decoder 118, corresponding to those included at 112 in the transmit side 102.
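For illustration only, the maximum likelihood decoding step just described may be sketched as a brute-force search in Python. The function name `ml_decode`, the NumPy dependency, and the toy (3,1) repetition code are assumptions of this sketch, not part of the disclosure:

```python
import numpy as np

def ml_decode(r, codebook, messages):
    """Brute-force maximum likelihood decoding: pick the codeword (in
    +/-1 form) closest in Euclidean distance to the received vector r
    and return its corresponding hard information bits."""
    dists = np.sum((codebook - r) ** 2, axis=1)   # D(r, y_i)^2 for every codeword
    best = int(np.argmin(dists))
    return messages[best]

# Toy (3,1) repetition code: message 0 -> (-1,-1,-1), message 1 -> (+1,+1,+1).
codebook = np.array([[-1., -1., -1.], [1., 1., 1.]])
messages = [[0], [1]]
x = ml_decode(np.array([0.9, -0.2, 1.1]), codebook, messages)
# The noisy vector lies closer to the all-ones codeword, so x decodes as [1].
```

An exhaustive search of this kind is practical only for small codebooks; the fast Hadamard transform discussed later in the description is one way to avoid it.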
In addition to the hard information bits x output from the decoder 118 on line 120, the decoder further generates a soft (or side) information vector output s=(s1,s2,s3,...,sk) on line 124. In accordance therewith, a set C of the L closest codewords y to r is identified.
The choice of L involves a tradeoff between the complexity of the following calculation and soft information output quality. Preliminary testing has indicated that choosing L as small as between two and four results in satisfactory soft information output values. For each information bit location j, for j=1 to k, the set C is partitioned into a first set C^(j,0) and a second set C^(j,1). A codeword y is included in the first set C^(j,0) if the j-th information bit corresponding to that codeword is zero (0). Conversely, a codeword is included in the second set C^(j,1) if the j-th information bit corresponding to that codeword is one (1).
The soft information output vector s thus may be determined by the decoder 118 as follows:
s_j = ln [ ( Σ_{y_i ∈ C^(j,0)} e^{−D(r,y_i)²/(2σ²)} ) / ( Σ_{y_i ∈ C^(j,1)} e^{−D(r,y_i)²/(2σ²)} ) ]   (1)

wherein: σ² represents the variance of the additive white Gaussian noise;
j = 1 to k, to index across the information bit locations; and
i = 1 to 2^k, to index across all of the codewords.
In both the numerator and the denominator of Equation (1), the exponential portion produces a value that is proportional to the joint probability of having received the vector r and of a certain codeword y having been transmitted. Said another way, the exponential portion produces a value that is proportional to the probability of receiving the vector r given that a certain codeword y was transmitted. The numerator thus sums the calculated joint probabilities (taken by the exponential portion) for all the codewords y included in the first set C^(j,0) (i.e., those included codewords y having a zero in the j-th information bit position). In the denominator, a summation is made of the calculated joint probabilities (taken by the exponential portion) for all the codewords y included in the second set C^(j,1) (i.e., those included codewords y having a one in the j-th information bit position). The likelihood of having either a zero or a one for the hard information output x bit at issue is taken by dividing the numerator by the denominator. The resulting ratio is greater than one if the probability of having a zero in the j-th information bit exceeds the probability of having a one. Conversely, the resulting ratio is less than one if the probability of having a one in the j-th information bit exceeds the probability of having a zero. By taking the logarithm, when the value of s_j is greater than zero (s_j>0), a zero (0) is more likely for the corresponding hard information bit x_j. Conversely, when the value of s_j is less than zero (s_j<0), a one (1) is more likely for the corresponding hard information bit x_j.
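The Equation (1) computation may be sketched in Python as follows. The NumPy dependency, the function name, and the codebook/info-bit representation are illustrative assumptions; the codebook is held in ±1 form with a parallel array giving each codeword's information bits:

```python
import numpy as np

def soft_output_eq1(r, codebook, info_bits, sigma2):
    """Per-bit soft values per Equation (1); s_j > 0 means a zero is
    more likely in information bit position j."""
    d2 = np.sum((codebook - r) ** 2, axis=1)   # D(r, y_i)^2 for every codeword
    p = np.exp(-d2 / (2.0 * sigma2))           # terms proportional to the joint probabilities
    s = np.empty(info_bits.shape[1])
    for j in range(info_bits.shape[1]):
        num = p[info_bits[:, j] == 0].sum()    # codewords with a zero in bit j
        den = p[info_bits[:, j] == 1].sum()    # codewords with a one in bit j
        s[j] = np.log(num / den)
    return s

# Toy (3,1) repetition code: bit 0 -> (-1,-1,-1), bit 1 -> (+1,+1,+1).
codebook = np.array([[-1., -1., -1.], [1., 1., 1.]])
info_bits = np.array([[0], [1]])
s = soft_output_eq1(np.array([0.9, 1.2, 0.8]), codebook, info_bits, 1.0)
# The received vector lies near the all-ones codeword, so s[0] comes out
# negative, indicating a one is more likely.
```

This sketch sums over the whole codebook; the patent's decoder restricts the sums to the set C of the L closest codewords.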
Equation (1) can be approximated fairly well as follows:

s_j ≈ −min_{y_i ∈ C^(j,0)} D(r,y_i)²/(2σ²) + min_{y_i ∈ C^(j,1)} D(r,y_i)²/(2σ²)   (2)

The common portion of the first and second terms of Equation (2), like the exponential portion of Equation (1), produces a value that is proportional to the joint probability of having received the vector r and of a certain codeword y having been transmitted. The first term then calculates the negative of the closest Euclidean distance between the received data vector r and a hypothetical codeword y having a logical value of zero in the j-th information bit position (i.e., those codewords included in the first set C^(j,0)). The second term calculates the closest Euclidean distance between the received data vector r and a hypothetical codeword y having a logical value of one in the j-th information bit position (i.e., those codewords included in the second set C^(j,1)). When the value of s_j is greater than zero (s_j>0), a zero (0) is more likely for the corresponding hard information bit x_j. Conversely, when the value of s_j is less than zero (s_j<0), a one (1) is more likely for the corresponding hard information bit x_j.
An advantage of using the approximation of Equation ~2) is that noise variance, which would have to be estimated otherwise, can now be dropped. This results in W098l07247 PCT~S97114260 a scaling, which is irrelevant. It should be noted that in the event that C(~~~ is empty, the value of si is set equal to +~. Similarly, in the event that C~ is empty, the value of s; is set equal to -~.
It is noted that the mathematical operation of minimizing the Euclidean distance D(r,y_i) is equivalent to maximizing the correlation ρ_i, as follows:

ρ_i = Σ_{j=1}^{n} r_j y_ij   (3)

wherein: n is the block length of the codeword; and i = 1 to 2^k, to index over all codewords y_i.
It is then possible to rewrite the soft information output of Equation (1) using standard manipulations as follows:
s_j = ln [ ( Σ_{y_i ∈ C^(j,0)} e^{ρ_i/σ²} ) / ( Σ_{y_i ∈ C^(j,1)} e^{ρ_i/σ²} ) ]   (4)
and the approximation of Equation (2) may be rewritten as follows:
s_j ≈ max_{y_i ∈ C^(j,0)} ρ_i/σ² − max_{y_i ∈ C^(j,1)} ρ_i/σ²   (5)

Either Equation (2) or Equation (5) may then be used by the decoder 118 in determining the soft information output. The choice of one over the other is made in accordance with whether distances or correlations are more readily available in and to the decoder 118.
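The distance/correlation equivalence underlying Equation (3) can be checked numerically: for a ±1 codebook every codeword has equal energy n, so D(r,y_i)² = ||r||² − 2ρ_i + n and the closest codeword by distance is also the one with the largest correlation. A small sketch (NumPy assumed, codebook illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Any +/-1 codebook of equal-length rows has equal-energy codewords.
codebook = np.array([[1, 1, 1, 1], [1, -1, 1, -1], [-1, -1, 1, 1]], float)
r = rng.normal(size=4)                      # arbitrary received vector
d2 = np.sum((codebook - r) ** 2, axis=1)    # squared Euclidean distances
rho = codebook @ r                          # Equation (3): rho_i = sum_j r_j * y_ij
# argmin of d2 and argmax of rho always pick the same codeword.
```

This is why a decoder may freely substitute the correlation forms (4) and (5) for the distance forms (1) and (2).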
Reference is now made to FIGURE 4, wherein there is shown a block diagram of a particular implementation of the coding system 100 of FIGURE 3. The system 100 includes a transmit side 102 and a receive side 104. On the transmit side 102, the system 100 includes additional encoding and/or interleaving elements 112 comprising a first, outer encoder 120, a second, middle encoder 122, and an interleaver 124. The outer encoder 120 comprises an (N2,N1) cyclic redundancy check (CRC) encoder providing message error detection capability. The middle encoder 122 comprises a convolutional encoder which operates, in connection with the inner (n,k) binary block encoder 106, to provide a concatenated encoding operation useful in providing error correction capability. The interleaver 124 positioned between the middle encoder 122 and the inner encoder 106 is useful in preventing burst errors introduced by either the communications channel 114 or the subsequent decoding operation.
The inner encoder 106 preferably utilizes (2^k,k) Hadamard encoding to implement a special class of binary block codes referred to as orthogonal codes. Use of such orthogonal codes is preferred due to their relatively simple decoder implementations and the possibility of noncoherent demodulation. The encoding process may be described by y_i=f(x_1,x_2,x_3,...,x_k) for i=1 to 2^k, wherein y_i is the output codeword, f refers to the encoding function, and x_1,...,x_k refer to the bits to be encoded. The code book for the encoder 106 thus consists of 2^k codewords y.
The coded output of the inner (2^k,k) Hadamard encoder 106 is transmitted through the communications channel 114 to the receive side 104 of the system 100. The receive side 104 includes an inner binary block decoder 118 which decodes the Hadamard encoded codewords utilizing a fast Hadamard transform (FHT) to find the correlations of Equation (3). The Hadamard transform is well known in signal processing and coding theory, and a number of fast implementations have been developed. An appropriate one
W098/07247 PCT~S97/14260 of those implementations is selected for use in the decoder 118.
A 2^k x 2^k Hadamard matrix can be constructed by the following induction:
H(0) = [1]   (6)

H(m) = | H(m-1)   H(m-1) |
       | H(m-1)  -H(m-1) |   (7)

The columns of H(k) are the codewords of the Hadamard code in a specific order. The Hadamard transform components of Equations (6) and (7) coincide with the correlations of Equation (3), that is:
r H(k) = (p_1, p_2, ..., p_{2^k})   (8)

In this implementation, the matrix H is symmetric. This is in fact the Hadamard transform. To obtain the hard information for output on line 130, the index to the largest p_i, for i=1 to 2^k, is found, and the corresponding natural binary representation thereto is output.
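The correlations of Equation (3) can be computed with a butterfly-based fast Hadamard transform rather than a full matrix multiply, reducing the cost from O(n^2) to O(n log n). A hedged sketch, assuming Sylvester ordering of H (illustrative names, not the patent's implementation):

```python
import numpy as np

def fht(r):
    """Fast Hadamard transform: returns H(k) @ r for a length-2^k vector
    using add/subtract butterflies instead of an explicit matrix."""
    p = np.array(r, dtype=float)
    h = 1
    while h < len(p):
        for start in range(0, len(p), 2 * h):
            for j in range(start, start + h):
                a, b = p[j], p[j + h]
                p[j], p[j + h] = a + b, a - b
        h *= 2
    return p
```

Because the Sylvester-ordered H is symmetric, H @ r and r @ H coincide, so this yields exactly the correlation vector (p_1, ..., p_{2^k}) of Equation (8).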
The receive side 104 of the system 100 further includes a soft information generator 132 associated with the decoder 118. The soft information output is determined by the soft information generator 132 using Equation (5), and output on line 134. The L largest p_i's, for i=1 to 2^k, and their indices are selected. In this case, I represents the set of those indices, and B_j(i) is the j-th bit of the natural binary representation of i.
For the j-th information bit, the following provides a simple and reasonably accurate approximate log-likelihood ratio:
s_j = max_{i∈I, B_j(i)=0} p_i − max_{i∈I, B_j(i)=1} p_i   (9)

where the maximum is set to -∞ if the set is empty. When the value of s_j is greater than zero (s_j>0), this means that a zero (0) is more likely for the corresponding hard information bit x_j. Conversely, when the value of s_j is less than zero (s_j<0), this means that a one (1) is more likely for the corresponding hard information bit x_j.
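Equation (9) can be sketched as follows, assuming natural-binary codeword indexing with B_1 as the most significant bit; the function name and the list-based codebook are illustrative assumptions, not the patent's implementation:

```python
import math

def soft_bits_from_correlations(p, k, L):
    """Approximate per-bit log-likelihoods per Equation (9): keep the
    indices I of the L largest correlations p_i, then for each bit
    position j take (max p_i over I with bit j = 0) minus (max p_i over
    I with bit j = 1), using -inf when a subset is empty."""
    order = sorted(range(len(p)), key=lambda i: p[i], reverse=True)
    I = order[:L]
    soft = []
    for j in range(k):
        shift = k - 1 - j  # bit j of the natural binary index, MSB first
        zeros = [p[i] for i in I if (i >> shift) & 1 == 0]
        ones = [p[i] for i in I if (i >> shift) & 1 == 1]
        s_j = (max(zeros) if zeros else -math.inf) - \
              (max(ones) if ones else -math.inf)
        soft.append(s_j)
    return soft
```

A positive s_j favours a zero in bit position j, matching the interpretation given in the text.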
Put another way, the soft information generator 132 produces soft output s=(s_1,s_2,s_3,...,s_k) on line 134 for the corresponding hard information output x=(x_1,x_2,x_3,...,x_k) output on line 130. This soft information output is based on the following algorithm:
s_i = Σ_{j∈M(i,0)} e^{p_j/σ²} − Σ_{j∈M(i,1)} e^{p_j/σ²}   (10)

where:

M(i,l) = { j | f(x_1,...,x_k) = y_j and x_i = l }   (11)

For each hard information bit x_i, the code book is partitioned into two subsets, corresponding to x_i=1 or x_i=0. The correlation values associated with the codewords in the same subset are added to obtain a likelihood indication of whether x_i=1 or x_i=0. The soft information bit s_i for a corresponding hard information bit x_i is generated by taking the difference of the likelihood indications.
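The subset-sum rule of Equations (10) and (11) can be sketched as follows. The noise variance σ² is assumed known here, and the MSB-first natural-binary mapping between codeword index and information bits is our assumption:

```python
import math

def soft_bits_subset_sums(p, k, sigma2=1.0):
    """Equations (10)-(11) sketch: for each information bit i, split all
    2^k codeword indices into the subsets with bit i = 0 and bit i = 1,
    sum exp(p_j / sigma^2) within each subset, and output the
    difference (a positive s_i favours a zero)."""
    soft = []
    for i in range(k):
        shift = k - 1 - i
        zero_sum = sum(math.exp(p[j] / sigma2)
                       for j in range(len(p)) if (j >> shift) & 1 == 0)
        one_sum = sum(math.exp(p[j] / sigma2)
                      for j in range(len(p)) if (j >> shift) & 1 == 1)
        soft.append(zero_sum - one_sum)
    return soft
```

Unlike Equation (9), which keeps only the L strongest correlations, this form weighs every codeword in the codebook.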
The receive side 104 of the system 100 further includes additional decoding and de-interleaving elements generally referred to at 136 comprising a de-interleaver 138, a middle decoder 140 and an outer decoder 142. The de-interleaver 138 is connected to lines 130 and 134 to receive the outputs of the decoder 118 and soft information generator 132. The output lines 144 (hard information bits) and 146 (soft information bits) from the de-interleaver 138 are connected to the middle decoder 140 which comprises a convolutional decoder. The middle decoder 140 utilizes the soft information output received on line 146 to minimize the bit error rate or sequence error rate of the hard information output received on line 144. This is accomplished, for example, by detecting errors in the received bit stream and then correcting as many of the detected errors as is possible. The output line 148 (hard information bits only) from the middle decoder 140 is connected to the outer decoder 142 which comprises a cyclic redundancy check (CRC) decoder. The outer decoder 142 functions to detect any remaining errors in the data stream.
Reference is now made to FIGURE 5 wherein there is shown a block diagram of a non-binary block coding system 200 of the present invention. The system 200 includes a transmit side 202 and a receive side 204. On the transmit side 202, the system 200 includes a non-binary block encoder 206 wherein a received data signal d on line 208 is encoded for output on line 210. Examples of such encoders include: an M-ary block Reed-Solomon encoder; or a modulator for an M-ary signal constellation (such as quadrature amplitude modulation (QAM) or phase shift keying (PSK)). The transmit side 202 of the system 200 may further include a number of other encoding and/or interleaving elements as generally referred to at 212.
Although illustrated as being positioned before the encoder 206, it will be understood that other encoding and/or interleaving elements 212 may be positioned after as well. The coded output c of the encoder 206 is transmitted through a communications channel 214 which may introduce a number of errors producing a received code signal r on line 216. On the receive side 204, the system 200 includes a correspondingly appropriate non-binary decoder 218 which decodes the received codeword r to generate an output z, wherein z is not necessarily equal to d due to the introduced channel errors.
The decoding operation performed by the decoder 218 computes the Euclidean distance D(r,y_i) between r and every codeword y_i. The codeword y closest to r is chosen, and its corresponding hard information symbols z=(z_1,...,z_{k/m}) and hard information bits x=(x_1,...,x_k) are produced as the output on line 220, wherein the symbol z_j corresponds to the m bits (x_{(j-1)m+1},...,x_{jm}). The receive side 204 of the system 200 may further include a number of other encoding and/or interleaving elements as generally referred to at 222, positioned before and/or after the decoder 218, corresponding to those included at 212 in the transmit side 202.
In addition to the hard information symbols z and bits x output from the decoder 218 on line 220, the decoder further generates a soft symbol information vector output s' on line 224, and a soft bit information vector output s on line 226. In accordance with this operation, a set C of the L closest codewords y to r is identified.
The choice of L involves a tradeoff between complexity of the following calculation and soft information output quality. Preliminary testing has indicated that choosing L as small as between two and four results in satisfactory soft information output values. In a manner similar to the binary case described above, the set C is partitioned into M subsets C(j,0),...,C(j,M-1), for each location j. To simplify the computations, the following approximation is made: for each symbol l, we find:
δ_j^l = min_{y_i∈C(j,l)} D(r,y_i)   (12)

Only the symbols l_0 and l_1 are kept with:
δ_j^{l_0} = min_l δ_j^l   (13)

δ_j^{l_1} = nmin_l δ_j^l   (14)

wherein nmin denotes the second smallest number (i.e., the next minimum). The remaining symbols are treated as if their probability is zero. Thus, the most likely information symbol is l_0, the next most likely information symbol is l_1, and the soft value for the symbol at location j is:
s'_j = δ_j^{l_1} − δ_j^{l_0}   (15)

As a final step, soft values are calculated for the information bits. The m bits with index (j-1)m+h, for h=1,...,m, correspond to the j-th symbol. Assuming a map is given between the blocks of m bits and the symbols, for each h, the symbol set {0,...,M-1} is partitioned into a first subset E(h,0) and a second subset E(h,1) of symbols.
A symbol l is included in the first subset E(h,0) if the h-th information bit is zero (0). Conversely, a symbol l is included in the second subset E(h,1) if the h-th information bit is one (1). Then, the soft value for the bit at location (j-1)m+h is:
s_{(j-1)m+h} = −min_{l∈E(h,0)} δ_j^l + min_{l∈E(h,1)} δ_j^l   (16)

which is interpreted in the same manner as Equation (2).
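Equations (13) through (16) can be sketched as follows, taking the per-symbol minimum distances δ_j^l of Equation (12) as input; the function names and the MSB-first map between symbols and bits are illustrative assumptions:

```python
def symbol_soft_values(deltas):
    """Equations (13)-(15): given deltas[l] = minimum distance to a
    codeword whose j-th symbol is l, return (l0, soft) where l0 is the
    most likely symbol and soft = deltas[l1] - deltas[l0] with l1 the
    runner-up (the 'next minimum')."""
    order = sorted(range(len(deltas)), key=lambda l: deltas[l])
    l0, l1 = order[0], order[1]
    return l0, deltas[l1] - deltas[l0]

def bit_soft_values(deltas, m):
    """Equation (16): for each of the m bit positions h of the symbol,
    split the alphabet {0,...,M-1} by the value of bit h and subtract
    the subset minima (both subsets are nonempty when M = 2^m)."""
    soft = []
    for h in range(m):
        shift = m - 1 - h
        zero_min = min(d for l, d in enumerate(deltas) if (l >> shift) & 1 == 0)
        one_min = min(d for l, d in enumerate(deltas) if (l >> shift) & 1 == 1)
        soft.append(one_min - zero_min)  # positive favours bit = 0
    return soft
```

For a Reed-Solomon next stage the symbol values would be passed on; for a bit-oriented next stage the bit values would be used, as discussed below.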
The issue of whether the soft symbol s' or soft bit s values output on lines 224 and 226, respectively, are more meaningful to the receive side 204 of the system 200 depends on the decoding elements, generally referred to at 222, included in the system after the decoder 218. For instance, suppose the next decoder stage is a Reed-Solomon decoder over an alphabet of size M. In such a case, soft symbol values are more useful. On the other hand, if the next stage is a binary block decoder, or a speech decoder that processes bits, then soft bit values are more useful.
The soft values (bit and/or symbol) are useful in ways other than to assist with error minimization and correction in subsequent stage decoders. Referring now to FIGURE 6, there is shown a block diagram of a receive side 250 of a data communications system 252 wherein the soft value output(s) 254 from a decoder 256 (like those shown in FIGURES 3 and 5) are processed by a threshold error detector 258. An output 260 from the threshold error detector 258 is set on a block by block basis if the soft value output(s) 254 corresponding to that block exceed a given threshold value. The threshold, for example, may be set to a value indicative of the presence of more errors in the data stream than are capable of correction by the receive side 250 of the system 252.
Responsive to the output 260 being set, a processor 262 on the receive side 250 of the system 252 may fail the block. In this connection, it will be noted that the threshold error detector 258 performs an analogous function to the cyclic redundancy check decoder of FIGURE 4 which is used for pure error detection. Thus, when the system includes a threshold error detector 258 and processor 262, there may be no need to include either a cyclic redundancy check encoder on the transmit side 264, or a cyclic redundancy check decoder on the receive side 250. The failure decision made by the processor 262 could depend on whether any soft value outputs exceed the threshold or, alternatively, on whether a certain minimum number of the soft value outputs exceed the threshold.
The processor 262 may further respond to the output 260 indicative of the soft value outputs 254 exceeding the set threshold by signaling the transmit side 264 of the system 252 over line 266 to request retransmission of the portion of the data stream containing the errors.
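The block-failure decision described above can be sketched as follows. The polarity convention (larger soft values indicating more errors) follows the text's "exceed a given threshold" wording and is an assumption, as is the function name:

```python
def block_fails(soft_values, threshold, min_count=1):
    """FIGURE 6 sketch: set the failure output for a block when at least
    min_count of its soft values exceed the threshold. The patent allows
    the decision to rest on 'any' exceedance (min_count=1) or on 'a
    certain minimum number' of exceedances (min_count > 1)."""
    exceed = sum(1 for s in soft_values if s > threshold)
    return exceed >= min_count
```

A processor using this predicate could then either discard the block or request retransmission of the affected portion of the data stream.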
Reference is now made to FIGURES 7-12 wherein there are shown functional block diagrams of the decoder 118, 218 or 256 (including the soft information generator 132) of the present invention. The decoder 118, 218 or 256 is preferably implemented as a specialized digital signal processor (DSP) or in an application specific integrated circuit (ASIC). It will, of course, be understood that the decoder 118, 218 or 256 may alternatively be implemented using discrete components and perhaps distributed processing. In either case, the decoder 118, 218 or 256 performs the functional operations illustrated in FIGURES 7-12 to implement the mathematical operations previously described.
Referring first to FIGURE 7, the decoder 118, 218 or 256 receives the vector r for processing. The decoder 118, 218 or 256 stores a codebook 300 containing all codewords y for the particular coding scheme being implemented. A comparator 302 receives the vector r and each of the codewords y, and functions to select the particular one of the codewords which is closest to the received vector. This operation may comprise, for example, a maximum likelihood operation. The choice of the particular type of comparison performed depends in large part on the type of coding scheme being used. In any event, the various available comparison functions as well as their specific implementations are well known to those skilled in the art. The selected closest codeword is then converted to its corresponding information data comprising the output hard information vector x. The decoder 118, 218 or 256 further includes a functionality for determining the reliability 304 of the output hard information vector x. This reliability information is output as a vector s, and is often referred to as soft (or side) information output.
Reference is now made to FIGURE 8 wherein there is shown a functional block diagram of a first embodiment of the reliability determination functionality 304 of the decoder 118, 218 or 256. The reliability determination functionality 304 receives the vectors r, x and y. An included probability determination functionality 306 takes as its inputs the received vector r and the plurality of codewords y and then calculates values proportional to the probability of receiving the received vector r given the hypothetical transmission of each of the plurality of codewords y. See, for example, the exponential portion of Equations 1 and 4, and the corresponding portions of Equations 2 and 5. These probability values are output on line 308. A comparator 310 receives the probability values on line 308, the codewords y, and the hard information output vector x, and compares the calculated probabilities with respect to certain ones of the codewords having a logical zero in a given information bit location to the calculated probabilities with respect to certain ones of the codewords having a logical one in a given information bit location. This comparison is made for each of the information bit locations in the hard information output vector x. The results of the comparisons are output on line 311 to a likelihood functionality 312 which determines for each of the information bit locations in the hard information output vector x whether a logical one or a logical zero is more likely.
Reference is now made to FIGURE 9 wherein there is shown a functional block diagram of the comparator 310.
In accordance, for example, with the numerator of Equations 1 and 4, a first summer 314 sums the calculated probabilities with respect to certain ones of the codewords having a logical zero in a given information bit location. Similarly, a second summer 316 sums the calculated probabilities with respect to certain ones of the codewords having a logical one in a given information bit location. Again, these sums are taken for each of the information bit locations in the hard information output vector x. A ratiometer 318 then takes the ratio of the summed probabilities output from first and second summers 314 and 316 for each of the information bit locations. The logarithm 320 is then taken of the calculated ratio for output on line 311 as the result of the comparison operation.
Reference is now made to FIGURE 10 wherein there is shown, in a functional block diagram, a second embodiment of a portion of the reliability determination functionality 304. In accordance with, for example, the first term of Equation 2, a first element 322 calculates the closest Euclidean distance between the received vector r and a hypothetical transmitted codeword having a logical zero in a given information bit location. Similarly, a second element 324 calculates the closest Euclidean distance between the received vector r and a hypothetical transmitted codeword having a logical one in a given information bit location. Again, these calculations are made for each of the information bit locations in the hard information output vector x. A summer 326 then subtracts the output of the first element 322 from the output of the second element 324 to generate a result for output on line 311.
Reference is now made to FIGURE 11 wherein there is shown, in a functional block diagram, a third embodiment of a portion of the reliability determination functionality 304. In accordance with, for example, the first term of Equation 5, a first correlator 328 calculates the maximal correlation between the received vector r and a hypothetical transmitted codeword having a logical zero in a given information bit location. Similarly, a second correlator 330 calculates the maximal correlation between the received vector r and a hypothetical transmitted codeword having a logical one in a given information bit location. Again, these calculations are made for each of the information bit locations in the hard information output vector x. A summer 332 then subtracts the output of the second correlator 330 from the output of the first correlator 328 to generate a result for output on line 311.
Reference is now made to FIGURE 12 wherein there is shown a functional block diagram of the likelihood functionality 312 comprising a portion of the reliability determination functionality 304. A first threshold detector 334 receives the result output on line 311, and determines whether the result is greater than zero. If yes, this means that the corresponding information bit location in the hard information output vector x more likely has a logical value of zero. Conversely, a second threshold detector 336 receives the result output on line 311, and determines whether the result is less than zero. If yes, this means that the corresponding information bit location in the hard information output vector x more likely has a logical value of one.
BACKGROUND OF THE INVENTION
Technical Field of the Invention

The present invention relates to the coding and decoding of digital data for transmission over a communications channel and, in particular, to a system decoder utilizing soft information outputs to minimize error rates in the transmitted digital data.
Description of Related Art There exist many applications where large volumes of digital data must be transmitted and received in a substantially error free manner. In telecommunications systems, in particular, it is imperative that the reception of digital data be accomplished as reliably as is possible. Reliable communication of digital data is difficult, however, because the communications channels utilized for data transmission are plagued by error introducing factors. For example, such errors may be attributable to transient conditions in the channel (like noise or multi-path fading). The influence of such factors results in instances where the digital data is not transmitted properly or cannot be reliably received.
Considerable attention has been directed toward discovering methods for addressing the instances of errors which typically accompany data transmission activities.
For example, it is well known in the art to employ forward error correction (FEC) codes and other means to locate, counteract, correct and/or eliminate these errors. The data stream is thus encoded in accordance with a plurality of predefined codewords established in a codebook for that particular encoding scheme. Once encoded, the random errors introduced therein during transmission are relatively easily located and corrected during a corresponding decoding process using well known mathematical processes.
The codewords output from the encoder are transmitted over a communications channel and corrupted to some degree by noise to create a vector. During decoding, the received vector comprising the encoded data (perhaps including errors) is compared against each of the plurality of codewords in the codebook for the specific encoding process used. The codeword closest to the received vector is selected, and its corresponding data word is output from the decoder. This output is often referred to as the hard information output.
In many coding systems, a decoder additionally produces soft (or side) information outputs to help another decoder identify, and perhaps correct, introduced errors. For example, in one scenario used in the Global System for Mobile (GSM) communication, an inner decoder comprising an equalizer generates a soft information output derived from path metric differences, and an outer decoder comprising an error control decoder utilizes the output soft information to detect and correct introduced errors. In another scenario used in the PRISM-TDMA
communications system, the inner decoder comprises an improved multiband excitation (IMBE) error control decoder generating estimates of the number of channel errors, and the outer stage decoder comprises a speech decoder which utilizes the output error estimates in determining whether to discard data.
Soft information outputs have historically been generated by the decoder in conjunction with the selection of the closest codeword and its associated hard information output. The reliability information comprising the soft information output is calculated for each individual symbol (bit) within the hard information output. Accordingly, in such decoders the reliability of each symbol (bit) within the hard information output vector is derived without taking into consideration either the remaining symbols within that hard information output vector or any other considered codewords (and associated hard information output vectors). This is achieved by comparing the probability of the received data given that a bit with a logical value of one was transmitted to the probability of the received data given that a bit with a logical value of zero was transmitted.
In the Hoeher reference (EP 0625829 A2), disclosure is made of a method for providing reliability information by comparing the decoded sequence to a small list of alternative sequences which differ from the decoded sequence at least in the symbol for which reliability is being evaluated.
In the Reed, et al. reference (US 4,939,731), disclosure is made of an automatic repeat request system wherein a request for retransmission is issued if a received packet cannot be corrected, and the transmission rate is reduced if the cumulative errors exceed a certain rate.
SUMMARY OF THE INVENTION
A decoder in a coding communications system of the present invention produces, in addition to a hard information output estimate of a received symbol, a soft (or side) information output comprising the relative reliability of the detection of that estimated output.
The relative reliability soft information output for a certain information bit position is determined by calculating a value proportional to the joint probability that the received symbol and the set of all hypothesized transmitted codewords led to the estimated or detected hard information output. A comparison is then made of the calculated probabilities with respect to plural codewords having a zero in that certain information bit position and the calculated probabilities with respect to the plural codewords having a one in that certain information bit position. The result of the comparison provides an indication of whether the hard information output in that same information bit position is more likely a one or a zero.
In another aspect of the present invention, the soft information outputs are further processed in comparison to a preset threshold. Responsive to instances where the soft information outputs exceed the threshold, data block rejections and/or retransmissions are ordered.
BRIEF DESCRIPTION OF THE DRAWINGS
A more complete understanding of the method and apparatus of the present invention may be acquired by reference to the following Detailed Description when taken in conjunction with the accompanying Drawings wherein:
FIGURE 1 is a block diagram of a prior art cross-interleaved coding system;
FIGURE 2 is a block diagram of a prior art coding system having soft information output decoding capabilities;
FIGURE 3 is a block diagram of a binary block coding system of the present invention;
FIGURE 4 is a block diagram of a particular implementation of the coding system of FIGURE 3;
FIGURE 5 is a block diagram of a non-binary block coding system of the present invention;
FIGURE 6 is a block diagram of a coding system implementing block failure and retransmission functions responsive to soft information outputs; and FIGURES 7-12 are functional block diagrams of the decoder of the present invention.
DETAILED DESCRIPTION OF THE DRAWINGS
Reference is now made to FIGURE 1, wherein there is shown a block diagram of a prior art cross-interleaved coding system 10. On the transmit side 12 of the system 10, encoding of a received original data stream is accomplished in three steps. First, the data stream received on line 14 is encoded by a first, outer encoder 16. Next, the encoded data stream output on line 18 is interleaved by an interleaver 20. Finally, the interleaved data stream output on line 22 is encoded by a second, inner encoder 24. The encoded data stream output on line 26 is then transmitted over a communications channel 28. On the receive side 30 of the system 10, decoding of the transmitted data stream received on line 26' to recover the original data stream is accomplished by performing corresponding decoding and de-interleaving steps in a complementary three-step order
Reference is now made to FIGURE 2, wherein there is shown a block diagram of a prior art coding system 38 having soft information output decoding capabilities. On the transmit side 40, the system 38, like the system 10 of FIGURE 1, includes an outer encoder 42, an interleaver 44, and an inner encoder 46. The data stream output on line 48 from the transmit side 40 is carried by a communications channel 50. On the receive side 52, the system 38, like the system 10 of FIGURE 1, includes an inner decoder 54, a de-interleaver 56, and an outer decoder 58 for processing the channel transmitted data stream to recover the originally input data stream (hard information outputs).
The inner decoder 54 further functions to generate soft (or side) information outputs on line 62 in conjunction with the decoding operation performed on the data stream received on line 48' after transmission over the communications channel 50. For example, the inner decoder 54 may comprise an equalizer, with the soft information outputs derived from path metric differences.
In another example, the inner decoder 54 may comprise an improved multiband excitation (IMBE) decoder, with the soft information outputs estimating the number of introduced channel errors. The inner decoded data stream output on line 64, along with the soft information outputs, output on line 62, are de-interleaved by the de-interleaver 56 and passed on to the outer decoder 58. The received soft information outputs are then used by the outer decoder 58 to assist in the decoding operation and, in particular, to help identify and correct errors in the data stream.
The decoder 54 typically functions to calculate reliability information comprising the soft information output for each individual symbol (bit) within the hard information output. This is achieved by taking the W098l07247 PCT~S97114260 logarithm of the ratio of two probabilities: first, the probability of the received data given a bit with a logical value of one was tranmitted; and second, the probability of the received data given a bit with a logical value of zero was transmitted. Accordingly, in such decoders 54 the reliability of each symbol (bit) within the hard information output vector is derived without taking into consideration either the remaining symbols within that hard information output vector or any other considered codewords (and associated hard information output vectors).
Reference is now made to FIGURE 3, wherein there is shown a block diagram of a binary block coding system 100 of the present invention. The system 100 includes a transmit side 102 and a receive side 104. On the transmit side 102, the system 100 includes an (n,k) binary block encoder 106 wherein a block of "k" data digits received on line 108 is encoded by a codeword of "n" code digits in length (wherein n>k) for output on line 110. The data digits (d_1,d_2,d_3,...,d_k) in each block comprise a k-dimensional vector d. Similarly, the code digits (c_1,c_2,c_3,...,c_n) in each codeword comprise an n-dimensional vector c. Each codeword includes m check digits (wherein m=n-k) among the code digits, with the check digits being useful in later detecting (and perhaps correcting) t channel errors introduced into the (n,k) code, where t depends on the particular code being used. The transmit side 102 of the system 100 may further include a number of other encoding and/or interleaving elements as generally referred to at 112. Although illustrated as being positioned before the encoder 106, it will be understood that other encoding and/or interleaving elements may be positioned after as well.
The coded output vector c of the (n,k) binary block encoder 106 is transmitted through a communications channel 114 which may introduce a number of errors producing an n-dimensional received codeword vector r=(r_1,r_2,r_3,...,r_n) on line 116. On the receive side 104, the system 100 includes a maximum likelihood (n,k) binary block decoder 118 which decodes a received codeword r of n data digits in length to output a block x of k data digits, wherein x=(x_1,x_2,x_3,...,x_k) is not necessarily equal to d due to the introduced channel errors. The decoding operation performed by the decoder 118 computes the Euclidean distance D(r,y_i) between r and every codeword y_i=(y_i1,y_i2,y_i3,...,y_in) in the appropriate codebook, wherein y_ij is in ±1 form, for i=1 to 2^k. In the standard maximum likelihood decoder 118, the codeword y closest to r is chosen, and its corresponding hard information bits x=(x_1,x_2,x_3,...,x_k) are produced as the hard output on line 120. The receive side 104 of the system 100 may further include a number of other encoding and/or interleaving elements as generally referred to at 122, positioned before and/or after the decoder 118, corresponding to those included at 112 in the transmit side 102.
In addition to the hard information bits x output from the decoder 118 on line 120, the decoder further generates a soft (or side) information vector output s=(s_1,s_2,s_3,...,s_k) on line 124. In accordance therewith, a set C of the L closest codewords y to r is identified.
The choice of L involves a tradeoff between complexity of the following calculation and soft information output quality. Prel-m;n~ry testing has indicated that choosing L as small as between two and four results in satisfactory soft information output values. For each information bit location j, for j=l to k, the set C is partitioned into a first set C(i'~), and a second set C~ . A codeword y is included in the first set C~i'~~ if the j-th information bit corresponding to that codeword is zero (0). Conversely, a codeword is included in the second set C~i'1) if the j-th information bit corresponding to that codeword is one ~1).
The soft information output vector s thus may be determined by the decoder 118 as follows:
s_j = ln [ Σ_{y_i∈C(j,0)} e^{−D(r,y_i)²/(2σ²)} / Σ_{y_i∈C(j,1)} e^{−D(r,y_i)²/(2σ²)} ]   (1)

wherein: σ² represents the variance of additive white Gaussian noise;

j = 1 to k, to index across the information bit locations; and

i = 1 to 2^k, to index across all of the codewords.
In both the numerator and the denominator of Equation 1, the exponential portion produces a value that is proportional to the joint probability of having received the vector r and of a certain codeword y having been transmitted. Said another way, the exponential portion produces a value that is proportional to the probability of receiving the vector r given a certain codeword y having been transmitted. The numerator thus sums the calculated joint probabilities (taken by the exponential portion) for all the codewords y included in the first set C(j,0) (i.e., those included codewords y having a zero in the j-th information bit position). In the denominator, a summation is made of the calculated joint probabilities (taken by the exponential portion) for all the codewords y included in the second set C(j,1) (i.e., those included codewords y having a one in the j-th information bit position). The likelihood of having either a zero or a one for the hard information output x bit at issue is taken by dividing the numerator by the denominator. The resulting ratio is greater than one if the probability of having a zero in the j-th information bit exceeds the probability of having a one. Conversely, the resulting ratio is less than one if the probability of having a one in the j-th information bit exceeds the probability of having a zero. By taking the logarithm, when the value of s_j is greater than zero (s_j>0), this means that a zero (0) is more likely for the corresponding hard information bit x_j. Conversely, when the value of s_j is less than zero (s_j<0), this means that a one (1) is more likely for the corresponding hard information bit x_j.
Equation (1) can be approximated fairly well as follows:
$$s_j = -\min_{y_i \in C^{(j,0)}} \frac{D(r,y_i)^2}{2\sigma^2} + \min_{y_i \in C^{(j,1)}} \frac{D(r,y_i)^2}{2\sigma^2} \qquad (2)$$

The common portion of the first and second terms of Equation (2), like the exponential portion of Equation (1), produces a value that is proportional to the joint probability of having received the vector r and of a certain codeword y having been transmitted. The first term then calculates the negative of the closest Euclidean distance between the received data vector r and a hypothetical codeword y having a logical value of zero in the j-th information bit position (i.e., those codewords included in the first set C^(j,0)). The second term calculates the closest Euclidean distance between the received data vector r and a hypothetical codeword y having a logical value of one in the j-th information bit position (i.e., those codewords included in the second set C^(j,1)). When the value of s_j is greater than zero (s_j > 0), this means that a zero (0) is more likely for the corresponding hard information bit x_j. Conversely, when the value of s_j is less than zero (s_j < 0), this means that a one (1) is more likely for the corresponding hard information bit x_j.
An advantage of using the approximation of Equation (2) is that the noise variance, which would otherwise have to be estimated, can now be dropped. This results only in a scaling, which is irrelevant. It should be noted that in the event that C^(j,1) is empty, the value of s_j is set equal to +∞. Similarly, in the event that C^(j,0) is empty, the value of s_j is set equal to -∞.
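A minimal sketch of the max-log approximation of Equation (2), with the 1/(2σ²) scaling dropped as the text permits; the function name and list-based codebook layout are assumptions of the example:

```python
def soft_bits_maxlog(r, codewords, info_bits):
    """Max-log soft output (Equation (2), scaling dropped): s_j is the minimum
    squared Euclidean distance over codewords with a one in bit j, minus the
    minimum over codewords with a zero in bit j, so s_j > 0 favors a zero."""
    def dist2(y):
        return sum((rl - yl) ** 2 for rl, yl in zip(r, y))

    k = len(info_bits[0])
    s = []
    for j in range(k):
        d0 = [dist2(y) for y, b in zip(codewords, info_bits) if b[j] == 0]
        d1 = [dist2(y) for y, b in zip(codewords, info_bits) if b[j] == 1]
        if not d1:                       # C(j,1) empty: a one is impossible
            s.append(float("inf"))
        elif not d0:                     # C(j,0) empty: a zero is impossible
            s.append(float("-inf"))
        else:
            s.append(min(d1) - min(d0))  # > 0 means a zero is more likely
    return s
```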
It is noted that the mathematical operation of minimizing the Euclidean distance D(r,y_i) is equivalent to maximizing the correlation ρ_i, as follows:
$$\rho_i = \sum_{l=1}^{n} r_l \, y_{i,l} \qquad (3)$$

wherein: n is the block length of the codeword; and l = 1 to n, to index across the components of the received vector r and the codeword y_i.
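The equivalence between minimizing distance and maximizing correlation can be illustrated as follows; this holds when all codewords have equal energy (true, for example, for ±1-valued codewords), which is an assumption of the sketch:

```python
def sq_distance(r, y):
    """Squared Euclidean distance D(r, y)^2."""
    return sum((rl - yl) ** 2 for rl, yl in zip(r, y))

def correlation(r, y):
    """Equation (3): rho = sum over the n components of r_l * y_l."""
    return sum(rl * yl for rl, yl in zip(r, y))

def closest_codeword(r, codewords):
    """Minimizing the Euclidean distance is equivalent to maximizing the
    correlation for an equal-energy codebook: both selections agree."""
    by_dist = min(range(len(codewords)), key=lambda i: sq_distance(r, codewords[i]))
    by_corr = max(range(len(codewords)), key=lambda i: correlation(r, codewords[i]))
    assert by_dist == by_corr  # equal-energy codebook assumption
    return by_corr
```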
It is then possible to rewrite the soft information output of Equation (1) using standard manipulations as follows:
$$s_j = \ln \frac{\sum_{y_i \in C^{(j,0)}} e^{\rho_i/\sigma^2}}{\sum_{y_i \in C^{(j,1)}} e^{\rho_i/\sigma^2}} \qquad (4)$$
and the approximation of Equation (2) may be rewritten as follows:
$$s_j = \max_{y_i \in C^{(j,0)}} \frac{\rho_i}{\sigma^2} - \max_{y_i \in C^{(j,1)}} \frac{\rho_i}{\sigma^2} \qquad (5)$$

Either Equation (2) or Equation (5) may then be used by the decoder 118 in determining the soft information output. The choice of one over the other is made in accordance with whether distances or correlations are more readily available in and to the decoder 118.
Reference is now made to FIGURE 4 wherein there is shown a block diagram of a particular implementation of the coding system 100 of FIGURE 3. The system 100 includes a transmit side 102 and a receive side 104. On the transmit side 102, the system 100 includes additional encoding and/or interleaving elements 112 comprising a first, outer encoder 120, a second, middle encoder 122, and an interleaver 124. The outer encoder 120 comprises an (N2,N1) cyclic redundancy check (CRC) encoder providing message error detection capability. The middle encoder 122 comprises a convolutional encoder which operates, in connection with the inner, (n,k) binary block encoder 106, to provide a concatenated encoding operation useful in providing error correction capability. The interleaver 124 positioned between the middle encoder 122 and the inner encoder 106 is useful in preventing burst errors introduced by either the communications channel 114 or the subsequent decoding operation.
The inner encoder 106 preferably utilizes (2^k,k) Hadamard encoding to implement a special class of binary block codes referred to as orthogonal codes. Use of such orthogonal codes is preferred due to their relatively simple decoder implementations and possibility of noncoherent demodulation. The encoding process may be described by y_i = f(x_1,x_2,x_3,...,x_k) for i = 1 to 2^k, wherein y_i is the output codeword, f refers to the encoding function, and x refers to the bits to be encoded. The code book for the encoder 106 thus consists of 2^k codewords y.
The coded output of the inner (2^k,k) Hadamard encoder 106 is transmitted through the communications channel 114 to the receive side 104 of the system 100. The receive side 104 includes an inner binary block decoder 118 which decodes the Hadamard encoded codewords utilizing a fast Hadamard transform (FHT) to find the correlations of Equation (3). The Hadamard transform is well known in signal processing and coding theory, and a number of fast implementations have been developed. An appropriate one of those implementations is selected for use in the decoder 118.
A 2^k x 2^k Hadamard matrix can be constructed by the following induction:
$$H^{(0)} = [1] \qquad (6)$$

$$H^{(m)} = \begin{bmatrix} H^{(m-1)} & H^{(m-1)} \\ H^{(m-1)} & -H^{(m-1)} \end{bmatrix} \qquad (7)$$

The columns of H^(k) are the codewords of the Hadamard code in a specific order. The Hadamard transform components of Equations (6) and (7) coincide with the correlations of Equation (3), that is:
$$r H^{(k)} = (\rho_1, \rho_2, \ldots, \rho_{2^k}) \qquad (8)$$

In this implementation, the matrix H is symmetric. This is in fact the Hadamard transform. To obtain the hard information for output on line 130, the index to the largest ρ_i, for i = 1 to 2^k, is found, and the corresponding natural binary representation thereof is output.
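The Sylvester construction of Equations (6)-(7) and a butterfly-based fast Hadamard transform producing the correlations of Equation (8) can be sketched as follows. The MSB-first mapping from the winning index to information bits is an assumption of the example, since the specification only calls for "the corresponding natural binary representation":

```python
def hadamard_matrix(m):
    """Sylvester induction of Equations (6)-(7): H(0) = [1],
    H(m) = [[H(m-1), H(m-1)], [H(m-1), -H(m-1)]]."""
    H = [[1]]
    for _ in range(m):
        H = [row + row for row in H] + [row + [-v for v in row] for row in H]
    return H

def fht(r):
    """Fast Hadamard transform via butterflies, O(n log n) rather than the
    O(n^2) matrix product; returns the correlations of Equation (8)."""
    p = list(r)
    h = 1
    while h < len(p):
        for start in range(0, len(p), 2 * h):
            for a in range(start, start + h):
                p[a], p[a + h] = p[a] + p[a + h], p[a] - p[a + h]
        h *= 2
    return p

def hard_decision(r):
    """Hard output: index of the largest correlation, emitted as its
    natural binary representation (MSB-first bit order assumed)."""
    p = fht(r)
    i = max(range(len(p)), key=lambda t: p[t])
    k = len(p).bit_length() - 1
    return [(i >> (k - 1 - b)) & 1 for b in range(k)]
```

For k = 2, transmitting the column of H^(2) at index 2, namely (1, 1, -1, -1), produces a correlation peak at index 2 and hard bits (1, 0).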
The receive side 104 of the system 100 further includes a soft information generator 132 associated with the decoder 118. The soft information output is determined by the soft information generator 132 using Equation (5), and output on line 134. The L largest ρ_i's, for i = 1 to 2^k, and their indices are selected. In this case, I represents the set of those indices, and B_j(i) is the j-th bit of the natural binary representation of i.
For the j-th information bit, the following provides a simple and reasonably accurate approximate log-likelihood ratio:
W098t07247 PCT~S97/14260 max p max p s~I & B,(i)=O i~I & B,(i)=l (9) where the maximum is set to -~ if the set is empty. When the value of s; is greater than zero (Sj>0), this means that a zero (0) is more likely for the corresponding hard information bit Xk. Conversely, when the value of s; is less than zero (Sj~0), this means that a one (1) is more likely for the corresponding hard information bit Xk.
Put another way, the soft information generator 132 produces soft output s = (s_1, s_2, s_3, ..., s_k) on line 134 for the corresponding hard information output x = (x_1, x_2, x_3, ..., x_k) output on line 130. This soft information output is based on the following algorithm:
$$s_i = \sum_{j \in M_i^0} e^{\rho_j} - \sum_{j \in M_i^1} e^{\rho_j} \qquad (10)$$

where:

$$M_i^l = \{\, j : x_i = l \text{ in } f(x_1, \ldots, x_k) = y_j \,\} \qquad (11)$$

For each hard information bit x_i, the code book is partitioned into two subsets, corresponding to x_i = 1 or x_i = 0. The correlation values associated with the codewords in the same subset are added to obtain a likelihood indication of whether x_i = 1 or x_i = 0. The soft information bit s_i for a corresponding hard information bit x_i is generated by taking the difference of the likelihood indications.
The receive side 104 of the system 100 further includes additional decoding and de-interleaving elements generally referred to at 136 comprising a de-interleaver 138, a middle decoder 140 and an outer decoder 142. The de-interleaver 138 is connected to lines 130 and 134 to receive the outputs of the decoder 118 and soft information generator 132. The output lines 144 (hard information bits) and 146 (soft information bits) from the de-interleaver 138 are connected to the middle decoder 140 which comprises a convolutional decoder. The middle decoder 140 utilizes the soft information output received on line 146 to minimize the bit error rate or sequence error rate of the hard information output received on line 144. This is accomplished, for example, by detecting errors in the received bit stream and then correcting as many of the detected errors as is possible. The output line 148 (hard information bits only) from the middle decoder 140 is connected to the outer decoder 142 which comprises a cyclic redundancy check (CRC) decoder. The outer decoder 142 functions to detect any remaining errors in the data stream.
Reference is now made to FIGURE 5 wherein there is shown a block diagram of a non-binary block coding system 200 of the present invention. The system 200 includes a transmit side 202 and a receive side 204. On the transmit side 202, the system 200 includes a non-binary block encoder 206 wherein a received data signal d on line 208 is encoded for output on line 210. Examples of such encoders include: an M-ary block Reed-Solomon encoder; or a modulator for an M-ary signal constellation (such as quadrature amplitude modulation (QAM) or phase shift keying (PSK)). The transmit side 202 of the system 200 may further include a number of other encoding and/or interleaving elements as generally referred to at 212.
Although illustrated as being positioned before the encoder 206, it will be understood that other encoding and/or interleaving elements 212 may be positioned after as well. The coded output c of the encoder 206 is transmitted through a communications channel 214 which may introduce a number of errors producing a received code signal r on line 216. On the receive side 204, the system 200 includes a correspondingly appropriate non-binary decoder 218 which decodes the received codeword r to generate an output z, wherein z is not necessarily equal to d due to the introduced channel errors.
The decoding operation performed by the decoder 218 computes the Euclidean distance D(r,y_i) between r and every codeword y_i. The codeword y closest to r is chosen, and its corresponding hard information symbols z = (z_1, ..., z_k), and hard information bits x = (x_1, ..., x_k), are produced as the output on line 220, wherein the symbol z_j corresponds to the m bits (x_(j-1)m+1, ..., x_jm). The receive side 204 of the system 200 may further include a number of other decoding and/or de-interleaving elements as generally referred to at 222, positioned before and/or after the decoder 218, corresponding to those included at 212 in the transmit side 202.
In addition to the hard information symbols z and bits x output from the decoder 218 on line 220, the decoder further generates a soft symbol information vector output s' on line 224, and a soft bit information vector output s on line 226. In accordance with this operation, a set C of the L closest codewords y to r is identified.
The choice of L involves a tradeoff between the complexity of the following calculation and soft information output quality. Preliminary testing has indicated that choosing L as small as between two and four results in satisfactory soft information output values. In a manner similar to the binary case described above, the set C is partitioned into M subsets C^(j,0), ..., C^(j,M-1), for each location j. To simplify the computations, the following approximation is made: for each symbol l, we find:
$$\delta_j^l = \min_{y_i \in C^{(j,l)}} D(r, y_i) \qquad (12)$$

Only the symbols l_0 and l_1 are kept, with:
$$l_0 = \arg\min_l \delta_j^l \qquad (13)$$

$$l_1 = \arg\operatorname{nmin}_l \delta_j^l \qquad (14)$$

wherein nmin denotes the second smallest number (i.e., the next minimum). The remaining symbols are treated as if their probability is zero. Thus, the most likely information symbol is l_0, the next most likely information symbol is l_1, and the soft value for the symbol at location j is:
$$s'_j = \delta_j^{l_1} - \delta_j^{l_0} \qquad (15)$$

As a final step, soft values are calculated for the information bits. The m bits with index (j-1)m+h, for h = 1, ..., m, correspond to the j-th symbol. Assuming a map is given between the blocks of m bits and the symbols, for each h, the symbol set {0, ..., M-1} is partitioned into a first subset E^(h,0) and a second subset E^(h,1) of symbols.
A symbol l is included in the first subset E^(h,0) if the h-th information bit is zero (0). Conversely, a symbol l is included in the second subset E^(h,1) if the h-th information bit is one (1). Then, the soft value for the bit at location (j-1)m+h is:
$$s_{(j-1)m+h} = -\min_{l \in E^{(h,0)}} \delta_j^l + \min_{l \in E^{(h,1)}} \delta_j^l \qquad (16)$$

which is interpreted in the same manner as Equation (2).
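The per-symbol steps of Equations (12)-(16) for a single symbol location can be sketched as follows. The function name, the `symbols` and `bit_of_symbol` interfaces, and the sign conventions as reconstructed above are assumptions of the example:

```python
def soft_symbols_and_bits(r, codewords, symbols, M, bit_of_symbol):
    """Sketch of Equations (12)-(16) for one symbol location. `symbols[i]`
    is the information symbol (0..M-1) carried by codeword i at that
    location; `bit_of_symbol(l, h)` gives bit h of the m-bit pattern mapped
    to symbol l. Returns the soft symbol value (15) and the per-bit soft
    values (16)."""
    def dist(y):
        return sum((rl - yl) ** 2 for rl, yl in zip(r, y)) ** 0.5

    # Equation (12): delta[l] = closest distance over codewords carrying l.
    delta = {}
    for y, l in zip(codewords, symbols):
        d = dist(y)
        if l not in delta or d < delta[l]:
            delta[l] = d

    ranked = sorted(delta, key=delta.get)
    l0, l1 = ranked[0], ranked[1]          # Equations (13)-(14)
    soft_symbol = delta[l1] - delta[l0]    # Equation (15): reliability of l0

    m = max(1, (M - 1).bit_length())       # bits per symbol (assumed map)
    soft_bits = []
    for h in range(m):
        # Partition surviving symbols into E(h,0) and E(h,1) by bit h.
        d0 = [delta[l] for l in delta if bit_of_symbol(l, h) == 0]
        d1 = [delta[l] for l in delta if bit_of_symbol(l, h) == 1]
        soft_bits.append((min(d1) if d1 else float("inf")) -
                         (min(d0) if d0 else float("inf")))  # Equation (16)
    return soft_symbol, soft_bits
```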
The issue of whether the soft symbol s' or soft bit s values output on lines 224 and 226, respectively, are more meaningful to the receive side 204 of the system 200 depends on the decoding elements, generally referred to at 222, included in the system after the decoder 218. For instance, suppose the next decoder stage is a Reed-Solomon decoder over an alphabet of size M. In such a case, soft symbol values are more useful. On the other hand, if the next stage is a binary block decoder, or a speech decoder that processes bits, then soft bit values are more useful.
The soft values (bit and/or symbol) are useful in ways other than to assist with error minimization and correction in subsequent stage decoders. Referring now to FIGURE 6, there is shown a block diagram of a receive side 250 of a data communications system 252 wherein the soft value output(s) 254 from a decoder 256 (like those shown in FIGURES 3 and 5) are processed by a threshold error detector 258. An output 260 from the threshold error detector 258 is set on a block by block basis if the soft value output(s) 254 corresponding to that block exceed a given threshold value. The threshold, for example, may be set to a value indicative of the presence of more errors in the data stream than are capable of correction by the receive side 250 of the system 252.
Responsive to the output 260 being set, a processor 262 on the receive side 250 of the system 252 may fail the block. In this connection, it will be noted that the threshold error detector 258 performs an analogous function to the cyclic redundancy check decoder of FIGURE 4 which is used for pure error detection. Thus, when the system includes a threshold error detector 258 and processor 262, there may be no need to include either a cyclic redundancy check encoder on the transmit side 264, or a cyclic redundancy check decoder on the receive side 250. The failure decision made by the processor 262 could depend on whether any soft value outputs exceed the threshold or, alternatively, on whether a certain minimum number of the soft value outputs exceed the threshold.
The processor 262 may further respond to the output 260 indicative of the soft value outputs 254 exceeding the set threshold by signaling the transmit side 264 of the system 252 over line 266 to request retransmission of the portion of the data stream containing the errors.
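One plausible reading of the threshold error detector 258 and processor 262 can be sketched as follows. Treating a low soft-value magnitude as an unreliable (suspect) position is an assumption of the example; the specification only requires comparing soft outputs against a threshold, either per value or by count:

```python
def block_ok(soft_values, reliability_floor, max_suspect=0):
    """Sketch of the threshold error detector 258 / processor 262. A bit is
    counted as suspect when the magnitude of its soft value falls below
    `reliability_floor` (assumed convention). The block is failed, e.g. to
    trigger a retransmission request over line 266, when more than
    `max_suspect` positions are suspect."""
    suspects = sum(1 for s in soft_values if abs(s) < reliability_floor)
    return suspects <= max_suspect
```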
Reference is now made to FIGURES 7-12 wherein there are shown functional block diagrams of the decoder 118, 218 or 256 (including the soft information generator 132) of the present invention. The decoder 118, 218 or 256 is preferably implemented as a specialized digital signal processor (DSP) or in an application specific integrated circuit (ASIC). It will, of course, be understood that the decoder 118, 218 or 256 may alternatively be implemented using discrete components and perhaps distributed processing. In either case, the decoder 118, 218 or 256 performs the functional operations illustrated in FIGURES 7-12 to implement the mathematical operations previously described.
Referring first to FIGURE 7, the decoder 118, 218 or 256 receives the vector r for processing. The decoder 118, 218 or 256 stores a codebook 300 containing all codewords y for the particular coding scheme being implemented. A comparator 302 receives the vector r and each of the codewords y, and functions to select the particular one of the codewords which is closest to the received vector. This operation may comprise, for example, a maximum likelihood operation. The choice of the particular type of comparison performed depends in large part on the type of coding scheme being used. In any event, the various available comparison functions as well as their specific implementations are well known to those skilled in the art. The selected closest codeword is then converted to its corresponding information data comprising the output hard information vector x. The decoder 118, 218 or 256 further includes a functionality for determining the reliability 304 of the output hard information vector x. This reliability information is output as a vector s, and is often referred to as soft (or side) information output.
Reference is now made to FIGURE 8 wherein there is shown a functional block diagram of a first embodiment of the reliability determination functionality 304 of the decoder 118, 218 or 256. The reliability determination functionality 304 receives the vectors r, x and y. An included probability determination functionality 306 takes as its inputs the received vector r and the plurality of codewords y and then calculates values proportional to the probability of receiving the received vector r given the hypothetical transmission of each of the plurality of codewords y. See, for example, the exponential portion of Equations 1 and 4, and the corresponding portions of Equations 2 and 5. These probability values are output on line 308. A comparator 310 receives the probability values on line 308, the codewords y, and the hard information output vector x, and compares the calculated probabilities with respect to certain ones of the codewords having a logical zero in a given information bit location to the calculated probabilities with respect to certain ones of the codewords having a logical one in a given information bit location. This comparison is made for each of the information bit locations in the hard information output vector x. The results of the comparisons are output on line 311 to a likelihood functionality 312 which determines for each of the information bit locations in the hard information output vector x whether a logical one or a logical zero is more likely.
Reference is now made to FIGURE 9 wherein there is shown a functional block diagram of the comparator 310.
In accordance, for example, with the numerator of Equations 1 and 4, a first summer 314 sums the calculated probabilities with respect to certain ones of the codewords having a logical zero in a given information bit location. Similarly, a second summer 316 sums the calculated probabilities with respect to certain ones of the codewords having a logical one in a given information bit location. Again, these sums are taken for each of the information bit locations in the hard information output vector x. A ratiometer 318 then takes the ratio of the summed probabilities output from first and second summers 314 and 316 for each of the information bit locations.
The logarithm 320 is then taken of the calculated ratio for output on line 311 as the result of the comparison operation.
Reference is now made to FIGURE 10 wherein there is shown, in a functional block diagram, a second embodiment of a portion of the reliability determination functionality 304. In accordance with, for example, the first term of Equation 2, a first element 322 calculates the closest Euclidean distance between the received vector r and a hypothetical transmitted codeword having a logical zero in a given information bit location. Similarly, a second element 324 calculates the closest Euclidean distance between the received vector r and a hypothetical transmitted codeword having a logical one in a given information bit location. Again, these calculations are made for each of the information bit locations in the hard information output vector x. A summer 326 then subtracts the output of the first element 322 from the output of the second element 324 to generate a result for output on line 311.
Reference is now made to FIGURE 11 wherein there is shown, in a functional block diagram, a third embodiment of a portion of the reliability determination functionality 304. In accordance with, for example, the first term of Equation 5, a first correlator 328 calculates the maximal correlation between the received vector r and a hypothetical transmitted codeword having a logical zero in a given information bit location.
Similarly, a second correlator 330 calculates the maximal correlation between the received vector r and a hypothetical transmitted codeword having a logical one in a given information bit location. Again, these calculations are made for each of the information bit locations in the hard information output vector x. A summer 332 then subtracts the output of the second correlator 330 from the output of the first correlator 328 to generate a result for output on line 311.
Reference is now made to FIGURE 12 wherein there is shown a functional block diagram of the likelihood functionality 312 comprising a portion of reliability determination functionality 304. A first threshold detector 334 receives the result output on line 311, and determines whether the result is greater than zero. If yes, this means that the corresponding information bit location in the hard information output vector x more likely has a logical value of zero. Conversely, a second threshold detector 336 receives the result output on line 311, and determines whether the result is less than zero. If yes, this means that the corresponding information bit location in the hard information output vector x more likely has a logical value of one.
Claims (31)
1. A decoding method, comprising the steps of:
comparing a received vector r with each one of a plurality of codewords y, each codeword having an associated hard information vector x;
selecting (302) the codeword y closest to the received vector r; and outputting the hard information vector x corresponding to the selected codeword y;
the method characterized by the step of:
determining (304) a reliability vector s for the output hard information vector x, the reliability vector s determined for each symbol location of the hard information vector x, as a function of each of the plurality of codewords y and the received vector r.
2. The method of claim 1 wherein the hard information vector x comprises a plurality of information bits, and wherein the step of determining comprises the steps of:
calculating (306) values that are proportional to the probability of receiving the received vector r given the transmission of each of the plurality of codewords y;
for each of the plurality of information bits in the information vector x, comparing (310) the calculated probabilities with respect to certain ones of the codewords y having a zero in a given information bit position to the calculated probabilities with respect to the same certain ones of the codewords y having a one in that certain information bit position; and identifying (312) from the comparison an indication of whether the hard information output x in that same information bit position is more likely a one or a zero.
3. The method of claim 2 wherein the step of comparing comprises the steps of:
first summing (314) the calculated values with respect to the certain ones of the codewords y having a zero in the given information bit position; and second summing (316) the calculated values with respect to the same certain ones of the certain codewords y having a one in the given information bit position.
4. The method of claim 3 wherein the step of comparing further comprises the steps of:
taking (318) the ratio of the results of the first and second summing steps; and taking (320) a logarithm of the generated ratio.
5. The method of claim 4 wherein the step of identifying comprises the steps of:
indicating that a zero is more likely if the result of taking the logarithm of the generated ratio is greater than zero; and indicating that a one is more likely if the result of taking the logarithm of the generated ratio is less than zero.
6. The method as in claim 2 further including the step of selecting the certain ones of the codewords y from the plurality of codewords y.
7. The method as in claim 6 wherein the step of selecting comprises the step of choosing as the certain ones of the codewords y those ones of the plurality of codewords y closest to the received vector r.
8. The method as in claim 6 wherein the step of choosing selects those ones of the plurality of codewords y having a minimal Euclidean distance with respect to the received vector r.
9. The method as in claim 6 wherein the step of choosing selects those ones of the plurality of codewords y having a maximal correlation with respect to the received vector r.
10. The method of claim 1 wherein the step of determining comprises the steps of:
first minimizing (322) the distance between the received vector r and certain ones of the codewords y having a zero in the given information bit position; and second minimizing (324) the distance between the received vector r and certain ones of the codewords y having a one in the given information bit position.
11. The method of claim 10 wherein the step of determining further comprises the step of subtracting (326) the result of the first minimization from the result of the second minimization.
12. The method of claim 11 wherein the step of determining further comprises the steps of:
indicating (334) that a zero is more likely if the result of subtraction is greater than zero; and indicating (336) that a one is more likely if the result of the subtraction is less than zero.
13. The method of claim 1 wherein the step of determining comprises the steps of:
first maximizing (328) the correlation between the received vector r and certain ones of the codewords y having a zero in the given information bit position; and second maximizing (330) the correlation between the received vector r and certain ones of the codewords y having a one in the given information bit position.
14. The method of claim 13 wherein the step of determining further comprises the step of subtracting (332) the result of the second maximization from the result of the first maximization.
15. The method of claim 14 wherein the step of determining further comprises the steps of:
indicating (334) that a zero is more likely if the result of subtraction is greater than zero; and indicating (336) that a one is more likely if the result of the subtraction is less than zero.
16. The method as in claim 1 wherein the plurality of codewords y comprise Hadamard codewords.
17. The method as in claim 1 wherein the plurality of codewords y comprise Reed-Solomon codewords.
18. The method as in claim 1 further comprising the steps of:
processing (258) the generated reliability vector s to identify the presence of errors;
comparing (262) a number of the identified errors to a predetermined threshold; and signaling (266) for a rejection of the received data transmission if the number of identified errors exceeds the predetermined threshold.
19. The method of claim 18 further including the step of requesting re-transmission of the data if the number of identified errors exceeds the predetermined threshold.
20. A decoder, comprising:
means (302) for comparing a received vector r with each one of a plurality of codewords y, each codeword having an associated hard information vector x, selecting the codeword y closest to the received vector r, and outputting the hard information vector x corresponding to the selected codeword y;
the decoder characterized by:
means (304) for determining a reliability vector s for the output hard information vector x, the reliability vector s determined for each symbol location of the hard information vector x as a function of each of the plurality of codewords y and the received vector r.
21. The decoder of claim 20 wherein the hard information vector x comprises a plurality of information bits, and wherein the means for determining comprises:
means (306) for calculating values that are proportional to the probability of receiving the received vector r given the transmission of each of the plurality of codewords y; and means (310) for comparing, for each of the plurality of information bits in the information vector x, the calculated probabilities with respect to certain ones of the codewords y having a zero in a given information bit position to the calculated probabilities with respect to the same certain ones of the codewords y having a one in that certain information bit position to identify whether the hard information output x in that same information bit position is more likely a one or a zero.
22. The decoder of claim 21 wherein the means for comparing comprises:
first means (314) for summing the calculated values with respect to the certain ones of the codewords y having a zero in the given information bit position;
second means (316) for summing the calculated values with respect to the same certain ones of the certain codewords y having a one in the given information bit position;
means (318) for taking the ratio of the results of the first and second summing steps; and means (320) for taking a logarithm of the generated ratio.
23. The decoder of claim 22 further including:
means (334) for indicating that a zero is more likely if the result of taking the logarithm of the generated ratio is greater than zero; and means (336) for indicating that a one is more likely if the result of taking the logarithm of the generated ratio is less than zero.
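The pipeline of claims 21-23 (likelihood values → per-bit sums → ratio → logarithm → sign test) can be sketched as follows. The AWGN likelihood model, the toy (4, 2) repetition codebook, and the noise variance are assumptions for illustration only:

```python
import itertools
import math

import numpy as np

# Assumed toy (4, 2) repetition codebook; 0 -> +1, 1 -> -1 symbols.
infowords = [tuple(b) for b in itertools.product([0, 1], repeat=2)]
codewords = [1 - 2 * np.repeat(x, 2) for x in infowords]

def bit_llrs(r, codewords, infowords, sigma2=0.5):
    """Per-bit reliability per claims 21-23. For an assumed AWGN
    channel, exp(-||r - y||^2 / (2 * sigma2)) is proportional to
    P(r | y) (means 306). For each information bit position, sum
    these values over codewords with a zero (means 314) and with a
    one (means 316) in that position, take the ratio of the sums
    (means 318), and take its logarithm (means 320)."""
    r = np.asarray(r, dtype=float)
    p = [math.exp(-float(np.sum((r - y) ** 2)) / (2 * sigma2))
         for y in codewords]
    s = []
    for i in range(len(infowords[0])):
        num = sum(pj for pj, x in zip(p, infowords) if x[i] == 0)
        den = sum(pj for pj, x in zip(p, infowords) if x[i] == 1)
        s.append(math.log(num / den))
    return s

s = bit_llrs([-0.9, -1.1, 0.8, 1.2], codewords, infowords)
# Sign rule of claim 23 (means 334/336): s[0] < 0, so bit 0 is more
# likely a one; s[1] > 0, so bit 1 is more likely a zero.
```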
24. The decoder of claim 20 wherein the means for determining comprises:
first means (322) for minimizing the distance between the received vector r and certain ones of the codewords y having a zero in the given information bit position;
second means (324) for minimizing the distance between the received vector r and certain ones of the codewords y having a one in the given information bit position; and means (326) for subtracting the result of the first minimization from the result of the second minimization.
25. The decoder of claim 24 further including:
means (334) for indicating that a zero is more likely if the result of subtraction is greater than zero; and means (336) for indicating that a one is more likely if the result of the subtraction is less than zero.
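The minimum-distance variant of claims 24-25 can be sketched in the same way; the codebook below is again an assumed toy (4, 2) repetition code, not the patent's:

```python
import itertools

import numpy as np

# Assumed toy (4, 2) repetition codebook; 0 -> +1, 1 -> -1 symbols.
infowords = [tuple(b) for b in itertools.product([0, 1], repeat=2)]
codewords = [1 - 2 * np.repeat(x, 2) for x in infowords]

def distance_reliability(r, codewords, infowords):
    """Claims 24-25: for each information bit position, minimize the
    squared distance between r and the codewords having a zero
    (means 322) and a one (means 324) in that position, then
    subtract the first minimum from the second (means 326). Per
    claim 25, a positive result means a zero is more likely
    (means 334), a negative result a one (means 336)."""
    r = np.asarray(r, dtype=float)
    s = []
    for i in range(len(infowords[0])):
        d0 = min(float(np.sum((r - y) ** 2))
                 for y, x in zip(codewords, infowords) if x[i] == 0)
        d1 = min(float(np.sum((r - y) ** 2))
                 for y, x in zip(codewords, infowords) if x[i] == 1)
        s.append(d1 - d0)
    return s

s = distance_reliability([-0.9, -1.1, 0.8, 1.2], codewords, infowords)
# s[0] < 0 (bit 0 more likely a one), s[1] > 0 (bit 1 more likely a zero).
```

This is the usual max-log approximation of the summed-likelihood ratio in claims 21-23: only the closest zero-codeword and the closest one-codeword contribute.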
26. The decoder of claim 20 wherein the means for determining comprises:
first means (328) for maximizing the correlation between the received vector r and certain ones of the codewords y having a zero in the given information bit position;
second means (330) for maximizing the correlation between the received vector r and certain ones of the codewords y having a one in the given information bit position; and means (332) for subtracting the result of the second maximization from the result of the first maximization.
27. The decoder of claim 26 further including:
means (334) for indicating that a zero is more likely if the result of subtraction is greater than zero; and means (336) for indicating that a one is more likely if the result of the subtraction is less than zero.
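The correlation variant of claims 26-27 replaces the distance minimizations with correlation maximizations; for constant-energy codewords the two are equivalent up to sign. Again the (4, 2) repetition codebook is an assumption for the sketch:

```python
import itertools

import numpy as np

# Assumed toy (4, 2) repetition codebook; 0 -> +1, 1 -> -1 symbols.
infowords = [tuple(b) for b in itertools.product([0, 1], repeat=2)]
codewords = [1 - 2 * np.repeat(x, 2) for x in infowords]

def correlation_reliability(r, codewords, infowords):
    """Claims 26-27: for each information bit position, maximize the
    correlation <r, y> over codewords with a zero (means 328) and
    with a one (means 330) in that position, then subtract the
    second maximum from the first (means 332). Per claim 27, a
    positive result indicates a zero is more likely (means 334), a
    negative result a one (means 336)."""
    r = np.asarray(r, dtype=float)
    s = []
    for i in range(len(infowords[0])):
        c0 = max(float(np.dot(r, y))
                 for y, x in zip(codewords, infowords) if x[i] == 0)
        c1 = max(float(np.dot(r, y))
                 for y, x in zip(codewords, infowords) if x[i] == 1)
        s.append(c0 - c1)
    return s

s = correlation_reliability([-0.9, -1.1, 0.8, 1.2], codewords, infowords)
# s[0] < 0 (bit 0 more likely a one), s[1] > 0 (bit 1 more likely a zero),
# agreeing in sign with the distance-based reliability of claims 24-25.
```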
28. The decoder as in claim 20 wherein the plurality of codewords y comprise Hadamard codewords.
29. The decoder as in claim 20 wherein the plurality of codewords y comprise Reed-Solomon codewords.
30. The decoder as in claim 20 further comprising:
means (258) for processing the generated reliability vector s to identify the presence of errors;
means (262) for comparing a number of the identified errors to a predetermined threshold; and means (266) for signaling a rejection of the received data transmission if the number of identified errors exceeds the predetermined threshold.
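One plausible reading of claims 30-31 is sketched below: symbols whose reliability magnitude falls below a floor are counted as likely errors, and the frame is rejected (triggering a retransmission request) when the count exceeds a threshold. The floor value, the threshold value, and the error criterion itself are assumptions here; the claims do not fix them:

```python
def screen_frame(s, reliability_floor=1.0, max_errors=1):
    """Claims 30-31 sketch: process the reliability vector s to
    identify likely errors (means 258) as entries with magnitude
    below an assumed floor, compare the error count to a
    predetermined threshold (means 262), and signal rejection,
    i.e. a retransmission request, when it is exceeded (means 266).
    The specific floor and threshold are arbitrary for this sketch."""
    errors = sum(1 for si in s if abs(si) < reliability_floor)
    return "reject" if errors > max_errors else "accept"

print(screen_frame([4.0, -0.2, 0.1]))   # two weak symbols -> "reject"
print(screen_frame([4.0, -3.0, 2.5]))   # all confident -> "accept"
```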
31. The decoder of claim 30 further including means for requesting re-transmission of the data if the number of identified errors exceeds the predetermined threshold.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US08/699,101 | 1996-08-16 | ||
US08/699,101 US5968198A (en) | 1996-08-16 | 1996-08-16 | Decoder utilizing soft information output to minimize error rates |
PCT/US1997/014260 WO1998007247A1 (en) | 1996-08-16 | 1997-08-13 | Decoder utilizing soft information output to minimize error rates |
Publications (1)
Publication Number | Publication Date |
---|---|
CA2263444A1 true CA2263444A1 (en) | 1998-02-19 |
Family
ID=24807935
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002263444A Abandoned CA2263444A1 (en) | 1996-08-16 | 1997-08-13 | Decoder utilizing soft information output to minimize error rates |
Country Status (10)
Country | Link |
---|---|
US (1) | US5968198A (en) |
EP (1) | EP0919086B1 (en) |
JP (1) | JP2000516415A (en) |
KR (1) | KR20000029992A (en) |
CN (1) | CN1232589A (en) |
AU (1) | AU722457B2 (en) |
CA (1) | CA2263444A1 (en) |
DE (1) | DE69712492T2 (en) |
TW (1) | TW338859B (en) |
WO (1) | WO1998007247A1 (en) |
Families Citing this family (120)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100498752B1 (en) * | 1996-09-02 | 2005-11-08 | 소니 가부시끼 가이샤 | Apparatus and method for receiving data using bit metrics |
CA2234008A1 (en) * | 1998-04-06 | 1999-10-06 | Wen Tong | Encoding methods and apparatus |
US6145110A (en) * | 1998-06-22 | 2000-11-07 | Ericsson Inc. | Digital data decoder that derives codeword estimates from soft data |
FI105734B (en) | 1998-07-03 | 2000-09-29 | Nokia Networks Oy | Automatic retransmission |
EP1125202B1 (en) * | 1998-10-30 | 2003-06-18 | Infineon Technologies AG | Storage device for storing data and method for operating storage devices for data storage |
US6477200B1 (en) | 1998-11-09 | 2002-11-05 | Broadcom Corporation | Multi-pair gigabit ethernet transceiver |
US6381726B1 (en) * | 1999-01-04 | 2002-04-30 | Maxtor Corporation | Architecture for soft decision decoding of linear block error correcting codes |
DE19912825C1 (en) * | 1999-03-22 | 2000-08-10 | Siemens Ag | Method of detecting data symbols for mobile radio systems, e.g. GSM systems |
WO2002013418A1 (en) * | 2000-08-03 | 2002-02-14 | Morphics Technology, Inc. | Flexible tdma system architecture |
US6731700B1 (en) * | 2001-01-04 | 2004-05-04 | Comsys Communication & Signal Processing Ltd. | Soft decision output generator |
US7570576B2 (en) * | 2001-06-08 | 2009-08-04 | Broadcom Corporation | Detection and mitigation of temporary (bursts) impairments in channels using SCDMA |
US7673223B2 (en) * | 2001-06-15 | 2010-03-02 | Qualcomm Incorporated | Node processors for use in parity check decoders |
US6633856B2 (en) * | 2001-06-15 | 2003-10-14 | Flarion Technologies, Inc. | Methods and apparatus for decoding LDPC codes |
US6938196B2 (en) * | 2001-06-15 | 2005-08-30 | Flarion Technologies, Inc. | Node processors for use in parity check decoders |
US7349691B2 (en) * | 2001-07-03 | 2008-03-25 | Microsoft Corporation | System and apparatus for performing broadcast and localcast communications |
CN1149803C (en) | 2001-09-30 | 2004-05-12 | 华为技术有限公司 | Data retransmission method based on bit conversion |
US20030123595A1 (en) * | 2001-12-04 | 2003-07-03 | Linsky Stuart T. | Multi-pass phase tracking loop with rewind of future waveform in digital communication systems |
US7164734B2 (en) * | 2001-12-04 | 2007-01-16 | Northrop Grumman Corporation | Decision directed phase locked loops (DD-PLL) with excess processing power in digital communication systems |
US20030128777A1 (en) * | 2001-12-04 | 2003-07-10 | Linsky Stuart T. | Decision directed phase locked loops (DD-PLL) with multiple initial phase and/or frequency estimates in digital communication systems |
US6826235B2 (en) * | 2002-01-04 | 2004-11-30 | Itran Communications Ltd. | Robust communications system utilizing repetition code and cumulative decoder associated therewith |
US8078808B2 (en) * | 2002-04-29 | 2011-12-13 | Infineon Technologies Ag | Method and device for managing a memory for buffer-storing data blocks in ARQ transmission systems |
US7035361B2 (en) | 2002-07-15 | 2006-04-25 | Quellan, Inc. | Adaptive noise filtering and equalization for optimal high speed multilevel signal decoding |
US7003536B2 (en) * | 2002-08-15 | 2006-02-21 | Comsys Communications & Signal Processing Ltd. | Reduced complexity fast hadamard transform |
US6961888B2 (en) * | 2002-08-20 | 2005-11-01 | Flarion Technologies, Inc. | Methods and apparatus for encoding LDPC codes |
US7237180B1 (en) | 2002-10-07 | 2007-06-26 | Maxtor Corporation | Symbol-level soft output Viterbi algorithm (SOVA) and a simplification on SOVA |
US7209867B2 (en) * | 2002-10-15 | 2007-04-24 | Massachusetts Institute Of Technology | Analog continuous time statistical processing |
AU2003287628A1 (en) * | 2002-11-12 | 2004-06-03 | Quellan, Inc. | High-speed analog-to-digital conversion with improved robustness to timing uncertainty |
US20040157626A1 (en) * | 2003-02-10 | 2004-08-12 | Vincent Park | Paging methods and apparatus |
BR0318142A (en) * | 2003-02-26 | 2006-02-07 | Flarion Technologies Inc | Smooth information conversion for iterative decoding |
US6957375B2 (en) * | 2003-02-26 | 2005-10-18 | Flarion Technologies, Inc. | Method and apparatus for performing low-density parity-check (LDPC) code operations using a multi-level permutation |
US20070234178A1 (en) * | 2003-02-26 | 2007-10-04 | Qualcomm Incorporated | Soft information scaling for interactive decoding |
US20040190661A1 (en) * | 2003-03-26 | 2004-09-30 | Quellan, Inc. | Method and system for equalizing communication signals |
CN100483952C (en) * | 2003-04-02 | 2009-04-29 | 高通股份有限公司 | Extracting soft information in a block-coherent communication system |
US7231557B2 (en) * | 2003-04-02 | 2007-06-12 | Qualcomm Incorporated | Methods and apparatus for interleaving in a block-coherent communication system |
US7434145B2 (en) * | 2003-04-02 | 2008-10-07 | Qualcomm Incorporated | Extracting soft information in a block-coherent communication system |
US8196000B2 (en) * | 2003-04-02 | 2012-06-05 | Qualcomm Incorporated | Methods and apparatus for interleaving in a block-coherent communication system |
US7308057B1 (en) | 2003-06-05 | 2007-12-11 | Maxtor Corporation | Baseline wander compensation for perpendicular recording |
US7349496B2 (en) * | 2003-06-27 | 2008-03-25 | Nortel Networks Limited | Fast space-time decoding using soft demapping with table look-up |
US7085282B2 (en) * | 2003-07-01 | 2006-08-01 | Thomson Licensing | Method and apparatus for providing forward error correction |
DE112004001455B4 (en) | 2003-08-07 | 2020-04-23 | Intersil Americas LLC | Cross-talk cancellation method and system |
US7804760B2 (en) | 2003-08-07 | 2010-09-28 | Quellan, Inc. | Method and system for signal emulation |
US20050050119A1 (en) * | 2003-08-26 | 2005-03-03 | Vandanapu Naveen Kumar | Method for reducing data dependency in codebook searches for multi-ALU DSP architectures |
JP4510832B2 (en) | 2003-11-17 | 2010-07-28 | ケラン インコーポレイテッド | Method and system for antenna interference cancellation |
US7237181B2 (en) * | 2003-12-22 | 2007-06-26 | Qualcomm Incorporated | Methods and apparatus for reducing error floors in message passing decoders |
US7616700B2 (en) | 2003-12-22 | 2009-11-10 | Quellan, Inc. | Method and system for slicing a communication signal |
US7281192B2 (en) * | 2004-04-05 | 2007-10-09 | Broadcom Corporation | LDPC (Low Density Parity Check) coded signal decoding using parallel and simultaneous bit node and check node processing |
GB0410617D0 (en) * | 2004-05-12 | 2004-06-16 | Ttp Communications Ltd | Path searching |
DE102004025826A1 (en) * | 2004-05-24 | 2005-12-22 | Micronas Gmbh | Method for recovering a bit sequence from QPSK or QAM symbols |
US20050289433A1 (en) * | 2004-06-25 | 2005-12-29 | Itschak Weissman | Discrete universal denoising with error correction coding |
US7269781B2 (en) | 2004-06-25 | 2007-09-11 | Hewlett-Packard Development Company, L.P. | Discrete universal denoising with reliability information |
US7395490B2 (en) * | 2004-07-21 | 2008-07-01 | Qualcomm Incorporated | LDPC decoding methods and apparatus |
US7346832B2 (en) * | 2004-07-21 | 2008-03-18 | Qualcomm Incorporated | LDPC encoding methods and apparatus |
US7127659B2 (en) * | 2004-08-02 | 2006-10-24 | Qualcomm Incorporated | Memory efficient LDPC decoding methods and apparatus |
WO2006043013A1 (en) * | 2004-10-22 | 2006-04-27 | Nortel Networks Limited | Fast space-time decoding using soft demapping with table look-up |
US7522883B2 (en) | 2004-12-14 | 2009-04-21 | Quellan, Inc. | Method and system for reducing signal interference |
US7725079B2 (en) | 2004-12-14 | 2010-05-25 | Quellan, Inc. | Method and system for automatic control in an interference cancellation device |
US7971131B1 (en) | 2005-05-06 | 2011-06-28 | Hewlett-Packard Development Company, L.P. | System and method for iterative denoising and error correction decoding |
US7434146B1 (en) | 2005-05-06 | 2008-10-07 | Hewlett-Packard Development Company, L.P. | Denoising and error correction for finite input, general output channel |
US7673222B2 (en) * | 2005-07-15 | 2010-03-02 | Mediatek Incorporation | Error-correcting apparatus including multiple error-correcting modules functioning in parallel and related method |
US7603591B2 (en) * | 2005-07-19 | 2009-10-13 | Mediatek Incorporation | Apparatus selectively adopting different determining criteria in erasure marking procedure when performing decoding process, and method thereof |
US8286051B2 (en) | 2005-07-15 | 2012-10-09 | Mediatek Inc. | Method and apparatus for burst error detection and digital communication device |
WO2007127369A2 (en) | 2006-04-26 | 2007-11-08 | Quellan, Inc. | Method and system for reducing radiated emissions from a communications channel |
GB2441376B (en) * | 2006-09-01 | 2009-08-05 | Toshiba Res Europ Ltd | Wireless communication apparatus |
ATE479265T1 (en) * | 2007-03-21 | 2010-09-15 | Marvell Israel Misl Ltd | USF CODING |
US8365040B2 (en) | 2007-09-20 | 2013-01-29 | Densbits Technologies Ltd. | Systems and methods for handling immediate data errors in flash memory |
US8694715B2 (en) | 2007-10-22 | 2014-04-08 | Densbits Technologies Ltd. | Methods for adaptively programming flash memory devices and flash memory systems incorporating same |
US8254244B2 (en) | 2007-10-30 | 2012-08-28 | Qualcomm Incorporated | Arrangement and method for transmitting control information in wireless communication systems |
US8607128B2 (en) * | 2007-12-05 | 2013-12-10 | Densbits Technologies Ltd. | Low power chien-search based BCH/RS decoding system for flash memory, mobile communications devices and other applications |
US8341335B2 (en) | 2007-12-05 | 2012-12-25 | Densbits Technologies Ltd. | Flash memory apparatus with a heating system for temporarily retired memory portions |
WO2009074978A2 (en) | 2007-12-12 | 2009-06-18 | Densbits Technologies Ltd. | Systems and methods for error correction and decoding on multi-level physical media |
US8972472B2 (en) | 2008-03-25 | 2015-03-03 | Densbits Technologies Ltd. | Apparatus and methods for hardware-efficient unbiased rounding |
DE102008040797B4 (en) * | 2008-07-28 | 2010-07-08 | Secutanta Gmbh | Method for receiving a data block |
US8458574B2 (en) | 2009-04-06 | 2013-06-04 | Densbits Technologies Ltd. | Compact chien-search based decoding apparatus and method |
US8819385B2 (en) | 2009-04-06 | 2014-08-26 | Densbits Technologies Ltd. | Device and method for managing a flash memory |
US8995197B1 (en) | 2009-08-26 | 2015-03-31 | Densbits Technologies Ltd. | System and methods for dynamic erase and program control for flash memory device memories |
US9330767B1 (en) | 2009-08-26 | 2016-05-03 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Flash memory module and method for programming a page of flash memory cells |
US8730729B2 (en) | 2009-10-15 | 2014-05-20 | Densbits Technologies Ltd. | Systems and methods for averaging error rates in non-volatile devices and storage systems |
US8724387B2 (en) | 2009-10-22 | 2014-05-13 | Densbits Technologies Ltd. | Method, system, and computer readable medium for reading and programming flash memory cells using multiple bias voltages |
US9037777B2 (en) | 2009-12-22 | 2015-05-19 | Densbits Technologies Ltd. | Device, system, and method for reducing program/read disturb in flash arrays |
US8745317B2 (en) | 2010-04-07 | 2014-06-03 | Densbits Technologies Ltd. | System and method for storing information in a multi-level cell memory |
US8510639B2 (en) | 2010-07-01 | 2013-08-13 | Densbits Technologies Ltd. | System and method for multi-dimensional encoding and decoding |
US8964464B2 (en) | 2010-08-24 | 2015-02-24 | Densbits Technologies Ltd. | System and method for accelerated sampling |
US9063878B2 (en) | 2010-11-03 | 2015-06-23 | Densbits Technologies Ltd. | Method, system and computer readable medium for copy back |
US8850100B2 (en) | 2010-12-07 | 2014-09-30 | Densbits Technologies Ltd. | Interleaving codeword portions between multiple planes and/or dies of a flash memory device |
US8990665B1 (en) | 2011-04-06 | 2015-03-24 | Densbits Technologies Ltd. | System, method and computer program product for joint search of a read threshold and soft decoding |
US8996790B1 (en) | 2011-05-12 | 2015-03-31 | Densbits Technologies Ltd. | System and method for flash memory management |
US9396106B2 (en) | 2011-05-12 | 2016-07-19 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Advanced management of a non-volatile memory |
US9195592B1 (en) | 2011-05-12 | 2015-11-24 | Densbits Technologies Ltd. | Advanced management of a non-volatile memory |
US9501392B1 (en) | 2011-05-12 | 2016-11-22 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Management of a non-volatile memory module |
US9110785B1 (en) | 2011-05-12 | 2015-08-18 | Densbits Technologies Ltd. | Ordered merge of data sectors that belong to memory space portions |
US9372792B1 (en) | 2011-05-12 | 2016-06-21 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Advanced management of a non-volatile memory |
US8996788B2 (en) | 2012-02-09 | 2015-03-31 | Densbits Technologies Ltd. | Configurable flash interface |
US8947941B2 (en) | 2012-02-09 | 2015-02-03 | Densbits Technologies Ltd. | State responsive operations relating to flash memory cells |
US8892986B2 (en) | 2012-03-08 | 2014-11-18 | Micron Technology, Inc. | Apparatuses and methods for combining error coding and modulation schemes |
US8996793B1 (en) | 2012-04-24 | 2015-03-31 | Densbits Technologies Ltd. | System, method and computer readable medium for generating soft information |
US8838937B1 (en) | 2012-05-23 | 2014-09-16 | Densbits Technologies Ltd. | Methods, systems and computer readable medium for writing and reading data |
US8879325B1 (en) | 2012-05-30 | 2014-11-04 | Densbits Technologies Ltd. | System, method and computer program product for processing read threshold information and for reading a flash memory module |
US9921954B1 (en) | 2012-08-27 | 2018-03-20 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Method and system for split flash memory management between host and storage controller |
US9368225B1 (en) | 2012-11-21 | 2016-06-14 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Determining read thresholds based upon read error direction statistics |
US9069659B1 (en) | 2013-01-03 | 2015-06-30 | Densbits Technologies Ltd. | Read threshold determination using reference read threshold |
US9136876B1 (en) | 2013-06-13 | 2015-09-15 | Densbits Technologies Ltd. | Size limited multi-dimensional decoding |
US9413491B1 (en) | 2013-10-08 | 2016-08-09 | Avago Technologies General Ip (Singapore) Pte. Ltd. | System and method for multiple dimension decoding and encoding a message |
US9786388B1 (en) | 2013-10-09 | 2017-10-10 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Detecting and managing bad columns |
US9348694B1 (en) | 2013-10-09 | 2016-05-24 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Detecting and managing bad columns |
US9397706B1 (en) | 2013-10-09 | 2016-07-19 | Avago Technologies General Ip (Singapore) Pte. Ltd. | System and method for irregular multiple dimension decoding and encoding |
US9536612B1 (en) | 2014-01-23 | 2017-01-03 | Avago Technologies General Ip (Singapore) Pte. Ltd | Digital signaling processing for three dimensional flash memory arrays |
US10120792B1 (en) | 2014-01-29 | 2018-11-06 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Programming an embedded flash storage device |
US20150247928A1 (en) * | 2014-02-28 | 2015-09-03 | Texas Instruments Incorporated | Cooperative location sensor apparatus and system for low complexity geolocation |
US9542262B1 (en) | 2014-05-29 | 2017-01-10 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Error correction |
US9892033B1 (en) | 2014-06-24 | 2018-02-13 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Management of memory units |
US9584159B1 (en) | 2014-07-03 | 2017-02-28 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Interleaved encoding |
US9972393B1 (en) | 2014-07-03 | 2018-05-15 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Accelerating programming of a flash memory module |
US9449702B1 (en) | 2014-07-08 | 2016-09-20 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Power management |
US9524211B1 (en) | 2014-11-18 | 2016-12-20 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Codeword management |
US10305515B1 (en) | 2015-02-02 | 2019-05-28 | Avago Technologies International Sales Pte. Limited | System and method for encoding using multiple linear feedback shift registers |
US9489976B2 (en) | 2015-04-06 | 2016-11-08 | Seagate Technology Llc | Noise prediction detector adaptation in transformed space |
US10628255B1 (en) | 2015-06-11 | 2020-04-21 | Avago Technologies International Sales Pte. Limited | Multi-dimensional decoding |
US9851921B1 (en) | 2015-07-05 | 2017-12-26 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Flash memory chip processing |
US9954558B1 (en) | 2016-03-03 | 2018-04-24 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Fast decoding of data stored in a flash memory |
CN106846008B (en) * | 2016-12-27 | 2021-06-29 | 北京五八信息技术有限公司 | Business license verification method and device |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4573155A (en) * | 1983-12-14 | 1986-02-25 | Sperry Corporation | Maximum likelihood sequence decoder for linear cyclic codes |
GB8628821D0 (en) * | 1986-12-02 | 1987-01-07 | Plessey Co Plc | Data transmission systems |
US5392299A (en) * | 1992-01-15 | 1995-02-21 | E-Systems, Inc. | Triple orthogonally interleaved error correction system |
US5457704A (en) * | 1993-05-21 | 1995-10-10 | At&T Ipm Corp. | Post processing method and apparatus for symbol reliability generation |
US5442627A (en) * | 1993-06-24 | 1995-08-15 | Qualcomm Incorporated | Noncoherent receiver employing a dual-maxima metric generation process |
1996
- 1996-08-16 US US08/699,101 patent/US5968198A/en not_active Expired - Lifetime

1997
- 1997-08-13 JP JP10509998A patent/JP2000516415A/en active Pending
- 1997-08-13 KR KR1019997001260A patent/KR20000029992A/en not_active Application Discontinuation
- 1997-08-13 CN CN97198561A patent/CN1232589A/en active Pending
- 1997-08-13 EP EP97937236A patent/EP0919086B1/en not_active Expired - Lifetime
- 1997-08-13 AU AU39799/97A patent/AU722457B2/en not_active Ceased
- 1997-08-13 CA CA002263444A patent/CA2263444A1/en not_active Abandoned
- 1997-08-13 WO PCT/US1997/014260 patent/WO1998007247A1/en not_active Application Discontinuation
- 1997-08-13 DE DE69712492T patent/DE69712492T2/en not_active Expired - Lifetime
- 1997-08-15 TW TW086111774A patent/TW338859B/en active
Also Published As
Publication number | Publication date |
---|---|
TW338859B (en) | 1998-08-21 |
DE69712492D1 (en) | 2002-06-13 |
US5968198A (en) | 1999-10-19 |
EP0919086B1 (en) | 2002-05-08 |
CN1232589A (en) | 1999-10-20 |
KR20000029992A (en) | 2000-05-25 |
EP0919086A1 (en) | 1999-06-02 |
AU3979997A (en) | 1998-03-06 |
AU722457B2 (en) | 2000-08-03 |
WO1998007247A1 (en) | 1998-02-19 |
DE69712492T2 (en) | 2002-11-28 |
JP2000516415A (en) | 2000-12-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
AU722457B2 (en) | Decoder utilizing soft information output to minimize error rates | |
US5606569A (en) | Error correcting decoder and decoding method for receivers in digital cellular communication systems | |
US6427219B1 (en) | Method and apparatus for detecting and correcting errors using cyclic redundancy check | |
EP0914719B1 (en) | Method and apparatus for detecting communication signals having unequal error protection | |
EP1897259B1 (en) | A decoder and a method for determining a decoding reliability indicator | |
EP0632613A1 (en) | Method and apparatus for data recovery in a radio communication system | |
AU4655997A (en) | Error correction with two block codes | |
EP1166514B1 (en) | Method and receiver for receiving and decoding signals modulated with different modulation methods | |
EP3994799B1 (en) | Iterative bit flip decoding based on symbol reliabilities | |
EP0931382B1 (en) | Method and apparatus for decoding block codes | |
US20020108090A1 (en) | Blind transport format detection of turbo-coded data | |
US6732321B2 (en) | Method, apparatus, and article of manufacture for error detection and channel management in a communication system | |
EP0910907A1 (en) | Method and apparatus for concatenated coding of mobile radio signals | |
WO2002037693A2 (en) | Reliable detection of a transport format identifier in a transport format identification field of a digital communication system | |
US7937643B1 (en) | Mobile communication device and data reception method | |
US6438121B1 (en) | Recognition and utilization of auxiliary error control transmissions | |
CN101141229A (en) | Apparatus and method for detecting puncture position | |
US7388522B2 (en) | Sequentially decoded low density parity coding (LDPC) forward error correction (FEC) in orthogonal frequency division modulation (OFDM) systems | |
US6677865B1 (en) | Method and configuration for decoding information | |
CN113366872A (en) | LPWAN communication protocol design using parallel concatenated convolutional codes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
EEER | Examination request | ||
FZDE | Discontinued |