WO2003044962A2 - Viterbi convolutional coding method and apparatus - Google Patents

Viterbi convolutional coding method and apparatus

Info

Publication number
WO2003044962A2
WO2003044962A2 (PCT/US2002/036998)
Authority
WO
WIPO (PCT)
Prior art keywords
state
stage
decoding
digital signal
viterbi
Prior art date
Application number
PCT/US2002/036998
Other languages
French (fr)
Other versions
WO2003044962A3 (en)
Inventor
Saeid Safavi
Afshin Niktash
Behzad Mohebbi
Fadi Kurdahi
Original Assignee
Morpho Technologies
Priority date
Filing date
Publication date
Application filed by Morpho Technologies filed Critical Morpho Technologies
Priority to AU2002357739A priority Critical patent/AU2002357739A1/en
Publication of WO2003044962A2 publication Critical patent/WO2003044962A2/en
Publication of WO2003044962A3 publication Critical patent/WO2003044962A3/en

Classifications

    • H - ELECTRICITY
    • H03 - ELECTRONIC CIRCUITRY
    • H03M - CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 - Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37 - Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/39 - Sequence estimation, i.e. using statistical methods for the reconstruction of the original codes
    • H03M13/3961 - Arrangements of methods for branch or transition metric calculation
    • H03M13/41 - Sequence estimation using the Viterbi algorithm or Viterbi processors
    • H03M13/4107 - Viterbi processors implementing add, compare, select [ACS] operations
    • H03M13/413 - tail biting Viterbi decoding
    • H03M13/4161 - Viterbi processors implementing path management
    • H03M13/4169 - implementing path management using traceback
    • H03M13/4192 - implementing path management using combined traceback and register-exchange
    • H03M13/65 - Purpose and implementation aspects
    • H03M13/6502 - Reduction of hardware complexity or efficient processing
    • H03M13/6569 - Implementation on processors, e.g. DSPs, or software implementations
    • H03M13/6577 - Representation or format of variables, register sizes or word-lengths and quantization
    • H03M13/6583 - Normalization other than scaling, e.g. by subtraction
    • H03M13/6586 - Modulo/modular normalization, e.g. 2's complement modulo implementations

Definitions

  • The number of cycles required in stage 3 can also be reduced, as the up-low bits do not need to be packed, and the survivor path does not need to be stored at every iteration but only in every nth iteration.
  • One possible drawback of this approach can be found at stage 4.
  • the re-ordering (re-shuffling) stage is more time-consuming due to re-ordering of the decoded bit registers.
  • the optimum n is 16, in which a single register per state is used for decoded bits. Up to a 35% reduction in the number of cycles required can be realized.
  • Figure 12 shows the hybrid method using a single 16 bit register for a decoded bit sequence of each state.
  • the decoder processing can be performed on the received sequence as a whole, or the original frame can be segmented prior to processing.
  • the latter case would require a sliding window approach in which state metrics computation of segment (window) i + 1 will be done in parallel to the trace back computation of segment i as shown in Fig.13 (i.e. overlap between windows).
  • an alternative approach to a sliding window is provided which eliminates the need for overlap during metric calculation. This approach is based on direct metric transfer between consecutive sub-segments.
  • each segment within a frame is divided into non-overlapping sub-segments which are processed sequentially by direct metric transfer.
  • the data frames are first buffered and then applied to the RCs 206 configured as the Viterbi decoder.
  • the buffer length is the segment length plus survivor depth of the decoder.
  • the Viterbi decoder performs a standard Viterbi algorithm by computing path metrics stage by stage until the end of sequence is reached.
  • the received data sequence is then traced back using the present method which consumes up to about 20% less cycles as compared to conventional trace back methods.
  • the next sub-segment would use the survivor metrics of a previous sub-segment as its initial condition.
  • path metrics in the add, compare and select (ACS) stage grow gradually stage by stage. Due to finite arithmetic precision, an overflow changes the survivor path selection and hence decoding may become invalid. There should be a normalization operation to rescale all path metrics to avoid this problem.
  • Reset Redundancy is introduced into the input sequence in order to force the survivor sequences to merge after some number of ACS recursions for each state. Using a small block size, so that the path metrics cannot grow beyond the 16-bit precision of the registers, is also an alternative.
  • Difference Metric ACS The algorithm is reformulated to keep track of differences between metrics for each pair of states.
  • Variable shift After some fixed number of recursions, the minimum survivor path is subtracted from all the survivor metrics.
  • Modulo Normalization Use the two's complement representation of the branch and survivor metrics and modulo arithmetic during ACS operations.
  • A block diagram of the modulo approach is shown in figure 14; a brief code sketch of this comparison is given after this list. The optimization methods discussed above can be applied to the initial mapping.
  • the conceptual flow chart of the optimized mapping is shown in figure 15. As can be seen, there is a new stage 0 for loading a state number for every register allocated to decoded bits.
  • stage 0 For each state there is at least one register for path metrics and another register for decoded bits. Initial state numbers are loaded to bits 8-15 of each decoded bits register at this stage. As 8 bits are used for state index and the rest of the 8 bits for decoded bits of 8 subsequent trellis stages, stage 0 is executed once per 8 iterations.
  • Stage 2 is modified for subtraction instead of comparison to comply with modulo normalization.
  • Applying the hybrid trace back and register exchange method there is no need in stage 3 to store survivor paths. Instead, first the path metrics as well as decoded bits are reordered to move to a new state in stage 4, and then the decoded bits registers of all states (once it is full) are stored. The frequency of execution of stage 3 will now be once every 8 trellis stages. However the amount of data is roughly equivalent to 256 16-bit registers.
  • trace back stage as shown in figure 13, there are three trace back sections. Section D is associated with overlapped tailing stages. The decoded bits are not stored, and will be overwritten by the next block. The middle part however is the final decoded bit section and the result is stored. Also the A part, corresponding to the tail part of previous block, is now used to store the decoded bits of heading part.
  • mappings can be used to perform parallel Viterbi decoding processes on multiple blocks of RCs. To do this, the mapping can be changed so that only a small part of the RC array 102 is assigned to one Viterbi decoding. That is, there can be more states associated with every RC 206.
  • FIG. 16 illustrates the effect of parallel Viterbi execution on cycle count, for a Viterbi decoding process with constraint length of 7 and coding rate of 1/2.
  • the dark area shows the cases that cannot be efficiently implemented on the rDSP due to a shortage of registers.
  • fewer RCs are used for each parallel Viterbi. Hence the number of registers grows and the cycle count improves.
  • FIG. 17 shows the state assignment to the RCs. Every two rows of RCs perform a separate Viterbi decoding, as shown.
  • the first step is row-wise between 2 rows of each row pair, and the rest are column-wise, and the same for all rows. However, in the last permutation, every RC has proper states, but the register orders may be incorrect. Extra registers can be used in intermediate moves to eventually achieve a proper order of register- states.
  • Another alternative mapping method uses a limited number of RCs for Viterbi decoding. This can be the result of using an RC array with fewer RCs in order to reduce power consumption and reduce area or footprint of the array.
  • the preferred mapping includes assigning eight registers for eight states. Hence, two rows of an RC array can accommodate 128 states, and the operations can be simply re-executed on the next 128 states.
  • the hybrid trace back method may not be efficient in this case.
  • the path metrics are stored at every iteration into memory and there is no benefit of reducing the frequency of execution of stage 3.
  • the portion of cycles for trace back is very small compared to that of other cases.
  • the extra burden of the hybrid method on shuffling stage is now important.
  • the trace back method with survivor path accumulation discussed above with reference to stage 2 and 3 of the preliminary mapping, is applicable. Other optimization methods may be used as before.
  • the shuffling stage is different in this alternative approach and is illustrated in figure 19.
  • the number of cycles for data shuffling in mapped algorithm is 27. But the total cycles of stage 4 is 110 cycles, and most of the cycles will be used for data movement from and to the frame buffer. The total number of cycles is therefore 4.7 times that of the basic mapping scheme. The total memory usage is less, as the volume of data stored for survivor path is roughly half (i.e. no need to store the index).
  • the evaluation is based on an encoded bits block size of 210 and an overlap of 96 as before.
  • a series of simulations were performed on MATLAB and MULATE to study the performance of the above implementation.
  • the encoded outputs are assumed as antipodal signals. At the receiver end, these levels are received in noise (AWGN channel assumption).
  • a soft input Viterbi decoder is implemented in which the received data is first quantized (with an 8-bit quantizer) and then applied to the Viterbi decoder. Compared to the hard decision, the soft technique results in better performance of the Viterbi algorithm, since it better estimates the noise. The hard decision introduces a significant amount of quantization noise prior to execution of the Viterbi algorithm.
  • the soft input data to the Viterbi decoder can be represented in unsigned or 2's complement format, depending on the quantizer design.
  • the quantizer is assumed to be linear with a dynamic range matching its input data.
  • Figure 20 summarizes the MATLAB simulation results for frame lengths of 210 and 2100 for both 8-bit soft and hard Viterbi decoders. Hard and soft Viterbi decoder results are presented as measures of upper and lower bit error rate (BER) bounds. Soft decoding has a 2 dB gain in signal-to-noise ratio (SNR) as compared to hard decoding at BERs of about 1e-5. In addition, there is no significant performance difference between segments of 210 bits and 2100 bits.
  • the simulation result of MULATE is illustrated in figure 21.
  • the BER of MULATE is extracted out of a simulated 400 random packets for SNR 1-3 dB and 8000 for SNR 4 dB.
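As a brief illustrative sketch of the modulo normalization mentioned above (the function name is an assumption, not code from the patent): with two's-complement registers the path metrics are allowed to wrap around, and the ACS comparison is performed on the wrapped difference, which remains valid as long as the spread between competing metrics stays below half the register range.

    #include <stdint.h>

    /* Hypothetical modulo-normalization comparison: path metrics accumulate in
     * 16-bit two's-complement registers and may wrap.  Because the spread
     * between any two competing metrics is bounded well below 2^15, the sign
     * of the wrapped difference still identifies the larger metric, so no
     * explicit rescaling of the metrics is needed. */
    static int modulo_greater_equal(uint16_t m_a, uint16_t m_b)
    {
        return (int16_t)(m_a - m_b) >= 0;   /* valid even after wrap-around */
    }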

Abstract

A method and apparatus for executing a Viterbi decoding routine, in which the routine (input) is mapped to an array of interconnected reconfigurable processing elements (D). The processing elements function in parallel, and pass results to other processing elements to reduce the number of processing steps for executing the Viterbi decoding routine. Accordingly, the present invention may be used to perform the decoding routine with any number of constraint lengths and code rates, and be independent of a specific communication standard. Further, the present invention reduces power consumption and area in the use of circuits for performing the coding routine.

Description

VITERBI CONVOLUTIONAL CODING METHOD AND APPARATUS
BACKGROUND OF THE INVENTION
This patent application claims priority from U.S. Provisional Patent
Application No. 60/332,398, filed November 16, 2001, entitled "VITERBI CONVOLUTIONAL CODING METHOD AND APPARATUS." This application is also related to U.S. Patent No. 6,448,910 to Lu and assigned to Morpho Technologies, Inc., entitled "METHOD AND APPARATUS FOR CONVOLUTION ENCODING AND VITERBI DECODING OF DATA THAT UTILIZE A CONFIGURABLE PROCESSOR TO CONFIGURE A PLURALITY OF RE-CONFIGURABLE PROCESSING ELEMENTS," which is incorporated by reference herein for all purposes.
The present invention generally relates to digital encoding and decoding. More particularly, this invention relates to a method and apparatus for executing a Viterbi convolutional coding algorithm using a multi-dimensional array of programmable elements.
Convolutional encoding is widely used in digital communication and signal processing to protect transmitted data against noise. Convolutional encoding is a technique that systematically adds redundancy to a bitstream of data. Input bits to a convolutional encoder are convolved in a way in which each bit can influence the output more than once.
The so-called second and third generation (2G/3G) communication standards IS-95, CDMA2000, WCDMA and TD-SCDMA use convolutional codes having a constraint length of 9 with different code rates. The rate of the encoder is the ratio of the number of input bits to output bits of the encoder. For example, CDMA2000 has code rates of 1/2, 1/3, 1/4 and 1/6, while WCDMA/TD-SCDMA have code rates of 1/2 and 1/3. The Global System for Mobile (GSM) standard uses a constraint length of 5, and IEEE 802.11a employs convolutional encoders which use a constraint length of 7. Figures 1A and 1B show simplified block diagrams of WCDMA convolutional encoders with respective code rates of 1/2 and 1/3. Convolutional encoding involves the modulo-2 addition of selected taps of a data sequence that is serially time-delayed by a number of delay elements (D) or shift registers. The length of the data sequence delay is equal to K-1, where K is the number of stages in each shift register, also called the constraint length of the code.
Each input bit enters a shift register/delay element, and the output is derived by combining the bits in the shift register/delay element in a way determined by the structure of the encoder in use. Thus, every bit that is transmitted influences the same number of outputs as there are stages in the shift register. The output bits are transmitted through a communication channel and are decoded by employing a decoder at the receiving end.
One approach for decoding a convolutional encoded bit stream at a receiver is to use a Viterbi algorithm. The Viterbi algorithm operates by finding the most likely state transition sequence in a state diagram. In a decoding process, the Viterbi algorithm includes the following decoding steps: 1) Branch Metrics Calculation; 2) Add-Compare and Select; and 3) Survivor Paths Storage. Survivor paths decoding is carried out using two possible approaches: Trace Back or Register-Exchange. These steps and associated approaches will be explained in further detail.
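For illustration only (this is not the patent's RC-array implementation; the function names, the toy constraint length K = 3, the generator polynomials, and the sample soft inputs are all assumptions), the following self-contained C sketch walks through the three steps named above for a small rate-1/2 code:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Toy soft-decision Viterbi decoder for a K = 3, rate-1/2 code
     * (generators G0 = 111b, G1 = 101b), showing branch metric calculation,
     * add-compare-select, and survivor path storage with trace back. */

    #define K        3
    #define NSTATES  (1 << (K - 1))     /* 4 trellis states */
    #define MAXLEN   64

    static int code_bit(int state, int bit, int g)
    {
        int reg = (bit << (K - 1)) | state;    /* new input bit plus K-1 memory bits */
        int out = 0;
        for (int i = 0; i < K; i++)
            out ^= ((reg >> i) & 1) & ((g >> i) & 1);
        return out;                            /* one encoder output bit */
    }

    int main(void)
    {
        const int G[2]    = { 0x7, 0x5 };      /* generator polynomials */
        const int rx[][2] = { {+3, +3}, {-2, +4}, {+1, -3}, {-4, -4} };  /* soft symbols */
        const int nbits   = 4;

        int metric[NSTATES] = { 0, -1000, -1000, -1000 };   /* start in state 0 */
        int next_metric[NSTATES];
        uint8_t surv[MAXLEN][NSTATES];         /* (input bit, predecessor) per state/stage */

        for (int t = 0; t < nbits; t++) {
            for (int s = 0; s < NSTATES; s++) next_metric[s] = -1000000;
            for (int s = 0; s < NSTATES; s++) {
                for (int b = 0; b <= 1; b++) {
                    int ns = (b << (K - 2)) | (s >> 1);        /* next state */
                    int bm = 0;                                /* branch metric: correlation */
                    for (int j = 0; j < 2; j++)
                        bm += (code_bit(s, b, G[j]) ? +1 : -1) * rx[t][j];
                    int cand = metric[s] + bm;                 /* add */
                    if (cand > next_metric[ns]) {              /* compare and select */
                        next_metric[ns] = cand;
                        surv[t][ns] = (uint8_t)((b << 2) | s); /* store survivor info */
                    }
                }
            }
            memcpy(metric, next_metric, sizeof(metric));
        }

        int state = 0;                                         /* trace back from best end state */
        for (int s = 1; s < NSTATES; s++)
            if (metric[s] > metric[state]) state = s;

        int decoded[MAXLEN];
        for (int t = nbits - 1; t >= 0; t--) {
            decoded[t] = (surv[t][state] >> 2) & 1;            /* input bit of the winning branch */
            state      = surv[t][state] & (NSTATES - 1);       /* predecessor state */
        }
        for (int t = 0; t < nbits; t++) printf("%d", decoded[t]);
        printf("\n");
        return 0;
    }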
Convolutional encoding and decoding, and in particular Viterbi decoding, are processing-intensive, and consume large amounts of processing resources. Accordingly, there is a need for a system and method in which convolutional codes can be processed efficiently and at high speed. Further, there is a need for a platform for executing a method which can be used in any one of a number of current or future wireless communication standards.
BRIEF DESCRIPTION OF THE FIGURES
FIG. 1A shows a convolutional encoder for WCDMA with a code rate of 1/2.
FIG. 1B shows a convolutional encoder for WCDMA with a code rate of 1/3. FIG. 2 is a simplified block diagram of a reconfigurable digital signal processor for executing a Viterbi algorithm.
FIG. 3 is a detailed block diagram of a reconfigurable digital signal processor for executing a Viterbi algorithm. FIG. 4 is a trellis diagram illustrating a trace-back method.
FIG. 5 shows a register exchange method.
FIG. 6 shows a state diagram of a trellis for a Viterbi decoder employed in CDMA2000/WCDMA with a constraint length of 9 and a rate of 1/2.
FIG. 7 is a state diagram of an assignment to an 8X8 array of reconfigurable cells (RC array) for a Viterbi decoder employed in CDMA2000/WCDMA according to an embodiment.
FIG. 8 illustrates a collapse process for one row of the RC array.
FIG. 9 shows a data re-shuffle process for a column of the RC array.
FIG. 10 illustrates state path metrics locations after a column data re- shuffle within the RC array.
FIG. 11 shows a Viterbi flow chart for execution by an RC array, in accordance with an embodiment.
FIG. 12 shows a trace-back method in a hybrid approach.
FIG. 13 illustrates a sliding window method and a direct metric transfer method.
FIG. 14 is a block diagram of a modular comparison stage in ACS.
FIG. 15 is a flowchart of an optimized Viterbi method in accordance with an embodiment.
FIG. 16 is a table showing the effect on cycle count by parallel execution of multiple Viterbi decoders.
FIG. 17 is a state allocation table for four parallel Viterbi decoders.
FIG. 18 shows shuffling for a Viterbi decoding routine for IEEE 802.11a executed on two rows of an RC array according to an embodiment.
FIG. 19 shows shuffling for a Viterbi coding routine for WCDMA executed on two rows of an RC array according to an alternative embodiment.
FIG. 20 illustrates a software simulation of a bit error rate performance of one embodiment. FIG. 21 illustrates an actual simulation of bit error rate performance of a particular architecture.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Methods for decoding signals that have been encoded by a convolutional encoding scheme are disclosed herein. One method includes configuring a portion of an array of independently reconfigurable processing elements for performing a special Viterbi decoding algorithm. The method further includes executing the Viterbi decoding routine on data blocks received at the configured portion of the array of processing elements.
Figure 2 illustrates a simplified block diagram of a reconfigurable DSP (rDSP) 100 designed by Morpho Technologies, Inc., of Irvine CA, and the assignees hereof. The rDSP 100 includes a reconfigurable processing unit 102 comprising an array of reconfigurable processing cells (RCs). The rDSP 100 further includes a general-purpose reduced instruction set computer (RISC) processor 104 and a set of I/O interfaces 106, all of which can be implemented as a single chip. The RCs in the RC array 102 are coarse-grain, but also provide extensive support for key bit-level functions. The RISC processor 104 controls the operation of the RC array 102. The input/output (I/O) interfaces 106 handle data transfers between external devices and the rDSP 100. Dynamic reconfiguration of the RC array can be done in one cycle by caching on the chip several contexts from an off-chip memory (not shown).
Figure 3 illustrates an rDSP chip 200 in greater detail, showing: the RISC processor 104 with its associated instruction cache 202 and memory controller 204; an RC array 102 comprising an 8-row by 8-column array of RCs 206; a context memory 208; a frame buffer 210; and a direct memory access 212 with its coupled memory controller 214. Each RC includes several functional units (e.g. MAC, arithmetic logic unit, etc.) and a small register file, and is preferably configured through a 32-bit context word, however other bit-lengths can be employed.
The frame buffer 210 acts as an internal data cache for the RC array 102, and can be implemented as a two-port memory. The frame buffer 210 makes memory accesses transparent to the RC array 102 by overlapping computation processes with data load and store processes. The frame buffer 210 can be organized as 8 banks of Nx16 frame buffer cells, where N can be sized as desired. The frame buffer 210 can thus provide 8 RCs (1 row or 1 column) with data, either as two 8-bit operands or one 16-bit operand, on every clock cycle. The context memory 208 is the local memory in which to store the configuration contexts of the RC array 102, much like an instruction cache. A context word from a context set is broadcast to all eight RCs 206 in a row or column. All RCs 206 in a row (or column) can be programmed to share a context word and perform the same operation. Thus the RC array 102 can operate in Single Instruction, Multiple Data (SIMD) form. For each row and each column there may be 256 context words that can be cached on the chip. The context memory can have a 2-port interface to enable the loading of new contexts from off-chip memory (e.g. flash memory) during execution of instructions on the RC array 102.
RC cells 206 in the array 102 can be connected in two levels of hierarchy. First, RCs 206 within each quadrant of 4x4 RCs can be fully connected in a row or column. Furthermore, RCs 206 in adjacent quadrants can be connected via "fast lanes", or high-speed interconnects, which can enable an RC 206 in a quadrant to broadcast its results to the RCs 206 in adjacent quadrants.
The RISC processor 104 handles general-purpose operations, and also controls operation of the RC array 102. It initiates all data transfers to and from the frame buffer 210, and configuration loads to the context memory 208 through a DMA controller 216. When not executing normal RISC instructions, the RISC processor 104 controls the execution of operations inside the RC array 102 every cycle by issuing special instructions, which broadcast SIMD contexts to RCs 206 or load data between the frame buffer 210 and the RC array 102. This makes programming simple, since one thread of control flow is running through the system at any given time. In accordance with an embodiment, a Viterbi algorithm is divided into a number of sub-processes or steps, each of which is executed by a number of RCs 206 of the RC array 102, and the output of which is used by the same or other RCs 206 in the array. Embodiments of the Viterbi decoding steps, configured generally for a digital signal processor and in some cases specifically for an rDSP, will now be described in greater detail.
BRANCH METRICS CALCULATION The branch metric is the squared Euclidean distance between the received noisy symbol, y_n (soft-decision valued), and the ideal noiseless output symbol of that transition for each state in the trellis. That is, the branch metric for the transition from state i to state j at trellis stage n is
B_{i,j}(n) = (y_n - C_{i,j}(n))^2    (Eq. 1-1)

where C_{i,j}(n) is the ideal noiseless output symbol of the transition from state i to state j. If M_j(n) is defined as the path metric for state j at trellis stage n, and {i} as the set of states that have transitions to state j, then the most likely path coming into state j at trellis stage n is the path that has the minimum metric:

M_j(n) = min_{i} [ M_i(n-1) + B_{i,j}(n) ]    (Eq. 1-2)

After the most likely transition to state j at trellis stage n is computed, the path metric M_j(n) for state j is updated, and this most likely transition, say from state m to state j, is appended to the survivor path of state m at stage (n-1) so as to form the survivor path of state j at stage n.
According to one embodiment, only the differences between the branch metrics in (Eq. 1-1) are evaluated; thus the y_n^2 terms in the branch metrics cancel when metrics are subtracted from each other during comparison. Further, for antipodal signaling, where the transition output symbols are binary and represented by -a and a (i.e. C_{i,j} ∈ {-a, a}, a > 0), the term C_{i,j}^2(n) is always a constant, a^2. Thus, comparing the branch metric between state i and state j, B_{i,j}(n), to the branch metric between state k and state j, B_{k,j}(n), yields:

B_{i,j}(n) - B_{k,j}(n) = -2 y_n C_{i,j}(n) + 2 y_n C_{k,j}(n)    (Eq. 1-3)

If all the branch metrics are divided by the constant 2a, the comparison results will remain unchanged. Thus, the branch metrics can be represented as:

B_{i,j}(n) = -y_n C_{i,j}(n) / a    (Eq. 1-4)

Therefore, only negation operations are required to compute the branch metrics. For example, if the ideal symbol is (0,1) and the received noisy symbol is (y_n, y_{n+1}), then the branch metric is y_n + (-y_{n+1}). If B'_{i,j}(n) is defined as the negation of this quantity,

B'_{i,j}(n) = y_n C_{i,j}(n) / a = { +y_n if C_{i,j}(n) = a;  -y_n if C_{i,j}(n) = -a }    (Eq. 1-5)
Accordingly, the maximum path metrics can be chosen, which gives the maximum confidence of the path.
ADD, COMPARE AND SELECT After the branch metrics associated with each transition are calculated, they will be added to the previously accumulated path metric of the source state of the transition to build candidate path metrics. Thus for every next state there will be 2 incoming paths, with two different path metrics. The new accumulated path metric of each next state is the path metric with maximum likelihood, which in a preferred case is the maximum of the two candidate path metrics.
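As a purely illustrative sketch (the function name, types, and metric ordering are assumptions, not the patent's RC context code), the add-compare-select operation for one next state fed by two predecessor states can be written as:

    #include <stdint.h>

    /* Hypothetical ACS step for one next state fed by two predecessor states.
     * pm_upper, pm_lower: accumulated path metrics of the two predecessors.
     * bm_upper, bm_lower: branch metrics of the corresponding transitions.
     * Returns the new path metric and writes the survivor bit
     * (0 = upper branch survived, 1 = lower branch survived). */
    static int16_t acs(int16_t pm_upper, int16_t bm_upper,
                       int16_t pm_lower, int16_t bm_lower,
                       uint8_t *survivor_bit)
    {
        int16_t cand_upper = (int16_t)(pm_upper + bm_upper);
        int16_t cand_lower = (int16_t)(pm_lower + bm_lower);

        if (cand_upper >= cand_lower) {      /* keep the larger (maximum-likelihood) metric */
            *survivor_bit = 0;
            return cand_upper;
        }
        *survivor_bit = 1;
        return cand_lower;
    }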
SURVIVOR PATH STORAGE
The path metric associated with each state should be stored at each stage to be used for decoding. The amount of memory to be allocated for storage depends on whether a trace-back or register-exchange decoding scheme is used, as well as on the length of the block.
In a "trace-back" method, the survivor path of each state is stored. One bit is assigned to each state to indicate if the survivor branch is the upper or the lower path. Furthermore, the value of the accumulated branch metric is also stored for a next trellis stage. Using the one-bit information of each state, it is possible to trace back the survivor path starting from the final stage. The decoded output sequence can be obtained from the identified survivor path through the trellis. Figure 4 shows this method.
Figure 5 illustrates a "register exchange" method, in which a register is assigned to each state, and contains information bits for the survivor path from the initial state to the current state. The register keeps the partially decoded output sequence along the path. The register exchange approach eliminates the need to trace back, since the register of the final state contains the decoded output sequence. However the register exchange approach uses more hardware resources due to the need to copy the contents of all the registers in one stage to the next stage.
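For illustration only (the function name and the 32-bit register width are assumptions, not the patent's implementation), a register-exchange update for one trellis stage might look like the following; note that every state's register is copied each stage, which is the hardware cost mentioned above:

    #include <stdint.h>
    #include <string.h>

    #define NUM_STATES 256          /* K = 9 gives 2^(K-1) = 256 trellis states */

    /* Hypothetical register-exchange update for one trellis stage.
     * reg[s] holds the partially decoded bit sequence of the survivor path
     * ending in state s; survivor_from[s] is the predecessor state chosen by
     * the ACS step and decoded_bit[s] is the input bit implied by that transition. */
    static void register_exchange_stage(uint32_t reg[NUM_STATES],
                                        const uint16_t survivor_from[NUM_STATES],
                                        const uint8_t decoded_bit[NUM_STATES])
    {
        uint32_t next_reg[NUM_STATES];

        for (int s = 0; s < NUM_STATES; s++) {
            /* copy the predecessor's register and append the newly decoded bit */
            next_reg[s] = (reg[survivor_from[s]] << 1) | decoded_bit[s];
        }
        memcpy(reg, next_reg, sizeof(next_reg));
    }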
MAPPING OF THE VITERBI ALGORITHM
The Viterbi algorithm according to an embodiment is mapped to a selected subset of RCs 206 in the RC array 102. An exemplary mapping is based on K=9 and R=1/2. However, this approach is applicable for other K and R values. The same approach can also be adapted for a generic mapping, so that the same hardware can be used for different applications. The basic mapped code includes 6 stages, the development of which is discussed further below.
STATE ASSIGNMENTS TO RCS For the case of CDMA2000/WCDMA with a constraint length of 9 and a rate of 1/2, the state transitions can be represented in a trellis diagram as shown in Figure 6. The input and output of a convolutional encoder corresponding to this trellis diagram are stated for each branch. For example, 0/11 means that input 0 to the encoder will generate output 11 corresponding to polynomials G0, G1. As shown, the probable next states for every state pair are the same. The next states of present state S_i are:

next(S_i) = { S_j | j = 128t + floor(i/2), t = 0, 1 }    (Eq. 2-1)

Since there are 256 states in each trellis stage, each RC 206 will have 4 states. The present states and next states are assigned to the RCs as:

PresentStates(RC_i) = { S_{4i}, S_{4i+1}, S_{4i+2}, S_{4i+3} },  i ∈ {0, 1, ..., 63}    (Eq. 2-2)

NextStates(RC_i) = { next(S_{4i}), next(S_{4i+2}) },  i ∈ {0, 1, ..., 63}    (Eq. 2-3)

Figure 7 shows the assigned current and next states for each RC.
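Purely as an illustrative aid (not part of the patent; the function names are assumptions), Eq. 2-1 through Eq. 2-3 can be tabulated with a short C program. Note that next(S_{4i}) = next(S_{4i+1}) and next(S_{4i+2}) = next(S_{4i+3}), which is why only two next-state pairs are listed per RC:

    #include <stdio.h>

    #define K 9
    #define NUM_STATES (1 << (K - 1))    /* 256 states for constraint length 9 */

    /* Next states of present state s per Eq. 2-1: j = 128*t + floor(s/2), t = 0, 1 */
    static void next_states(int s, int next[2])
    {
        next[0] = (s >> 1);                      /* t = 0 */
        next[1] = (NUM_STATES / 2) + (s >> 1);   /* t = 1 */
    }

    int main(void)
    {
        /* Present states assigned to RC i per Eq. 2-2: S_{4i} .. S_{4i+3} */
        for (int rc = 0; rc < 64; rc++) {
            int nxt0[2], nxt2[2];
            next_states(4 * rc,     nxt0);   /* next(S_{4i}),   Eq. 2-3 */
            next_states(4 * rc + 2, nxt2);   /* next(S_{4i+2}), Eq. 2-3 */
            printf("RC%-2d present: %d..%d  next: %d,%d,%d,%d\n",
                   rc, 4 * rc, 4 * rc + 3, nxt0[0], nxt0[1], nxt2[0], nxt2[1]);
        }
        return 0;
    }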
STAGE 1: BRANCH METRICS CALCULATION The operation of branch metrics calculation is based on (Eq. 1-5) above. The incoming soft data y1, y2 are assumed to be in a group, which corresponds to the output data of the rate-1/2 encoder for a certain input. Exemplary computer code below shows the calculation:

    for (k = 0; k < FRAME_LENGTH; k++)
    {
        b00[k] = -y1[k] - y2[k];
        b01[k] = -y1[k] + y2[k];
        b10[k] = +y1[k] - y2[k];
        b11[k] = +y1[k] + y2[k];
    };

where b00[k] through b11[k] are branch metrics associated with convolutional encoder outputs 00 to 11, as shown in figure 6. Because b00[k] = -b11[k] and b01[k] = -b10[k], it can be further optimized for different RCs 206 as:

    for (k = 0; k < FRAME_LENGTH; k++)
    {
        b10[k] = y1[k] - y2[k];
        b11[k] = y1[k] + y2[k];
    };
As can be seen from figure 6, b00[k] through b11[k] have to be computed for every RC. Thus it is sufficient to calculate only b10[k] and b11[k] at every iteration and add them to, or subtract them from, the proper accumulated branch metrics in the ACS stage. In order to do the add or subtract on different RCs at the same time, a condition register is used with bits associated with the conditions required in each RC 206 through different stages. For example, RC 0 in Figure 7 has current states 0, 1, 2, 3. For the state group 0 and 1, it needs the branch metrics b11 and -b11; for the state group 2 and 3, it needs branch metrics -b10 and b10. But for RC 2 with current states 8, 9, 10, 11, the required branch metrics for group 0 (8, 9) are -b10 and b10, and for group 1 (10, 11) are b11 and -b11. This order further changes in other RCs.
The encoded data is assumed to be 8-bit signed, referred to as a soft input.
The operations in this stage, and required number of cycles, are:
Set flag based on pre-defined condition (cond 1): 1 cycle
Load y1[k], y2[k] to all of the RCs from the Frame Buffer and perform y1[k] (+/-) y2[k] based on flag: 1 cycle
Perform y1[k] (-/+) y2[k] based on flag: 1 cycle
STAGE 2: ADD, COMPARE & SELECT
In this stage, the proper branch metric is added to or subtracted from the current path metric of each present state; then, for every next state, the incoming path metrics to that state are compared, and the greater one is chosen as the new path metric of the next state. As there are 4 current and 4 next states associated with every RC 206, the incoming path metrics of each next state are examined one-by-one, 64 at a time, over the entire RC array 102.
Registers R0 to R3 are assigned for current state path metrics and are reused for the next state. The steps for computing path metrics of first 2 next states are as follows. The second group of next states can be updated with similar steps.
The following steps are applied to state 4K and state 4K+1
Set flag based on pre-defined condition (cond 2): 1 cycle
Reg 11 = reg 0 +/- Branch metrics 1: 1 cycle
Reg 12 = reg 0 -/+ Branch metrics 1: 1 cycle
Reg 0 = reg 1 -/+ Branch metrics 1: (r0 used as temp. reg) 1 cycle
Reg 8 = reg 1 +/- Branch metrics 1: 1 cycle
Set flag based on reg 0 - reg 11: 1 cycle
If flag=1, then reg 0 = reg 11 else reg 0 = reg 0: 1 cycle
If flag=1, then reg 5 = 0 else reg 5 = 1: 1 cycle
Set flag based on reg 8 - reg 12: 1 cycle
If flag=1, then reg 1 = reg 12 else reg 1 = reg 8: 1 cycle
If flag=1, then reg 6 = 0 else reg 6 = 1: 1 cycle
In this approach the result of add, compare, and select is used to update assigned next states of each RC 206 as well as to keep track of the survivor path using a single bit 0 or 1 associated with upper or lower previous state respectively. This approach has been modified for optimization purposes, and will be discussed further below.
STAGE 3: STORING SURVIVOR PATH
In this stage, the survivor path ending of each state is stored in the frame buffer 210. However, as there may be a single bit representing the survivor path of each state, the single bits are first packed into bytes and then the final 8 words (16 bits) are stored. Since each RC 206 has 4 bits of data needing to be stored in the frame buffer 210, the first two bits in RCs 206 in each column will collapse into a 16-bit data word. The second two bits will collapse into another 16-bit data word. The collapse procedure of the first column of RCs is shown in figure 9.
There are two steps to collect the path information bits in each RC 206. The first step is to collect the path information of state 0 through state 127, distributed in 64 RCs as shown in Figure 8, then the second step is to collect the information of states 128 to 255. The following sub-step shows the detailed procedure of each major step. In the following case, the contexts are broadcast to a row. The following procedures are used to collect the transition information of state 0 to 127.
Left shift by 14, 12, 10, 8, 6, 4, 2 for the col 0, 1 , 2, 3, 4, 5, 6, 7: 1 cycle
Assemble the col 0 and 1 , col 2 and 3, col 4 and 5, col 6 and 7 into four 4-bit data: 1 cycle Assemble the col 0 and 2, col 4 and 6 into two 8-bit data: 1 cycle
Assemble the col 0 and 4 into 16-bit data: 1 cycle Write out data: 1 cycle
The above procedure is repeated for the transition information of states 128 to 255.
The result is stored in the frame buffer 210. This stage can also be modified for optimization, which will be discussed below.
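A minimal sketch of the packing idea is shown below; the function name and the exact bit ordering are assumptions, and the RC array itself performs this with the shift-and-assemble context steps listed above rather than a loop:

    #include <stdint.h>

    /* Hypothetical illustration of the stage-3 "collapse": the first two survivor
     * bits held by each of the 8 RCs in a column are packed into one 16-bit word
     * before being written to the frame buffer.  bits[rc][0..1] are the two
     * single-bit survivor decisions of RC number rc in the column. */
    static uint16_t pack_column_survivors(const uint8_t bits[8][2])
    {
        uint16_t word = 0;

        for (int rc = 0; rc < 8; rc++) {
            /* each RC contributes 2 bits; RC 0 lands in the most significant bits,
             * mirroring the left-shift-by-14 step described above */
            word |= (uint16_t)((bits[rc][0] & 1u) << (15 - 2 * rc));
            word |= (uint16_t)((bits[rc][1] & 1u) << (14 - 2 * rc));
        }
        return word;
    }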
STAGE 4: STATE RE-ORDERING
In this step, the updated state metrics (next field) need to be moved into the original order (current field) as shown in figure 7, so that the same procedures can be applied to the next trellis stage. As the same registers are used for next state and present states, this step is applied to R0-R3. Re-ordering requires both column-wise context broadcast and row-wise context broadcast. The first and second steps are used to exchange the data in row-wise and column-wise modes, respectively. Figure 9 shows the data re-shuffle for the first group of state path metrics in the first column between different rows, in 2 clock cycles. Figure 10 shows the path metrics location in the RC array 102 after row data exchange. Since there are two groups of data in each RC 206, it will take 4 clock cycles to completely reshuffle between rows.
STAGE 5: FINDING MAXIMUM METRIC
In order to choose the most probable end state of the trellis, there could be a maximum finder stage to compare path metrics of all states and to pick the path metrics with greatest value. Although in convolutional encoding, there are usually zero tail bits appended to the end of input data to take the trellis to state "zero," if the segment size is large and a smaller block is used instead, then this stage may be beneficial.
In this stage, the path metrics of all states in each RC 206 are compared, the largest one is chosen, and its index is recorded. The comparison is then carried out between neighboring RCs 206 in each row, and finally between the largest values of the rows. As this stage may provide negligible performance improvement, it may be eliminated in other embodiments.
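The reduction performed here is, in essence, an argmax over the per-state path metrics. A plain sequential sketch is shown below; the in-row and in-column reduction on the RC array follows the same compare-and-keep-index idea, and the names used are assumptions.

    #include <stdint.h>
    #include <stddef.h>

    /* Find the state whose path metric is largest; return its index. */
    static size_t max_metric_state(const int16_t pm[], size_t num_states)
    {
        size_t best = 0;
        for (size_t s = 1; s < num_states; s++) {
            if (pm[s] > pm[best]) {   /* compare and keep the larger metric's index */
                best = s;
            }
        }
        return best;
    }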
TRACE BACK

This stage decodes the bits based on the survivor path ending at state 0 (or at the state with the maximum path metric). As the survivor paths of all states have been stored in the frame buffer 210, this stage moves backward from the last state to the first state, using the up-low bit of each state to find its previous state. The decoded bit corresponding to every state transition is also identified. The example computer program code below shows the execution of the trace back process:
state = '00000000';                          /* start trace back at state 0      */
next_addr = start_addr;
next_base = start_addr;
for (i = n-1; i >= 0; i--) {
    trans[i] = read_data @ next_addr;        /* packed survivor-bit word         */
    prev = (state & 127) << 1;               /* previous state, low bit pending  */
    trans_bit = (state & 128) >> 7;          /* decoded bit of this transition   */
    bitpos = (255 - state) % 8;              /* bit position inside the word     */
    branch = (trans[i] >> bitpos) & 1;       /* up-low survivor bit              */
    state = prev | branch;                   /* step back to the previous state  */
    next_base = next_base - 4;
    next_addr = next_base + state >> 6 + (state & 7);
}

RC ARRAY MAPPING OPTIMIZATION
To optimize the mapping, the execution flow is first examined. As shown in Figure 11, trace forward takes a total of 52 cycles. Stage five is executed once per block, so its contribution to the per-bit execution load is negligible. The trace back stage takes 18 cycles per bit, and there is an overhead of about 10% for index addressing and loops. Thus, employing the mapping shown in Figure 11 results in about 77 cycles per decoded bit.
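As a quick check, these counts combine as follows (the roughly 10% overhead applied to the 52 trace-forward cycles plus the 18 trace back cycles per bit):

    (52 + 18) x 1.10 = 77 cycles per decoded bit

which is consistent with the figure stated above.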
In this evaluation, the effect of block overlap is neglected. When the size of the input stream is large, the input sequence can be divided into small blocks. This reduces the delay between the input stream and the decoded output, and the memory assigned to survivor paths can be conserved. The partitioned blocks should have an overlap of about five constraint lengths to prevent errors in the decoding of the heading or tailing bits of each block. This will be discussed later in detail.

HYBRID REGISTER EXCHANGE AND TRACE-BACK

As shown in figure 11, the trace back stage takes up a large portion of the total number of cycles. As an alternative to trace back, a register-exchange method similar to that explained above can be used to decode each transmitted bit while doing trace forward.
In this approach, the transmitted bit associated with each transition from present state to next state, for all states, is decoded. This growing bit sequence is kept, so that after choosing the final state, the bit sequence associated with that state constitutes the decoded bits. However, this growing decoded bit sequence must be stored within the RCs 206 for each state; for large trellis sizes, this may become impractical. Furthermore, the sequence must be re-ordered as the next state in stage 4 is re-shuffled, so that it moves to the correct RC, which could make stage 4 complicated and time-consuming.

An alternative is to use a hybrid "register-exchange and trace-back" method. In this method, the bit sequence is kept for a certain number of stages n, then stored into memory. Consequently, instead of keeping the up-low bit in memory to find the correct survivor path, segments of decoded bits are kept for each path. In the trace back stage, after finding the survivor state, the decoded bits of the preceding n stages can be accessed. The trace back need not be done for every state; after finding one state and picking its n decoded bits, the method can jump to the nth preceding stage (present stage - n). This approach spreads the cost of a trace back iteration over n bits, so that the contribution of trace back cycles to the total cycles per decoded bit is reduced from 18 to 18/n, assuming that trace back requires 18 cycles per iteration.
The number of cycles required in stage 3 can also be reduced, as the up-low bits do not need to be packed, and the survivor path does not need to be stored at every iteration but only at every nth iteration. One possible drawback of this approach appears at stage 4: the re-ordering (re-shuffling) stage is more time consuming due to the re-ordering of the decoded-bit registers. In one embodiment, the optimum n is 16, in which case a single register per state is used for decoded bits. Up to a 35% reduction in the number of cycles required can be realized. Figure 12 shows the hybrid method using a single 16-bit register for the decoded bit sequence of each state. Note that in order to keep track of the survivor path, a way of recording the previous state at every nth stage is needed. Due to the re-ordering of this register between stages, the initial state of each register at the first stage is not known, and it may not be sufficient to include only a single up-low bit to specify the previous state. Therefore, 8 bits (the MSBs) of this register can be assigned to the index of the previous state, that is, one of 256 possible states. Although the need for a previous-state index decreases n from 16 to 8, it still reduces the total cycles by about 30%.
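A minimal sketch of how such a per-state 16-bit register might be packed and updated is shown below. This is an illustration only, not the RC context code; the bit ordering and all names are assumptions.

    #include <stdint.h>

    /* Per-state 16-bit hybrid register: bits 15..8 hold the index of the state
     * this path started from n stages ago, bits 7..0 accumulate up to n = 8
     * decoded bits, newest bit in the least significant position. */
    static uint16_t hybrid_init(uint8_t start_state)
    {
        return (uint16_t)start_state << 8;   /* previous-state index, no bits yet */
    }

    static uint16_t hybrid_push_bit(uint16_t reg, unsigned decoded_bit)
    {
        /* Shift only the low byte left and insert the new decoded bit. */
        uint8_t bits = (uint8_t)((reg & 0xFF) << 1) | (decoded_bit & 1);
        return (uint16_t)((reg & 0xFF00) | bits);
    }

    static uint8_t hybrid_prev_state(uint16_t reg)   { return (uint8_t)(reg >> 8); }
    static uint8_t hybrid_decoded_bits(uint16_t reg) { return (uint8_t)(reg & 0xFF); }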
SEGMENT OVERLAPPING IN TRACE BACK

In a typical Viterbi decoder, depending on the data frame size and the memory available for each specific implementation, the decoder processing can be performed on the received sequence as a whole, or the original frame can be segmented prior to processing. The latter case would require a sliding window approach in which the state metric computation of segment (window) i+1 is done in parallel with the trace back computation of segment i, as shown in Figure 13 (i.e., with overlap between windows). For optimum performance using an RC array 102, an alternative to the sliding window approach is provided which eliminates the need for overlap during metric calculation. This approach is based on direct metric transfer between consecutive sub-segments. More specifically, each segment within a frame is divided into non-overlapping sub-segments which are processed sequentially with direct metric transfer. The data frames are first buffered and then applied to the RCs 206 configured as the Viterbi decoder. The buffer length is the segment length plus the survivor depth of the decoder. The Viterbi decoder performs a standard Viterbi algorithm by computing path metrics stage by stage until the end of the sequence is reached. The received data sequence is then traced back using the present method, which consumes up to about 20% fewer cycles than conventional trace back methods. In addition, when sub-segments are not initialized (i.e., for the intermediate sub-segments), the next sub-segment uses the survivor metrics of the previous sub-segment as its initial condition.
This results in a reliable survivor calculation at the beginning of a new sub-segment with no need for overlap or initialization. By contrast, the sliding window approach applied to the segments avoids the unreliable period by introducing an overlap between consecutive segments; depending on the method, the overlap can be D (the survivor depth) or D+A (the survivor depth plus an acquisition period). At the same time, however, the present approach leads to Viterbi decoder performance that is virtually independent of the segment length, as illustrated in Figure 13. Therefore, small buffers can be used ahead of the RCs 206 configured as the Viterbi decoder, which can also reduce power consumption.
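The sub-segment loop with direct metric transfer can be sketched as follows. This is only a structural illustration under assumed names; viterbi_forward is a placeholder for the per-sub-segment ACS pass, not the RC mapping itself.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    #define NUM_STATES 256

    /* Placeholder forward pass: in a real decoder this performs the ACS updates
     * for one sub-segment, starting from the metrics passed in. */
    static void viterbi_forward(const int8_t *syms, size_t len,
                                int16_t pm[NUM_STATES])
    {
        (void)syms; (void)len; (void)pm;   /* sketch only */
    }

    static void decode_frame(const int8_t *syms, size_t sub_len, size_t num_subs)
    {
        int16_t pm[NUM_STATES];
        memset(pm, 0, sizeof pm);          /* first sub-segment starts from a known state */

        for (size_t s = 0; s < num_subs; s++) {
            /* Direct metric transfer: pm[] is deliberately not re-initialized,
             * so the survivor metrics of sub-segment s become the initial
             * condition of sub-segment s + 1 -- no overlap is needed. */
            viterbi_forward(syms + s * sub_len, sub_len, pm);
            /* ... trace back of sub-segment s proceeds here ... */
        }
    }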
BRANCH METRIC NORMALIZATION
The value of the path metrics in the add, compare and select (ACS) stage (stage 2) grows gradually stage by stage. Due to finite arithmetic precision, an overflow changes the survivor path selection and hence decoding may become invalid. A normalization operation is needed to rescale all path metrics to avoid this problem. Several methods of normalization are described below.

Reset: Redundancy is introduced into the input sequence in order to force the survivor sequences to merge after some number of ACS recursions for each state. Using a small block size, so that the path metrics cannot grow beyond the 16-bit precision of the registers, is also an alternative.

Difference Metric ACS: The algorithm is reformulated to keep track of the differences between metrics for each pair of states.

Variable shift: After some fixed number of recursions, the minimum survivor metric is subtracted from all the survivor metrics.

Fixed shift: When all survivor metrics become negative (or all positive), the survivor metrics are shifted up (or down) by a fixed amount.

Modulo Normalization: The branch and survivor metrics use two's complement representation and modulo arithmetic during the ACS operations. As the arithmetic logic unit (ALU) in an RC 206 preferably uses 2's complement representation, the implementation of modulo normalization can be most efficient. The comparison stage in the ACS is changed to a subtraction. A block diagram of the modulo approach is shown in figure 14.

The optimization methods discussed above can be applied to the initial mapping. The conceptual flow chart of the optimized mapping is shown in figure 15. As can be seen, there is a new stage 0 for loading a state number into every register allocated to decoded bits. For each state there is at least one register for path metrics and another register for decoded bits. Initial state numbers are loaded into bits 8-15 of each decoded-bits register at this stage. As 8 bits are used for the state index and the remaining 8 bits for the decoded bits of 8 subsequent trellis stages, stage 0 is executed once per 8 iterations.
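Since stage 2 of the optimized mapping replaces the ACS comparison with a two's-complement subtraction (as noted below), the modulo-normalized select can be sketched as follows. Register widths and names here are assumptions for illustration.

    #include <stdint.h>

    /* With modulo (two's complement) normalization, path metrics are allowed to
     * wrap around in a 16-bit register.  As long as the true difference between
     * competing metrics stays below half the register range, the sign of the
     * wrapped subtraction still identifies the larger metric. */
    static int modulo_greater(uint16_t a, uint16_t b)
    {
        return (int16_t)(uint16_t)(a - b) > 0;   /* signed view of the modulo difference */
    }

    /* ACS select using the modulo comparison. */
    static uint16_t acs_select(uint16_t cand_upper, uint16_t cand_lower,
                               uint8_t *survivor_bit)
    {
        if (modulo_greater(cand_lower, cand_upper)) {
            *survivor_bit = 1;                   /* lower previous state survives */
            return cand_lower;
        }
        *survivor_bit = 0;                       /* upper previous state survives */
        return cand_upper;
    }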
Stage 2 is modified to use subtraction instead of comparison, to comply with modulo normalization. With the hybrid trace back and register exchange method, there is no need to store survivor paths in stage 3. Instead, the path metrics as well as the decoded bits are first reordered to move to the new states in stage 4, and then the decoded-bits registers of all states (once full) are stored. The frequency of execution of stage 3 is now once every 8 trellis stages; however, the amount of data is roughly equivalent to 256 16-bit registers. In the trace back stage, as shown in figure 13, there are three trace back sections. Section D is associated with the overlapped tailing stages; its decoded bits are not stored and will be overwritten by the next block. The middle part, however, is the final decoded-bit section, and its result is stored. The A part, corresponding to the tail part of the previous block, is now used to store the decoded bits of the heading part.
The loops for these three sections are not shown in the flow chart in Figure 15. As discussed before, 8 decoded bits are fetched at every execution of the trace back loop. The trace back jumps from stage i to stage i-8 on the trellis diagram, and 1/8 of the trace back cycle count is reflected in the final cycles/bit figure.

MAPPING VARIATIONS
Although the previous sections generally describe an implementation of the Viterbi algorithm for K=9 and R=1/2, embodiments of this invention can be applied to other cases as well. For other encoding rates, only the first stage of the mapping needs to be changed: instead of reading two bytes, n bytes (for R=1/n) are read. Puncturing can also be applied at this stage for other rates. Other constraint lengths require different state assignments to the RC array, which can affect the implementation of the basic stages and consequently the cycles/bit figure.
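As one illustration of handling a punctured rate at this input stage, the received stream can be expanded back to the mother rate by inserting neutral soft values at the punctured positions. The sketch below and its puncturing pattern are an example only, not a pattern taken from this specification.

    #include <stdint.h>
    #include <stddef.h>

    /* Example: rate 2/3 derived from a rate-1/2 mother code with puncturing
     * pattern {1,1,1,0} (every 4th coded symbol is not transmitted).  Punctured
     * positions are filled with 0, a neutral soft value that does not bias the
     * branch metrics.  The caller must size 'out' for the depunctured length. */
    static size_t depuncture(const int8_t *rx, size_t rx_len, int8_t *out)
    {
        static const uint8_t pattern[4] = {1, 1, 1, 0};
        size_t o = 0, r = 0;
        while (r < rx_len) {
            if (pattern[o % 4]) {
                out[o++] = rx[r++];   /* transmitted symbol: copy as received */
            } else {
                out[o++] = 0;         /* punctured symbol: neutral erasure    */
            }
        }
        return o;                     /* number of symbols after depuncturing */
    }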
PARALLEL MULTI-BLOCK VITERBI
With access to multiple blocks of input encoded data, different mappings can be used to perform parallel Viterbi decoding processes on multiple blocks of RCs. To do this, the mapping can be changed so that only a small part of the RC array 102 is assigned to one Viterbi decoding. That is, there can be more states associated with every RC 206.
Parallel mapping is preferred if there are enough registers in each RC to accommodate more states. FIG. 16 illustrates the effect of parallel Viterbi execution on cycle count, for a Viterbi decoding process with constraint length of 7 and coding rate of 1/2. The dark area shows the cases that cannot be efficiently implemented on the rDSP due to a shortage of registers. As the parallelism increases, fewer RCs are used for each parallel Viterbi. Hence the number of registers grows and the cycle count improves.
It can also be seen that using more than one register per state for keeping decoded bits reduces the speed. Although using more registers leads to less frequent writing of decoded bits into the frame buffer, as well as fewer trace back loop executions per bit, shuffling these registers together with the state registers takes more cycles.
An implementation of a Viterbi algorithm for K=7, R=1/2 on 2 rows of RCs, for a total of four parallel decoding processes, includes similar stages as discussed above. Figure 17 shows the state assignment to the RCs. Every two rows of RCs perform a separate Viterbi decoding, as shown. The cycle breakdown is as follows:

■ Loop 1:
  ♦ Stage 0:
    ♦ Update working condition registers (1)
    ♦ Loop overhead (p)
  ♦ Stage 1:
    ♦ Reading Y1, Y2 (p)
    ♦ Split Y1, Y2 (1-2)
    ♦ ADD Y1 + Y2 (1)
    ♦ SUB Y1 - Y2 (1)
  ♦ Stage 2:
    ♦ Set flag for condition (p)
    ♦ Branch metrics computation (2*p)
    ♦ Set flag (p)
    ♦ New path metrics (p)
    ♦ Decoded bit detection (p)
    ♦ Store decoded bit (p)
  ♦ Stage 3:
    ♦ Store in FB every 16*m-1 cycles (p*(8*m+2)/(16*m-1))
  ♦ Stage 4:
    ♦ Shuffle (8*m+8)
■ Loop 2:
  ♦ Trace back:
    ♦ Once every 16*m-1 cycles (25*p/(16*m-1))
Here, p is the number of parallel Viterbi processes, and m is the number of registers assigned to decoded bits for each state. The reordering stage in this mapping uses a different permutation, illustrated in Figure 18, in which K=7 and p=4. The first step is row-wise between the two rows of each row pair, and the remaining steps are column-wise and the same for all rows. However, after the last permutation, every RC has the proper states, but the register order may be incorrect. Extra registers can be used in intermediate moves to eventually achieve a proper order of register-states.
Another alternative mapping method uses a limited number of RCs for Viterbi decoding. This can be the result of using an RC array with fewer RCs in order to reduce power consumption and the area or footprint of the array. The mapping method is basically similar to the parallel Viterbi decoding method discussed above. For a constraint length of K=7, the code is mostly the same as that of the previous section. However, the degree of parallelism changes, and as a result the cycles/bit figure will be several times higher.
For a constraint length of K=9, there may be insufficient storage in each RC to hold all of the states. Accordingly, it is necessary to load/store the path metrics from/to the frame buffer after each trellis stage. The preferred mapping assigns eight registers to eight states. Hence, two rows of an RC array can accommodate 128 states, and the operations can simply be re-executed on the next 128 states. The hybrid trace back method may not be efficient in this case: the path metrics are stored into memory at every iteration, so there is no benefit from reducing the frequency of execution of stage 3. In addition, the portion of cycles spent on trace back is very small compared to that of the other cases, while the extra burden of the hybrid method on the shuffling stage becomes significant. The trace back method with survivor path accumulation, discussed above with reference to stages 2 and 3 of the preliminary mapping, is applicable. Other optimization methods may be used as before.
The shuffling stage is different in this alternative approach and is illustrated in figure 19. There are four register exchanges between the two rows (left), and for each pair of registers in every row there are two shuffling steps similar to steps 2 and 3 of figure 18. Another series of similar steps is performed for the second set of 128 states, after storing the results of the first series and loading the second.
The number of cycles for data shuffling in the mapped algorithm is 27, but the total for stage 4 is 110 cycles, most of which are used for data movement to and from the frame buffer. The total number of cycles is therefore 4.7 times that of the basic mapping scheme. The total memory usage is lower, as the volume of data stored for the survivor paths is roughly half (i.e., there is no need to store the index). The evaluation is based on an encoded-bit block size of 210 and an overlap of 96, as before.
BIT ERROR RATES
A series of simulations was performed in MATLAB and MULATE to study the performance of the above implementation. In the simulations, the encoded outputs are assumed to be antipodal signals. At the receiver end, these levels are received in noise (an AWGN channel assumption). A soft-input Viterbi decoder is implemented in which the received data is first quantized (with an 8-bit quantizer) and then applied to the Viterbi decoder. Compared to hard decision, the soft technique results in better performance of the Viterbi algorithm, since it better estimates the noise; hard decision introduces a significant amount of quantization noise prior to execution of the Viterbi algorithm. In general, the soft input data to the Viterbi decoder can be represented in unsigned or 2's complement format, depending on the quantizer design. The quantizer is assumed to be linear with a dynamic range matching its input data.
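A minimal sketch of such an 8-bit linear quantizer for the soft inputs is shown below (two's-complement output; the gain and clipping limits are assumptions for illustration, not values taken from the simulations).

    #include <stdint.h>
    #include <math.h>

    /* Map a received antipodal sample (nominally +/-1 plus noise) to an 8-bit
     * two's-complement soft value.  The gain sets the dynamic range; samples
     * beyond it are clipped. */
    static int8_t quantize8(double sample, double gain)
    {
        double scaled = round(sample * gain);
        if (scaled > 127.0)  scaled = 127.0;    /* clip to the 8-bit range */
        if (scaled < -128.0) scaled = -128.0;
        return (int8_t)scaled;
    }

    /* Example: with gain = 64, a noiseless +1 maps to +64 and -1 maps to -64,
     * leaving headroom for noise before clipping. */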
It is also assumed that the data frame contains a minimum of 210 bits, as is the case for voice frames. The maximum frame length relates directly to the frame buffer size. Figure 20 summarizes the MATLAB simulation results for frame lengths of 210 and 2100 for both 8-bit soft and hard Viterbi decoders. The hard and soft Viterbi decoder results are presented as measures of the upper and lower bit error rate (BER) bounds. Soft decoding has a 2 dB gain in signal-to-noise ratio (SNR) compared to hard decoding at BERs of about 1x10^-5. In addition, there is no significant performance difference between segments of 210 bits and 2100 bits.
The simulation results for MULATE are illustrated in figure 21. The BER for MULATE is extracted from a simulation of 400 random packets for SNRs of 1-3 dB and 8000 packets for an SNR of 4 dB.
Other embodiments, combinations and modifications of this invention will occur readily to those of ordinary skill in the art in view of these teachings. Therefore, this invention is to be limited only by the following claims, which include all such embodiments and modifications when viewed in conjunction with the above specification and accompanying drawings.

Claims

WHAT IS CLAIMED IS:
1. In a digital signal processor having a local memory and a global memory, a hybrid register exchange and trace back method used for decoding convolutional encoded signals, comprising: accumulating segments of decoded bits associated with each survivor path for a number of trellis stages in the local memory.
2. The method of claim 1, further comprising transferring, after a number of trellis stages, said segments of decoded bits from the local memory to the global memory.
3. In a digital signal processor decoding convolutional encoded signals, wherein the digital signal processor includes a core processor and a plurality of reconfigurable processor cells arranged in a two dimensional array, a method for connecting segments of decoded bits associated with every survivor path comprising: assigning an initial state number to each segment of decoded bits corresponding to a survivor path; and buffering segments of the decoded bits within at least a portion of the plurality of reconfigurable processing cells.
4. In a digital signal processor executing a Viterbi algorithm for decoding convolutional encoded signals, wherein the digital signal processor comprises a core processor and a plurality of reconfigurable processor cells arranged in a two dimensional array, a method for normalizing path metrics associated with every survivor path at every trellis stage, comprising: executing a modulo arithmetic with at least a portion of the plurality of reconfigurable processor cells based on two's complement subtraction in an add, compare, and select (ACS) stage of the Viterbi algorithm.
5. In a digital signal processor, comprising a core processor and a plurality of reconfigurable processor cells arranged in a two dimensional array, a method for parallel decoding of convolutional encoded signals, comprising: assigning multiple portions of said plurality of reconfigurable processor cells to decode multiple segments of the convolutional encoded signals.
6. The method of claim 5, further comprising configuring at least one portion of said plurality of reconfigurable processor cells to decode convolutional encoded signals with variable constraint lengths and encoding rates.
7. In a digital signal processor, a method for reducing memory usage and computational overhead in decoding convolutional encoded signals, comprising: executing a combination of parallel and serial Viterbi decoding based on a sliding window and a direct metric transfer.
