Optica Publishing Group

Low-complexity LDPC decoding algorithms for ultra-high-order modulated signals

Open Access

Abstract

Low-complexity decoding algorithms and ultra-high-order modulation formats are necessary to meet data rate requirements in excess of 1 Tbps. The information bottleneck algorithm has been effectively applied to LDPC decoding in recent years, with performance comparable to that of the double-precision belief propagation technique. However, the application of information bottleneck decoding algorithms to ultra-high-order modulation formats has received little attention. Furthermore, the number of table lookups required for a single decoding iteration grows quadratically with the node degree, which is undesirable for optical communications. In this study, we present a low-complexity LDPC decoding technique for ultra-high-order modulated signals. First, the algorithm employs multivariate variables to build an information bottleneck framework, which introduces the processing required for applying the information bottleneck algorithm to 1024-QAM signals and the necessity of combining higher-order modulation formats with LDPC codes. The technique uses a bidirectional recursive network and the symmetry of quantized information to reuse the same set of tables, considerably reducing the number of table lookup operations required in the decoding process. A coherent optical communication system with 1024-QAM signals demonstrates that the proposed algorithm operates effectively. At a performance sacrifice of 0.2 ∼ 0.3 dB, the number of table lookup operations required for decoding is reduced from quadratic to linear in the node degree, which greatly reduces the decoding time required in optical communication.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The scale, communication rate, and transmission capacity of optical communication systems have been steadily increasing in recent years due to the emergence of new services such as big data, artificial intelligence, and the industrial Internet. It is now more crucial than ever to increase the efficiency and channel capacity of optical communication systems [1]. Because their theoretical performance approaches the Shannon limit, low-density parity-check (LDPC) codes will be used as forward error correction codes for next-generation optical fiber communication systems [2–4]. The information iteration of LDPC decoding algorithms currently falls into two primary categories. The first group consists of iterative algorithms that use arithmetic operations, such as the state-of-the-art min-sum (MS) algorithms, which directly use probabilistic information, such as log-likelihood ratios, in their computational results and intermediate processes. Another novel class of methods utilizes information bottleneck (IB) algorithms to replace arithmetic operations with purely integer lookup tables (LUT) [5–15]. By compressing the input data, the LUT maximizes the mutual information between the code bits and the associated data. Current IB decoding algorithms, however, lack any treatment of ultra-high-order modulation, even though ultra-high-order modulation is one of the required criteria for channel capacity improvement. Making the IB decoding method applicable to communication systems with ultra-high-order modulation formats therefore remains a challenging endeavor.

The IB-based decoding algorithm has lower decoding complexity than the iterative decoding technique because all complex function operations in the information updating process are condensed into simple table lookup operations. Although this algorithm performs well in terms of decoding performance and complexity, some information is double-counted in its local node operations, so the number of LUT lookup operations during the iterative process grows quadratically with the LDPC node degree ${d_c}$ [16–20], increasing the decoding delay. The authors of [20] split the input of each check node using a tree structure, which considerably reduces the number of LUT lookup operations, but the loss of mutual information degrades decoding performance [19]. The reuse of intermediate information is also proposed in [20] to reduce the number of LUT lookups in the sequential structure, but the number of lookup operations still grows quadratically with the node degree.

We describe a low-complexity LDPC decoding technique for ultra-high-order modulated signals in this research. The algorithm employs multivariate parameters to build an information bottleneck framework, covers the application of the IB decoding algorithm to 1024-QAM signals in detail, and discusses the need to integrate higher-order modulation with LDPC codes for channel capacity augmentation. We use a bidirectional recurrent network and the symmetry of quantized information to realize table multiplexing, making full use of the intermediate information and greatly reducing the number of LUT lookups in the sequential structure. This greatly reduces the decoding delay for the high-rate decoding required in optical communications. A coherent optical communication system based on 1024-QAM signals is built to verify the decoding algorithm’s performance. The experimental results show that the proposed decoding algorithm operates effectively in the communication system with 1024-QAM signals, and the number of LUT lookup operations required by the check and variable nodes of the sequential structure during a single iteration is reduced from $[({d_c} - 2)({d_c} + 3)]/2$ and $[({d_v} - 1)({d_v} + 2)]/2$ to $(2{d_c} - 4) + \left \lfloor {(2{d_c} - 3)/2} \right \rfloor$ and $(2{d_v} - 4) + \left \lceil {(2{d_v} - 1)/2} \right \rceil$ at the expense of about 0.2 $\sim$ 0.3 dB of performance.

2. Theoretical analysis and methodology

2.1 Calculation of bit information

For the higher-order modulation format of M-QAM, ${\log _2}M$ bits are mapped into the complex modulation symbol ${s_l} = s_l^{re} + js_l^{im}$. In this paper, the 10 bits represented by a single 1024-QAM symbol are labeled as ${c_v}(v \in \{ 0,1, \ldots,9\} )$. The first five bits represent the real part of the transmitted signal $s_l^{re}$, and the last five bits represent the imaginary part of the transmitted signal $s_l^{im}$.
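As an illustration, the labeling above can be sketched in code. The paper does not specify the bit-to-level labeling, so a Gray labeling onto the 32 PAM levels $\{-31, \ldots, 31\}$ per dimension is assumed here purely for illustration:

```python
# Hypothetical sketch of the symbol labeling: the 10 bits c_0..c_9 of one
# 1024-QAM symbol are split into two 5-bit halves for the real and
# imaginary parts.  The exact labeling is not specified in the text, so a
# Gray labeling onto the 32 PAM levels {-31, ..., 31} is assumed.

def gray_decode(bits):
    """Decode a Gray-coded bit sequence (MSB first) to an integer index."""
    acc, value = 0, 0
    for g in bits:
        acc ^= g                       # binary bit = XOR of the Gray prefix
        value = (value << 1) | acc
    return value

def qam1024_map(bits10):
    """Map 10 bits to one 1024-QAM symbol on the grid {-31, ..., 31}^2."""
    re = 2 * gray_decode(bits10[:5]) - 31   # first five bits: real part
    im = 2 * gray_decode(bits10[5:]) - 31   # last five bits: imaginary part
    return complex(re, im)
```

Since `gray_decode` is a bijection on five bits, the mapping places the 1024 bit patterns onto 1024 distinct constellation points.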

Since the real and imaginary parts of the 1024-QAM signal have the same statistical distribution, the conclusions drawn for the real part apply equally to the imaginary part. For the real part, the joint distribution $p(s_l^{re},\widetilde r_l^{re})$ of the transmitted and received signals can be calculated as [14]

$$p(s_l^{re},\widetilde r_l^{re}) = \frac{1}{{\sqrt {1024\pi {\sigma ^2}} }}{{\mathop{\rm e}\nolimits} ^{\left( { - \frac{{|\widetilde r_l^{re} - s_l^{re}{|^2}}}{{{\sigma ^2}}}} \right)}}$$
where ${\sigma ^2}$ is the variance of the signal noise, $s_l^{re}$ is the real part of the transmitted signal, and $\widetilde r_l^{re}$ is the real part of the received signal.

Since the IB algorithm is only applicable to the compression of discrete random variables, the channel output signal needs to be discretized. To begin, we perform a mean-square estimation of the amplitude of the received signal $\widetilde r_l^{re}$. Through this step, we can obtain an accurate estimate of the signal amplitude, which provides the basis for the subsequent limiting operation. Second, based on the results of the amplitude estimation, we restrict the received signal to the amplitude concentration region [−M,M] defined in the first step. This limiting operation aims to ensure that the amplitude information of the signal can be maximally concentrated and retained. Finally, to satisfy the requirements of the information bottleneck algorithm, we uniformly divide the limiting interval into n quantization intervals. Furthermore, based on the quantization interval range, $p(s_l^{re},\widetilde r_l^{re})$ is integrated in each interval to yield n discretized joint distributions, which are the inputs to the sequential IB technique in [12]. We can identify the mapping $r_l^{re}$ with the most mutual information between $s_l^{re}$ and $\widetilde r_l^{re}$ by continuously optimizing the quantized regions with the sequential IB technique.
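The discretization step above can be sketched as follows, under stated assumptions: the Gaussian in Eq. (1) has exponent $-|\widetilde r - s|^2/\sigma^2$ (i.e., variance $\sigma^2/2$) with equiprobable levels, and the level set, $\sigma^2$, clipping bound $M$, and interval count $n$ below are illustrative inputs:

```python
import math

# Sketch of the discretization: the received amplitude is clipped to
# [-M, M] and split into n uniform intervals; each interval probability
# is the integral of the Gaussian in Eq. (1), evaluated with erf.

def joint_dist(levels, sigma2, M, n):
    """Discretized joint distribution p(s_i, r_k) over n uniform intervals."""
    sigma = math.sqrt(sigma2)
    edges = [-M + 2.0 * M * k / n for k in range(n + 1)]

    def cdf(x, s):  # CDF of the Gaussian in Eq. (1), centered at level s
        return 0.5 * (1.0 + math.erf((x - s) / sigma))

    return [[(cdf(edges[k + 1], s) - cdf(edges[k], s)) / len(levels)
             for k in range(n)] for s in levels]
```

The resulting $n$ discretized joint distributions would then serve as the inputs that the sequential IB technique of [12] optimizes.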

It was pointed out in [15] that the conditional distribution $p({c_v}|s_l^{re})$ can be determined directly for a given bit labeling. In turn, the joint distribution between the mapping $r_l^{re}$ and the ${v^{th}}$ bit ${c_v}$ is obtained by Eq. (2), where $p(s_l^{re},r_l^{re})$ is generated by the sequential IB algorithm.

$$p({c_v},r_l^{re}) = \sum_{{s_{re}}} {p({c_v}|s_l^{re})p(s_l^{re},r_l^{re})}$$

The IB algorithm decoding process requires full integer symbols instead of double-precision decimals for information transfer and computation, so the integrity of the bit information represented by each integer is critical. The log-likelihood ratio L carried by the five bits of the real part of the 1024-QAM signal can be calculated as

$$p({c_v}|r_l^{re}) = \frac{{p({c_v},r_l^{re})}}{{p(r_l^{re})}}$$
$$L({c_v}|r_l^{re}) = \log \frac{{p({c_v} = 0|r_l^{re})}}{{p({c_v} = 1|r_l^{re})}}$$
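Eqs. (2)–(4) can be sketched in a few lines, assuming the labeling-induced conditional $p(c_v = 1\,|\,s)$ and the discretized joint $p(s, r_k)$ are given as lists; the shapes and toy inputs are illustrative:

```python
import math

# Minimal sketch of Eqs. (2)-(4): bit-wise joint distribution and LLR per
# quantization index, from the labeling conditional and the channel joint.

def bit_llrs(p_c1_given_s, p_s_r):
    """Return p(c_v = 1, r_k) and the LLR L(c_v | r_k) for every index k."""
    n_s, n_r = len(p_s_r), len(p_s_r[0])
    p_c1_r = [sum(p_c1_given_s[s] * p_s_r[s][k] for s in range(n_s))
              for k in range(n_r)]                        # Eq. (2)
    p_r = [sum(p_s_r[s][k] for s in range(n_s)) for k in range(n_r)]
    # Eq. (3) normalizes to p(c_v | r_k); Eq. (4) is the log-ratio of
    # c_v = 0 to c_v = 1.
    llr = [math.log((p_r[k] - p_c1_r[k]) / p_c1_r[k]) for k in range(n_r)]
    return p_c1_r, llr
```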

Figure 1 illustrates the value of L corresponding to each index of each bit associated with the real part of a 1024-QAM signal at ${\sigma ^2} = 0.1$. In the following, we refer to the LLR value corresponding to each bit index as bit information. Different bits sharing the same index have distinct values of L. If the obtained indexes were sent directly to the check node, the check node would lack the information corresponding to each bit, which would be fatal to the decoding performance.

Fig. 1. L of the corresponding index for the 1024-QAM real part. (a) $v = 0$. (b) $v = 1$. (c) $v = 2$. (d) $v = 3$. (e) $v = 4$.

2.2 Alignment of bits of information based on information bottlenecks

Previous research [15] proposed the concept of information alignment for 16-QAM signals but did not account for higher-order modulation formats. In this section, we utilize the properties of multivariate IB to describe in detail the use of the IB algorithm for 1024-QAM signals.

In information theory, we denote by $I({c_v};r_l^{re},v)$ the mutual information between the bit ${c_v}$ and the pair formed by the discrete quantized mapping $r_l^{re}$ and the bit index $v$. By the chain rule of mutual information, $I({c_v};r_l^{re},v)$ can be expressed as

$$I({c_v};r_l^{re},v) = I({c_v};v|r_l^{re}) + I({c_v};r_l^{re})$$
where $I({c_v};r_l^{re})$ is the mutual information of the quantized index $r_l^{re}$ and ${c_v}$. $I({c_v};v|r_l^{re})$ is the additional information obtained by the bit index in the case of known quantization mapping.

The purpose is to replace the original mapping of bit information with the new mapping. Therefore, Eq. (5) can also be expressed as

$$I({c_v};r_l^{re},v) \ge I({c_v};r_l^{re},{z_l}) = I({c_v};r_l^{re}|{z_l}) + I({c_v};{z_l})$$
where ${z_l}$ is the new mapping after information alignment. The purpose of this section is to maximally compress the mutual information of ${c_v}$ and $r_l^{re}$ into the new mapping ${z_l}$. From Eq. (6) it is easy to see that if $I({c_v};{z_l})$ is large enough, then $I({c_v};r_l^{re},v) \approx I({c_v};{z_l})$, which is consistent with the design goal of maximizing mutual information in the information bottleneck approach. Therefore, we transform the bit-information alignment problem into a mutual information maximization problem between ${c_v}$ and ${z_l}$. Specifically, the mutual information $I({c_v};{z_l})$ between the bit information ${c_v}$ and the new mapping ${z_l}$ is optimized so that as much as possible of the information between the same indexes of different bits and the LLR value is preserved. The description of the bit-information alignment problem in the information bottleneck framework is shown in Fig. 2. Here, $(r_l^{re},v)$ constitutes a multivariate observed random variable, ${c_v}$ is a correlated random variable, and ${z_l}$ is a compressed random variable. In practice, we strive to maximize the mutual information between the compressed variable ${z_l}$ and ${c_v}$ by retaining as much bit information as possible from the various symbol bits. Simultaneously, in order to obtain a concise representation of the observed variable, we strive to minimize the mutual information between the observed variable and the compressed variable.
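Evaluating candidate compressed mappings ${z_l}$ against this objective requires computing mutual information from a joint probability table; a minimal helper (in bits) is sketched below:

```python
import math

# Minimal helper for the objective above: mutual information (in bits)
# computed directly from a joint probability table p_xy[x][y].

def mutual_information(p_xy):
    px = [sum(row) for row in p_xy]               # marginal of X
    py = [sum(col) for col in zip(*p_xy)]         # marginal of Y
    return sum(p * math.log2(p / (px[x] * py[y]))
               for x, row in enumerate(p_xy)
               for y, p in enumerate(row) if p > 0.0)
```

Scoring a candidate mapping then amounts to evaluating this helper on the joint table $p({c_v},{z_l})$.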

Fig. 2. Information bottleneck framework for bit information alignment.

In the 1024-QAM real part signal, the joint distribution $p({c_v},r_l^{re},v)$ between $v$ and ${c_v}$ is derived by Eq. (7). Subsequently, the independence between the bit information is used to obtain the total probability distribution of the 5-bit information, which is the input of the sequential IB algorithm. Note that $p({c_v},r_l^{re}|v) = p({c_v},r_l^{re})$ since the information of the bit label $v$ is attached to ${c_v}$. The task of the sequential IB algorithm is to maximize the mutual information of the 5 bit messages restricted to the space ${z_l}$, obtaining the final mapping $p({z_l}|r_l^{re},v)(v \in \{ 0,1,2,3,4\} )$ such that $I({c_v};{z_l})$ is maximized. The probability distribution transmitted within the decoder will be replaced by $p({c_v},{z_l})$, which is calculated by Eq. (8). $p({c_v},{z_l})$ will be used as the input for constructing the LUT in the decoding process in the next section.

$$p({c_v},r_l^{re},v) = p({c_v},r_l^{re}|v)p(v) = p({c_v},r_l^{re})p(v)$$
$$p({c_v},{z_l}) = \sum_{v = 0}^{{v_{\max }}} {p(v)\sum_{r_l^{re} \in \tau } {p({z_l}|r_l^{re},v)p({c_v},r_l^{re}|v)} }$$
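Eq. (8) can be sketched directly. The deterministic alignment mapping $p({z_l}|r_l^{re},v)$ would come from the sequential IB algorithm; here the arguments are plain nested lists with illustrative shapes:

```python
# Sketch of Eq. (8), under stated assumptions: p_z_given_rv[v][r][z] holds
# the alignment mapping p(z | r, v), p_c_r_given_v[v][r] holds
# p(c_v = 1, r | v), and p_v is the prior on the bit index v.

def align_joint(p_z_given_rv, p_c_r_given_v, p_v):
    """Return p(c_v = 1, z), summed over bit indexes v and indexes r."""
    nz = len(p_z_given_rv[0][0])
    p_cz = [0.0] * nz
    for v, pv in enumerate(p_v):
        for r, pcr in enumerate(p_c_r_given_v[v]):
            for z in range(nz):
                p_cz[z] += pv * p_z_given_rv[v][r][z] * pcr
    return p_cz
```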

Figure 3 takes bit 0 and bit 1 as an example, showing the new mapping after applying the information bottleneck method for bit information alignment. The arrows indicate the correspondence between the mappings before and after alignment. Clearly, after this compression operation, almost all relevant information in bit 0 and bit 1 is maximally preserved and compressed in the new mapping ${p_1}({z_l}|r_l^{re},v)(v \in \{ 0,1\} )$.

Fig. 3. Merged mapping of $v = 0$ and $v=1$.

2.3 Performance simulation

In order to investigate the performance of different modulation formats, simulated BER versus Eb/No curves are obtained by choosing regular LDPC codes (2448,1224) with code rate 0.5 and transmitting the same number of symbols (25600). The results are shown in Fig. 4(a): the lowest BER achievable for a 1024-QAM signal is one order of magnitude higher than that of 16-QAM, while the number of transmitted information bits is 2.5 times higher for the same number of transmitted symbols. Clearly, ultra-high-order modulated signals are one of the necessary factors in high-capacity and high-rate communication systems. The role of bit alignment is illustrated in Fig. 4(b): without bit alignment, the IB decoding algorithm cannot guarantee correct decoding.

Fig. 4. (a) Performance of different modulation formats combining LDPC codes. (b) Performance of bit alignment.

3. Bidirectional information bottleneck decoding algorithm design

In the previous section, the processing required to apply the IB algorithm to the ultra-high-order modulation format was described, and the importance of high-order modulation for enhancing channel capacity and the necessity of bit alignment were analyzed. An actual optical communication system not only needs to ensure communication performance but also needs to consider transmission time. In this section, a bidirectional cyclic network is utilized to build a bidirectional information bottleneck decoder, which makes full use of the intermediate information and reduces the computational complexity of decoding.

3.1 Construction of the LUT

In [12], in order to solve the problem that the traditional IB algorithm occupies too much memory, it is proposed to split the overall lookup operation of a check node of degree ${d_c}$ into a serial connection of $({d_c} - 2)$ partial operations, as shown in Fig. 5. The construction technique is similar for all LUT; we use ${x_0}$ as an example to demonstrate the procedure. Assume that the bit corresponding to input message ${z_l}$ is ${b_l}$, the bit corresponding to ${t_i}$ is $b_i^{'}$, and the joint probability distribution of ${b_l}$ and ${z_l}$ is $p({b_l},{z_l})$. From [12], $b_i^{'} = \oplus _{l = 0}^{i + 1}{b_l}$, calculated for $i = 0$ as

$$p(b_0^{'},z_0,z_1) = \sum_{({b_0},{b_1}):b_0^{'} = {b_0} \oplus {b_1}} {p({b_0},z_0)} p({b_1},z_1) $$
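Eq. (9) can be sketched as a small convolution over the XOR constraint; the input joints $p({b_i},{z_i})$ are illustrative tables, which in the decoder would come from the alignment step of the previous section:

```python
# Sketch of Eq. (9): the partial check-node distribution p(b0', z0, z1)
# with b0' = b0 XOR b1, from illustrative input joints p_bi_zi[b][z].

def check_node_input(p_b0_z0, p_b1_z1):
    n0, n1 = len(p_b0_z0[0]), len(p_b1_z1[0])
    out = [[[0.0] * n1 for _ in range(n0)] for _ in range(2)]
    for b0 in (0, 1):
        for b1 in (0, 1):
            for z0 in range(n0):
                for z1 in range(n1):
                    # accumulate into the XOR bucket b0 ^ b1
                    out[b0 ^ b1][z0][z1] += p_b0_z0[b0][z0] * p_b1_z1[b1][z1]
    return out
```

Feeding `out` to the information bottleneck algorithm then yields the mapping table ${x_0}$ and the joint $p(b_0^{'},{t_0})$ that seeds the next LUT ${x_1}$.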

Fig. 5. Two Input Split Check Node Operation.

$p(b_0^{'},z_0,z_1)$ serves as the input distribution to the information bottleneck algorithm, whose output combines the mapping table ${x_0}$ from $[z_0,z_1]$ to ${t_0}$ and the joint distribution $p(b_0^{'},{t_0})$. Note that $p(b_0^{'},{t_0})$ is used as the input for constructing the next LUT ${x_1}$. By analogy, all LUT can be constructed. However, it is easy to see from Fig. 5 that a check node of degree ${d_c}$ needs to perform ${d_c}({d_c} - 2)$ LUT lookup operations in a single iteration. Obviously, the number of LUT lookup operations grows quadratically with the degree of the check node. That leads to a large decoding delay, which is detrimental in short-range high-speed optical communication.

3.2 Check node operation

A bidirectional recurrent network is a neural network that can provide comprehensive past and future contextual information to the output layer for any point in the input sequence [21]. In this section, we use this feature of connecting past and future to exploit the intermediate data in the design of the decoding algorithm and to reduce the delay of the sequential-structure decoding process.

Assume that the received messages are $[z_0^{in},z_1^{in}, \ldots,z_{{d_c} - 2}^{in},z_{{d_c}-1}^{in}]$. According to the properties of a bidirectional recurrent network, two passes through a check node of degree ${d_c}$ are required to acquire the output information of the first and last edges, as well as the intermediate data of all nodes. The intermediate data is then used to produce the output information of the remaining edges. The operation of the algorithm is described using the check node with ${d_c} = 6$ as an example, as shown in Fig. 6. For the check node of degree 6, the bidirectional process involves four two-input LUT, corresponding to ${x_0}$ to ${x_3}$ of layer1 and layer2 in Fig. 6. In the sequential direction, the LUT ${x_i}(i = 0,1, \ldots,{d_c} - 3)$ are looked up and the corresponding sequential information ${t_i}(i = 0,1, \ldots,{d_c} - 4)$ is output in turn. After that, the reverse-order operation is executed, as shown in layer2 of Fig. 6: the LUT $x_i^{'}(i = 0,1, \ldots,{d_c} - 3)$ are looked up and the corresponding reverse-order information $t_i^{'}(i = 0,1, \ldots,{d_c} - 4)$ is output in turn. Note that since the input quantized information $[z_0^{in},z_1^{in}, \ldots,z_{{d_c} - 2}^{in},z_{{d_c} - 1}^{in}]$ obeys the same probability distribution and satisfies symmetry, any symmetric combination of the two cyclic processes can reuse the same LUT, i.e., ${x_i} = x_i^{'}$. In addition, the distribution of the information combinations $[t_2^{'}, z_0^{in}]$ and $[t_2^{},z_5^{in}]$ satisfies the symmetry and obeys the same distribution as the combinations $[t_2^{'}, z_1^{in}]$ and $[t_2^{},z_4^{in}]$, so the generation of $z_1^{out}$ and $z_4^{out}$ can reuse the ${x_3}$ tables generated in the loop. By analogy, it is only necessary to use the symmetry of $[t_0^{},t_1^{'}]$ and $[t_1^{},t_0^{'}]$ to additionally generate a table ${x_4}$, reused for $z_2^{out}$ and $z_3^{out}$, to complete the single iteration process.
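The bidirectional pass can be sketched as follows. A plain integer XOR stands in for the learned two-input IB tables, so the table-reuse bookkeeping (${x_i} = x_i^{'}$ and the distinct table ids) is implicit; the sketch illustrates the message flow of Fig. 6 and reproduces the lookup count $(2{d_c} - 4) + \lfloor(2{d_c} - 3)/2\rfloor$:

```python
# Sketch of the bidirectional check-node pass.  `lut` is a placeholder
# two-input table (here a plain function); `z` holds the d_c incoming
# messages.  Returns the d_c outgoing messages and the lookup count.

def bidirectional_check_node(z, lut):
    dc = len(z)
    lookups = 0
    # Forward pass: t[i] combines z[0..i+1]
    t = [None] * (dc - 2)
    t[0] = lut(z[0], z[1]); lookups += 1
    for i in range(1, dc - 2):
        t[i] = lut(t[i - 1], z[i + 1]); lookups += 1
    # Backward pass: tb[i] combines z[dc-1] down to z[dc-2-i]
    tb = [None] * (dc - 2)
    tb[0] = lut(z[dc - 1], z[dc - 2]); lookups += 1
    for i in range(1, dc - 2):
        tb[i] = lut(tb[i - 1], z[dc - 2 - i]); lookups += 1
    out = [None] * dc
    out[0] = tb[dc - 3]         # combines every message except z[0]
    out[dc - 1] = t[dc - 3]     # combines every message except z[dc-1]
    for j in range(1, dc - 1):  # remaining edges: join prefix and suffix
        left = z[0] if j == 1 else t[j - 2]
        right = z[dc - 1] if j == dc - 2 else tb[dc - 3 - j]
        out[j] = lut(left, right); lookups += 1
    return out, lookups
```

With XOR messages, each output equals the XOR of all inputs except its own edge, which is exactly the check-node constraint; for ${d_c} = 6$ the pass costs 12 lookups.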

Fig. 6. Bidirectional operation at ${d_c} = 6$.

By taking advantage of the symmetric distribution of channel quantization messages, the proposed bidirectional cyclic IB decoding algorithm fully utilizes the intermediate information. For a check node of degree ${d_c}$, $(2{d_c} - 4)$ LUT lookup operations are required to obtain the output information of the first and last edges during the bidirectional cyclic process, and the remaining edges require another $\left \lfloor {(2{d_c} - 3)/2} \right \rfloor$ LUT lookup operations in total. Therefore, a total of $(2{d_c} - 4) + \left \lfloor {(2{d_c} - 3)/2} \right \rfloor$ LUT lookup operations are required in the whole iteration, compared with the $[({d_c} - 2)({d_c} + 3)]/2$ lookup operations required by the traditional IB decoding algorithm. The proposed algorithm reduces the number of LUT lookup operations from quadratic to linear in the node degree at the cost of adding a small number of LUT, thus greatly reducing the computational complexity.

3.3 Variable node operation

Considering that LDPC codes often have a low variable node degree, the variable nodes in the decoding algorithm are designed using a tree-like structure. When the node degree is modest, the tree structure has a smaller depth than the sequential structure [20], and its number of LUT lookup operations is better than that of our proposed structure at the check node. However, the literature [12] indicates that the variable node operation involves an additional message $z_0^{v,in}$ from the quantization channel. The probability distribution of this message is independent, and the design of the variable nodes is not addressed in [20]. Placing $z_0^{v,in}$ arbitrarily would violate the symmetry of the probability distributions; this asymmetry forces the system to use distinct tables or resources, which adds hardware requirements. As a result, this additional message must be treated individually in the structure’s design.

In the process of updating the information of the variable nodes, $z_0^{v,in}$ needs to be placed at the initial or final position to prevent unnecessary LUT from being added by breaking the symmetry of the probability-distribution combinations at other positions within a layer. According to the parity of ${d_v}$, we adopt the following design for the variable nodes: if ${d_v}$ is even, $z_0^{v,in}$ is used as the input of the last layer, while if ${d_v}$ is odd, $z_0^{v,in}$ is used as the input of the last edge in the first layer. After the input location of $z_0^{v,in}$ is fixed, the operation process is similar to that of the check node. After the outputs of the first and last nodes are produced, the intermediate information provided by the cyclic process is used to produce the outputs of the remaining nodes.

As shown in Fig. 7, the construction of the variable node is described for ${d_v} = 5$. With ${d_v}$ odd, $z_0^{v,in}$ is used as the input of the last edge of layer1 to ensure that all the preceding input combinations obey the same probability distribution. Since $z_0^{v,in}$ is not symmetric with respect to the information passed to the variable node by the check node, $z_0^{v,in}$ is placed in a different position in each layer of the variable node according to the symmetry property, in contrast to the check-node cyclic procedure. As a result, the cyclic process of the variable nodes only requires the addition of N unique LUT, where N is the number of layers of the tree structure.

Fig. 7. Tree structure of variable nodes with ${d_v} = 5$.

Similar to the check node, the variable node needs to perform a total of $(2{d_v} - 4) + \left \lceil {(2{{\rm {d}}_v} - 1)/2} \right \rceil$ lookup operations during the iterative process.
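The per-iteration lookup counts stated above for the check and variable nodes can be tabulated with a small helper; the formulas are taken directly from the text:

```python
import math

# Per-iteration LUT lookup counts: conventional sequential IB decoder
# versus the proposed bidirectional one, for node degrees dc and dv.

def lookup_counts(dc, dv):
    """Return ((conventional check, variable), (proposed check, variable))."""
    conv = ((dc - 2) * (dc + 3) // 2, (dv - 1) * (dv + 2) // 2)
    prop = ((2 * dc - 4) + (2 * dc - 3) // 2,            # floor term
            (2 * dv - 4) + math.ceil((2 * dv - 1) / 2))  # ceiling term
    return conv, prop
```

For the (4095, 3357) code of Section 4 with ${d_c} = 24$ and ${d_v} = 4$, this gives 297 versus 66 check-node lookups and 9 versus 8 variable-node lookups.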

3.4 Performance simulation

In Fig. 8, we compare the performance of the belief propagation, min-sum, and conventional IB decoding algorithms using regular LDPC codes with different code rates and code lengths. According to the curves in the figure, for the various code rates, the proposed bidirectional IB decoding method loses between 0.2 and 0.3 dB relative to the traditional IB algorithm, and has almost no loss for very short codes. The resulting loss is primarily attributable to the wide amplitude range of 1024-QAM signals and imperfect bit alignment; the change in decoder structure alone has almost no effect on performance. Comparing the proposed bidirectional IB decoding with min-sum decoding shows that the bidirectional IB decoding performs better for the various code lengths and code rates. Finally, we note that the proposed algorithm performs very close to belief propagation, which is a strong result given that the proposed algorithm deals only with unsigned integers and the number of lookup operations is reduced from quadratic to linear.

Fig. 8. Comparison of each decoding algorithm for code rate (a) 0.5 and (b) 0.82.

4. Experimental verification

4.1 Performance

The coherent optical communication system that we built to test the performance of the proposed bidirectional IB decoding method is depicted in Fig. 9. To begin, the transmitter generates a binary random sequence of length 3357, which contains a 13-bit preamble code at the start. The LDPC encoder uses the obtained random sequence to build an LDPC code of length 4095. The LDPC code is then mapped to a 1024-QAM signal, and Nyquist shaping is done with an RRC shaping filter. Finally, a 10-bit digital-to-analog converter (DAC, Fujitsu) with a sampling frequency of 64 GSa/s splits the 1024-QAM electrical signal into two halves to form a single analog signal with a rate of 40 Gbit/s that is delivered to the I/Q modulator (FTM7962EP) for loading onto the carrier beam. The output power of the signal laser is set to 12 dBm. After the signal light has traveled through a 5 km single-mode fiber, a variable optical attenuator (VOA) is utilized to adjust the received optical power. In the receiving section, the coherent receiver (ICR-100-EVK) mixes the light provided by the 12 dBm local oscillator laser (LO) with the signal light. The data is captured using a digital oscilloscope (Tektronix DSA72004C) with a sampling rate of 100 GSa/s. The received signal is downsampled after passing through a matched filter corresponding to the shaping filter, and the sampling rate is adjusted to twice the symbol rate for the subsequent impairment estimation. The Gram-Schmidt algorithm corrects and normalizes the IQ imbalance, the Gardner approach recovers the clock error, and the cascaded multimode algorithm (CMMA) equalizes and compensates for channel noise. Ultimately, the required frequency and phase shifts are computed and corrected using the previously inserted pilot codes. The corrected signal is fed into the pre-designed decoder for decoding. In terms of LDPC decoding performance, the proposed algorithm is compared not only to the standard IB technique but also to several more widely used algorithms.

Fig. 9. System architecture block diagram.

We conducted transmission experiments in a back-to-back system and a 5 km coherent optical transmission system to evaluate performance, and Fig. 10 illustrates the experimentally obtained BER curves. A typical LDPC code with a code length of 4095 and a code rate of 0.82 is employed in this experimental system, and the maximum number of decoder iterations is set to 50. The first decoding algorithm (red dashed line) is the best-performing double-precision belief propagation decoding algorithm, which turns incoming channel information into double-precision LLR values and performs sophisticated calculations among the nodes. The min-sum decoder (pink dashed line) differs from the belief propagation decoder in that it uses an approximation, converting the multiplication operation into a comparison operation, and is the more widely used algorithm in hardware implementations today. The standard IB decoder (purple solid line) is the third decoding technique, from [12]. The blue solid line in the figure represents our proposed bidirectional IB decoding algorithm, while the brown solid line is presented for comparison to highlight the relevance of bit information alignment. Without bit information alignment, the decoder cannot correctly identify the information carried by the symbols, resulting in poor decoding performance. At a BER of ${10^{ - 3}}$, the proposed bidirectional IB decoding method falls 0.58 dB below the double-precision belief propagation decoder and 0.22 dB below the conventional IB algorithm in the back-to-back system. Furthermore, the various algorithms are tested in a coherent optical communication system over a 5 km fiber to evaluate their performance in a real fiber optic link. The performance difference between the bidirectional IB decoding algorithm and the double-precision belief propagation decoding method in the coherent optical system is roughly 0.68 dB at a BER of ${10^{ - 3}}$, and it is just 0.26 dB poorer than the regular IB algorithm.
In summary, the proposed bidirectional IB technique outperforms the min-sum decoding method and is just 0.5 $\sim$ 0.7 dB worse than the double-precision belief propagation algorithm. Compared with the original IB decoding algorithm, the number of LUT lookup operations required for a single iterative process in the sequential structure is lowered from a quadratic to a linear measure with respect to the node degree, at a performance loss of roughly 0.2 $\sim$ 0.3 dB. This facilitates real-time transmission in optical communication systems.

Fig. 10. Performance of LDPC codes of length 4095 with code rate 0.82 in (a) B2B and (b) 5 km fiber systems.

4.2 Complexity

Table 1 compares the number of LUT lookup operations needed for each iteration. Compared with the sequential structure in [20], the bidirectional IB decoding algorithm changes the relationship between the number of LUT lookup operations and the node degree from quadratic to linear. In comparison with the tree structure in [20], the bidirectional IB technique proposed in this study trades a small number of additional LUT for a smaller number of LUT lookups.

Table 1. The number of tables looked up and the number of tables stored in an iteration

Figure 11 shows a numerical comparison of the number of LUT lookups required in decoding for the LDPC code (4095, 3357) used in this paper. The maximum check node degree is ${d_{\rm {c}}} = 24$ and the maximum variable node degree is ${d_v} = 4$. Obviously, compared with the original IB decoding algorithm, the proposed decoding algorithm requires significantly fewer operational and storage resources, and the larger the node degree, the more obvious the reduction in computational complexity.

Fig. 11. Comparison of the number of LUT lookup operations per iteration.

5. Conclusion

This study examines the distinct information distributions carried by each bit in the 1024-QAM modulation format. We then explain the concept of the multivariate information bottleneck and use results from information theory to show how to apply this paradigm to 1024-QAM signals. High-capacity communication systems are shown to require LDPC codes combined with higher-order modulation formats. Second, we describe the problem that in the current IB algorithm the number of LUT lookup operations during a single iteration grows quadratically with the node degree, and we propose an LDPC information bottleneck decoding algorithm based on a bidirectional recurrent network. The algorithm uses the symmetry of the bidirectional recurrent network and the quantized information to reuse the same set of tables, making full use of the intermediate information, so that the relationship between the number of LUT lookup operations and the node degree in the sequential structure of the IB decoding algorithm is reduced from quadratic to linear. This helps reduce the decoding delay in optical communication and makes real-time decoding possible. To verify the performance of the decoding algorithm, a coherent optical communication system based on 1024-QAM signals is built. The experimental results show that the proposed decoding algorithm operates effectively in the communication system with 1024-QAM signals. The number of LUT lookup operations required for the check and variable nodes in the sequential structure of the single iteration process is reduced from $[({d_c} - 2)({d_c} + 3)]/2$ and $[({d_v} - 1)({d_v} + 2)]/2$ to $(2{d_c} - 4) + \left \lfloor {(2{d_c} - 3)/2} \right \rfloor$ and $(2{d_v} - 4) + \left \lceil {(2{d_v} - 1)/2} \right \rceil$ at the expense of about 0.2 $\sim$ 0.3 dB of performance.

Funding

National Key Research and Development Program of China (2020YFB1805805).

Disclosures

The authors declare there are no conflicts of interest.

Data availability

No data were generated or analyzed in the presented research.

References

1. R. Borkowski, M. Straub, Y. Ou, et al., “World’s first field trial of 100 Gbit/s flexible PON (FLCS-PON),” in 2020 European Conference on Optical Communications (ECOC) (IEEE, 2020), pp. 1–4.

2. J. Zhang, W. Jiang, and J. Zhou, “An iterative BP-CNN decoder for optical fiber communication systems,” Opt. Lett. 48(9), 2289–2292 (2023). [CrossRef]

3. T. Koike-Akino, D. S. Millar, and K. Kojima, “Iteration-aware LDPC code design for low-power optical communications,” J. Lightwave Technol. 34(2), 573–581 (2016). [CrossRef]

4. M. Yang, Z. Pan, and I. B. Djordjevic, “FPGA-based burst-error performance analysis and optimization of regular and irregular SD-LDPC codes for 50G-PON and beyond,” Opt. Express 31(6), 10936–10946 (2023). [CrossRef]

5. X. Sun, D. Zou, Z. Qu, et al., “Run-time reconfigurable adaptive LDPC coding for optical channels,” Opt. Express 26(22), 29319–29329 (2018). [CrossRef]

6. M. A. Fernandes, P. P. Monteiro, and F. P. Guiomar, “400G MIMO-FSO transmission with enhanced reliability enabled by joint LDPC coding,” in 2021 European Conference on Optical Communication (ECOC) (IEEE, 2021), pp. 1–4.

7. C. Zhang, X. Mu, and J. Yuan, “Construction of multi-rate quasi-cyclic LDPC codes for satellite communications,” IEEE Trans. Commun. 69(11), 7154–7166 (2021). [CrossRef]

8. Y. Liao, M. Qiu, and J. Yuan, “Design and analysis of delayed bit-interleaved coded modulation with LDPC codes,” IEEE Trans. Commun. 69(6), 3556–3571 (2021). [CrossRef]

9. B.-Y. Tseng, B. M. Kurkoski, P. Mohr, et al., “An FPGA implementation of two-input LUT based information bottleneck LDPC decoders,” in 2022 11th International Conference on Modern Circuits and Systems Technologies (MOCAST) (IEEE, 2022), pp. 1–6.

10. J. Lewandowsky and G. Bauch, “Trellis based node operations for LDPC decoders from the information bottleneck method,” in 2015 9th International Conference on Signal Processing and Communication Systems (ICSPCS) (IEEE, 2015), pp. 1–10.

11. T. J. Richardson and R. L. Urbanke, “The capacity of low-density parity-check codes under message-passing decoding,” IEEE Trans. Inf. Theory 47(2), 599–618 (2001). [CrossRef]

12. J. Lewandowsky and G. Bauch, “Information-optimum LDPC decoders based on the information bottleneck method,” IEEE Access 6, 4054–4071 (2018). [CrossRef]

13. J. Lewandowsky, G. Bauch, M. Tschauner, et al., “Design and evaluation of information bottleneck LDPC decoders for digital signal processors,” IEICE Trans. Commun. E102.B(8), 1363–1370 (2019). [CrossRef]

14. J. Lewandowsky, M. Stark, and G. Bauch, “Message alignment for discrete LDPC decoders with quadrature amplitude modulation,” in 2017 IEEE International Symposium on Information Theory (ISIT) (IEEE, 2017), pp. 2925–2929.

15. J. Lewandowsky, M. Adrat, and G. Bauch, “Information bottleneck message passing for military applications,” in 2021 International Conference on Military Communication and Information Systems (ICMCIS) (IEEE, 2021), pp. 1–8.

16. M. Stark, L. Wang, G. Bauch, et al., “Decoding rate-compatible 5G-LDPC codes with coarse quantization using the information bottleneck method,” IEEE Open J. Commun. Soc. 1, 646–660 (2020). [CrossRef]

17. M. Stark, G. Bauch, J. Lewandowsky, et al., “Decoding of non-binary LDPC codes using the information bottleneck method,” in ICC 2019 - 2019 IEEE International Conference on Communications (ICC) (IEEE, 2019), pp. 1–6.

18. P. Mohr, G. Bauch, F. Yu, et al., “Coarsely quantized layered decoding using the information bottleneck method,” in ICC 2021 - IEEE International Conference on Communications (IEEE, 2021), pp. 1–6.

19. P. Kang, K. Cai, and X. He, “Generalized mutual information-maximizing quantized decoding of LDPC codes with layered scheduling,” IEEE Trans. Veh. Technol. 71(7), 7258–7273 (2022). [CrossRef]

20. M. Stark, J. Lewandowsky, and G. Bauch, “Information-bottleneck decoding of high-rate irregular LDPC codes for optical communication using message alignment,” Appl. Sci. 8(10), 1884 (2018). [CrossRef]

21. L. Chu, H. He, L. Pei, et al., “NOLD: A neural-network optimized low-resolution decoder for LDPC codes,” J. Commun. Netw. 23(3), 159–170 (2021). [CrossRef]



