EURASIP Journal on Wireless Communications and Networking (Mar 2018)

The combination of sparse learning and list decoding of subspace codes for error correction in random network coding

  • Guangzhi Zhang,
  • Shaobin Cai,
  • Dongqiu Zhang,
  • Wanneng Shu,
  • Chunhua Ma,
  • Mu Niu,
  • Xiangnan Ding

DOI
https://doi.org/10.1186/s13638-018-1076-7
Journal volume & issue
Vol. 2018, no. 1
pp. 1 – 15

Abstract


Network coding can spread a single original error over the whole network. Simulations show that, if the Hamming distance is adopted, the propagated error almost always pollutes 100% of the packets received at the sink. If subspace codes are adopted, the propagated error usually does not pollute 100% of the received packets in the sense of subspace distance; however, it still typically pollutes about 90% of them, which is a high error ratio. Even when rank codes and subspace codes are adopted, existing schemes based on traditional block codes can correct no more than C/2 corrupted errors because of the limitation of block coding, where C is the max-flow min-cut. It is therefore urgent to find a dense error correction method for random network coding. List decoding of subspace codes can correct $\frac{C}{k}-1$ errors, where k is the size of the information; when $\frac{C}{k}$ is large, many errors in the sense of subspace distance can be corrected. However, the solution produced by list decoding is not unique. John Wright proposed a dense error correction technique based on L1 minimization, which can recover the signal from nearly 100% corrupted observations. In our proposal, the original packets are first encoded with John Wright's coding matrix, and the coded message is then encoded again with a subspace code. At the sink, list decoding of the subspace code and John Wright's decoding procedure are performed in turn. In this way, a unique solution is obtained even in the presence of dense propagated errors in random network coding.
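The L1-minimization decoding step the abstract refers to can be illustrated with a small sketch. This is not the authors' implementation: it is a minimal example of the generic idea behind dense error correction by L1 minimization, namely recovering x from y = Ax + e (with e a gross corruption vector) by solving min_x ||y − Ax||_1 as a linear program. The matrix sizes, corruption level, and the use of `scipy.optimize.linprog` are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import linprog

def l1_decode(A, y):
    """Recover x from y = A @ x + e (e a gross corruption vector)
    by solving min_x ||y - A x||_1, cast as a linear program.

    Variables z = [x (free), t >= |y - A x|]; minimize sum(t) subject to
      A x - t <= y   and   -A x - t <= -y.
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])       # minimize sum of slacks t
    A_ub = np.block([[ A, -np.eye(m)],
                     [-A, -np.eye(m)]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * n + [(0, None)] * m       # x free, t nonnegative
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]

# Illustrative sizes (assumed): 60 observations, 5 unknowns, 10 corruptions.
rng = np.random.default_rng(0)
m, n, k = 60, 5, 10
A = rng.standard_normal((m, n))
x0 = rng.standard_normal(n)
e = np.zeros(m)
idx = rng.choice(m, size=k, replace=False)
e[idx] = 5.0 * rng.standard_normal(k)                   # gross corruptions
y = A @ x0 + e
x_hat = l1_decode(A, y)
print(np.linalg.norm(x_hat - x0))
```

With the corruption fraction well below the breakdown point of L1 decoding, the recovered x_hat matches x0 up to solver tolerance; the recovery error printed at the end should be near zero.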

Keywords