Hossein Taghavi: Codes on Graphs


Published on November 6, 2007

Author: knowdiff

Source: slideshare.net

Description

Hossein Taghavi's talk organized by the Knowledge Diffusion Network at Sharif University

Codes on Graphs: Introduction and Recent Advances

Mohammad Hossein Taghavi, in collaboration with Prof. Paul H. Siegel

University of California, San Diego. E-mail: mtaghavi@ucsd.edu

Outline

Introduction to Coding Theory

Shannon’s Channel Coding Theorem

Error-Correcting Codes – State-of-the-Art

Low-Density Parity-Check Codes

Design and Message-Passing Decoding

Performance

Linear Programming Decoding

LP Relaxation

Properties

Improvements

Conclusion and Open Problems


A Noisy Communication System

(Figure: Shannon's diagram. An information source emits a message; the transmitter converts it to a signal; the channel adds noise from a noise source; the receiver converts the received signal back into a message for the destination.)

Common Channels

(Figure: transition diagrams. BEC(ε) delivers a bit unchanged with probability 1 - ε and outputs an erasure '?' with probability ε; BSC(p) delivers a bit unchanged with probability 1 - p and flips it with probability p; the AWGN channel adds Gaussian noise.)

Binary Erasure Channel BEC( ε )

Binary Symmetric Channel BSC( p )

Additive White Gaussian Noise Channel
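
Not from the slides: a minimal NumPy sketch of the two discrete channels above, to make the parameters ε and p concrete. The function names bsc and bec and the erasure marker are my own conventions.

```python
import numpy as np

rng = np.random.default_rng(0)

def bsc(bits, p):
    """Binary symmetric channel BSC(p): flip each bit independently with prob. p."""
    flips = (rng.random(bits.shape) < p).astype(bits.dtype)
    return bits ^ flips

def bec(bits, eps):
    """Binary erasure channel BEC(eps): erase each bit with prob. eps (-1 = erasure)."""
    out = bits.astype(int).copy()
    out[rng.random(bits.shape) < eps] = -1
    return out

x = rng.integers(0, 2, size=10)
print("sent:    ", x)
print("BSC(0.1):", bsc(x, 0.1))
print("BEC(0.2):", bec(x, 0.2))
```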

Coding

Code rate: R = k/n.

We add redundancy to the message for protection against noise.

Coding Theory:

Design Good Codes (mappings)

Design Good Encoding and Decoding Algorithms

Shannon’s Coding Theorem

Every communication channel is characterized by a single number C , called the channel capacity .

If a code has rate R > C, then the probability of error in decoding this code is bounded away from 0.

For any information rate R < C and any δ > 0, there exists a code C of length n_δ and rate R, such that the probability of error under maximum-likelihood decoding of this code is at most δ.

Bottom line: reliable communication is possible if and only if R < C.
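
To make the R < C criterion concrete, here is a small sketch (my own addition, not from the talk) using the standard closed form for the BSC capacity, C = 1 - H2(p), which the slide does not state:

```python
import math

def h2(p):
    """Binary entropy function, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of the binary symmetric channel: C = 1 - H2(p)."""
    return 1.0 - h2(p)

# Rate-1/2 coding is possible in principle whenever C > 0.5,
# i.e. for crossover probabilities up to about p = 0.11.
for p in (0.01, 0.05, 0.11, 0.2):
    c = bsc_capacity(p)
    print(f"p = {p}: C = {c:.3f}, R = 0.5 reliable? {c > 0.5}")
```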

Designing Binary Codes

Coding generally involves mapping each of the 2^k vectors in {0,1}^k to some vector (codeword) in {0,1}^n.

A good mapping places the codewords as far apart from one another as possible.

The proof of Shannon’s theorem is based on (long) random codes and optimal decoding.

i.e., a random mapping from {0,1}^k to {0,1}^n

The decoder uses a look-up table (practically infeasible)

Algebraic coding theory studies highly-structured codes

e.g. Reed-Solomon codes have efficient decoders but cannot get very close to the capacity of binary channels.

State-of-the-Art

Solution

Long, structured, “pseudorandom” codes

Practical, near-optimal decoding algorithms

Examples

Turbo codes (1993)

Low-density parity-check (LDPC) codes (1960, 1999)

State-of-the-art

Turbo codes and LDPC codes have brought Shannon limits to within reach on a wide range of channels.

Evolution of Coding Technology

(Figure from Trellis and Turbo Coding, Schlegel and Perez, IEEE Press, 2004, charting coding gains over time up to LDPC codes.)


Binary Linear Codes

Codewords are chosen to satisfy a number of (binary) linear constraints.

Parameters of binary linear block code C

k = number of information bits

n = number of code bits

R = k / n

d_min = minimum distance

There are many ways to describe C

Codebook (list)

Parity-check matrix / generator matrix

Graphical representation (“Tanner graph”)

Linear Block Codes on Graphs

A binary linear code C is the collection of points x^(i) in {0,1}^n that satisfy H_{m×n} x^(i) = 0 (mod 2).

If H is full rank, there will be 2^(n-m) codewords.

The code can be described by a Tanner Graph:

Neighborhood N_j: the set of nodes directly connected to node j.

Degree d_j: the size of the neighborhood N_j.

Example: a Tanner graph with n = 7 variable nodes x_1, …, x_7 and m check nodes, in which one check node enforces x_1 + x_3 + x_6 = 0 (mod 2).
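
As an illustration of these definitions (not from the talk), the sketch below uses the (7,4) Hamming code's parity-check matrix as a stand-in for the slide's unspecified example, and reads off the check-node neighborhoods, their degrees, and the 2^(n-m) codewords:

```python
import numpy as np
from itertools import product

# A stand-in parity-check matrix (the (7,4) Hamming code); the slide's own
# matrix is not shown, so this choice is purely illustrative.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
m, n = H.shape

# Neighborhood N_j of check node j, and its degree d_j = |N_j|.
neighborhoods = [np.flatnonzero(H[j]) for j in range(m)]
degrees = [len(N) for N in neighborhoods]

# x is a codeword iff H x = 0 (mod 2); with H full rank there are 2^(n-m) = 16.
codewords = [x for x in product((0, 1), repeat=n)
             if not (H @ np.array(x) % 2).any()]

print("check neighborhoods:", [list(N) for N in neighborhoods])
print("check degrees:", degrees, "| number of codewords:", len(codewords))
```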

Low-Density Parity-Check Codes

Proposed by Gallager (1960)

“Sparseness” of matrix and graph descriptions

Number of 1’s in H grows linearly with block length

Number of edges in Tanner graph grows linearly with block length

“Randomness” of construction in:

Placement of 1’s in H

Connectivity of variable and check nodes

Iterative, message-passing decoder

Simple “local” decoding at nodes

Iterative exchange of information (message-passing)

Other families of graph-based codes:

Repeat Accumulate Codes

Fountain Codes

Code Construction

LDPC codes are generally constructed randomly, according to certain distributions on the degrees of variable and check nodes.

Once the node degrees are selected, connections are made randomly

Regular LDPC: Each check node has degree d_c, and each variable node has degree d_v.

Irregular LDPC: The degrees of variable and check nodes need not be constant. (generalization of regular LDPC)

Ensemble defined by “node degree distribution” functions.
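
A hypothetical construction sketch (mine, not the talk's): sampling a (d_v, d_c)-regular parity-check matrix by randomly matching variable-node edge sockets to check-node edge sockets, in the spirit of Gallager-style random ensembles. A practical construction would additionally clean up short cycles.

```python
import numpy as np

def regular_ldpc(n, dv, dc, seed=0):
    """Sample a (dv, dc)-regular LDPC parity-check matrix by randomly matching
    variable-node sockets to check-node sockets. Double edges can occur; the
    XOR below cancels them in pairs, so some weights may dip below dv/dc."""
    assert (n * dv) % dc == 0, "n*dv must be divisible by dc"
    m = n * dv // dc
    rng = np.random.default_rng(seed)
    var_sockets = np.repeat(np.arange(n), dv)   # dv sockets per variable node
    chk_sockets = np.repeat(np.arange(m), dc)   # dc sockets per check node
    rng.shuffle(chk_sockets)                    # random socket matching
    H = np.zeros((m, n), dtype=int)
    for v, c in zip(var_sockets, chk_sockets):
        H[c, v] ^= 1                            # XOR cancels parallel edges
    return H

H = regular_ldpc(n=12, dv=3, dc=6)
print("column weights:", H.sum(axis=0))
print("row weights:   ", H.sum(axis=1))
```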

Optimal Bit Decoding

Consider transmission of binary inputs X ∈ {±1} over a memoryless channel, using a linear code C.

Assume codewords are transmitted equiprobably.

The maximum a posteriori (MAP) bit decoding rule minimizes the bit error probability:

x̂_i = argmax over a ∈ {±1} of Σ_{x : x_i = a} P(y | x) · 1_C(x),

where the indicator 1_C(x) is 1 if x is a codeword of C, and 0 otherwise.
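
For intuition, here is a brute-force implementation of this bitwise MAP rule (my own sketch, using the {0,1} convention and a BSC rather than the slide's ±1 inputs). It enumerates all 2^n words, which is exactly why message passing is needed for real block lengths:

```python
import numpy as np
from itertools import product

def map_bit_decode(H, y, p):
    """Brute-force bitwise MAP decoding over a BSC(p): for each bit position,
    sum the likelihoods P(y|x) over all codewords with that bit value.
    Exponential in n; only usable for tiny toy codes."""
    m, n = H.shape
    scores = np.zeros((n, 2))
    for x in product((0, 1), repeat=n):
        x = np.array(x)
        if (H @ x % 2).any():            # indicator 1_C(x): skip non-codewords
            continue
        d = int((x != y).sum())          # Hamming distance to received word
        like = p**d * (1 - p)**(n - d)   # P(y | x) for the BSC
        for i in range(n):
            scores[i, x[i]] += like
    return scores.argmax(axis=1)

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
y = np.array([1, 0, 1, 1, 0, 1, 0])     # received word, possibly with flips
print(map_bit_decode(H, y, p=0.1))
```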

Belief Propagation

If the Tanner graph is cycle-free, there is a message-passing approach to bit-wise MAP decoding.

The nodes of the Tanner graph exchange updated messages, i.e., conditional bit distributions, denoted u = [u(1), u(-1)].

The initial messages presented by the channel to the variable nodes are of the form u_0 = [P(y | x = 1), P(y | x = -1)].

The variable-to-check and check-to-variable message updates are determined by the “sum-product” update rule.

Sum-Product Update Rule

Variable-to-check update: u(x) ∝ u_0(x) · Π_{i=1..d-1} v_i(x), where u_0 is the message from the channel and v_1, …, v_{d-1} are the incoming messages on the variable node's other edges.

Check-to-variable update: u(x) = Σ_{x_1, …, x_{d-1}} f(x, x_1, …, x_{d-1}) · Π_{i=1..d-1} v_i(x_i),

where f is the parity-check indicator function: 1 if its arguments satisfy the parity check, and 0 otherwise.

Log-Likelihood Formulation

The sum-product update is simplified using log-likelihood ratios.

For a message u, define L(u) = log( u(1) / u(-1) ).

Variable-to-check update: L(u) = L(u_0) + Σ_{i=1..d-1} L(v_i).

Check-to-variable update: tanh( L(u)/2 ) = Π_{i=1..d-1} tanh( L(v_i)/2 ).
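
The two LLR updates translate directly into code. A minimal sketch under my own conventions (one variable node and one check node, messages held in NumPy arrays, leave-one-out sums and products computed vectorially):

```python
import numpy as np

def var_to_check(L_channel, L_in):
    """Variable-to-check update: channel LLR plus the incoming LLRs from the
    *other* edges. Returns one outgoing LLR per edge (leave-one-out sum)."""
    total = L_channel + L_in.sum()
    return total - L_in

def check_to_var(L_in):
    """Check-to-variable update via the tanh rule (leave-one-out product).
    Assumes no incoming LLR is exactly 0, so no tanh value vanishes."""
    t = np.tanh(L_in / 2.0)
    prod = t.prod()
    return 2.0 * np.arctanh(prod / t)

# Example: a degree-4 variable node and a degree-4 check node.
print(var_to_check(0.8, np.array([1.2, -0.5, 0.3])))
print(check_to_var(np.array([1.2, -0.5, 0.3, 0.8])))
```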

Performance Analysis

In the spirit of Shannon, we can analyze the performance of message-passing decoding on ensembles of LDPC codes with specified degree distributions (λ, ρ).

The results provide criteria for designing LDPC codes that transmit reliably with MP decoding at rates very close to the Shannon capacity.

The analysis can assume the all-0’s codeword is sent.

The results are asymptotic , i.e. hold for very large block lengths.

Key Results

Concentration

With high probability, the performance of ℓ rounds of BP decoding on a randomly selected code converges to the ensemble average performance as the length n → ∞.

Convergence to cycle-free performance

The average performance of ℓ rounds of BP decoding on the (n, λ, ρ) ensemble converges to the performance on a graph with no cycles of length ≤ 2ℓ, as the length n → ∞.

Cycle-free Performance: Density Evolution

We can get asymptotic results by looking at the cycle-free case.

The cycle-free performance can be computed using density evolution .

For an ensemble of LDPC codes:

the incoming messages to each node are i.i.d.

the channel observations, L(u_0), at different variable nodes are i.i.d.

Density Evolution, cont.

So at each iteration, we can compute the pdf of the outgoing messages in terms of those of the incoming messages.

Having the pdf of the LLRs after ℓ iterations, we can compute the bit error probability.

Threshold calculation

There is a threshold channel parameter p*(λ, ρ) such that, for any “better” channel parameter p, the cycle-free error probability approaches 0 as the number of iterations ℓ → ∞ (a threshold computation is sketched below).

For some channels, we can optimize the degree distributions of irregular LDPC codes for the best p*.

This technique has produced rate-1/2 LDPC ensembles with thresholds within 0.0045 dB of the Shannon limit on the AWGN channel!

A rate-1/2 code with block length 10^7 achieved a BER of 10^-6 within 0.04 dB of the Shannon limit!
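
For the BEC the densities collapse to a single erasure probability, giving the classical one-line recursion x_{ℓ+1} = ε · λ(1 - ρ(1 - x_ℓ)). The sketch below (my own, using a simple bisection) estimates the threshold ε* of the (3,6)-regular ensemble:

```python
def bec_de_converges(eps, lam, rho, iters=1000, tol=1e-9):
    """Density evolution for the BEC: iterate the erasure probability
    x_{l+1} = eps * lam(1 - rho(1 - x_l)); BP succeeds iff x_l -> 0.
    lam, rho are the edge-perspective degree-distribution polynomials."""
    x = eps
    for _ in range(iters):
        x = eps * lam(1.0 - rho(1.0 - x))
    return x < tol

# (3,6)-regular ensemble: lambda(z) = z^2, rho(z) = z^5.
lam = lambda z: z**2
rho = lambda z: z**5

# Bisect for the threshold eps* (known to be about 0.4294 for (3,6);
# the finite iteration count makes this estimate slightly conservative).
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if bec_de_converges(mid, lam, rho) else (lo, mid)
print(f"estimated BEC threshold for (3,6): {lo:.4f}")
```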

Finite-length Performance

Remaining questions:

How does the waterfall region scale with block length?

Where does the error floor occur?

In practice, the algorithm sometimes does not converge.

We can view MP decoding as an optimization algorithm that sometimes gets trapped in local minima.

These events are analytically characterized for the BEC, but are still not fully understood in general.

There are some promising results obtained by techniques from the statistical mechanics literature.


Linear Programming Decoding: Motivation

Linear Programming (LP) decoding is an alternative to MP decoding for Turbo and LDPC codes.

Its performance has connections to that of MP decoding.

Advantages:

Amenability to finite-length analysis

Potential for improvement

Detectable failures

Major drawback:

Higher complexity than MP

It is sometimes seen as a tool to characterize MP decoding.

(Figure courtesy of Jon Feldman.)

Maximum-Likelihood (ML) Decoding of Binary Linear Codes

(Figure: encoder / channel / decoder chain, with the BPSK mapping 0 → +1, 1 → -1.)

ML (MAP) block decoding: find the sequence x that maximizes the likelihood of the received vector.

Optimization with linear objective function, but nonlinear constraints

Minimize γᵀx subject to x ∈ C,

where γ_i = log( P(y_i | x_i = 0) / P(y_i | x_i = 1) ) is the log-likelihood cost of bit i.

Feldman’s LP Relaxation of ML

LP decoding:

Each binary parity-check constraint Σ_{i ∈ N_j} x_i ≡ 0 (mod 2) is replaced by linear inequalities: one constraint Σ_{i ∈ S} x_i - Σ_{i ∈ N_j \ S} x_i ≤ |S| - 1 for every odd-size subset S ⊆ N_j.

Each binary condition x_i ∈ {0,1} is relaxed to the box constraint 0 ≤ x_i ≤ 1.

These linear inequalities define the fundamental polytope, P.

Linear optimization: minimize γᵀx subject to x ∈ P.
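
A self-contained sketch of this relaxation using scipy.optimize.linprog (my own illustration, with a made-up cost vector; it enumerates all odd subsets of each check, so it only scales to low-density checks):

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

def feldman_lp_decode(H, gamma):
    """Feldman LP decoding sketch: minimize gamma^T x over the fundamental
    polytope, with gamma_i = log(P(y_i|x_i=0)/P(y_i|x_i=1)) as the bit costs.
    Builds one inequality per odd-size subset of each check neighborhood."""
    m, n = H.shape
    A, b = [], []
    for j in range(m):
        N = np.flatnonzero(H[j])
        for size in range(1, len(N) + 1, 2):        # odd-size subsets S
            for S in combinations(N, size):
                row = np.zeros(n)
                row[list(S)] = 1.0                  # + for i in S
                row[np.setdiff1d(N, S)] = -1.0      # - for i in N_j \ S
                A.append(row)
                b.append(len(S) - 1.0)
    res = linprog(gamma, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return res.x  # integral => ML codeword; fractional => pseudo-codeword

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
gamma = np.array([-1.1, 0.6, -0.4, 0.9, -0.8, 0.7, 0.2])  # made-up LLR costs
print(np.round(feldman_lp_decode(H, gamma), 3))
```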

The Fundamental Polytope

(Figure: the polytope P, with integral codeword vertices and a fractional pseudo-codeword vertex; -γ is the direction of optimization.)

Properties:

The integer vertices of P are exactly the codewords.

There are additional “non-integral” vertices too: pseudo-codewords (PCWs).

γ determines the direction in which to search for the minimum-cost vertex x.

Possible algorithm outputs:

ML codeword (if the solution is integral)

A non-integral vector (PCW) => declare failure

We always know whether LP has found the ML solution.

Some Performance Results

For binary linear codes, the word error rate (WER) is independent of the transmitted codeword.

Motivated by the Hamming weight, we can define the pseudo-weight w_p.

For the AWGN channel: w_p(x) = (Σ_i x_i)^2 / (Σ_i x_i^2).

For binary vectors, this equals the Hamming weight (see the sketch after this list).

[Koetter et al.]: If 1^n is transmitted and r is received, the Euclidean distance in the signal space between 1^n and r is w_p(r).

So, the performance can be described by the pseudo-weight spectrum.

The minimum pseudo-weight describes the error floor.

More results:

The minimum w_p increases sublinearly in n for regular codes.

Bounds on the threshold of LP decoding
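
The AWGN pseudo-weight formula above in a few lines of Python (my sketch; the helper name is mine):

```python
import numpy as np

def awgn_pseudoweight(x):
    """AWGN pseudo-weight: w_p(x) = (sum x_i)^2 / sum x_i^2.
    For a 0/1 vector this reduces to the Hamming weight."""
    x = np.asarray(x, dtype=float)
    return x.sum() ** 2 / (x * x).sum()

print(awgn_pseudoweight([1, 1, 0, 1, 0]))           # 3.0, the Hamming weight
print(awgn_pseudoweight([1.0, 0.5, 0.5, 0.5, 0]))   # a fractional pseudo-codeword
```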

Size of the LP Decoding Problem

For every check node with a neighborhood N of size d: 2^(d-1) constraints of the form Σ_{i ∈ S} x_i - Σ_{i ∈ N \ S} x_i ≤ |S| - 1, one per odd-size S ⊆ N.

A total of Σ_j 2^(d_j - 1) constraints.

High-density codes: the complexity is exponential in n.

[Feldman et al.]: An equivalent relaxation with O(n^3) constraints.

Do we need all the constraints?

Properties

Definition: a constraint of the form Σ_{i ∈ S} x_i - Σ_{i ∈ N \ S} x_i ≤ |S| - 1 is a cut at a point x̂ if it is violated at x̂, i.e., Σ_{i ∈ S} x̂_i - Σ_{i ∈ N \ S} x̂_i > |S| - 1.

Theorem 1: At any given point, at most one of the constraints introduced by each parity check can be violated.

There is an O(m · d_max) procedure to find all these cuts (if any), where d_max is the maximum check-node degree.

Adaptive LP Decoding

Reduce the complexity by decreasing the number of constraints/vertices.

Do not use a constraint until it is violated.

Algorithm 1 (Adaptive LP):

1. Set up the problem with a minimal number of constraints that guarantees boundedness of the result.

2. Find the optimum point x^(k) by linear programming.

3. For each check node, check whether it introduces a cut; if so, add the cut to the set of constraints.

4. If at least one cut was added, go to step 2; otherwise, we have found the LP solution.
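
Not spelled out on the slide: Theorem 1 suggests an O(d) cut search per check. My reconstruction below greedily places in S every variable above 1/2, toggles the coordinate nearest 1/2 to make |S| odd, and then tests the single resulting candidate constraint; the function name find_cut is mine.

```python
import numpy as np

def find_cut(xhat, N):
    """Search one check node (neighborhood N, an index array) for a violated
    odd-set constraint at the point xhat, in O(|N|) time.
    Returns (S, |S|-1) describing the cut, or None if no cut exists."""
    x = xhat[N]
    in_S = x > 0.5                      # unconstrained minimizer of the slack
    if in_S.sum() % 2 == 0:             # fix parity: |S| must be odd
        i = np.argmin(np.abs(2 * x - 1))
        in_S[i] = ~in_S[i]              # toggle the coordinate nearest 1/2
    # slack = sum_{i in S}(1 - x_i) + sum_{i not in S} x_i; a cut iff slack < 1
    slack = (1 - x)[in_S].sum() + x[~in_S].sum()
    if slack < 1.0:
        return N[in_S], int(in_S.sum()) - 1
    return None                         # by Theorem 1, no other subset cuts

xhat = np.array([0.9, 0.1, 0.8, 0.7])   # a fractional LP solution (made up)
print(find_cut(xhat, np.arange(4)))     # one check over all four variables
```

On the example above the search returns S = {0, 2, 3} with right-hand side 2; indeed x̂_0 + x̂_2 + x̂_3 - x̂_1 = 2.3 > 2, so the constraint is a cut.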

Upper Bound on the Complexity

Theorem 2: Algorithm 1 converges in at most n iterations.

Proof outline: The final solution is determined by n independent constraints; call them κ_1, …, κ_n.

Each intermediate solution x^(k) must violate at least one of the κ_i's.

Hence, there are at most n intermediate solutions/iterations.

Corollary 2: Algorithm 1 has at most n + nm constraints at the final iteration.

Proof: We start with n constraints, and each of the m parity checks adds at most one constraint per iteration.

Numerical Results at Low SNR

(Figures: iteration and constraint counts, for fixed length n = 360 and rate R = 0.5, and for fixed length n = 120 and variable-node degree d_v = 3.)

Observation: Adaptive LP decoding converges in O(1) iterations and uses fewer than 2 constraints per parity check.

Numerical Results: Gain in Running Time

(Figure: decoding time versus block length; dashed lines: d_c = 6, solid lines: d_c = 8.)

Reducing the number of constraints translates into a gain in running time:

About 100× faster than the standard implementation for d_c = 8.

Even faster if we use a “warm start” at each iteration.

The time remains nearly constant even with a high-density code (d_c = O(n)).

The LP solver is not making use of the sparsity!

Open Problems

Finite-length analysis of ensembles of LDPC codes under LP decoding

Computing the thresholds under LP decoding

Design LP solvers that exploit the properties of the decoding problem

Given a code, find its best Tanner graph representation for LP decoding


Conclusion

LDPC codes are becoming mainstream in coding technology.

Already implemented in 3G and LAN standards

Many new applications, beyond LDPC decoding, are being studied for MP algorithms:

Joint equalization and decoding

Classical combinatorial problems, e.g. SAT

Connections to other fields are being made

Coding theory is being invaded by statistical physicists!

Many important questions are still open, waiting for you!

Some References

R. G. Gallager, Low-Density Parity-Check Codes, M.I.T. Press, Cambridge, MA, 1963.

T. Richardson and R. Urbanke, Modern Coding Theory, Cambridge University Press (preliminary version available online at Urbanke's webpage at EPFL).

Special Issue on Codes on Graphs and Iterative Algorithms, IEEE Transactions on Information Theory, February 2001.

Thanks!
