Encoding Problem From Pinter

bZNyQ7C2

I'm working through Charles C. Pinter's A Book of Abstract Algebra. I ran into a snag on page 39. The subject is maximum-likelihood decoding of binary codes.

He uses the notation \(\displaystyle {\mathbb{B}^5}\) for the set of all binary words of length 5. Is that notation standard? I'm new to this. He also says that any subset of \(\displaystyle {\mathbb{B}^5}\) is called a code, and he gives as an example the code \(\displaystyle {C_1}\) consisting of the following eight binary words: 00000, 00111, 01001, 01110, 10011, 10100, 11010, 11101. In Problem 4 on page 40, he asks us to decode the following six words: 11111, 00101, 11000, 10011, 10001, and 10111. To decode a word means to find the codeword of \(\displaystyle {C_1}\) that is closest to it, where the distance between two words is the number of positions in which they differ. I hope this is clear.
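In case it helps, here is the little Python script I wrote to check my answers below. This is just my own brute-force sketch, not anything from the book:

# Brute-force maximum-likelihood decoding over Pinter's code C1.
C1 = ["00000", "00111", "01001", "01110", "10011", "10100", "11010", "11101"]

def distance(u, v):
    # Number of positions in which the two words differ.
    return sum(a != b for a, b in zip(u, v))

def decode(word, code=C1):
    # Return the minimum distance and ALL codewords that achieve it,
    # so an ambiguous decoding shows up as a list with more than one entry.
    best = min(distance(word, c) for c in code)
    return best, [c for c in code if distance(word, c) == best]

Returning every codeword at the minimum distance makes an ambiguous decoding easy to spot.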

I found that 11111 decodes to 11101. The distance is 1, because they differ in only one place, the fourth.
Similarly, 00101 decodes to 00111. Again the distance is 1.
Also, 11000 decodes to 11010, the distance being 1 again.

I'm pretty sure I'm right so far. But this is what I get for the last three words:

10011 decodes to 10011, distance 0
10001 decodes to 10011, distance 1

Each of them seems to decode to one and only one codeword in \(\displaystyle {C_1}\), but they both decode to the same one. That doesn't seem satisfactory for a good code.

For the last word, I found that 10111 decodes to either 00111 or 10011. In either case the distance is 1.

Here's a note the author puts in: "You may have noticed that the last two words in part 4 had ambiguous decodings. For example, 10111 may be decoded as either 10011 or 00111. This situation is clearly unsatisfactory. We shall see next what conditions will ensure that every word can be decoded into only one possible codeword."

My problem is that he says "the last two words." The next-to-last word, 10001, decodes to only one codeword, if I'm not mistaken.
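To double-check, I ran all six words through the script above:

for w in ["11111", "00101", "11000", "10011", "10001", "10111"]:
    print(w, decode(w))
# 11111 (1, ['11101'])
# 00101 (1, ['00111'])
# 11000 (1, ['11010'])
# 10011 (0, ['10011'])
# 10001 (1, ['10011'])
# 10111 (1, ['00111', '10011'])

Only 10111 comes back with two nearest codewords; 10001 has exactly one.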

I wonder if the author didn't make a mistake.
 

Your distance calculations and decodings are all correct. And the case of 10001 is fine: it's a version of 10011 that was received with a single-bit error, and it was correctly decoded back to 10011. Different received words decoding to the same codeword is exactly what error correction is supposed to do.



The ambiguous decoding of 10111, however, is a real problem, and it's the usual problem encountered in high-noise situations. Here the code isn't robust enough for the noise environment, and you need to increase the minimum Hamming distance between codewords.
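To put a number on that, here is a quick Python sketch of my own (not from the book). The minimum distance of a code is the smallest Hamming distance between two distinct codewords, and a code is only guaranteed to correct t errors per word when its minimum distance is at least 2t + 1:

from itertools import combinations

C1 = ["00000", "00111", "01001", "01110", "10011", "10100", "11010", "11101"]

def distance(u, v):
    # Hamming distance: number of positions in which the words differ.
    return sum(a != b for a, b in zip(u, v))

# Smallest distance between any two distinct codewords of C1.
d_min = min(distance(u, v) for u, v in combinations(C1, 2))
print(d_min)             # 2 (e.g. 00000 vs 01001)
print((d_min - 1) // 2)  # 0 errors guaranteed correctable

Since the minimum distance of C1 is only 2, it can detect a single-bit error but cannot guarantee to correct one, which is exactly why a word like 10111 can end up equidistant from two codewords.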


I think the author is a bit mistaken. The first case, 10001, isn't an error; it's correct operation.
 
I gather you think my distance calculations and decodings were correct. That's good to hear. And you agree with my view that there's a mistake in the book. I know that happens, but I'm always reluctant to say it.
 