
In an adder design, overflow is the first problem we have to consider. Let’s look at the following examples. (B = 4 bits)

Example 1

Let a = -0.5 = 1.100 and b = 0.75 = 0.110. A binary addition of 1.100 and 0.110 produces 10.010. If we discard the highest bit (1 in this case), we obtain 0.010 = 0.25, which is the correct answer.

Example 2

Let a = 0.5 = 0.100 and b = 0.75 = 0.110. A binary addition of 0.100 and 0.110 produces 01.010. If we discard the highest bit again, we have 1.010 = -0.75, which is obviously a wrong answer. This is called an overflow problem in binary addition: although both a and b are in the number system, the sum a + b is larger than 1 and therefore cannot be represented in it. A conventional way to avoid this problem is to scale the inputs to the binary adder by a factor of 0.5 so that the result is guaranteed to be in the number system. This can be called input scaling because the inputs to the binary adder are scaled. A problem associated with such an input-scaling scheme is the loss of precision. If several additions are performed in a row, the final result may suffer from underflow. The following example shows an extreme case.

Example 3

Let a = 0.001 and b = 0.001. If the input-scaling scheme were used in this addition, 0.000 and 0.000 would be the inputs to the 4-bit adder and the result would be 0.000: a non-zero result would become zero.

A better way to handle the overflow problem is to use output scaling. In the previous example, a + b without input scaling should be 00.010. Scaling the output by a factor of 0.5 generates 0.001, which is a better result. Application of this scheme to Example 2 provides the correct result. However, this scheme seems to give a wrong answer for the case in Example 1. The reason is that the sign bit is treated in the same way as the other bits, although the symbol “1” in the sign-bit position represents a value of −1 while a “1” at any other position represents a value of 1. Therefore, with a special design of the sign-bit adder, this problem can be solved.
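As an illustrative sketch (not the hardware design), the two schemes can be modeled by treating the 4-bit fixed-point values of the examples as signed integer multiples of the LSB weight 1/8; the function names here are ours, chosen for clarity:

```python
B = 4
STEP = 1.0 / (1 << (B - 1))   # LSB weight: 1/8 for B = 4

def q(x):
    """Quantize x to a signed count of STEPs (x is assumed representable)."""
    return round(x / STEP)

def add_input_scaled(x, y):
    """Halve each input first (the arithmetic shift drops each input LSB), then add."""
    return ((q(x) >> 1) + (q(y) >> 1)) * STEP

def add_output_scaled(x, y):
    """Add at full (B+1)-bit precision, then halve the result."""
    return ((q(x) + q(y)) >> 1) * STEP

# Example 2: 0.5 + 0.75 -- both schemes return (a + b)/2 = 0.625 without overflow.
# Example 3: 1/8 + 1/8 -- input scaling underflows to 0; output scaling keeps 1/8.
print(add_input_scaled(0.125, 0.125))   # 0.0
print(add_output_scaled(0.125, 0.125))  # 0.125
```

Both schemes compute (a + b)/2; they differ only in where the halving discards a bit, which is why output scaling preserves the small sum of Example 3.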

The truth table for a regular full adder is shown in Table 3. From this truth table, the logic relationship of inputs and outputs can be derived as follows:

Table 3. Truth table for a regular full adder

a  b  cin | cout  sum
0  0  0   |  0     0
0  0  1   |  0     1
0  1  0   |  0     1
0  1  1   |  1     0
1  0  0   |  0     1
1  0  1   |  1     0
1  1  0   |  1     0
1  1  1   |  1     1

sum = parity-checking(a, b, cin)

cout = majority-voting(a, b, cin)

where parity-checking(a,b,cin) can be implemented using (a xor b xor cin) and majority-voting(a,b,cin) can be implemented using ((a and b) or (b and cin) or (a and cin)). For the sign bit in output scaling, the truth table is as follows, keeping in mind that a symbol “1” means a value − 1 for inputs a and b, a value − 2 for output cout, and a value 1 for input cin and output sum.

Table 4. Truth table for sign bit full adder

a  b  cin | cout  sum
0  0  0   |  0     0
0  0  1   |  0     1
0  1  0   |  1     1
0  1  1   |  0     0
1  0  0   |  1     1
1  0  1   |  0     0
1  1  0   |  1     0
1  1  1   |  1     1

From this truth table, the logic relationship of inputs and outputs can be derived as follows:

sum = parity-checking(a, b, cin)

cout = majority-voting(a, b, NOT cin)

The logic for sum is the same as that in a regular full adder. The only difference is in the logic for cout, where one more inverter is needed to generate the complement of cin for the majority-voting circuit. Figure 34 shows a regular full adder and a sign-bit full adder for output scaling.
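Both adders can be checked exhaustively against Tables 3 and 4; a small sketch (Python as illustration, not the gate-level design):

```python
def full_adder(a, b, cin):
    """Regular full adder (Table 3)."""
    s = a ^ b ^ cin                                     # parity-checking(a, b, cin)
    cout = (a & b) | (b & cin) | (a & cin)              # majority-voting(a, b, cin)
    return cout, s

def sign_bit_full_adder(a, b, cin):
    """Sign-bit full adder for output scaling (Table 4): cin inverted for the vote."""
    s = a ^ b ^ cin                                     # same sum logic
    cout = (a & b) | (b & (1 - cin)) | (a & (1 - cin))  # majority-voting(a, b, NOT cin)
    return cout, s

# Exhaustive check of both truth tables, using the bit weights from the text:
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            cout, s = full_adder(a, b, cin)
            assert a + b + cin == 2 * cout + s          # every bit weighs +1
            cout, s = sign_bit_full_adder(a, b, cin)
            assert -a - b + cin == -2 * cout + s        # a, b weigh -1; cout weighs -2
```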


Figure 34. A regular full adder (a) and a sign-bit full adder (b) for output scaling

Figure 35 shows bit-parallel implementations of the input-scaling and output-scaling schemes. For input scaling, aB−1 and bB−1 are discarded before a and b go to the B-bit adder, and a0 and b0 are sign-extended into the left-most full adder. The sum bit of the left-most full adder is the sign bit of the result, and the cout bit is discarded. For output scaling, no bits are discarded at the input and there are no sign-extensions either. The sum bit of the right-most full adder is discarded, and the cout bit of the left-most full adder, which has one more inverter than the regular full adder, is the sign bit of the result.


Figure 35. Parallel implementations of an adder with input scaling (a) and output scaling (b)

Figure 36 shows bit-serial implementations of the input-scaling and output-scaling schemes. For input scaling, the control signal con1 determines whether a “0” or the previous carry-out should be the input to the carry-in of the full adder. In the first cycle of each addition, con1 allows “0” to pass; during the rest of the addition cycles, con1 allows the previous carry-out to pass. The control signal con2 provides a qualified φ1 signal to the input registers to prevent the LSBs of the input from getting into the full adder, so that sign-extension is automatically performed for the previous addition. A total of B cycles are needed for each addition, and the output result is obtained with a two-cycle latency.


Figure 36. Bit-serial implementations of an adder with input scaling (a) and output scaling (b)

For output scaling, an extra XOR gate is needed as a conditional inverter to generate a complement signal for the majority-voting circuit when the sign bits arrive. The control signal con1 clocks a “0” into the register one cycle before the LSBs of the inputs arrive. The con1 signal also controls the conditional inverter (XOR gate) because, in a pipelined operation, the cycle before new data arrive is the one carrying the sign bits of the previous input data. The control signal con2 determines whether the output of parity-checking or the output of majority-voting should be passed. At the time new data arrive, the carry-out of the previous addition should pass, and during the rest of the cycles, sum should pass. While the con2 signal allows the previous carry-out to pass, it actually prevents the LSB of the new result from passing to the output. This is equivalent to discarding the LSB of the output, which is part of the output-scaling scheme. As in the input-scaling implementation, it takes B cycles to perform a B-bit addition, and the output is obtained with a two-cycle latency.
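The core of either serial implementation — one full adder reused for B cycles, with the carry held in a one-bit register — can be sketched in software; the control signals con1/con2 are abstracted into the loop here:

```python
def bit_serial_add(a_bits, b_bits):
    """Bit-serial addition, LSB first: one full adder plus a one-bit carry register."""
    carry = 0                                  # con1 forces a 0 carry in the first cycle
    out = []
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)              # parity-checking
        carry = (a & b) | (b & carry) | (a & carry)  # majority-voting, registered
    return out

# 0.010 (0.25) + 0.100 (0.5), sent LSB first: result 0.110 = 0.75.
print(bit_serial_add([0, 1, 0, 0], [0, 0, 1, 0]))  # [0, 1, 1, 0]
```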


URL: https://www.sciencedirect.com/science/article/pii/S009052670680037X

Digital Fiber Modulation and Deep Fiber Architectures

Walter Ciciora, ... Michael Adams, in Modern Cable Television Technology (Second Edition), 2004

19.1 Introduction

This chapter covers binary (digital) modulation of a fiber-optic cable as well as deep fiber architectures. The two are covered together because binary optical transmission is used in last-mile (to the home and/or business) applications. In addition, binary optical transmission is used extensively in metropolitan loops and in intercity trunks. An advantage of digital modulation is that it can operate with a much lower signal-to-noise ratio than analog modulation can, as shown in Figure 12.13.

Figure 19.1 illustrates the difference between digital and analog optical modulation. Figure 19.1(a) represents digital modulation of the optical transmitter. The data is not modulated onto an RF carrier; rather, it directly modulates the laser, turning it on and off. If more than one datastream is provided, the streams are time-division multiplexed (TDM'ed) by switching first to one and then to another until all have been sampled. Since no data can be lost during the multiplexing process, the output data rate must be the sum of all input data rates, usually with additional bits added to synchronize data recovery. The data rate actually carried by the laser is called the wire rate.


Figure 19.1. Difference between digital and analog optical modulation.

Contrast this to the common cable television technique of broadcast optical transmission, as covered in Chapter 12 and shown in Figure 19.1(b). In this system, we often transmit digital signals, which may be TDM'ed, but they are then modulated onto RF carriers, normally using either 64- or 256-QAM modulation. These digitally modulated carriers are combined with analog-modulated carriers, and the sum of all signals modulates an analog laser, which operates in its linear range, where output power is proportional to input current. The use of multiple carriers to transmit different signals is called frequency division multiplexing, or FDM.

Usually the term analog is applied to the optical transmitter in Figure 19.1(b). It is true that this must be an analog transmitter, since a digital transmitter would never be able to carry the multiple frequencies without creating intolerable intermodulation distortion. However, the signals carried as modulated signals on the analog transmitter may themselves be either analog or digital.


URL: https://www.sciencedirect.com/science/article/pii/B9781558608283500217

Protocol Layers

Pierre Duhamel, Michel Kieffer, in Joint Source-Channel Decoding, 2010

Checksums

This is a generalization of the previous technique; rather than working bit by bit, the packet is treated as a sequence of integers, each one of l bits. These words are added with 1s complement arithmetic. This means that any carry out of the l bits should be added back to the least significant bit (LSB) of the word. The checksum that is added to the packet is then the 1s complement of this 1s complement sum. By construction, the consistency check is as follows: perform the 1s complement addition of all l-bit sections (including the checksum); the result should be an “all 1s” word if no error occurred.

This should be clear from the following examples:

The 1s complement representation of fixed-point integers (8-bit) is recalled in Table 7.1 and is characterized by the fact that zero has two representations, in contrast with the more familiar 2s complement representation.

Table 7.1. The 1s Complement Representation

Binary      Decimal  Hex
0000 0000    0       00
0000 0001    1       01
0000 0010    2       02
0000 0011    3       03
…            …       …
1111 1111   −0       FF
1111 1110   −1       FE
1111 1101   −2       FD
1111 1100   −3       FC

The 1s complement addition, as recalled above, requires some feedback of the carry (carries), as illustrated in Example 7.3.

Example 7.3 (1s complement addition)

This example illustrates the addition of –3 and +7 when 1s complement representation is used.

First, the numbers must be represented in the appropriate manner: –3 is represented by FC and +7 is represented by 07. Then, addition may take place.

First, a classical binary addition (here represented in hexadecimal) is performed: FC + 07 = 01 03 in which 01 is the carry that has to be added to the LSB of the result (03) to give the correct result:

01 + 03 = 04, which is the 1s complement representation of the correct result: +4

So, the 1s complement sum is done by summing the numbers and adding the carry (or carries) to the result. We are now ready to process a checksum example.
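In code form, the end-around-carry rule can be sketched as follows (the word width and function name are illustrative, chosen to match the 8-bit example):

```python
def ones_complement_add(x, y, width=8):
    """1s-complement addition: any carry out of `width` bits is added back to the LSB."""
    mask = (1 << width) - 1
    s = x + y
    while s > mask:                       # end-around carry (may in principle repeat)
        s = (s & mask) + (s >> width)
    return s

print(hex(ones_complement_add(0xFC, 0x07)))  # 0x4, i.e. -3 + 7 = +4
```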

Example 7.4 (Simple Internet checksum example)

Suppose we have an 8-bit machine and that we want to send out the packet FE 05. Let’s calculate and verify the Internet checksum. This requires the following:

1.

Perform 1s complement addition of the words. This is done in two steps:

Binary addition of the words (here represented in hexadecimal form)

FE+05=01 03.

The 1s complement sum requires the addition of the carry to the 8-bit word as seen in the example above

03+01=04.

The 1s complement sum of FE + 05 is thus 04.

2.

The 1s complement of the 1s complement sum defines the Internet checksum, which turns out to be –04 = FB.

As a result, the packet will be sent as

FE 05 FB.

Now, at the receiving end, we add all the received bytes, including the checksum (again using classical binary addition)

FE+05+FB=01 FE.

The 1s complement sum is

FE+01=FF=−0,

which checks that the transmission was OK.
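The whole procedure — sender-side checksum and receiver-side verification — can be sketched as follows (helper names are ours, not from any particular protocol stack):

```python
def ones_complement_sum(words, width=8):
    """1s-complement sum of a list of `width`-bit words (with end-around carry)."""
    mask = (1 << width) - 1
    s = 0
    for w in words:
        s += w
        while s > mask:                   # fold any carry back into the LSB
            s = (s & mask) + (s >> width)
    return s

def checksum(words, width=8):
    """Internet checksum: the 1s complement of the 1s-complement sum."""
    return (~ones_complement_sum(words, width)) & ((1 << width) - 1)

packet = [0xFE, 0x05]
ck = checksum(packet)
print(hex(ck))                            # 0xfb, as in the example
# Receiver: the sum over data plus checksum must be the all-1s word (-0).
assert ones_complement_sum(packet + [ck]) == 0xFF
```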

Checksums require relatively little overhead but offer a relatively weak protection against multiple errors, compared with CRCs that are described below. In fact, even if the checksum appears good on a message that has been received, the message may still contain an undetected error. The probability of this is bounded by 2−C, where C is the number of checksum bits.


URL: https://www.sciencedirect.com/science/article/pii/B9780123744494000040

Introduction to Digital Logic Design

Ian Grout, in Digital Systems Design with FPGAs and CPLDs, 2008

5.2.3 Signed Binary Numbers

Unsigned (or straight) binary numbers are used when operations involve only positive numbers and the result of any operation is a positive number. However, in most DSP tasks, both the numbers and the results can be either positive or negative, and the unsigned binary number system cannot be used. The two coding schemes used to represent signed numbers are 1s complement and 2s complement.

The 1s complement of a number is obtained by changing (or inverting) each of the bits in the binary number (0 becomes a 1 and a 1 becomes a 0):

Original binary number: 10001100

1s complement: 01110011

The 2s complement is formed by adding 1 to the 1s complement:

Original binary number: 10001100

1s complement: 01110011

2s complement: 01110100
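Both complements can be formed mechanically; a small sketch operating on bit strings (the function names are illustrative):

```python
def ones_complement(bits):
    """1s complement: invert every bit of a binary string."""
    return ''.join('1' if b == '0' else '0' for b in bits)

def twos_complement(bits):
    """2s complement: the 1s complement plus 1, kept to the original width
    (any overflow out of the MSB is discarded)."""
    width = len(bits)
    n = (int(ones_complement(bits), 2) + 1) & ((1 << width) - 1)
    return format(n, f'0{width}b')

print(ones_complement('10001100'))  # 01110011
print(twos_complement('10001100'))  # 01110100
```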

The MSB of the binary number is used to represent the sign (0 = positive, 1=negative) of the number, and the remainder of the number represents the magnitude. It is therefore essential that the number of bits used is sufficient to represent the required range, as shown in Table 5.2. Here, only integer numbers are considered.

Table 5.2. Number range

Number of bits  Unsigned binary range  2s complement number range
4               0 to +15               −8 to +7
8               0 to +255              −128 to +127
16              0 to +65,535           −32,768 to +32,767

2s complement number manipulation is as follows:

To create a positive binary number from a positive decimal number, create the positive binary number for the magnitude of the decimal number where the MSB is set to 0 (indicating a positive number).

To create a negative binary number from a negative decimal number, create the positive binary number for the magnitude of the decimal number where the MSB is set to 0 (indicating a positive number), then invert all bits and add 1 to the LSB. Ignore any overflow bit from the binary addition.

To create a negative binary number from a positive binary number, where the MSB is set to 0 (indicating a positive number), invert all bits and add 1 to the LSB. Ignore any overflow bit from the binary addition.

To create a positive binary number from a negative binary number, where the MSB is set to 1 (indicating a negative number), invert all bits and add 1 to the LSB. Ignore any overflow bit from the binary addition.

The 2s complement number coding scheme is widely used in digital circuits and system design and so will be explained further. Table 5.3 shows the binary representations of decimal numbers for a four-bit binary number. In the unsigned binary number coding scheme, the binary number represents a positive decimal number from 0₁₀ to +15₁₀. In the 2s complement number coding scheme, the decimal number range is −8₁₀ to +7₁₀.

Table 5.3. Decimal to binary conversion

Decimal number  4-bit unsigned binary  4-bit 2s complement signed binary
+15             1111                   —
+14             1110                   —
+13             1101                   —
+12             1100                   —
+11             1011                   —
+10             1010                   —
+9              1001                   —
+8              1000                   —
+7              0111                   0111
+6              0110                   0110
+5              0101                   0101
+4              0100                   0100
+3              0011                   0011
+2              0010                   0010
+1              0001                   0001
0               0000                   0000
−1              —                      1111
−2              —                      1110
−3              —                      1101
−4              —                      1100
−5              —                      1011
−6              —                      1010
−7              —                      1001
−8              —                      1000

In this scheme, the most negative 2s complement number is 1₁₀ greater in magnitude than the most positive 2s complement number. The number range for an N-bit number is −2^(N−1) to +(2^(N−1) − 1).
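The range formula can be checked against the widths listed in Table 5.2; a one-function sketch:

```python
def twos_complement_range(n):
    """Representable range of an n-bit 2s-complement number: -2^(n-1) to +2^(n-1) - 1."""
    return -(1 << (n - 1)), (1 << (n - 1)) - 1

print(twos_complement_range(4))   # (-8, 7)
print(twos_complement_range(8))   # (-128, 127)
print(twos_complement_range(16))  # (-32768, 32767)
```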

Addition and subtraction are both undertaken by addition, with inversion where necessary (creating a negative number from a positive number and vice versa). Table 5.4 shows the cases for addition and subtraction of two numbers (A and B). It is essential to ensure that the two numbers have the same number of bits, that the MSB represents the sign of the binary number, and that the number of bits used is sufficient to represent the range of possible inputs and the range of possible outputs.

Table 5.4. 2s complement addition and subtraction

Arithmetic operation  Input A  Input B     Action (for any combination of input polarities)
Addition (A + B)      Augend   Addend      Add the augend to the addend and disregard any overflow.
Subtraction (A − B)   Minuend  Subtrahend  Negate (invert) the subtrahend, add this to the minuend, and disregard any overflow.

Figure 5.3 shows an arrangement where two inputs are either added or subtracted, depending on the logic level of a control input. This arrangement requires an adder, a complementer (a logical inversion of the input bits followed by the addition of 1, disregarding any overflow), and a digital switch (multiplexer).


Figure 5.3. Addition and subtraction (2's complement arithmetic)

Input numbers in the range −8₁₀ to +7₁₀ are represented by four bits in binary. However, the range for the result of an addition is −16₁₀ to +14₁₀, and the range for the result of a subtraction is −15₁₀ to +15₁₀. The result requires five bits in binary to represent the number range (one bit more than the number of bits required to represent the inputs), so the number of bits used to represent the inputs will be increased by one bit before the addition or subtraction:

In an unsigned binary number, to increase the wordlength (number of bits) by one bit, append a 0 to the number as the new MSB:

0010₂ = 00010₂

1010₂ = 01010₂

In a 2s complement number, to increase the wordlength by one bit, append a bit with the same value as the original MSB to the number as the new MSB:

0010₂ = 00010₂

1010₂ = 11010₂
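The two widening rules can be sketched together on bit strings (MSB first; the function names are illustrative):

```python
def zero_extend(bits, new_width):
    """Widen an unsigned binary number: prepend 0s as the new MSBs."""
    return '0' * (new_width - len(bits)) + bits

def sign_extend(bits, new_width):
    """Widen a 2s-complement number: replicate the original MSB (the sign bit)."""
    return bits[0] * (new_width - len(bits)) + bits

print(zero_extend('1010', 5))  # 01010
print(sign_extend('1010', 5))  # 11010
print(sign_extend('0010', 5))  # 00010
```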

Consider the addition of +2₁₀ and +3₁₀ using 2s complement numbers. The result should be +5₁₀. The two input numbers can be represented by three bits, but if 3-bit addition is undertaken, the result will be in error:

010₂ + 011₂ = 101₂, which read as a 3-bit 2s complement number is −3₁₀, not +5₁₀.

If, however, the input wordlength is increased by one bit before the addition is undertaken, the result becomes 0010₂ + 0011₂ = 0101₂ = +5₁₀, which is correct.
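A minimal sketch of why the extra bit matters (modular addition keeps only the stated number of result bits, as a fixed-width adder would):

```python
def add_bits(a, b, width):
    """Add and keep only `width` result bits, as a fixed-width adder would."""
    return (a + b) & ((1 << width) - 1)

def as_signed(bits, width):
    """Interpret a `width`-bit pattern as a 2s-complement value."""
    return bits - (1 << width) if bits >= (1 << (width - 1)) else bits

print(as_signed(add_bits(0b010, 0b011, 3), 3))    # -3: the carry corrupts the sign bit
print(as_signed(add_bits(0b0010, 0b0011, 4), 4))  # 5: correct after widening the inputs
```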

What happens when you add another bit?

Each additional bit doubles the number of possible values, moving to the next power of 2. Doubling memory usually means doubling the number of memory addresses that an application uses, or doubling the amount of RAM used.

When we add another bit, does the amount of numbers we can make multiply by 2?

When we add another bit, the amount of numbers we can make multiplies by 2. So a two-bit number can make 4 numbers, but a three-bit number can make 8.

When we add 1 to any number, do we get the same number?

Mathematically it is n + 1. So, the “successor” of any whole number is the number obtained by adding 1 to the given number. Hence, when 1 is added to a given number, we get the successor of the given number.

How many possibilities does a two-bit number have?

There are 4 (= 2²) possible combinations of 2 bits.