Floating Point Representation
The floating point representation of a number has two parts. The first part represents a signed fixed-point number called the mantissa (or significand). The second part designates the position of the decimal (or binary) point and is called the exponent. For example, the decimal number +6132.789 is represented in floating point with fraction and exponent as follows.

Fraction                   Exponent
+0.6132789                +04

This representation is equivalent to the scientific notation +0.6132789 × 10^+4. A floating point number is always interpreted to represent a value of the form ±M × R^±E.
Only the mantissa M and the exponent E are physically represented in the register (including their sign). The radix R and the radix point position of the mantissa are always assumed.

A floating point binary number is represented in a similar manner, except that it uses base 2 for the exponent.

For example, the binary number +1001.11 is represented with an 8-bit fraction and a 6-bit exponent as follows.

+0.1001110 × 2^100 (the exponent 100 is written in binary; it equals decimal 4)

Fraction            Exponent
01001110           000100

The fraction has a zero in the leftmost position to denote a positive sign. The floating point number is equivalent to M × 2^E = +(0.1001110)₂ × 2^+4.
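The decomposition above can be observed directly in Python: the standard library's math.frexp splits a float into a normalized fraction M (with 0.5 ≤ |M| < 1) and a base-2 exponent E, exactly the mantissa/exponent pair described here.

```python
import math

# math.frexp returns (m, e) such that value == m * 2**e and 0.5 <= |m| < 1,
# i.e. a normalized fraction and a base-2 exponent, mirroring M * 2**E above.
m, e = math.frexp(9.75)   # 9.75 is +1001.11 in binary
print(m, e)               # prints: 0.609375 4  (0.609375 == 0.100111 in binary)
assert m * 2**e == 9.75
```

Note that 0.609375 is exactly the fraction 0.100111 of the worked example, and the exponent 4 matches 2^+4.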


There are four basic operations in floating point arithmetic. For addition and subtraction, it is necessary to ensure that both operands have the same exponent value. This may require shifting the significand of one operand (while adjusting its exponent) to achieve alignment. Multiplication and division are more straightforward.

A floating point operation may produce one of these conditions:

  • Exponent overflow: A positive exponent exceeds the maximum possible exponent value.
  • Exponent underflow: A negative exponent is less than the minimum possible exponent value.
  • Significand overflow: The addition of two significands of the same sign may result in a carry out of the most significant bit.
  • Significand underflow: In the process of aligning significands, digits may flow off the right end of the significand.
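The first two conditions can be observed with ordinary Python floats, whose IEEE 754 double format has a bounded exponent range: pushing the exponent past its maximum saturates to infinity, while pushing it below the minimum produces gradual underflow through subnormal numbers toward zero.

```python
import math

# Exponent overflow: the result's exponent exceeds the representable
# maximum, so IEEE 754 doubles saturate to infinity.
big = 1e308
print(big * 10)        # prints: inf

# Exponent underflow: the result drops below the smallest normal number
# (about 2.2e-308) into the subnormal range on its way toward 0.0.
tiny = 1e-308
print(tiny / 1e10)     # a subnormal value, smaller than any normal double
assert math.isinf(big * 10)
assert 0 < tiny / 1e10 < 2.3e-308
```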

Floating Point Addition and Subtraction
In floating point arithmetic, addition and subtraction are more complex than multiplication and division because of the need for alignment. The algorithm for floating point addition and subtraction has four phases.
1. Check for zeros: Because addition and subtraction are identical except for a sign change, the process begins by changing the sign of the subtrahend if the operation is a subtraction. Next, if either operand is zero, the other is reported as the result.
2. Align the significands: Alignment may be achieved either by shifting the smaller number to the right (increasing its exponent) or by shifting the larger number to the left (decreasing its exponent).
3. Add or subtract the significands: The aligned significands are then added or subtracted as required.
4. Normalize the result: Normalization consists of shifting the significand digits left until the most significant digit is nonzero.
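The four phases above can be sketched as follows. This is a minimal illustration, assuming unsigned operands held as hypothetical (integer significand, base-2 exponent) pairs; sign handling, rounding, and the overflow checks discussed earlier are omitted.

```python
def fp_add(a, b):
    """Add two numbers given as (integer significand, base-2 exponent)."""
    (ma, ea), (mb, eb) = a, b
    # Phase 1: check for zeros -- if one operand is zero, the other is the result.
    if ma == 0:
        return b
    if mb == 0:
        return a
    # Phase 2: align the significands. Here the larger-exponent operand is
    # shifted left (decreasing its exponent), which avoids losing bits with
    # integer significands; shifting the smaller one right also works.
    while ea > eb:
        ma <<= 1; ea -= 1
    while eb > ea:
        mb <<= 1; eb -= 1
    # Phase 3: add the aligned significands.
    m, e = ma + mb, ea
    # Phase 4: normalization is a no-op in this unbounded-integer sketch; a
    # real unit would shift until the most significant bit sits at a fixed place.
    return (m, e)

# 1.5 (0b11 * 2**-1) plus 0.25 (0b1 * 2**-2):
print(fp_add((0b11, -1), (0b1, -2)))   # prints: (7, -2), i.e. 7 * 2**-2 == 1.75
```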


Floating Point Multiplication 
The multiplication algorithm can be subdivided into four parts.
1. Check for zeros.
2. Add the exponents.
3. Multiply the mantissas.
4. Normalize the product.
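The four multiplication steps can be sketched on the same hypothetical (integer significand, base-2 exponent) pairs used above; again sign handling and rounding are omitted.

```python
def fp_mul(a, b):
    """Multiply two numbers given as (integer significand, base-2 exponent)."""
    (ma, ea), (mb, eb) = a, b
    # Step 1: check for zeros -- a zero operand short-circuits to zero.
    if ma == 0 or mb == 0:
        return (0, 0)
    # Step 2: add the exponents.
    e = ea + eb
    # Step 3: multiply the significands.
    m = ma * mb
    # Step 4: normalize -- here, strip trailing zero bits while bumping the
    # exponent, a stand-in for fixing the most significant bit's position.
    while m % 2 == 0:
        m //= 2; e += 1
    return (m, e)

print(fp_mul((0b11, -1), (0b1, -2)))   # prints: (3, -3), i.e. 1.5 * 0.25 == 0.375
```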


Floating Point Division
The division algorithm can be subdivided into five parts.
1. Check for zeros.
2. Initialize the registers and evaluate the signs.
3. Align the dividend.
4. Subtract the exponents.
5. Divide the mantissas.
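A sketch of the division steps on the same hypothetical (integer significand, base-2 exponent) pairs. Register initialization and sign evaluation are folded into the dividend alignment, which pre-shifts the dividend so the integer division keeps a fixed number of quotient bits; the PRECISION constant is an assumption of this sketch, not part of any real format.

```python
def fp_div(a, b):
    """Divide a by b, each given as (integer significand, base-2 exponent)."""
    (ma, ea), (mb, eb) = a, b
    # Step 1: check for zeros.
    if mb == 0:
        raise ZeroDivisionError("divisor significand is zero")
    if ma == 0:
        return (0, 0)
    # Steps 2-3: align the dividend by pre-shifting it left so the integer
    # division below retains PRECISION fractional quotient bits.
    PRECISION = 8
    ma <<= PRECISION
    ea -= PRECISION
    # Step 4: subtract the exponents.
    e = ea - eb
    # Step 5: divide the significands.
    m = ma // mb
    return (m, e)

m, e = fp_div((0b11, -1), (0b1, -2))   # 1.5 / 0.25
print(m * 2**e)                        # prints: 6.0
```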


FAQs on Floating Point Representation

1. What is floating point representation in computer science engineering?
Ans. Floating point representation is a method used in computer science engineering to represent real numbers with a fixed number of bits. It allows for a wide range of numbers to be represented, including both very large and very small numbers, by using a combination of a sign bit, an exponent, and a mantissa.
2. How does floating point representation work in computer science engineering?
Ans. In floating point representation, a real number is represented as a binary number of the form ±mantissa × base^exponent. The sign bit determines whether the number is positive or negative, the mantissa represents the significant digits of the number, and the exponent determines the position of the radix point. This allows a dynamic range of numbers to be represented using a fixed number of bits.
3. What are the advantages of using floating point representation in computer science engineering?
Ans. Floating point representation offers several advantages in computer science engineering. Firstly, it allows for a wide range of numbers to be represented, including both very large and very small numbers. Secondly, it provides a higher precision compared to fixed point representation, making it suitable for applications that require accurate calculations. Lastly, it allows for efficient arithmetic operations on real numbers, such as addition, subtraction, multiplication, and division.
4. What are the limitations of floating point representation in computer science engineering?
Ans. Despite its advantages, floating point representation has certain limitations. One limitation is that it cannot represent all real numbers exactly due to the finite number of bits used. This can lead to rounding errors and loss of precision in calculations. Additionally, floating point arithmetic operations can be slower and less efficient compared to integer arithmetic operations. It is also important to be aware of the limitations of floating point representation when comparing numbers for equality due to the potential for small rounding errors.
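The equality caveat is easy to demonstrate: 0.1 has no exact binary representation, so sums drift slightly from the exact decimal answer, and a tolerance-based comparison such as the standard library's math.isclose should be used instead of ==.

```python
import math

# 0.1 and 0.2 are rounded on conversion to binary, so their sum is not
# exactly the double nearest to 0.3.
print(0.1 + 0.2 == 0.3)              # prints: False
print(0.1 + 0.2)                     # prints: 0.30000000000000004

# Compare with a relative tolerance instead of exact equality.
print(math.isclose(0.1 + 0.2, 0.3))  # prints: True
```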
5. How is floating point representation implemented in computer hardware?
Ans. Floating point representation is implemented in computer hardware using specialized circuits called floating point units (FPUs). These FPUs are responsible for performing arithmetic operations on floating point numbers. They consist of various components, including registers to store the sign, exponent, and mantissa, as well as logic circuits to perform arithmetic operations. The IEEE 754 standard is commonly used for floating point representation in computer hardware.