Representation of Basic Information

The basic functional units of a computer are made of electronic circuits and work with electrical signals. We provide input to the computer as electrical signals and receive the output as electrical signals.

There are two basic types of electrical signals, namely analog and digital. Analog signals are continuous in nature, while digital signals are discrete.

An electronic device that works with continuous signals is known as an analog device, and one that works with discrete signals is known as a digital device. Nowadays most computers are digital, and we will deal with digital computers in this course.

A computer is a digital device that works with two levels of signal. We call these two levels High and Low. The High-level signal corresponds to some high voltage (say 5 V or 12 V) and the Low-level signal corresponds to a low voltage (say 0 V). This convention is known as positive logic; there are other conventions as well, such as negative logic.

Since a computer is a digital electronic device, we have to deal with these two kinds of electrical signals. But while designing a new computer system or understanding the working principles of a computer, it is inconvenient to write or work with 0 V or 5 V. For convenience, we use logical values instead:

LOW (L)   -   will represent    0 V
HIGH (H)  -   will represent    5 V

A computer is used mainly to solve numerical problems, and it is not convenient to work with this symbolic representation. For that purpose we move to a numeric representation, in which 0 represents LOW and 1 represents HIGH.

0     means    LOW
1     means    HIGH

To describe the working principles of a computer, we therefore use only two numeric symbols, 0 and 1. All the functionality of a computer can be captured with 0 and 1, and its theoretical foundation is two-valued Boolean algebra.

With the symbols 0 and 1, we have a mathematical system known as the binary number system. The binary number system is used to represent and manipulate information in a computer; this information is essentially strings of 0s and 1s.

The smallest unit of information represented in a computer is the bit (binary digit), which is either 0 or 1. Four bits together are known as a nibble, and eight bits together are known as a byte.
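These units can be sketched in plain Python using strings of 0s and 1s (the variable names here are illustrative, not standard terminology in any library):

```python
# A minimal sketch of the units above: a bit is one binary digit,
# a nibble is 4 bits, and a byte is 8 bits.
bit = "1"
nibble = "1010"
byte = "01000001"

assert len(nibble) == 4
assert len(byte) == 8

# int() can interpret such a string as a base-2 number:
print(int(byte, 2))  # the byte 01000001 equals 65 in decimal
```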

The document Representation of Basic Information is a part of the Computer Science Engineering (CSE) Course Computer Architecture & Organisation (CAO).

FAQs on Representation of Basic Information

1. What's the difference between decimal, binary, and hexadecimal number systems in computer architecture?
Ans. Decimal uses base 10 (digits 0-9), binary uses base 2 (0 and 1), and hexadecimal uses base 16 (0-9, A-F). Computers process data in binary internally, while hexadecimal provides a compact notation for representing binary values. Understanding these number systems is essential for learning how information is stored and manipulated in digital circuits.
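As a quick sketch, Python's built-in conversions show the same value in all three bases (nothing here is specific to any particular architecture):

```python
# The same value written in the three bases discussed above.
n = 255
print(bin(n))  # binary:      0b11111111
print(hex(n))  # hexadecimal: 0xff
print(n)       # decimal:     255

# Each hex digit corresponds to exactly one nibble (4 binary digits),
# which is why hex is a compact notation for binary:
assert int("ff", 16) == int("11111111", 2) == 255
```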
2. How do I convert a decimal number to binary for my CAO exam?
Ans. Divide the decimal number repeatedly by 2, recording remainders from bottom to top. For example, 13 in decimal becomes 1101 in binary. This conversion method demonstrates the positional notation principle underlying all number systems. Practising multiple conversions helps students recognise patterns and develop speed for exam scenarios involving data representation.
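The repeated-division method above can be sketched in Python (the function name `to_binary` is our own, chosen for illustration):

```python
def to_binary(n: int) -> str:
    """Convert a non-negative integer to a binary string by repeatedly
    dividing by 2 and reading the remainders from bottom to top."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # remainder is the next bit
        n //= 2
    # Remainders are collected lowest bit first, so reverse them:
    return "".join(reversed(bits))

print(to_binary(13))  # 1101, matching the worked example
```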
3. Why do computers use binary instead of decimal for storing information?
Ans. Binary aligns with computer hardware's physical nature: transistors operate as switches with two states (on/off), representing 1 and 0. This two-state logic simplifies circuit design and minimises errors. Decimal would require ten distinct voltage levels, making systems unreliable and complex. Binary's efficiency in digital electronics makes it the foundation of all modern information representation.
4. What does ASCII code mean and how is it used to represent characters?
Ans. ASCII (American Standard Code for Information Interchange) assigns unique numerical codes to characters, letters, and symbols using 7 or 8 bits. For instance, the character 'A' is represented as 65 in decimal or 01000001 in binary. This encoding scheme enables computers to store and transmit text data, making character representation standardised across devices and systems.
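The 'A' example can be checked directly with Python's `ord` and `chr` built-ins, which map characters to and from their code points (for ASCII characters these coincide with the 7/8-bit codes described above):

```python
# ord() gives a character's code; chr() goes the other way.
code = ord("A")
print(code)                 # 65 in decimal
print(format(code, "08b"))  # 01000001, the 8-bit binary form
assert chr(65) == "A"
```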
5. What's the purpose of learning about sign-magnitude and two's complement representations in computer organisation?
Ans. Sign-magnitude and two's complement are methods for representing negative numbers in binary. Two's complement is preferred because it simplifies arithmetic operations and eliminates the +0 and -0 ambiguity present in sign-magnitude. Mastering these representations helps students understand how processors perform calculations with signed integers, which is crucial for grasping fundamental computer organisation principles and memory management.
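A small sketch of 8-bit two's complement, assuming the usual mask-and-format trick (the helper `twos_complement` is our own illustrative name, not a library function):

```python
def twos_complement(n: int, bits: int = 8) -> str:
    """Return the bits-wide two's-complement pattern of n as a string.
    Masking with (2**bits - 1) keeps only the low `bits` bits, which
    for negative n yields its two's-complement encoding."""
    return format(n & ((1 << bits) - 1), f"0{bits}b")

print(twos_complement(5))    # 00000101
print(twos_complement(-5))   # 11111011
# Unlike sign-magnitude, there is only a single zero pattern:
assert twos_complement(0) == "00000000"
```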