All questions of Compiler Design for Computer Science Engineering (CSE) Exam

The grammar A → AA | (A) | ε is not suitable for predictive-parsing because the grammar is
  • a)
    ambiguous
  • b)
    left-recursive
  • c)
    right-recursive
  • d)
    an operator-grammar
Correct answer is option 'B'. Can you explain this answer?

Yash Patel answered
Answer (B) is correct because the grammar is left-recursive, and a predictive (top-down) parser may fall into an infinite loop on left recursion. Answer (A) is not correct because ambiguity can arise from either left or right recursion.

The process of assigning load addresses to the various parts of the program and adjusting the code and the data in the program to reflect the assigned addresses is called
  • a)
    Assembly
  • b)
    parsing
  • c)
    Relocation
  • d)
    Symbol resolution
Correct answer is option 'C'. Can you explain this answer?

Crack Gate answered
Relocation is the process of assigning load addresses to the position-dependent code of a program and adjusting the code and data in the program to reflect the assigned addresses.
Hence option C is the answer.
Symbol resolution is the process of searching files and libraries to replace symbolic references or names of libraries with actual usable addresses in memory before running a program.

The number of possible min-heaps containing each value from {1, 2, 3, 4, 5, 6, 7} exactly once is _______.
  • a)
     80
  • b)
     8
  • c)
    20
  • d)
    210
Correct answer is option 'A'. Can you explain this answer?

Ravi Singh answered
Set the minimum element as the root (i.e. 1). Six elements remain; the left subtree must contain exactly 3 of them, which can be chosen in 6C3 = 20 ways. Each 3-element subtree can itself be arranged as a min-heap in 2 ways (its smallest element is forced to be the subtree root, and the other two can be placed in either order).
Total ways to build a min-heap with 7 elements = 6C3 * 2 * 2 = 20 * 2 * 2 = 80
Alternative approach – the total number of min- (or max-) heaps on N distinct elements satisfies the recurrence:
► T(N) = (N-1)Ck * T(k) * T(N-k-1), where k = number of nodes in the left subtree
► T(1) = 1, T(2) = 1, T(3) = 2
► T(4) = 3C2 * T(2) * T(1) = 3
► T(5) = 4C3 * T(3) * T(1) = 8
► T(6) = 5C3 * T(3) * T(2) = 20
► T(7) = 6C3 * T(3) * T(3) = 80
So, answer is 80.
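
For reference, the recurrence above can be evaluated directly. The following is a minimal Python sketch (the helper names count_min_heaps and left_subtree_size are illustrative, not from the question) that computes T(N) for the forced complete-binary-tree shape of a heap:

from math import comb
from functools import lru_cache

def left_subtree_size(n):
    """Number of nodes that fall in the left subtree of a complete binary tree on n nodes."""
    if n <= 1:
        return 0
    h = n.bit_length() - 1            # height of the tree (root at depth 0)
    last = n - (2 ** h - 1)           # nodes on the last (possibly partial) level
    return (2 ** (h - 1) - 1) + min(last, 2 ** (h - 1))

@lru_cache(maxsize=None)
def count_min_heaps(n):
    """Number of min-heaps on n distinct keys (the shape is fixed; only the arrangement varies)."""
    if n <= 1:
        return 1
    k = left_subtree_size(n)
    return comb(n - 1, k) * count_min_heaps(k) * count_min_heaps(n - 1 - k)

print(count_min_heaps(7))   # 80

Running it prints 80 for N = 7, matching the hand computation above.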

A language L   allows declaration of arrays whose sizes are not known during compilation. It is required to make efficient use of memory. Which one of the following is true?
  • a)
     A compiler using static memory allocation can be written for L
  • b)
    A compiler cannot be written for L; an interpreter must be used
  • c)
    A compiler using dynamic memory allocation can be written for L
  • d)
    None of the above
Correct answer is option 'C'. Can you explain this answer?

Yash Patel answered
If a language L allows declaration of arrays whose sizes are not known at compilation time, and efficient use of memory is required, then a compiler using dynamic memory allocation can be written for L.
An array is a collection of data items, all of the same type, accessed using a common name.
In C, for example, dynamic memory allocation refers to performing manual memory management via a group of functions in the C standard library, namely malloc, realloc, calloc and free.

An expression a * b op c, where ‘op’ is one of ‘+’, ‘*’ and ‘↑’ (exponentiation), can be evaluated on a CPU with a single register without storing the value of (a * b) if
  • a)
     ‘op’ is ‘+’ or ‘*’
  • b)
     ‘op’ is ‘↑’ or ‘*’
  • c)
     ‘op’ is ‘↑’ or ‘+’
  • d)
    not possible to evaluate without storing
Correct answer is option 'A'. Can you explain this answer?

Ravi Singh answered
↑ has higher precedence than {*, +, -, /}.
So if op is ‘↑’, the right-hand side of ‘↑’ has to be evaluated first and the left-hand part done afterwards, which would definitely require storing the value of the left-hand side, i.e. (a * b).
But if op is ‘+’ or ‘*’, the intermediate value need not be stored: the operation can be applied directly, on the fly, using the single register.

Consider the grammar shown below.
S → i E t S S' | a
S' → e S | ε
E → b
In the predictive parse table M of this grammar, the entries M[S', e] and M[S', $] respectively are
  • a)
    {S' → e S} and {S' → e}
  • b)
    {S' → e S} and {}
  • c)
    {S' → ε} and {S' → ε}
  • d)
    {S' → e S, S'→ ε} and {S' → ε}
Correct answer is option 'D'. Can you explain this answer?

Vaibhav Patel answered
Here the parsing table is represented as M[X, Y], where X indexes the rows (non-terminals) and Y the columns (terminals). For each production rule A → α of the grammar, the following rules fill the table:
Rule 1: if A → α is a production, for each terminal ‘a’ in FIRST(α), add A → α to M[A, a].
Rule 2: if ε is in FIRST(α), add A → α to M[A, b] for each ‘b’ in FOLLOW(A).
Since the entries asked for correspond to the non-terminal S', only its productions need to be considered.
For S' → eS, by Rule 1 the production is placed at M[S', FIRST(eS)]. From the given grammar, FIRST(eS) = {e}, so S' → eS is placed at M[S', e].
For S' → ε, FIRST(ε) = {ε}, so Rule 2 applies: the production is placed at M[S', FOLLOW(S')]. FOLLOW(S') = FOLLOW(S) = {e, $}, so S' → ε is placed at both M[S', e] and M[S', $].
Therefore M[S', e] = {S' → eS, S' → ε} and M[S', $] = {S' → ε}, and the answer is option D.
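
The two table-filling rules above are mechanical enough to script. Here is a minimal Python sketch (the FIRST and FOLLOW sets are written out by hand rather than computed, and all names are illustrative) that applies Rule 1 and Rule 2 to this grammar and prints the two cells asked about:

from collections import defaultdict

EPS = "eps"
productions = [
    ("S",  ["i", "E", "t", "S", "S'"]),
    ("S",  ["a"]),
    ("S'", ["e", "S"]),
    ("S'", [EPS]),
    ("E",  ["b"]),
]

# FIRST of each right-hand side and FOLLOW of each non-terminal (computed by hand).
FIRST_RHS = {0: {"i"}, 1: {"a"}, 2: {"e"}, 3: {EPS}, 4: {"b"}}
FOLLOW = {"S": {"e", "$"}, "S'": {"e", "$"}, "E": {"t"}}

table = defaultdict(list)
for idx, (lhs, rhs) in enumerate(productions):
    for t in FIRST_RHS[idx] - {EPS}:            # Rule 1
        table[(lhs, t)].append((lhs, rhs))
    if EPS in FIRST_RHS[idx]:                   # Rule 2
        for t in FOLLOW[lhs]:
            table[(lhs, t)].append((lhs, rhs))

print(table[("S'", "e")])   # two productions land in this cell
print(table[("S'", "$")])   # only S' -> eps lands here

The cell M[S', e] receives two productions, which is exactly the multiply-defined entry that makes option D correct (and which also shows the grammar is not LL(1)).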

Which languages necessarily need heap allocation in the runtime environment?
  • a)
    Those that support recursion .
  • b)
    Those that use dynamic scoping
  • c)
    Those that allow dynamic data structure
  • d)
    Those that use global variables
Correct answer is option 'C'. Can you explain this answer?

Sanya Agarwal answered
Dynamic data structures (linked lists, trees, and so on) are created, grown and shrunk at run time, so their storage cannot be laid out statically or purely on the stack; it must come from the heap.
So languages that allow dynamic data structures necessarily need heap allocation in the runtime environment.

Match the following:
Codes:
  
  • a)
    a
  • b)
    b
  • c)
    c
  • d)
    d
Correct answer is option 'C'. Can you explain this answer?

Swati Rao answered
Parsing uses the production rules of a context-free grammar to build a parse (production) tree. From the options, only option (c) maps parsing to a production tree, so the other options can be eliminated; it is also true that expression evaluation uses a post-order traversal of the expression tree.

A loader is
  • a)
    A program that place programs into memory and prepares them for execution.
  • b)
    A program that automates the translations of assembly language into machine language.
  • c)
    A program that accepts a program written in a high level language and produces an object program.
  • d)
    A program that appears to execute a source program as if it were machine language.
Correct answer is option 'A'. Can you explain this answer?

Anjana Ahuja answered
Answer:

A loader is a program that places programs into memory and prepares them for execution. It is responsible for loading executable files into the computer's memory and preparing them for execution by setting up the necessary data structures and resolving any dependencies.

Explanation:

A loader is an essential component of the operating system that is responsible for various tasks related to the execution of programs. Its main purpose is to load executable files into memory so that the CPU can execute them. Here are the main tasks performed by a loader:

1. Loading:
The loader is responsible for loading the executable file into memory. It reads the file from the disk and allocates memory space for the program's instructions and data. It ensures that the program is loaded into the correct memory locations.

2. Relocation:
Many programs are designed to be loaded at different memory locations. The loader performs the necessary operations to relocate the program to the correct memory location specified by the operating system. It updates the memory references in the program's instructions and data to reflect the new memory location.

3. Linking:
In some cases, a program may have external dependencies on other libraries or modules. The loader resolves these dependencies by linking the program with the required libraries. It connects the program's references to external symbols with the actual memory addresses of the corresponding symbols.

4. Symbol Resolution:
The loader resolves any symbolic references in the program. Symbolic references are memory addresses that are represented symbolically rather than numerically. The loader connects these symbolic references with the actual memory addresses during the loading process.

5. Loading Dependencies:
If the program has any shared libraries or dynamically loaded modules, the loader is responsible for loading these dependencies into memory as well. It ensures that all the necessary components for the program's execution are present in memory.

Overall, a loader plays a crucial role in preparing a program for execution by handling the loading, relocation, linking, and symbol resolution tasks. It sets up the program's environment in memory, allowing the CPU to execute the instructions and data contained within the program.

In analyzing the compilation of a PL/I program, the description “resolving symbolic addresses (labels) and generating machine language” is associated with
  • a)
    Assembly and output
  • b)
    Code generation
  • c)
    Storage assignment
  • d)
    Syntax analysis
Correct answer is option 'A'. Can you explain this answer?

Ameya Goyal answered
The final phase of the compiler is the generation of target code, consisting of relocatable machine code or assembly code. Intermediate instructions are each translated into a sequence of machine instructions that perform the same task. A crucial aspect is the assignment of variables to registers.

Consider the grammar shown below.
S → C C
C → c C | d
The grammar is
  • a)
    LL(1)
  • b)
    SLR(1) but not LL(1)
  • c)
    LALR(1) but not SLR(1)
  • d)
    LR(1) but not LALR(1)
Correct answer is option 'A'. Can you explain this answer?

Ayush Mehta answered
FIRST(cC) = {c} and FIRST(d) = {d} are disjoint, so there is no conflict and a predictive parse table can be constructed with no multiply-defined entries; hence the grammar is LL(1). This grammar is also LR(0), SLR(1), CLR(1) and LALR(1).
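
As an informal illustration (a hand-written sketch, not part of the question), a recursive-descent parser for S → CC, C → cC | d needs only one token of lookahead to choose a production, which is what being LL(1) means in practice:

def parse(s):
    pos = 0

    def peek():
        return s[pos] if pos < len(s) else "$"

    def expect(ch):
        nonlocal pos
        if peek() != ch:
            raise SyntaxError(f"expected {ch!r} at position {pos}")
        pos += 1

    def C():
        if peek() == "c":      # FIRST(cC) = {c}
            expect("c")
            C()
        else:                  # FIRST(d) = {d}
            expect("d")

    def S():
        C()
        C()

    S()
    expect("$")
    return True

print(parse("ccdd"))   # True: S -> C C with C -> cC -> ccd and C -> d
print(parse("cdd"))    # True

Each call to C inspects a single input symbol and immediately knows which alternative to apply, because FIRST(cC) and FIRST(d) do not overlap.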

Consider the grammar
S → (S) | a
Let the number of states in SLR(1), LR(1) and LALR(1) parsers for the grammar be n1, n2 and n3 respectively. The following relationship holds good
  • a)
    n1 < n2 < n3
  • b)
    n1 = n3 < n2
  • c)
    n1 = n2 = n3
  • d)
    n1 ≥ n3 ≥ n2
Correct answer is option 'B'. Can you explain this answer?

Mansi Shah answered
SLR(1), LR(1), and LALR(1) Parsers
SLR(1), LR(1), and LALR(1) are all types of parsers used in compiler design. These parsers are used to analyze the syntax of a programming language and construct a parse tree or abstract syntax tree.

SLR(1) Parser
SLR(1) stands for "Simple LR(1)" parser. It is a type of bottom-up parser that uses a parsing table to parse the input. The SLR(1) parsing table is constructed from the LR(0) items of the grammar. The number of states in an SLR(1) parser is denoted as n1.

LR(1) Parser
LR(1) here refers to the canonical LR parser with one symbol of look-ahead. It is also a bottom-up parser, and it uses a more powerful parsing table than SLR(1): the table is constructed from the LR(1) items of the grammar, which carry look-ahead information in every item. The number of states in an LR(1) parser is denoted as n2.

LALR(1) Parser
LALR(1) stands for "Look-Ahead LR(1)" parser. It is a variant of the LR(1) parser that uses a smaller parsing table while retaining most of its parsing power. Its automaton has exactly the states of the LR(0)/SLR(1) automaton; equivalently, it is obtained by merging LR(1) states that share the same core. The number of states in an LALR(1) parser is denoted as n3.

Relationship between n1, n2, and n3
The relationship that holds is n1 = n3 < n2: the SLR(1) and LALR(1) parsers have the same number of states, while the canonical LR(1) parser has more.

This is because both the SLR(1) and LALR(1) automata are built on the LR(0) item sets, so they always have the same number of states; they differ only in how the reduce entries of the parsing table are computed (SLR(1) uses FOLLOW sets, LALR(1) uses the more precise merged look-aheads). The canonical LR(1) parser, by contrast, keeps states that have the same core but different look-aheads separate, so it generally has more states, as it does for the grammar S → (S) | a.

Hence, the correct answer is option 'B' - n1 = n3 < n2.

A grammar that is both left and right recursive for a non-terminal, is
  • a)
    Ambiguous
  • b)
    Unambiguous
  • c)
    Information is not sufficient to decide whether it is ambiguous or unambiguous
  • d)
    None of the above
Correct answer is option 'C'. Can you explain this answer?

Vaishnavi Kaur answered
Consider, for instance, a grammar such as:
S → a
A → AA | b
Here A is a useless production (A is unreachable from S) but it is still part of the grammar, and A is both left- and right-recursive because of A → AA. The grammar generates only the string ‘a’, so it is unambiguous.
On the other hand, a grammar such as S → SS | a is also both left- and right-recursive, and it is ambiguous (the string ‘aaa’ has two parse trees).
Note also that, according to https://en.wikipedia.org/wiki/Formal_grammar, every string in L(G) must be derivable in a finite number of steps; a grammar whose only production is, say, S → SS can derive no terminal string at all, so its language is the empty set {} and it is trivially unambiguous.
Hence, if a grammar has both left and right recursion, it may or may not be ambiguous; the given information is not sufficient to decide.

The grammar S → aSa | bS | c is
  • a)
    LL(1) but not LR(1)
  • b)
    LR(1) but not LL(1)
  • c)
    Both LL(1) and LR(1)
  • d)
    Neither LL(1) nor LR(1)
Correct answer is option 'C'. Can you explain this answer?

Arjun Unni answered
First(aSa) = {a}
First(bS) = {b}
First(c) = {c}
These sets are mutually disjoint, i.e. no terminal is common to any two of them, so the given grammar is LL(1).
As the grammar is LL(1), it is also LR(1): LR(1) parsers are more powerful than LL(1) parsers, and every LL(1) grammar is also LR(1). So option C is correct.
Below are more details. A grammar is LL(1) if it is possible to choose the next production by looking at only the next token in the input string.
Formally, grammar G is LL(1) if and only if
      For all productions A → α1 | α2 | ... | αn,
            First(αi) ∩ First(αj) = ∅, 1 ≤ i,j ≤ n, i ≠ j.
    For every non-terminal A such that First(A) contains ε,
             First(A) ∩ Follow(A) = ∅
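
As a quick sanity check of the condition above, here is a minimal Python sketch with the FIRST sets of the three alternatives of S written out by hand:

first = {"aSa": {"a"}, "bS": {"b"}, "c": {"c"}}   # FIRST of each alternative of S

alts = list(first.values())
disjoint = all(not (alts[i] & alts[j])
               for i in range(len(alts))
               for j in range(i + 1, len(alts)))
print(disjoint)   # True -> the alternatives satisfy the LL(1) condition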

A canonical set of items is given below
S --> L. > R
Q --> R.
On input symbol < the set has
  • a)
    a shift-reduce conflict and a reduce-reduce conflict.
  • b)
    a shift-reduce conflict but not a reduce-reduce conflict.
  • c)
    a reduce-reduce conflict but not a shift-reduce conflict.
  • d)
    neither a shift-reduce nor a reduce-reduce conflict
Correct answer is option 'D'. Can you explain this answer?

The question is asked with respect to the symbol  ' < ' which is not present in the given canonical set of items. Hence it is neither a shift-reduce conflict nor a reduce-reduce conflict on symbol '<'. Hence D is the correct option. But if the question would have asked with respect to the symbol  ' > ' then it would have been a shift-reduce conflict.

In a context-free grammar
  • a)
    ∈ can’t be the right hand side of any production.
  • b)
    Terminal symbols can’t be present in the left hand side of any production.
  • c)
    The number of grammar symbols in the left hand side is not greater than that of in the right hand side.
  • d)
    All of the above
Correct answer is option 'B'. Can you explain this answer?

Shruti Basak answered
A context-free grammar is a formal grammar that consists of a set of production rules, where each rule defines how to rewrite a nonterminal symbol into a sequence of terminal and/or nonterminal symbols. The rules are applied in any context and do not depend on the surrounding symbols or context.

Context-free grammars are commonly used to describe the syntax of programming languages, natural languages, and other formal languages. They provide a concise and systematic way to generate valid sentences or expressions in these languages. In a context-free grammar, every production has exactly one non-terminal symbol on its left-hand side, so terminal symbols can never appear there; this is why option (b) is correct. Options (a) and (c) are false: ε-productions (A → ε) are allowed, and there is no constraint relating the number of grammar symbols on the two sides of a production.

For example, consider a simple context-free grammar that describes the syntax of arithmetic expressions:

1. S -> E
2. E -> E + T
3. E -> E - T
4. E -> T
5. T -> T * F
6. T -> T / F
7. T -> F
8. F -> (E)
9. F -> num

In this grammar, S is the start symbol, and the rules define how to rewrite nonterminal symbols (S, E, T, F) into sequences of terminal symbols (num, +, -, *, /, (, )). The rules specify the various ways to combine numbers and arithmetic operators to form valid arithmetic expressions.

For example, using this grammar, the expression "2 + 3 * (4 - 1)" can be generated by the following leftmost derivation (writing the actual numbers in place of num):

S -> E (by applying rule 1)
-> E + T (rule 2)
-> T + T (rule 4)
-> F + T (rule 7)
-> 2 + T (rule 9)
-> 2 + T * F (rule 5)
-> 2 + F * F (rule 7)
-> 2 + 3 * F (rule 9)
-> 2 + 3 * (E) (rule 8)
-> 2 + 3 * (E - T) (rule 3)
-> 2 + 3 * (T - T) (rule 4)
-> 2 + 3 * (F - T) (rule 7)
-> 2 + 3 * (4 - T) (rule 9)
-> 2 + 3 * (4 - F) (rule 7)
-> 2 + 3 * (4 - 1) (rule 9)

This sequence of rule applications generates the desired arithmetic expression.
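
As a cross-check, the derivation can be replayed mechanically by always expanding the leftmost non-terminal. The sketch below (rule numbers as in the grammar above; num is kept as a token rather than the literal digits, and the helper name is illustrative) is for demonstration only:

RULES = {
    1: ("S", ["E"]),
    2: ("E", ["E", "+", "T"]),
    3: ("E", ["E", "-", "T"]),
    4: ("E", ["T"]),
    5: ("T", ["T", "*", "F"]),
    6: ("T", ["T", "/", "F"]),
    7: ("T", ["F"]),
    8: ("F", ["(", "E", ")"]),
    9: ("F", ["num"]),
}
NONTERMINALS = {"S", "E", "T", "F"}

def leftmost_step(form, rule_no):
    """Apply the numbered rule to the leftmost non-terminal of the sentential form."""
    lhs, rhs = RULES[rule_no]
    for i, sym in enumerate(form):
        if sym in NONTERMINALS:
            assert sym == lhs, f"rule {rule_no} does not apply to leftmost non-terminal {sym}"
            return form[:i] + rhs + form[i + 1:]
    raise ValueError("no non-terminal left to expand")

# The rule sequence used in the derivation of 2 + 3 * (4 - 1) above.
steps = [1, 2, 4, 7, 9, 5, 7, 9, 8, 3, 4, 7, 9, 7, 9]
form = ["S"]
for r in steps:
    form = leftmost_step(form, r)
    print(" ".join(form))
# Final form: num + num * ( num - num ), the shape of 2 + 3 * (4 - 1)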

For the grammar below, a partial LL(1) parsing table is also presented along with the grammar. Entries that need to be filled are indicated as E1, E2, and E3. ε is the empty string, $ indicates end of input, and | separates alternate right-hand sides of productions.
Q.
The appropriate entries for E1, E2, and E3 are
  • a)
  • b)
  • c)
  • d)
Correct answer is option 'C'. Can you explain this answer?

Diya Kumar answered
As we need to find entries E1, E2 and E3 which are against Non terminals S and B, so we will deal with only those productions which have S and B on LHS. Here representing the parsing table as M[X , Y], where X represents rows( Non terminals) and Y represents columns(terminals).
Rule 1: if P --> Q is a production, for each terminal 't' in FIRST(Q) add P-->Q to M [ P , t ]
Rule 2 : if epsilon is in FIRST(Q), add P --> Q to M [P , f] for each f in FOLLOW(P).
For the production rule S--> aAbB, it will be added to parsing table at the position M[ S , FIRST(aAbB)], Now FIRST(aAbB) = {a}, hence add S--> aAbB to M[S , a] which is E1. For the production rule S--> bAaB, it will be added to parsing table at the position M[ S , FIRST(bAaB)], Now FIRST(bAaB) = {b}, hence add S--> bAaB to M[ S , b] which is E2 For the production rule S--> epsilon , it will be added to parsing table at the position M[ S , FOLLOW(S)], Now FOLLOW(S) = {a , b , $}, hence add S --> epsilon to M[ S , a] and M[ S , b] which are again E1 and E2 respectively. For the production rule B --> S, it will be added to parsing table at the position M[ B , FIRST(S)], Now FIRST(S) also contains epsilon, hence add B --> S to M[ S , FOLLOW(B)], and FOLLOW(B) contains {$}, i.e. M[S , $] which is E3. epsilon corresponds to empty string.

Consider an ambiguous grammar G and its disambiguated version D. Let the language recognized by the two grammars be denoted by L(G) and L(D) respectively. Which one of the following is true?
  • a)
    L (D) ⊂ L (G)
  • b)
    L (D) ⊃ L (G)
  • c)
    L (D) = L (G)
  • d)
    L (D) is empty
Correct answer is option 'C'. Can you explain this answer?

Nandini Khanna answered
L(D) = L(G): both grammars must represent the same language. When converting a grammar from an ambiguous to an unambiguous form, we must ensure that the new grammar represents exactly the same language as the previous one.
For example, G1: S → Sa | aS | a is ambiguous (two parse trees for the string 'aa'), while G1': S → aS | a is unambiguous.
Both represent the language denoted by the regular expression a+.

Generation of intermediate code based on an abstract machine model is useful in compilers because
  • a)
    It makes implementation of lexical analysis and syntax analysis easier.
  • b)
    Syntax-directed translations can be written for intermediate code generation.
  • c)
    It enhances the portability of the front end of the compiler.
  • d)
    It is not possible to generate code for real machines directly from high level language programs
Correct answer is option 'C'. Can you explain this answer?

Shubham Chawla answered
Explanation:

Intermediate code is code that is generated by a compiler between the source code and the target code. It is an abstraction of the target machine's code and is used to make it easier to translate the source code into machine code. The intermediate code is generally optimized and platform-independent, which makes it easier to port the compiler front end to different platforms.

The use of an abstract machine model in the generation of intermediate code has several advantages:

1. Easier implementation of lexical analysis and syntax analysis: The abstract machine model provides a simplified representation of the target machine's instruction set, making the implementation of the lexical and syntax analysis stages of the compiler easier.

2. Syntax-directed translations: Intermediate code generation can be done using syntax-directed translation techniques, which facilitates the construction of compilers.

3. Enhanced portability: The use of intermediate code makes it easier to port the front end of the compiler to different platforms. Since the intermediate code is platform-independent, only the back end of the compiler needs to be modified to generate code for different target machines.

4. Direct generation of code for real machines: Although it is possible to generate code for real machines directly from high-level language programs, the use of an abstract machine model provides a layer of abstraction that makes the process easier and more efficient.

Therefore, option C is the correct answer.

A non relocatable program is one which
  • a)
    Cannot be made to execute in any area of storage other than the one designated for it at the time of its coding or translation.
  • b)
    Consists of a program and relevant information for its relocation.
  • c)
    Can itself perform the relocation of its address sensitive portions.
  • d)
    All of the above
Correct answer is option 'A'. Can you explain this answer?

Gargi Menon answered
A non-relocatable program is one whose address-sensitive portions have been bound to fixed storage locations at the time of its coding or translation. Such a program therefore cannot be made to execute in any area of storage other than the one designated for it at that time. By contrast, option (b) describes a relocatable program (one accompanied by the information needed for relocation), and option (c) describes a self-relocating program.

To evaluate an expression without any embedded function calls
  • a)
    One stack is enough
  • b)
    Two stacks are needed
  • c)
    As many stacks as the height of the expression tree are needed
  • d)
    A Turing machine is needed in the general case
Correct answer is option 'A'. Can you explain this answer?

Anirban Khanna answered
Any expression can be converted into Postfix or Prefix form.

Prefix and postfix evaluation can be done using a single stack.

For example : Expression ’10 2 8 * + 3 -‘ is given.
PUSH 10 in the stack.
PUSH 2 in the stack.
PUSH 8 in the stack.
When operator ‘*’ occurs, POP 2 and 8 from the stack.
PUSH 2 * 8 = 16 in the stack.
When operator ‘+’ occurs, POP 16 and 10 from the stack.
PUSH 10 + 16 = 26 in the stack.
PUSH 3 in the stack.
When operator ‘-‘ occurs, POP 26 and 3 from the stack.
PUSH 26 – 3 = 23 in the stack.
So, 23 is the answer obtained using single stack.

 
Thus, option (A) is correct.
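
The walkthrough above translates directly into a few lines of code. Here is a minimal Python sketch of single-stack postfix evaluation (the function name eval_postfix is just illustrative):

def eval_postfix(tokens):
    """Evaluate a space-separated postfix expression using one stack."""
    stack = []
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           "/": lambda a, b: a / b}
    for tok in tokens.split():
        if tok in ops:
            right = stack.pop()               # second operand
            left = stack.pop()                # first operand
            stack.append(ops[tok](left, right))
        else:
            stack.append(int(tok))
    return stack.pop()

print(eval_postfix("10 2 8 * + 3 -"))   # 23

Note that for the non-commutative operators '-' and '/', the first value popped is the right operand, which is why the two pops are ordered as shown.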

Consider the following issues:
1. Simplify the phases.
2. Compiler efficiency is improved.
3. Compiler works faster.
4. Compiler portability is enhanced.
Which is/are true in context of lexical analysis?
  • a)
    1,2 and 3
  • b)
    1,3 and 4
  • c)
    1,2 and 4
  • d)
    All of these
Correct answer is option 'C'. Can you explain this answer?

Pranjal Sen answered
Lexical Analysis

Lexical analysis is the first phase of a compiler, where the source code is analyzed and divided into a sequence of lexemes or tokens. These tokens are then used by the subsequent phases of the compiler for further processing. The main objective of lexical analysis is to simplify the code and make it easier for the compiler to understand and process.

Issues related to Lexical Analysis

1. Simplify the phases: This issue refers to simplifying the overall compilation process by dividing it into smaller and more manageable phases. By simplifying the phases, the compiler becomes more modular and easier to implement and maintain.

2. Compiler efficiency is improved: Improving compiler efficiency involves optimizing the lexical analysis phase to minimize the time and resources required for tokenizing the source code. This can be achieved by using efficient algorithms and data structures for token recognition and processing.

3. Compiler works faster: Making the compiler faster is a desirable goal as it reduces the overall compilation time. By improving the speed of the lexical analysis phase, the compiler can process the source code more quickly and provide faster feedback to the programmer.

4. Compiler portability is enhanced: Enhancing compiler portability involves making the compiler compatible with different platforms and operating systems. By improving the portability of the lexical analysis phase, the compiler can be easily adapted to different environments without significant changes to the code.

True statements in context of lexical analysis

From the given options, the following statements are true in the context of lexical analysis:

1. Simplify the phases: Simplifying the phases helps in making the compiler more modular and easier to implement and maintain. It breaks down the compilation process into smaller, manageable steps.

2. Compiler efficiency is improved: Improving the efficiency of the lexical analysis phase helps in optimizing the tokenization process, resulting in faster and more accurate parsing of the source code.

4. Compiler portability is enhanced: Enhancing the portability of the lexical analysis phase ensures that the compiler can be easily adapted to different platforms and operating systems without significant modifications.

Statement 3 (“compiler works faster”) is not one of the benefits classically attributed to separating out lexical analysis, so it is excluded.

Therefore, the correct answer is option 'C' - 1, 2, and 4.

A compiler is
  • a)
    A program that places programs into memory and prepares them for execution.
  • b)
    A program that automates the translation of assembly language into machine language.
  • c)
    Program that accepts a program written in a high level language and produces an object program.
  • d)
    A program that appears to execute a source program as if it were machine language.
Correct answer is option 'C'. Can you explain this answer?

A compiler is a software program that transforms high-level source code that is written by a developer in a high-level programming language into a low level object code (binary code) in machine language, which can be understood by the processor. The process of converting high-level programming into machine language is known as compilation.

The processor executes object code, which indicates when binary high and low signals are required in the arithmetic logic unit of the processor.

A linker is given object modules for a set of programs that were compiled separately. What information need to be included in an object module?
  • a)
    Object code
  • b)
    Relocation bits
  • c)
    Names and locations of all external symbols defined in the object module
  • d)
    Absolute addresses of internal symbols
Correct answer is option 'C'. Can you explain this answer?

Ameya Basak answered
(c) is the answer. For the linker to link external symbols (for example, in C, to link an extern variable in one module to a global variable in another module), it must know the names and locations of all external symbols defined in each object module. In C, external symbols include all global variables and function names.
(a) is trivially present in any object module, and (b) must be there if relocation capability is needed; the essential linking information asked about here is (c).

A part of the system software which under all circumstances must reside in the main memory is:
  • a)
    text editor
  • b)
    assembler
  • c)
    linker
  • d)
    loader
  • e)
    none of the above
Correct answer is option 'D'. Can you explain this answer?

The loader is a program that loads the object program from the secondary memory into the main memory for execution of the program. The loader resides in main memory.

Consider the following two sets of LR(1) items of an LR(1) grammar.
X -> c.X, c/d
X -> .cX, c/d
X -> .d, c/d
X -> c.X, $
X -> .cX, $
X -> .d, $
Q. Which of the following statements related to merging of the two sets in the corresponding LALR parser is/are FALSE?
1. Cannot be merged since look aheads are different.
2. Can be merged but will result in S-R conflict.
3. Can be merged but will result in R-R conflict.
4. Cannot be merged since goto on c will lead to two different sets.
  • a)
    1 only
  • b)
    2 only
  • c)
    1 and 4 only
  • d)
    1, 2, 3, and 4
Correct answer is option 'D'. Can you explain this answer?

 
The given two LR(1) sets of items are:
X -> c.X, c/d
X -> .cX, c/d
X -> .d, c/d
and
X -> c.X, $
X -> .cX, $
X -> .d, $
The symbols/terminals after the comma are the look-ahead symbols. These are sets of LR(1) items (LR(1) is also called CLR(1)). The LALR(1) parser combines those sets of LR(1) items which are identical with respect to their 1st component but differ in their 2nd component. In a production rule of an LR(1) set of items, (A -> B, c), A -> B is the 1st component and the look-ahead set of symbols, here c, is the 2nd component. In the given sets, the 1st component of each corresponding item is identical in both sets, and they differ only in the 2nd component (their look-ahead symbols); hence we can combine the two sets into a single set:
X -> c.X, c/d/$
X -> .cX, c/d/$
X -> .d, c/d/$
This is done to reduce the total number of parser states. Now we can check the given statements.
Statement 1: False - the sets are merged precisely because they differ only in their 2nd components (look-aheads); identical 1st components (cores) are what matters.
Statement 2: False - in the merged set there is no shift-reduce conflict, because no reduction is even possible (a reduction requires a completed item of the form P -> q.).
Statement 3: False - for the same reason there is no reduce-reduce conflict (there are no completed items, so there are no reductions to clash).
Statement 4: False - goto is carried out on non-terminal symbols, not on terminal symbols, and c is a terminal symbol.
Thus all four statements are false, hence option D.
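
To make the merging step concrete, here is a minimal Python sketch (an illustrative representation, not a real parser generator): each LR(1) item is a (production, dot position, lookahead) triple, two states are merged only when their LR(0) cores coincide, and the merged state is then checked for a reduce-reduce conflict:

from collections import defaultdict

def core(item_set):
    """The LR(0) core: items with their lookaheads stripped."""
    return frozenset((prod, dot) for prod, dot, _ in item_set)

def merge(set_a, set_b):
    """Union the lookaheads of items that share the same core."""
    assert core(set_a) == core(set_b), "only same-core states are merged"
    merged = defaultdict(set)
    for prod, dot, la in set_a | set_b:
        merged[(prod, dot)].add(la)
    return {(prod, dot, frozenset(las)) for (prod, dot), las in merged.items()}

def reduce_reduce_conflict(item_set):
    """R-R conflict: two distinct completed items whose lookahead sets overlap."""
    completed = [(prod, las) for prod, dot, las in item_set
                 if dot == len(prod[1])]          # dot at the end of the RHS
    for i in range(len(completed)):
        for j in range(i + 1, len(completed)):
            if completed[i][1] & completed[j][1]:
                return True
    return False

# Productions are (lhs, rhs) pairs, with rhs a tuple of symbols.
p_cX = ("X", ("c", "X"))
p_d  = ("X", ("d",))

I1 = {(p_cX, 1, "c"), (p_cX, 1, "d"), (p_cX, 0, "c"), (p_cX, 0, "d"),
      (p_d, 0, "c"), (p_d, 0, "d")}
I2 = {(p_cX, 1, "$"), (p_cX, 0, "$"), (p_d, 0, "$")}

merged = merge(I1, I2)                      # cores are identical, so merging is legal
print(len(merged))                          # 3 items, lookaheads now unioned to c/d/$
print(reduce_reduce_conflict(merged))       # False: no completed items, no conflict

The check prints False because the merged state contains no completed item at all, which is exactly why statements 2 and 3 are false.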

Consider the grammar defined by the following production rules, with two operators ∗ and +
S --> T * P
T --> U | T * U
P --> Q + P | Q
Q --> Id
U --> Id
 
Q. Which one of the following is TRUE?
  • a)
    + is left associative, while ∗ is right associative
  • b)
    + is right associative, while ∗ is left associative
  • c)
    Both + and ∗ are right associative
  • d)
    Both + and ∗ are left associative
Correct answer is option 'B'. Can you explain this answer?

We can determine the associativity of each operator directly from the grammar.
Let us consider the 2nd production T -> T * U
T is generating T*U recursively (left recursive) so * is left associative.
Similarly
P -> Q + P
Right recursion so + is right associative. So option B is correct.
NOTE: the above is a shortcut that can be verified by drawing a few parse trees; one can also arrive at the answer by drawing the parse trees of strings such as Id + Id + Id and Id * Id * Id.
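
To see the shortcut in action, here is a minimal Python sketch (illustrative names; tokens are single characters, with 'i' standing for Id) showing how the recursion direction of a production determines the shape of the parse tree and hence the associativity:

def parse_P(tokens, pos=0):
    """P -> Q + P | Q   (right recursive): '+' groups to the right."""
    left = tokens[pos]                       # Q -> Id
    if pos + 1 < len(tokens) and tokens[pos + 1] == "+":
        right, pos = parse_P(tokens, pos + 2)
        return ("+", left, right), pos
    return left, pos + 1

def parse_T(tokens, pos=0):
    """T -> T * U | U   (left recursive), written as the usual loop: '*' groups to the left."""
    tree = tokens[pos]                       # U -> Id
    pos += 1
    while pos < len(tokens) and tokens[pos] == "*":
        tree = ("*", tree, tokens[pos + 1])
        pos += 2
    return tree, pos

print(parse_P(list("i+i+i"))[0])   # ('+', 'i', ('+', 'i', 'i'))  -> right associative
print(parse_T(list("i*i*i"))[0])   # ('*', ('*', 'i', 'i'), 'i')  -> left associative

The right-recursive rule nests the second '+' inside the right operand of the first, while the left-recursive rule nests to the left.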
