Directions: Kindly read the passage carefully and answer the questions given beside.
The savoury smell. The crunchy bite. The salty kick. The buttery finish. Americans will recognize the smell and flavour of their favourite moviegoing snack anywhere. Why is it that we feast our taste buds on these crisp kernels while our eyes feast on the big screen?
A few converging aspects made popcorn the quintessential movie snack, according to Andrew F. Smith, author of Popped Culture: A Social History of Popcorn in America. Mostly, it boiled down to the snack’s price, convenience, and timing. Popcorn was cheap for sellers and for customers, and making it didn’t require a ton of equipment. Popcorn also became popular at a time when movie theaters were in desperate need of an economic boost, which is how popcorn got introduced to the silver screen.
Fun fact: popcorn does not refer to the popped kernel alone. It’s also the name for the specific type of corn that is used to make the snack. It was originally grown in Central America and became popular in the U.S. in the mid-1800s. Compared with other snacks at the time, it was super easy to make, and it got easier in 1885 when the mobile steam-powered popcorn maker was invented. What hit the streets in the late 19th century was a fleet of independent popcorn purveyors. They were like the great-great-grandfathers of food trucks.
Since popcorn was cheap to make, it was also cheap to buy, which increased the popularity of this treat during the Great Depression. The Depression increased consumer spending on cheaper luxury items such as popcorn and movies, and the two industries teamed up. Theaters would allow a particular popcorn salesman to sell right outside the theatre for a daily fee. By the mid-1940s, however, movie theaters had cut out the middleman and begun to have their own concession stands in the lobby. The introduction of the popcorn-driven concession stand to movie theaters kept the movie theatre industry afloat, and popcorn has been a movie-watching staple ever since.
Q. What is the tone of the passage?
Q. Which of the following CANNOT be inferred from the passage?
Q. In the sentence "Our eyes feast on the big screen," which literary device is employed?
Q. What contributed to the increased popularity of popcorn during the Great Depression, according to the passage?
Q. According to the passage, what was the role of popcorn purveyors in the late 19th century?
Directions: Kindly read the passage carefully and answer the questions given beside.
In 1986, I left my native South Korea and came to Britain to study economics as a graduate student at the University of Cambridge. Things were difficult. My spoken English was poor. Racism and cultural prejudices were rampant. And the weather was rubbish. But the most difficult thing was the food. Before coming to Britain, I had not realised how bad food can be. Meat was overcooked and under-seasoned. It was difficult to eat, unless accompanied by gravy, which could be very good but also very bad. English mustard, which I fell in love with, became a vital weapon in my struggle to eat dinners. Vegetables were boiled long beyond the point of death to become textureless, and there was only salt around to make them edible. Some British friends would argue valiantly that their food was under-seasoned (err… tasteless?) because the ingredients were so good that you oughtn’t ruin them with fussy things like sauces, which those devious French used because they needed to hide bad meat and old vegetables. Any shred of plausibility of that argument quickly vanished when I visited France at the end of my first year in Cambridge and first tasted real French food.
British food culture of the 1980s was – in a word – conservative; deeply so. The British ate nothing unfamiliar. Food considered foreign was viewed with near-religious scepticism and visceral aversion. Other than completely Anglicised – and generally dire-quality – Chinese, Indian and Italian, you could not get any other national cuisine, unless you travelled to Soho or another sophisticated district in London. British food conservatism was for me epitomised by the now defunct but then-rampant chain, Pizzaland. Realising that pizza could be traumatically ‘foreign’, the menu lured customers with an option to have their pizza served with a baked potato – the culinary equivalent of a security blanket for British people.
As with all discussions of foreignness, of course, this attitude gets pretty absurd when you scrutinise it. The UK’s beloved Christmas dinner consists of turkey (North America), potatoes (Peru or Chile), carrots (Afghanistan) and Brussels sprouts (from, yep, Belgium). But never mind that. Brits then simply didn’t ‘do foreign’.
What a contrast to the British food scene of today – diverse, sophisticated and even experimental. London especially offers everything – cheap yet excellent Turkish doner kebab, eaten at 1am from a van on the street; eye-wateringly expensive Japanese kaiseki dinner; vibrant Spanish tapas bars where you can mix and match things according to your mood and budget; whatever. Flavours span from vibrant, in-your-face Korean levels, to understated but heart-warming Polish. You get to choose between the complexity of Peruvian dishes – with Iberian, Asian and Inca roots – and the simple succulence of Argentinian steak. Most supermarkets and food stores sell ingredients for Italian, Mexican, French, Chinese, Caribbean, Jewish, Greek, Indian, Thai, North African, Japanese, Turkish, Polish and perhaps even Korean cuisines. If you want a more specialist condiment or ingredient, it can likely be found. This in a country where, in the late 1970s, according to an American friend who was then an exchange student, the only place you could score olive oil in Oxford was a pharmacy (for softening ear wax, if you’re wondering).
My theory is that the British people had a collective epiphany sometime in the mid- to late-1990s that their own food sucks, having experienced different – and mostly more exciting – cuisines during their foreign holidays and, more importantly, through the increasingly diverse immigrant communities. Once they did that, they were free to embrace all the cuisines in the world. There is no reason to insist on Indian over Thai, or favour Turkish over Mexican. Everything tasty is fine. The British freedom to consider equally all the choices available has led to it developing perhaps one of the most sophisticated food cultures anywhere.
Q. What transformation in British food culture does the author attribute to the mid- to late-1990s?
Q. What is the meaning of the word "traumatically" as used in the passage?
Q. Which of the following CANNOT be inferred from the passage?
Q. What tone does the author convey in this passage?
Q. According to the passage, why did some British friends argue that their food was under-seasoned?
Directions: Kindly read the passage carefully and answer the questions given beside.
Abortion is the expulsion of a fetus from the uterus before it has reached the stage of viability (in human beings, usually about the 20th week of gestation). An abortion may occur spontaneously, in which case it is also called a miscarriage, or it may be brought on purposefully, in which case it is often called an induced abortion. Spontaneous abortions, or miscarriages, occur for many reasons, including disease, trauma, genetic defect, or biochemical incompatibility of mother and fetus. Occasionally a fetus dies in the uterus but fails to be expelled, a condition termed a missed abortion.
Induced abortions may be performed for reasons that fall into four general categories: to preserve the life or physical or mental well-being of the mother; to prevent the completion of a pregnancy that has resulted from rape or incest; to prevent the birth of a child with serious deformity, mental deficiency, or genetic abnormality; or to prevent a birth for social or economic reasons (such as the extreme youth of the pregnant female or the sorely strained resources of the family unit). By some definitions, abortions that are performed to preserve the well-being of the female or in cases of rape or incest are therapeutic, or justifiable, abortions.
Numerous medical techniques exist for performing abortions. During the first trimester (up to about 12 weeks after conception), endometrial aspiration, suction, or curettage may be used to remove the contents of the uterus. In endometrial aspiration, a thin flexible tube is inserted up the cervical canal (the neck of the womb) and then sucks out the lining of the uterus (the endometrium) by means of an electric pump.
In the related but slightly more onerous procedure known as dilatation and evacuation (also called suction curettage or vacuum curettage), the cervical canal is enlarged by the insertion of a series of metal dilators while the patient is under anesthesia, after which a rigid suction tube is inserted into the uterus to evacuate its contents. When, in place of suction, a thin metal tool called a curette is used to scrape (rather than vacuum out) the contents of the uterus, the procedure is called dilatation and curettage. When combined with dilatation, both evacuation and curettage can be used up to about the 16th week of pregnancy.
From 12 to 19 weeks the injection of a saline solution may be used to trigger uterine contractions; alternatively, the administration of prostaglandins by injection, suppository, or other method may be used to induce contractions, but these substances may cause severe side effects. Hysterotomy, the surgical removal of the uterine contents, may be used during the second trimester or later. In general, the more advanced the pregnancy, the greater the risk to the female of mortality or serious complications following an abortion.
Q. What distinguishes a spontaneous abortion from an induced abortion?
Q. Which of the following is NOT mentioned as a reason for performing induced abortions in the passage?
Q. Which of the following cannot be inferred from the passage?
Q. What is the maximum gestational age mentioned in the passage for performing dilatation and curettage (D&C) as a method of abortion?
Q. Which medical procedure is employed in the initial trimester to extract the contents of the uterus?
Directions: Read the following passage and answer the question.
With the launch of Brazil's Amazonia-1 satellite from Sriharikota, a new chapter has begun in India's space history. The satellite, a 637-kilogram entity, was the first dedicated commercial mission of NewSpace India Limited, a two-year-old commercial arm of the Department of Space. This is not the first time that NSIL has organised a launch of foreign satellites aboard an Indian Space Research Organisation (ISRO) launch vehicle. The organisation conducted launches last November as well as in December 2019. However, the primary satellites aboard both these missions were Indian satellites — the RISAT-2BRI and the EOS-01 — with smaller satellites from several other countries, as well as India, piggybacking on them. The Amazonia mission also saw 18 other satellites being launched and was the first fully commercial mission. India has so far launched 342 foreign satellites from 34 countries using its Polar Satellite Launch Vehicle platform, and many of these launches have involved ISRO's first commercial entity, the Antrix Corporation. There is still confusion over how exactly the responsibilities of NSIL differ from those of Antrix. But with the formation of the Indian National Space Promotion and Authorization Center (IN-SPACe) — a regulatory agency — as well as plans for an independent tribunal to adjudicate disputes among private space entities, there is a potential explosion of market opportunities from space applications on the anvil. Though the private sector plays a major role in developing launch and satellite infrastructure for ISRO, there are now several companies that offer myriad services. Many of these companies want to launch their own satellites, of varying dimensions, and the experience with ISRO has not always been smooth.
The most conspicuous has been the controversy involving Devas Multimedia, to which the Government of India owes nearly $1.2 billion going by an order of a tribunal of the International Chamber of Commerce and upheld by a United States federal court last year. NSIL, it is said, is also a move by India's space establishment to insulate the prospects of the space industry in India from repercussions of the Devas-Antrix imbroglio.
Much as unfettered access to the Internet has spawned industries that were once inconceivable, space applications and mapping have barely scratched the surface in terms of the opportunities that they can create. NSIL has a broad ambit and will be involved in collaborations spanning from launches to new space-related industries. NSIL is also expected to be more than just a marketer of ISRO's technologies; it is to find newer business opportunities and expand the sector itself. NSIL must endeavour not to be another Antrix but to be continuously in start-up mode. It must conceive of ways to help space start-ups reach out to rural India and attract more recruits from India's young to pursue careers in space applications and sciences. It must see itself as both an Indian ambassador and a disruptor in the space arena.
[Extracted from an editorial published in The Hindu, dated March 6, 2021]
Q. What does the phrase 'on the anvil' mean as used in the passage?
Q. According to the passage, why was the launch of the Amazonia-1 satellite significant?
Q. What is the tone of the author in the passage?
Q. The statement "NSIL must strive not to replicate Antrix" implies that:
Q. Which of the following cannot be inferred from the passage?
Directions: Read the following passage and answer the question.
Back in the 1950s, the modern use of the term "hacking" was coined within the walls of the Massachusetts Institute of Technology. For many years after, a hacker was defined as someone who was an expert at programming and problem-solving with computers, who could stretch the capabilities of what computers and computer programs were originally intended to do.
Hacking is an activity, and what separates any activity from a crime is, very often, permission. Hacking isn't an inherently criminal activity. Someone who engages in the illegal use of hacking should not be called a "bad hacker" but a "cybercriminal," "threat actor" or "cyberattacker." Hackers are people like me and my team at IBM — security professionals who are searching for vulnerabilities, hoping to find weak links in our computer systems before criminals can exploit them.
Those who commit computer crimes fall into two categories: "black hat" and "gray hat." A black hat is someone who hacks with malicious intentions (espionage, data theft), seeking financial or personal gain by exploiting vulnerabilities. A gray hat is someone whose intentions may not be malicious but lacks the permission to hack into a system. Whether a particular criminal is a black hat or a gray hat is simply descriptive of the motivation behind what has already been established as illegal activity.
Somewhere along the way, the security industry also recruited ethics to help justify hacking behaviour, giving us "the ethical hacker" and adding an artificial defensiveness to a profession that has existed since the 1950s. Unfortunately, even accredited security certifications use the adjective in their very title. And while we can't and shouldn't fault the general public for referring to us as ethical hackers, I ask you this: Does it sound right to introduce someone as an ethical stockbroker? How about an ethical engineer or ethical professor?
Hackers play a critical role in keeping companies and people safe. A hacker failing to do the job right is the equivalent of letting a company believe and function as if it's wearing a bulletproof vest when, in fact, it's wearing cashmere.
The misrepresentation of the term "hacker" not only undermines the offensive security community but also distorts legislators' understanding and perception of hackers overall. The Computer Fraud and Abuse Act, for example, relies heavily on the term and its misinterpretation. For society to have open and productive discussions about security research and penetration testing, we need to set the record straight on who and what hackers really are. Many government officials whom I've spoken with understand this. Others choose to take my license plate away.
[Extracted with edits and revisions from 'Opinion | Most Hackers Aren't Criminals', The New York Times]
Q. According to the passage, what distinguishes a hacker from a cybercriminal?
Q. All of the following describe "ethical hacker" except:
Q. What is the reason behind the author's statement "we need to set the record straight on who and what hackers really are"?
Q. What is the primary role of a "gray hat" hacker, as described in the passage?
Q. Which of the following most accurately captures the primary message the author is conveying in the provided passage?
Directions: Read the following information carefully and answer the questions given beside.
Prime Minister Narendra Modi on Monday showcased India as an attractive destination for investment in the defence manufacturing sector, and said the country will move towards becoming one of the leading exporters of military hardware globally, backed by favourable economic policies.
After inaugurating the 14th edition of Aero India, he said India has “rejuvenated” its defence production sector in the last eight to nine years and is looking at increasing its export of military hardware from USD 1.5 billion (one billion equals 100 crore) to USD 5 billion by 2024-25. “The new India of the 21st century will neither miss any opportunity nor will it lack any effort. We are gearing up. We are bringing revolution in every sector on the path of reforms,” he said. PM Modi added that India’s defence exports have increased six times in the last five years and have crossed the figure of USD 1.5 billion.
The five-day aerospace exhibition, considered the largest in Asia, features over 700 Indian and foreign defence companies, as well as delegates from around 100 countries, including several defence ministers. The presence of around 100 countries shows how much the world’s faith in India has increased, and the event has broken all previous records.
[Extracted, with edits and revisions, from: “India Rejuvenated Defence Production Sector In 8-9 years: PM Modi At Aero India 2023”, NDTV]
Q. According to Prime Minister Modi, what is India's target for military hardware exports by 2024-25?
Q. In the Stockholm International Peace Research Institute (SIPRI) list of major arms exporters for the period 2015-2019, what is India's position or rank?
Q. What is Aero India 2023?
Q. From which country did India receive its inaugural export order for BrahMos missiles?
Q. The Korwa Ordnance Factory located in Amethi, Uttar Pradesh, has successfully manufactured the initial batch of 7.62 mm Kalashnikov AK-203 assault rifles. The production of the Kalashnikov AK-203 assault rifles represents a collaborative effort between India and which other country?