Sample Essays for LNAT - 15

This practice document contains two LNAT essay questions, each followed by a fully worked model answer and a critical assessment. The essays address distinct aspects of technology and modern life, demonstrating the analytical depth, structural rigour, and argumentative sophistication expected in high-scoring LNAT responses.

Question 1

"Social media companies should be legally liable for the mental health harm their platforms cause to young users." To what extent do you agree?

Model Answer

The proliferation of social media platforms has fundamentally altered the landscape of adolescent development, with mounting evidence linking excessive usage to depression, anxiety, and body dysmorphia among young people. Whilst the imposition of legal liability upon technology companies for such harms appears intuitively appealing as a mechanism of accountability, the practical, ethical, and jurisprudential challenges inherent in such an approach render it an imperfect solution. A more nuanced position recognises that whilst social media companies bear moral responsibility for the design choices that exacerbate mental health harms, legal liability must be carefully circumscribed to avoid unintended consequences whilst still incentivising safer platform design.

The primary justification for imposing legal liability upon social media companies rests on the doctrine of corporate responsibility for foreseeable harm. Internal documents released during the 2021 Facebook whistleblower disclosures revealed that Meta's own research demonstrated that Instagram usage worsened body image issues for one in three teenage girls, yet the company continued to prioritise engagement-maximising algorithms over user wellbeing. This constitutes a clear case of corporate knowledge of harm, analogous to tobacco companies' suppression of research linking cigarettes to cancer. Just as product liability law holds manufacturers accountable for defective goods that cause injury, social media platforms that deploy psychologically manipulative design features (infinite scrolling, variable reward schedules, and algorithmically curated content designed to maximise time on platform) should bear legal responsibility when these features demonstrably harm vulnerable users. The Online Safety Act 2023 in the United Kingdom represents a step in this direction, imposing a duty of care on platforms to protect children from harmful content, with potential criminal liability for executives. Such legal frameworks acknowledge that corporate entities possess both the resources and the technical capacity to mitigate harms they have actively engineered into their products.

However, the establishment of direct legal liability for mental health outcomes presents formidable evidentiary and causal challenges that risk rendering such laws either unenforceable or unjustly broad in application. Mental health conditions arise from complex, multifactorial causes including genetics, family environment, socioeconomic status, and peer relationships; isolating social media usage as the proximate cause of a specific individual's depression or anxiety disorder would require a burden of proof nearly impossible to satisfy in court. Unlike a defective vehicle component that directly causes a collision, or a pharmaceutical product that produces a specific adverse reaction, the pathway from platform design to mental health harm is mediated by individual psychology, usage patterns, and contextual factors. Furthermore, overly expansive liability regimes risk incentivising platforms to implement blunt, restrictive measures (such as blanket age verification systems that compromise user privacy, or the removal of all potentially sensitive content, including legitimate mental health support resources) that may ultimately prove more harmful than the problems they seek to address. The threat of litigation might paradoxically lead companies to become less transparent about internal research into potential harms, as such documentation could constitute evidence in future legal proceedings.

Nevertheless, the counterargument that market forces and parental oversight provide sufficient safeguards against platform-induced mental health harms fails to account for the power asymmetries and information deficits inherent in the current technological landscape. Parents often lack the technical literacy to understand algorithmic recommendation systems or the psychological mechanisms of behavioural addiction engineered into platforms. Moreover, network effects (whereby the utility of a platform increases with the number of users) create situations where young people face genuine social exclusion if they do not participate in dominant platforms, rendering individual choice largely illusory. A regulatory framework that stops short of strict liability but instead imposes mandatory transparency requirements, independent safety audits, and significant financial penalties for violations of duty of care standards represents a middle path. The European Union's Digital Services Act, which requires large platforms to conduct risk assessments of systemic harms including mental health impacts and provide researchers with data access, exemplifies this graduated approach. Such regulation creates accountability without requiring the impossible standard of proving direct causation in individual cases.

In conclusion, whilst the imposition of legal liability for mental health harms caused by social media platforms is justified in principle by evidence of corporate knowledge and engineered addictiveness, its implementation must be carefully calibrated to overcome evidentiary challenges and avoid counterproductive outcomes. Rather than creating a regime of individual tort liability, policymakers should focus on structural accountability mechanisms: mandatory design standards that prioritise child safety, algorithmic transparency requirements, independent oversight bodies with enforcement powers, and substantial penalties for breaches of duty of care. The question is not whether social media companies should face legal consequences for harms their platforms cause, but rather how legal frameworks can be designed to create meaningful accountability whilst remaining practically enforceable and avoiding the chilling effects on legitimate expression and innovation that overly broad liability might produce.

Overall Standard - What This Model Essay Demonstrates

This essay achieves a high standard through several specific features. It opens with a sophisticated thesis that neither simply agrees nor disagrees but stakes out a conditional position, immediately signalling analytical maturity. The body paragraphs employ clear PEEL structure without mechanical rigidity: the second paragraph establishes a point about corporate responsibility, supports it with concrete evidence from the Facebook whistleblower disclosures and specific statistics, explains the analogy to product liability law, and links back to legal frameworks like the Online Safety Act 2023. The third paragraph genuinely engages with counterargument rather than constructing a weak opposition to dismiss, addressing the serious evidentiary challenges of proving causation in mental health cases, a level of engagement absent in weaker essays. Real-world legal references (Online Safety Act 2023, Digital Services Act) demonstrate research beyond superficial knowledge. The conclusion adds final insight rather than merely summarising, proposing structural accountability mechanisms as an alternative to individual tort liability. The prose maintains formal academic register throughout without lapsing into conversational language or personal opinion. Vocabulary is precise ("jurisprudential challenges," "proximate cause") without being gratuitously obscure. The essay sustains this density of argument across its full length, so that every paragraph advances the thesis rather than padding the word count.

Question 2

"The development of artificial intelligence should be halted until adequate regulatory frameworks are established." Do you agree?

Model Answer

The exponential advancement of artificial intelligence capabilities, particularly in the domain of large language models and generative systems, has prompted urgent calls from technologists, ethicists, and policymakers for regulatory intervention before further development proceeds. Whilst the impulse towards caution is understandable given the potential for catastrophic risks, a blanket moratorium on AI development is neither practically feasible nor strategically advisable. Instead, regulatory frameworks must be developed concurrently with technological advancement through adaptive governance mechanisms that impose guardrails on high-risk applications whilst permitting continued innovation in domains where AI offers substantial societal benefit.

The case for halting AI development rests on the precautionary principle: when an activity raises threats of serious or irreversible harm, lack of full scientific certainty should not be used as grounds for postponing measures to prevent such harm. AI systems already demonstrate capabilities that pose immediate risks, from algorithmic bias in criminal sentencing and hiring decisions that perpetuates structural discrimination, to deepfake technologies that threaten electoral integrity and individual privacy. The Centre for AI Safety statement signed by hundreds of AI researchers in May 2023, which declared that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," underscores the severity of concerns within the expert community itself. Without adequate regulatory frameworks, the deployment of increasingly powerful AI systems amounts to conducting a vast, uncontrolled experiment on society with potentially irreversible consequences. Historical precedent supports regulatory caution: the development of nuclear technology proceeded without sufficient international governance, resulting in proliferation challenges that persist decades later. Establishing comprehensive regulatory architecture before AI capabilities reach critical thresholds of autonomy and influence would prevent similarly intractable problems from becoming entrenched.

However, the proposal to halt AI development entirely until regulation is established founders on both practical impossibility and opportunity cost. AI research is globally distributed across academic institutions, private corporations, and state actors in dozens of countries with divergent regulatory philosophies and competitive incentives. Any moratorium not universally adopted (and universal adoption is implausible given geopolitical rivalries, particularly between the United States and China) would simply cede technological leadership to less scrupulous actors whilst forgoing AI's considerable benefits. Moreover, AI technologies already deliver substantial societal value in domains including medical diagnostics, climate modelling, drug discovery, and accessibility tools for disabled individuals. AlphaFold, DeepMind's protein structure prediction system, has accelerated biological research by solving a fifty-year-old problem in molecular biology, with implications for developing treatments for diseases affecting millions. A blanket development halt would abandon such applications alongside riskier ones, constituting a form of technological Luddism that sacrifices tangible present benefits for speculative future safety. Regulation need not wait for development to cease; indeed, effective regulation requires ongoing technical understanding that can only be maintained through continued research and iterative learning about AI capabilities and limitations.

Opponents of continued development might argue that the asymmetry between rapid technological advancement and slow regulatory processes creates inevitable governance gaps that only a moratorium could close. Regulatory bodies operate on timescales of years whilst AI capabilities evolve on timescales of months, making comprehensive ex ante regulation impossible before new capabilities emerge. Yet this argument misconstrues the nature of effective technology governance, which need not be perfectly comprehensive to be functional. The European Union's AI Act, proposed in 2021 and progressing through legislative processes, adopts a risk-tiered approach that imposes stringent requirements on high-risk applications (such as biometric identification and critical infrastructure) whilst permitting lighter-touch regulation for lower-risk uses. This proportionate framework can be adapted as new risks emerge without requiring suspension of all development. Similarly, sector-specific regulatory adaptations, such as the FDA's guidance on AI in medical devices or the Financial Conduct Authority's principles for AI in financial services, demonstrate that targeted governance can address domain-specific risks without broad moratoriums. What is required is not a halt to development but rather mandatory transparency requirements, impact assessments for high-risk deployments, and regulatory agility to update frameworks in response to emerging capabilities.

In conclusion, whilst the development of artificial intelligence without adequate governance structures poses genuine risks that merit serious regulatory attention, a complete moratorium is neither achievable in a multipolar world nor desirable given AI's demonstrated benefits in critical domains. The appropriate response is not to halt progress but to pursue adaptive, risk-proportionate regulation that establishes clear red lines for unacceptable applications, mandates transparency and accountability for high-risk systems, and invests in regulatory capacity to evolve alongside technology. The challenge is not to choose between innovation and safety, but to construct governance mechanisms sophisticated enough to enable the former whilst ensuring the latter: a task that requires continued engagement with AI development rather than withdrawal from it.

Overall Standard - What This Model Essay Demonstrates

This essay demonstrates high-level analytical capability through several concrete features. The introduction immediately establishes a nuanced position that rejects the binary framing of the question, signalling sophisticated thinking. The argument progresses logically: the second paragraph presents the strongest case for the moratorium using the precautionary principle, supported by specific evidence (the Centre for AI Safety statement, May 2023) and a historical analogy to nuclear technology that is relevant rather than superficial. The third paragraph offers substantive counterargument rather than token opposition, addressing practical impossibility through geopolitical analysis and opportunity cost through a concrete example (AlphaFold's breakthrough in protein structure prediction). Critically, the fourth paragraph engages with the strongest objection to the essay's own position (the speed asymmetry between development and regulation) and responds with specific regulatory examples (EU AI Act, FDA guidance, FCA principles) that demonstrate research beyond general knowledge. The conclusion synthesises rather than summarises, framing the issue as constructing sufficiently sophisticated governance rather than choosing between development and safety. The essay maintains formal academic register throughout, employs precise vocabulary ("ex ante regulation," "risk-tiered approach"), and integrates real-world references seamlessly into argumentative flow rather than name-dropping. It maximises the available space with substantive content, sustaining density of argument to the final sentence.
