November 2023
Responsible Use of AI: Bridging Innovation and Ethics
Artificial Intelligence (AI) is transforming the way humans interact, industries function, and societies are structured. The seemingly limitless potential of AI across multiple domains, countries, and human imaginations has spawned numerous applications. Current applications include image and text analysis, logistics, decision-making support, autonomous vehicles and aerial systems, and cybersecurity.
Additionally, it is being used for security, surveillance, and inventory management. It is also being applied extensively to areas like agriculture, fintech, healthcare, manufacturing, and climate change, yielding sizeable dividends in all of them. It has become abundantly clear in the recent past that AI can augment human capabilities and aid us in tackling some of the most pressing challenges of our time.

Shimona Mohan
Dr Sameer Patil

The co-author is a Junior Fellow at the Centre for Security, Strategy and Technology, ORF. She works at the intersection of security, technology (especially AI and cybersecurity), gender and disarmament. Email: shimona.mohan@orfonline.org.
The author is a Senior Fellow and Deputy Director at the Observer Research Foundation (ORF). He works at the intersection of technology and national security. Email: sameer.patil@orfonline.org.

AI is a force that has the capacity to create a more sustainable, equitable, and interconnected world. However, it also raises critical ethical and societal concerns, which require adequate policy consideration and responses. This highlights the need for the responsible development and deployment of AI to ensure that its transformative power benefits everyone and leaves no one behind.
G20 New Delhi Leaders’ Declaration and Responsible AI
States are increasingly being compelled to practise responsible behaviour in their engagements with AI for civilian, security, and defence purposes. In this context, the recently concluded G20 summit in New Delhi (9-10 September 2023) tackled multiple aspects related to Responsible AI (RAI). Most G20 members have been working towards establishing regulations for the responsible use of AI, especially since the advent of GenAI applications. The European Union’s proposed AI Act is the most comprehensive attempt to establish a regulatory framework for the responsible development of AI, focusing primarily on strengthening rules around data quality, transparency, human oversight, and accountability.[1]
The New Delhi Leaders’ Declaration highlights the significance of harnessing ‘AI responsibly for good and for all’.[2] It states that the G20 leaders are committed to leveraging AI for the public good by solving challenges in a responsible, inclusive, and human-centric manner, while protecting people’s rights and safety. It adds that to ensure responsible AI development, deployment, and use, the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, biases, privacy, and data protection must be addressed. In addition, the declaration mentions that the G20 members will pursue a pro-innovation regulatory and governance approach that maximises the benefits and takes into account the risks associated with the use of AI.
The declaration also reaffirms the leaders’ commitment to the G20 AI Principles of 2019. These principles were adopted at the 2019 Osaka summit and underline a human-centred approach to AI.[3] They take a cue from the Organisation for Economic Co-operation and Development principles on AI, also adopted in 2019, which support the technology in becoming innovative and trustworthy while respecting human rights and democratic values.[4] Besides this, the declaration underlines the importance of investment in supporting human capital development. Towards this, G20 leaders agreed to extend support to educational institutions and teachers to enable them to keep pace with emerging trends and technological advances, including AI. This will play an important role in imparting skills to the youth entering the job market and will help offset concerns about the adverse economic impacts of AI.
How Does AI Pose Ethical Risks?
According to the AIAAIC (AI, Algorithmic, and Automation Incidents and Controversies) database, which tracks incidents related to the ethical misuse of AI, the number of AI incidents and controversies has increased 26 times since 2012.[5] Several critics of AI have also raised concerns about gender and racial bias in the application of AI to services like healthcare and finance. Although it may appear to be so, AI is not neutral; it can internalise and then catastrophically amplify the biases that societies possess, programme them into code, and/or ignore them in outputs
in the absence of sensitivities to those biases to begin with.[6] If the datasets used in developing any AI system are incomplete or skewed towards or against a sub-group, they will produce results that marginalise those sub-groups or render them invisible in some way. Yet, even if a dataset is precise and representative of the intended population, biased Machine Learning (ML) algorithms applied to the data may still result in biased outputs.
In most supervised ML models, training datasets are given labels by a human developer or coder to enable the ML model to classify the information it already has. The model then characterises new information given to it based on this classification syntax, after which it generates an output. There are two possible modes of bias introduction in this process: first, if the human developers have their own biases, which they either introduce into the system or retain through inadvertent oversight; and second, if biases are incorporated in the processing of the data within the ‘black box’ of the AI/ML system, which is not explainable to or understandable by human operators.[7] The black box, as the name suggests, makes the learning process of the system opaque, and its algorithms can thus only be fixed once an output is generated and the human developer confirms that there was a problem with processing the input data.
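The first mode of bias introduction can be made concrete with a minimal sketch. The toy below (purely illustrative; the scenario, data, and names are hypothetical and do not describe any real system) trains a trivial majority-vote ‘model’ on labels supplied by a biased human labeller. The code contains no explicit rule about group membership, yet its predictions reproduce the labeller’s pattern.

```python
# Toy illustration: bias enters a supervised model through human labels.
# All data here is hypothetical and deliberately tiny.
from collections import defaultdict

# Training rows: (applicant_group, qualified, label_given_by_human).
# The labeller systematically marks qualified group-"B" applicants "reject".
training = [
    ("A", True, "hire"), ("A", True, "hire"), ("A", False, "reject"),
    ("B", True, "reject"), ("B", True, "reject"), ("B", False, "reject"),
]

# "Training": count the labels seen for each (group, qualified) pair.
counts = defaultdict(lambda: defaultdict(int))
for group, qualified, label in training:
    counts[(group, qualified)][label] += 1

def predict(group, qualified):
    # Return the majority label observed for this pair during training.
    votes = counts[(group, qualified)]
    return max(votes, key=votes.get)

# Two equally qualified applicants receive different outcomes.
print(predict("A", True))  # -> hire
print(predict("B", True))  # -> reject
```

The bias lives entirely in the training labels, not in the algorithm, which is why it can survive review of the code itself; this is precisely the first mode described above.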
Besides this, ethical concerns have also arisen over issues like copyright infringement and privacy violations caused by apps that create realistic images and art from natural-language descriptions.[8,9] Several artists have accused such apps of training their algorithms on images and illustrations scraped from the web without the original artists’ consent.[10]
Then there are concerns regarding the misuse of AI in the defence domain to enhance the targeting and surveillance capabilities of drones on the battlefield, a use-case of AI in drone warfare with the potential for ensuing violence. In other cases, critics have noted the misuse of AI for illegal surveillance. In the cybersecurity sphere, generative AI applications increasingly pose legitimate security threats as they are used to conduct malware attacks. For instance, cybercriminals use AI to mass-generate phishing emails that spread malware and collect valuable information; these phishing emails have higher success rates than manually crafted ones. An even more insidious threat has emerged through ‘deepfakes’, synthetic or artificial media generated using ML. Such realistic-looking content is difficult to verify and has become a powerful tool for disinformation, with grave national security implications. For instance, in March 2022, a deepfake video of Ukrainian President Volodymyr Zelenskyy asking his troops to surrender went viral among Ukrainian citizens, causing significant confusion even as their military was fighting the Russian forces.[11]
Beyond defence and security, AI has also evoked fears of adverse economic impact. An emerging apprehension is that AI automation could fundamentally alter the labour market, with grave implications for economies in the Global South that rely on their labour and human resources.[12,13]
What is Responsible AI?
These dynamics have created the necessity for ‘Responsible AI’ (RAI) and the need to regulate it. There has been gradual momentum around rallying for responsible innovation ecosystems. This is especially valid in the development and deployment of AI, where there is a chance for responsible innovation and use to be institutionalised right from the get-go, and not as an afterthought or a checkbox to performatively satisfy policy and/or compliance-
related constraints. In this context, RAI is broadly understood as the practice of designing, developing, and deploying AI to empower employees and businesses and to impact society fairly. Given AI’s dual-use character, this is a loose and flexible understanding; it posits RAI as an umbrella term that usually encompasses considerations around fair, explainable, and trustworthy AI systems.
India has been working on RAI since 2018, and NITI Aayog released a two-part report in 2021 on approaches towards[14] and the operationalisation of[15] RAI principles for the deployment and use of civilian AI architectures. The seven principles that NITI Aayog highlights are: safety and reliability; equality; inclusivity and non-discrimination; privacy and security; transparency; accountability; and the protection and reinforcement of positive human values. It also recommends measures for the government, industry bodies, and civil society to implement these principles in the AI products they develop or work with. Indian tech industry body NASSCOM embedded the principles of this framework into India’s first RAI Hub and Toolkit,[16] released in late 2022, which comprises sector-agnostic tools to enable entities to leverage AI while prioritising user trust and safety.
Pertinently, the focus on RAI in the G20 New Delhi Leaders’ Declaration also aligns with India holding the chair of the Global Partnership on Artificial Intelligence (GPAI), a multistakeholder initiative that brings together experts from science, industry, civil society, international organisations, and governments.[17] It contributes to the responsible development of AI via its Responsible AI working group.[18] India chairing the GPAI is important since the Global South is underrepresented in the forum: of its 29 members, only four (Argentina, Brazil, India, and Senegal) are from the Global South. India is therefore well positioned to play an active role in bridging this divide and ensuring that less developed economies also reap the benefits of this technological shift towards AI. New Delhi will host the annual GPAI summit on 12-14 December 2023. At last year’s summit in Tokyo, India urged members to work together on a common framework of rules and guidelines on data governance in order to prevent user harm and ensure the safety of both the internet and AI.
Conclusion
Though the rise of AI and its applications in the past few years has been meteoric and the scope for innovation in the field is endless, nations around the world are waking up to the dangers of its potential misuse. While several initiatives are attempting to address the issue, there is currently no global consensus or regulatory framework on the ethical and responsible use of AI. Hence, groupings like the G20 and GPAI are in an opportune position to take the lead in this regard, thereby bridging the gap between innovation and the ethics of AI use. The G20 New Delhi Leaders’ Declaration demonstrates that leaders of the world’s largest economies are aware of the potential benefits and risks of AI and are committed to working together to ensure that the technology is developed and used in a responsible and inclusive manner. The G20 members must follow this declaration by adopting an anticipatory regulation approach, engaging in over-the-horizon thinking, and building a coalition of diverse stakeholders.
Page 5
47 November 2023
Re SPONSibLe USe OF Ai
bRidGiNG iNNOv AtiON ANd e thiCS
rtificial i ntelligence (Ai) is transforming
the way humans interact, industries
function, and societies are structured.
t he seemingly limitless potential of Ai
across multiple domains, countries, and human
imaginations has spawned numerous applications.
current applications include image and text
analysis for data analysis purposes, logistics,
assistance in decision-making, autonomous
vehicles, and aerial systems, cybersecurity, etc.
The New Delhi Leaders’ Declaration highlights the significance of
harnessing ‘ AI responsibly for good and for all’ . It states that the G20
leaders are committed to leveraging AI for the public good by solving
challenges in a responsible, inclusive, and human-centric manner while
protecting people’s rights and safety. Groupings like these are in an
opportune position to take the lead in this regard, thereby bridging the
gap between innovation and the ethics of the use of AI.
Additionally, it is being used for security,
surveillance, and inventory management. it is also
being applied extensively to areas like agriculture,
fintech, healthcare, manufacturing, and climate
change, yielding sizeable dividends in all of them.
it has become abundantly clear in the recent
past that Ai can augment human capabilities
and aid us in tackling some of the most pressing
challenges of our time. Ai is a force that has the
ShImona mohan
dR SameeR paTIL
The co-author is a Junior Fellow at the Centre for Security, Strategy and Technology, ORF. She works at the intersection of security, technology
(especially AI and cybersecurity), gender and disarmament. Email: shimona.mohan@orfonline.org.
The author is a Senior Fellow and Deputy Director at the Observer Research Foundation (ORF). He works at the intersection of technology and
national security. Email: sameer.patil@orfonline.org.
A
48 November 2023
capacity to create a more sustainable, equitable,
and interconnected world. However, it also raises
critical ethical and societal concerns, which require
adequate policy consideration and responses.
this highlights the need for the responsible
development and deployment of Ai to ensure that
its transformative power benefits everyone and
leaves no one behind.
G20 new delhi leaders’ declaration and
responsible ai
States are increasingly being compelled to practise responsible behaviour in their engagements with AI for civilian, security, and defence purposes. In this context, the recently concluded G20 Summit in New Delhi (9-10 September 2023) tackled multiple aspects related to Responsible AI (RAI). Most of the G20 members have been working towards establishing regulations for the responsible use of AI, especially since the advent of GenAI applications. The European Union’s proposed AI Act is the most comprehensive attempt to establish a regulatory framework for the responsible development of AI, focusing primarily on strengthening rules around data quality, transparency, human oversight, and accountability.1
The New Delhi Leaders’ Declaration highlights the significance of harnessing ‘AI responsibly for good and for all’.2 It states that the G20 leaders are committed to leveraging AI for the public good by solving challenges in a responsible, inclusive, and human-centric manner, while protecting people’s rights and safety. It adds that to ensure responsible AI development, deployment, and use, concerns around the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, biases, privacy, and data protection must be addressed. In addition, the Declaration mentions that the G20 members will pursue a pro-innovation regulatory/governance approach that maximises the benefits and takes into account the risks associated with the use of AI.
The Declaration also reaffirms the leaders’ commitment to the G20 AI Principles of 2019. These principles were adopted at the 2019 Osaka Summit and underline a human-centred approach to AI.3 They take a cue from the Organisation for Economic Co-operation and Development (OECD) principles on AI, also adopted in 2019, which support the technology in becoming innovative and trustworthy while respecting human rights and democratic values.4 Besides this, the Declaration also underlines the importance of investment in supporting human capital development. Towards this, G20 leaders agreed to extend support to educational institutions and teachers to enable them to keep pace with emerging trends and technological advances, including AI. This will play an important role in imparting skills to the youth entering the job market and will help offset concerns around the adverse economic impacts of AI.
How Does AI Pose Ethical Risks?
According to the AIAAIC (AI, Algorithmic, and Automation Incidents and Controversies) database, which tracks incidents related to the ethical misuse of AI, the number of AI incidents and controversies has increased 26 times since 2012.5
Several critics of AI have also raised concerns about gender and racial bias when it comes to the application of AI to services like healthcare and finance. Although it may appear to be so, AI is not neutral; it can internalise and then catastrophically amplify the biases that societies possess, programme them into its code, and/or ignore them in outputs in the absence of sensitivity to those biases to begin with.6 If the datasets used in developing any AI system are incomplete or skewed towards or against a sub-group, the system will produce results that marginalise those sub-groups or make them invisible in some way. Yet, even if a dataset is precise and representative of the intended population, biased Machine Learning (ML) algorithms applied to the data may still result in biased outputs.
In most supervised ML models, training datasets are given labels by a human developer or coder to enable the ML model to classify the information it already has. The model then characterises new information given to it based on this classification syntax, after which it generates an output. There are two possible modes of bias introduction in this process: first, if the human developers have their own biases, which they either introduce into the system or retain due to ignorant oversight; and second, if biases are incorporated in the processing of the data within the ‘black box’ of the AI/ML system, which is not explainable to or understandable by human operators.7 The black box, as the name suggests, makes the learning process of the system opaque, and its algorithms can thus only be fixed once an output is generated and the human developer affirms that there was a problem with processing the input data.
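The first mode of bias introduction described above can be illustrated with a deliberately simplified sketch. Everything below is invented for illustration: the applicant data, the group names, and the trivial ‘most frequent label’ classifier, which stands in for any supervised ML pipeline that learns from human-assigned labels.

```python
from collections import Counter, defaultdict

# Hypothetical training data: (features, label) pairs. A biased human
# labeller approves qualified applicants from group "A" but rejects an
# equally qualified applicant from group "B".
training = [
    ({"qualified": True,  "group": "A"}, "approve"),
    ({"qualified": True,  "group": "A"}, "approve"),
    ({"qualified": True,  "group": "B"}, "reject"),   # biased label
    ({"qualified": False, "group": "A"}, "reject"),
    ({"qualified": False, "group": "B"}, "reject"),
]

def train(data):
    # A stand-in classifier: memorise the most frequent label seen for
    # each (qualified, group) feature combination.
    counts = defaultdict(Counter)
    for features, label in data:
        counts[(features["qualified"], features["group"])][label] += 1
    return {key: c.most_common(1)[0][0] for key, c in counts.items()}

model = train(training)

# Identically qualified applicants receive different outcomes, because
# the model faithfully reproduces the labeller's bias:
print(model[(True, "A")])   # approve
print(model[(True, "B")])   # reject
```

Note that improving the dataset’s representativeness would not help here: the labels themselves carry the bias, which is why such problems typically surface only after an output is generated and a human inspects it.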
Besides this, ethical concerns have also arisen over issues like copyright infringement and privacy violations due to apps that create realistic images and art from natural-language descriptions.8,9 Several artists have accused such apps of training their algorithms on images and illustrations scraped from the web without the original artists’ consent.10
Then there are concerns regarding the misuse of AI in the defence domain to enhance the targeting and surveillance capabilities of drones on the battlefield, a use case of AI in drone warfare with the potential for ensuing violence. In other cases, critics have also noted the misuse of AI for illegal surveillance. In the cybersecurity sphere, generative AI applications increasingly pose legitimate security threats as they are being used to conduct malware attacks. For instance, cybercriminals, with the help of AI, mass-generate phishing emails to spread malware and collect valuable information. These phishing emails have higher rates of success than manually crafted ones. However, an even more insidious threat has emerged through ‘deepfakes’, which use ML to generate synthetic or artificial media. Such realistic-looking content is difficult to verify and has become a powerful tool for disinformation, with grave national security implications. For instance, in March 2022, a deepfake video of Ukrainian President Volodymyr Zelenskyy asking his troops to surrender went viral among Ukrainian citizens, causing significant confusion even as their military was fighting the Russian forces.11
Beyond defence and security, AI has also evoked fears of adverse economic impact. An emerging apprehension is that AI automation could fundamentally alter the labour market, with grave implications for economies in the Global South that rely on their labour and human resources.12,13
What is Responsible AI?
These dynamics have created the necessity for ‘Responsible AI’ (RAI) and the need to regulate it. There has been a gradual momentum around rallying for responsible innovation ecosystems. This is especially valid in the development and deployment of AI, where there is a chance for responsible innovation and use to be institutionalised right from the get-go and not as an afterthought or a checkbox to performatively satisfy policy and/or compliance-related constraints. In this context, RAI is broadly understood as the practice of designing, developing, and deploying AI to empower employees and businesses and impact society in a fair manner. Given AI’s dual-use character, this is a loose and flexible understanding, and it posits RAI as an umbrella term that usually encompasses considerations around fair, explainable, and trustworthy AI systems.
India has been working on RAI since 2018, and NITI Aayog released a two-part report in 2021 on the approach towards, and operationalisation of, RAI principles for the deployment and use of civilian AI architectures.14,15 The seven principles that NITI Aayog highlights are: safety and reliability; equality; inclusivity and non-discrimination; privacy and security; transparency; accountability; and protection and reinforcement of positive human values. It also recommends measures for the government, industry bodies, and civil society to implement these principles in the AI products they develop or work with. Indian tech industry body NASSCOM embedded the principles of this framework into India’s first RAI Hub and Toolkit,16 released in late 2022, which comprises sector-agnostic tools to enable entities to leverage AI by prioritising user trust and safety.
Pertinently, the focus on RAI in the G20 New Delhi Leaders’ Declaration also aligns with India holding the chair of the Global Partnership on Artificial Intelligence (GPAI), a multistakeholder initiative that brings together experts from science, industry, civil society, international organisations, and governments.17 It contributes to the responsible development of AI via its Responsible AI Working Group.18 India chairing the GPAI is important since the Global South is underrepresented in the forum: of its 29 members, only four are from the Global South, namely Argentina, Brazil, India, and Senegal. Therefore, India is well positioned to play an active role in bridging this divide and ensuring that the less developed economies also get to reap the benefits of this technological shift towards AI. New Delhi will host the annual GPAI Summit on 12-14 December 2023. At last year’s summit in Tokyo, India urged the members to work together on a common framework of rules and guidelines on data governance in order to prevent user harm and ensure the safety of both the internet and AI.
Conclusion
Though the rise of AI and its applications in the past few years has been meteoric and the scope for innovation in the field is endless, nations around the world are waking up to the dangers of its potential misuse. While there are several initiatives attempting to address the issue, there is currently no global consensus or regulatory framework on the ethical and responsible use of AI. Hence, groupings like the G20 and GPAI are in an opportune position to take the lead in this regard, thereby bridging the gap between innovation and the ethics of AI use. The G20 New Delhi Leaders’ Declaration demonstrates that leaders of the world’s largest economies are aware of the potential benefits and risks of AI and are committed to working together to ensure that the technology is developed and used in a responsible and inclusive manner. The G20 members must follow up on this declaration by adopting an anticipatory regulation approach, engaging in over-the-horizon thinking, and building a coalition of diverse stakeholders.
References
1. “EU AI Act: First regulation on artificial intelligence,” June 14, 2023, https://www.europarl.europa.eu/news/en/headlines/society/20230601sto93804/eu-ai-act-first-regulation-on-artificial-intelligence.
2. G20 New Delhi Leaders’ Declaration, September 9-10, 2023, https://www.g20.org/content/dam/gtwenty/gtwenty_new/document/G20-New-Delhi-Leaders-Declaration.pdf.
3. G20 AI Principles, https://www.mofa.go.jp/policy/economy/g20_summit/osaka19/pdf/documents/en/annex_08.pdf.
4. “OECD AI Principles overview,” https://oecd.ai/en/ai-principles.
5. Artificial Intelligence Index Report 2023, Stanford University Human-Centered Artificial Intelligence, https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report-2023_CHAPTER_3.pdf.
6. Shimona Mohan, “Filling the Blanks: Putting Gender into Military A.I.,” ORF Issue Brief No. 655, August 2023, Observer Research Foundation, https://www.orfonline.org/research/filling-the-blanks-putting-gender-into-military-ai/.
7. Shimona Mohan, “Gender-ative AI: An enduring gender bias in generative AI systems,” Observer Research Foundation, April 27, 2023, https://www.orfonline.org/expert-speak/gender-ative-ai/.
8. DALL·E 2, https://openai.com/dall-e-2.
9. Midjourney, https://www.midjourney.com/home/.
10. James Vincent, “AI art tools Stable Diffusion and Midjourney targeted with copyright lawsuit,” The Verge, January 16, 2023, https://www.theverge.com/2023/1/16/23557098/generative-ai-art-copyright-legal-lawsuit-stable-diffusion-midjourney-deviantart.
11. The Telegraph, “Deepfake video of Volodymyr Zelensky surrendering surfaces on social media,” March 17, 2022, https://www.youtube.com/watch?v=X17yrev5sl4.
12. Ian Shine and Kate Whiting, “These are the jobs most likely to be lost – and created – because of AI,” World Economic Forum, May 4, 2023, https://www.weforum.org/agenda/2023/05/jobs-lost-created-ai-gpt/.
13. Accenture, “A new era of generative AI for everyone,” https://www.accenture.com/content/dam/accenture/final/accenture-com/document/Accenture-A-New-Era-of-Generative-AI-for-Everyone.pdf.
14. NITI Aayog, “Responsible AI #AIForAll: Approach Document for India Part 1 – Principles for Responsible AI,” February 2021, https://www.niti.gov.in/sites/default/files/2021-02/Responsible-AI-22022021.pdf.
15. NITI Aayog, “Responsible AI #AIForAll: Approach Document for India: Part 2 – Operationalizing Principles for Responsible AI,” August 2021, https://www.niti.gov.in/sites/default/files/2021-08/Part2-Responsible-AI-12082021.pdf.
16. INDIAai, “NASSCOM launched the Responsible AI hub and resource kit,” October 11, 2022, https://indiaai.gov.in/news/nasscom-launched-the-responsible-ai-hub-and-resource-kit.
17. Prateek Tripathi, “India’s Chairmanship of the Global Partnership on AI,” Observer Research Foundation, August 8, 2023, https://www.orfonline.org/expert-speak/indias-chairmanship-of-the-global-partnership-on-ai/.
18. The Global Partnership on Artificial Intelligence, “Working Group on Responsible AI,” https://gpai.ai/projects/responsible-ai/.