Chapter 05: AI Ethics
EXERCISES
A. Tick the correct answers.
1. Which of the following companies designed Tay, an AI chatbot?
a. Microsoft b. Google c. Amazon
Answer: (a) Microsoft
Description: In March 2016, Microsoft released its new chatbot, Tay, on Twitter. Described as an experiment in “conversational understanding,” Tay was designed to engage people in dialogue through tweets or direct messages, while emulating the style and slang of a teenage girl.
2. Which of the following organisations had to terminate its AI hiring and recruitment process because of AI bias issues?
a. Microsoft b. Amazon c. Google
Answer: (b) Amazon
Description: Amazon spent years automating its hiring process but had to terminate its AI hiring and recruitment process because it could not stop the system from being biased against a particular gender. This happened because the AI system was fed resumes of people who had applied for jobs at Amazon over a period of 10 years, and they mostly belonged to a particular gender. Similarly, a software known as COMPAS, which is powered by AI, claimed to predict future criminals, but it renewed the debate over AI bias as it showed bias against a particular community.
3. Which of the following companies claims that Singularity can reduce the time spent on drug trials?
a. Google b. HCL c. IBM
Answer: (c) IBM
Description: Drug trials are time-consuming and may even take decades before the medicine finally reaches the market, claims IBM. However, with the help of Singularity, we can test drugs billions of times in a simulated environment on a digital human being. This would significantly cut down the time a drug takes to hit the market, which could go a long way in saving many lives.
4. Which of the following is not an issue concerning AI ethics?
a. AI bias b. Singularity c. Sapience
Answer: (a) AI bias
Description: AI bias means that AI cannot always be fair and can develop a bias towards a race, gender, or ethnicity. The reason behind this is that AI systems are also developed and trained by humans, who themselves can be biased.
5. Which of the following is not an implication of developing and using AI?
a. Sentience b. AI bias c. Security
Answer: (b) AI bias
Description: AI cannot always be fair, and it can develop a bias towards a race, gender, or ethnicity.
6. AI bias is alternatively known as
a. Data science bias b. Machine learning bias c. Database system bias
Answer: (b) Machine learning bias
Description: AI bias is also known as machine learning bias.
7. What is the hypothetical point of artificially intelligent machines surpassing human intelligence known as?
a. Singularity b. AI bias c. Security
Answer: (a) Singularity
Description: Singularity is the hypothetical point at which artificially intelligent machines will surpass human intelligence and human beings will no longer be the most intelligent beings on Earth.
8. Who framed the Three Laws of Robotics?
a. Nikola Tesla b. Isaac Asimov c. Garry Kasparov
Answer: (b) Isaac Asimov
Description: American writer Isaac Asimov framed the Three Laws of Robotics in the 1940s. They are as follows:
· A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
· A robot must obey the instructions given to it by human beings without violating the first law.
· A robot must protect its own existence without violating the first or the second law.
9. Which of the following is not a component of AI policy?
a. AI systems must be transparent.
b. Consumers must have a choice to leave the system.
c. AI systems do not explain their reasoning.
Answer: (c) AI systems do not explain their reasoning.
Description: The main components of a good AI policy are as follows:
· An AI-enabled system must be transparent.
· An AI-powered system must respect users' right to the information it is collecting.
· Consumers must have the freedom to leave the system, and on their request, the data must be deleted.
· Data collection and the purpose of artificial intelligence must be restricted by design.
10. What does GIGO stand for?
a. Good Institution, Good Organisation
b. Garbage In, Garbage Out
c. Getting Input, Getting Output
Answer: (b) Garbage In, Garbage Out
Description: If systems are trained properly, using good data, then AI can perform well. However, these machines can also be very harmful if they are trained with bad data or if they are not programmed in an ethical way. Programmers are familiar with the classic computer maxim, Garbage In, Garbage Out (GIGO): if the input is irrelevant, the output will prove to be useless. Therefore, people who work on training AI machines need to be well aware of this when determining what data to use.
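To make the GIGO idea concrete, here is a minimal, hypothetical Python sketch (the records and the train/recommend helpers below are invented for illustration and are not taken from the chapter): a simple model trained on historically skewed hiring data just learns and repeats that skew.

from collections import Counter

# Invented example records of past hiring decisions: (group, hired?)
# Group "A" was historically favoured, so the data itself is "garbage in".
training_data = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

def train(records):
    # Learn the historical hiring rate for each group.
    totals, hires = Counter(), Counter()
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

def recommend(model, group, threshold=0.5):
    # Recommend a candidate whenever the learned hiring rate crosses the threshold.
    return model[group] >= threshold

model = train(training_data)
print(model)                  # {'A': 0.75, 'B': 0.25}
print(recommend(model, "A"))  # True  -> the old bias is simply repeated ("garbage out")
print(recommend(model, "B"))  # False -> candidates from group B are rejected

The point of the sketch is only that a model faithfully reproduces whatever bias its training data contains; a fairer system would need balanced data and checks on the model's outputs.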
B. Fill in the blanks.
1. Ethics is a branch of philosophy, which defines what is good for individuals and society.
2. Sentience is defined as the capacity of the machine to feel pain and suffering.
3. AI ethics is defined as the moral principles governing the behaviour of human beings as they design, construct, and use artificially intelligent machines.
4. AI Policy refers to the public policies that ensure maximum benefits of AI.
5. AI-powered machines are not immune to making mistakes.
C. Write T for True and F for False.
1. AI cannot develop bias towards a race or gender. Answer: False (F).
2. Microsoft’s AI chatbot, Tay, was released on Twitter in 2016. Answer: True (T).
3. AI systems are immune to mistakes. Answer: False (F).
4. The aim of roboethics is to ensure that AI-powered machines place human safety first. Answer: True (T).
5. Google cannot access your browsing history, video viewing, and online searches. Answer: False (F).
D. Answer the following questions.
1. What do you understand by AI ethics? List three AI ethical issues.
Answer: AI ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies. Some of the ethical issues identified from case studies and a Delphi study are as follows:
· Cost to innovation.
· Harm to physical integrity.
· Lack of access to public services.
· Lack of trust.
· “Awakening” of AI.
· Security problems.
· Lack of quality data.
· Disappearance of jobs.
2. Why is an AI-enabled machine making a mistake considered an ethical issue? Elucidate with an example.
Answer: Bias is a much-cited ethical concern related to AI (CDEI 2019). One key challenge is that machine learning systems can, intentionally or inadvertently, result in the reproduction of already existing biases. For example, random dot patterns can lead a machine to "see" things that aren't there. If we rely on AI to bring us into a new world of labour, security, and efficiency, we need to ensure that the machine performs as planned and that people can't overpower it to use it for their own ends.
3. What is AI bias? What are the reasons for bias shown by AI machines?
Answer: AI bias is also known as machine learning bias. AI cannot always be fair, and it can develop a bias towards a race, gender, or ethnicity. The reason behind this is that AI systems are also developed and trained by humans, who themselves can be biased.
Or
2nd Answer: Bias in artificial intelligence can take many forms, from racial bias and gender prejudice to recruiting inequity and age discrimination. The underlying reason for AI bias lies in human prejudice, conscious or unconscious, lurking in AI algorithms throughout their development.
4. What do you understand by Singularity?
Answer: Singularity is the hypothetical point at which artificially intelligent machines will surpass human intelligence and human beings will no longer be the most intelligent beings on Earth.
5. If robots replace doctors in the healthcare industry, then what ethical issues may arise?
Answer: If robots replace doctors in the healthcare industry, many ethical issues may arise:
a. Possible loss of personalised care.
b. Loss of human contact.
c. Autonomy, control, and accountability.
d. Patient safety concerns.
e. Possible exacerbation of healthcare and other inequalities.