
Course 3
Creating your own projects with AI

Unit 4 – Ethics of AI and Future Harm

As you gain skills for using AI programs, it is important to stop and consider the implications of AI so that we stay on track for responsible use. In this unit we will also consider the responsible development of AI.

USE

As users of technology, we can be responsible and ethical citizens by considering the implications of the technology we use. In terms of AI, being thoughtful about the following issues reduces the chances that our peers, co-workers, friends and customers will experience harm.

Bias and Discrimination: AI systems learn from data, and if that data contains biases, the AI can perpetuate and even amplify them. Users do not have direct knowledge of the data, but knowing that it can be biased is reason enough to use it with caution. We can try different datasets, check answers against multiple sources, and make sure that humans are involved at every stage of a project that uses AI.

Job Displacement: Automation powered by AI could leave many individuals unemployed if it is not properly managed. We can consider whether using AI is the right choice, or whether employing a person for some or all of the work is a better one. Looking at the full picture can reveal ways in which an AI may not be the better long-term choice.

Privacy Concerns: AI relies on data, and the collection and analysis of vast amounts of personal data raise concerns about privacy violations if they are not handled securely. As with any technology, thinking carefully about what personal information you collect, and how you can store it securely, will help keep individuals safe from information leaks. This also means being aware of how the AI programs you use handle the information you upload to them.

Dependency and Overreliance: Overreliance on AI systems, especially in critical sectors like healthcare or transportation, can lead to catastrophic failures if these systems malfunction or are attacked by malicious actors.

DEVELOPMENT

As global citizens and users of technology, we can use our voice to help effect change when we see tech features that we don’t agree with. Writing letters of complaint and finding ways to explain your perspective as a user are important ways to help ensure the safe and ethical use of AI.

If you reach a point where you are working on the development of AI programs, you will hopefully have the chance to find ethical ways to build the perspectives of potential users into the way the programs work. You could also find yourself working with local, national or international policymakers who are considering how to implement laws and legislation to keep AI safe and fair.

Accountability and Transparency: Complex AI systems can be difficult to understand and audit, which can lead to a lack of accountability if something goes wrong. Many AI companies do not release information about their training datasets or how their programs are being used. By writing internal rules for transparency, and agreeing to external rules from governments and international agreements, companies can help ensure that the future of AI is safer and fairer.

Fakes and Misinformation: AI-generated fakes can be made to look like real news, and could be used to spread misinformation and manipulate public opinion. Preventing this sort of use comes down to the workers in AI companies and to the people making rules and legislation for AI.

Ethical Dilemmas: AI systems might face ethical dilemmas in decision-making where the right course of action is not clear, especially in situations where human lives are at stake. Writing strict rules that are agreed by a wide range of experts and end-users can help to alleviate the worst dilemmas, as can keeping humans in the process as a check.

Autonomous Weapons: The development of autonomous weapons powered by AI raises ethical concerns about the potential for these weapons to act independently without human control, leading to unintended consequences or warfare escalation.

Being aware of the potential harms is important. It is understandable if these issues concern you; they concern everyone. The answer to developing safe AI is to include as many people and perspectives in the conversation as possible.

DISCUSSION

Talk about these issues with a group of your peers. Find the issues you agree on, and the ones where your opinions differ. Compare your results with other groups to find the most difficult topics. These other perspectives will help you understand the challenges of implementing AI responsibly in the future.

This is the end of course 3. By now you have some excellent skills and knowledge about AI. Now is a good time to go back to some projects that you didn’t get a chance to complete.

In the next course we will look at ways to use AI when working in maker and innovation hub spaces.


Innovation Hub

©2023 by Innovation Hub. Part of SIP at UAEU
