
How To Build An Ethical And Responsible Artificial Intelligence Programme


As AI systems grow ever more mainstream, debates around the dangers and ethical issues posed by artificial intelligence technologies are becoming more frequent. These debates, ubiquitous in newspapers, magazines and on TV today, bring to public attention the wide range of opinions, from one extreme to the other, held by technology, philosophy and social science experts on the topic of AI.

A study conducted at Georgia Tech showed that AI vision systems performed uniformly worse when confronted with pedestrians with darker skin tones, raising fears that a future world rife with autonomous cars will not be as safe for dark-skinned pedestrians as it is for lighter-skinned ones.

In December 2019, Facebook took down 600 accounts, with over 55 million followers collectively, that were using AI-generated identities to push pro-Trump stories on a variety of topics related to impeachment and the elections.

On the other hand, AI is already being hailed as a messiah for streamlining business operations. With applications such as smart housing, next-generation automobiles, personal assistants, public surveillance, advanced healthcare, logistics drones and fraud prevention in finance, the technology is steadily growing in use and acceptance.

Especially in today's VUCA (volatile, uncertain, complex and ambiguous) world, a robust AI framework is necessary to help enterprises make accurate and timely decisions, anticipate market conditions and uncover hidden value chains.

With artificial intelligence poised to become even more prominent, it is time to consider the ethics of AI, the pressing challenges it raises, and how we can ensure a positive outcome for humanity by building and deploying AI models responsibly and with confidence.

Today, artificial intelligence technologies based on neural networks and deep learning models use millions of parameters, which create incredibly complex and highly nonlinear internal representations of the images or datasets fed to them. They are therefore considered 'black box' systems. The pressing issue now is: how do we deliver transparency with this 'black box' AI?
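To make the 'black box' point concrete, consider the minimal sketch below (written in Python with PyTorch, purely as an illustration; the model and sizes are hypothetical). Even a toy three-layer network carries hundreds of thousands of trainable weights, and every prediction is a nonlinear function of all of them at once, so no single weight or readable rule 'explains' the output.

```python
# A minimal, hypothetical sketch of why even a small deep network is opaque:
# its output emerges from layered, nonlinear transformations of huge numbers
# of weights, not from a handful of inspectable rules.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(1024, 512), nn.ReLU(),  # each layer mixes every input feature
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 10),               # final scores for 10 classes
)

n_params = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {n_params:,}")  # ~660,000 for this toy model

x = torch.randn(1, 1024)  # one input example
scores = model(x)          # output depends nonlinearly on all ~660k weights
print(scores.shape)        # torch.Size([1, 10])
```

If a three-layer toy already has roughly 660,000 interacting parameters, production models with millions or billions of them are harder still to interpret by inspection.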

AI is indeed proving to be a double-edged sword, with both edges far sharper, and far less understood, than those of other technologies.

What Are The Broad Pressing Issues With AI In Society?

1. Increasing Inequality - Many experts have expressed concern that the rise of robots and intelligent systems might lead to massive job losses, or create conditions in which capital accumulates in the hands of a few. The potential of automation technology to replace more expensive human labour in blue-collar jobs may create a need to redeploy or retrain employees for other roles. In the future, we may see more debate on universal wage programmes to ensure that no one is left out of this march of progress.

2. What if Its Outcomes Are Not Aligned with Society's Laws? - This scenario frequently comes up in sci-fi, where a rogue system infringes upon societal laws in pursuit of its stated objective. A famous example is depicted in the popular TV show Rick and Morty, where Rick tasks an intelligent system with keeping Summer safe in his absence. In the ensuing turn of events, the system kills a robber and police officers and finally takes the entire city hostage, because it deems all of them threats to Summer's safety. While you might argue that the scenario is far-fetched, it highlights a significant problem: it is extraordinarily hard to codify ethical behaviour.

3. Bias Leading to Incorrect Decisions - AI systems are supposed to have a low error rate. After all, we don't expect them to be affected by fatigue, boredom, resentment or human biases, right? WRONG. There have been several high-profile cases where an AI system has exhibited bias, a case in point being Amazon's sophisticated AI-driven recruitment system, which began to show a tendency to prefer male candidates over female ones. AI systems are prone to biases in their data, their algorithms or their human developers, and can develop biases against race, gender, religion or ethnicity (a minimal sketch of one common bias check follows this list).

4. How to Treat AI? - As AI systems proliferate, what legal rights are they entitled to? For example, the robot Sophia was granted citizenship of Saudi Arabia. While that was primarily a PR stunt, it opens up an inevitable and pressing question: who will be responsible for AI systems if they commit an infraction? If machines and AI systems are indeed "autonomous", can they be held accountable for wrongdoing? Should a robot be charged if it runs a red light to arrive at an emergency on time? How do we prosecute it? If a system is capable of upgrading and improving its own network without its owner's intervention, why should it not be recognized, and held responsible, for what it does? How do we deter such a system? These are questions we have to answer to ensure a future with responsible AI.
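To illustrate the bias point above, here is a minimal, hypothetical sketch of one widely used fairness check: the disparate impact ratio, i.e. the selection rate for one group divided by the rate for a reference group. The decisions below are invented for illustration; a common rule of thumb flags ratios under 0.8 as potential adverse impact.

```python
# A minimal sketch of one common fairness audit: the disparate impact ratio.
# The data is invented for illustration; a real audit would use the model's
# actual decisions together with protected-attribute labels.

def selection_rate(decisions: list[int]) -> float:
    """Fraction of candidates the model selected (1 = selected)."""
    return sum(decisions) / len(decisions)

# Hypothetical screening outcomes from a recruitment model.
male_decisions   = [1, 1, 0, 1, 1, 0, 1, 1]  # 75% selected
female_decisions = [1, 0, 0, 1, 0, 0, 1, 0]  # 37.5% selected

ratio = selection_rate(female_decisions) / selection_rate(male_decisions)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50, well below the 0.8 rule of thumb

if ratio < 0.8:
    print("Potential adverse impact: investigate the model and its training data.")
```

A check like this does not explain why the model is biased, but it gives organizations a simple, auditable signal that something in the data or algorithm deserves scrutiny.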

Why Is The Issue So Critical?

In response to the growing concerns around AI, the European Union has decided to create a special committee on artificial intelligence in the EU Parliament. The purpose of this committee is to create a social, legal and ethical framework for addressing the concerns posed by the growing prevalence of AI technology in human affairs. It aims to provide transparent access to information so that responsible, careful and judicious use of AI can be encouraged. The committee also plans to run upskilling and educational programmes to make the public aware of the different aspects of the technology.

As we step into a future where AI is expected to play a role in every walk of our lives, it is imperative to consider the evidence we already have about what that future will look like. Ethics in AI will safeguard meaningful innovation and offer a critical eye.

Bill Gates famously compared AI technology to a nuclear bomb and stressed that, without the appropriate knowledge, AI could become overwhelming and dangerous for society. To debate whether an AI system can be relied upon to make "ethical" decisions, we may have to reconsider our definitions of "ethical" behaviour first. Do we have a sure way to define ethics? How do we build an ethical outcome into logic? Are we willing to hand autonomy to systems without understanding these issues first?

How Are Businesses Grappling With These Challenges?

Companies are deploying sophisticated artificial intelligence systems that can mimic human cognitive functions as well as perform extensive analysis and automation.

As AI systems become smarter, they are turning into a modern-day Pandora's box, and legitimate concerns need to be addressed before they create an unwanted scenario.

For businesses, the most visible challenges with AI are privacy violations, discrimination and accidents. If not handled well, these issues can cause deep organizational damage, from reputational and revenue losses to regulatory backlash, criminal investigation and diminished public trust.

Organizations are struggling to answer a fundamental question: how can they ensure that their algorithms act responsibly and ethically?

Companies need to become well aware of these perils before deploying their algorithms. At a minimum, they should:

  • Define guidelines and set up governance over the operations of their AI systems
  • Plan how to operationalize these guidelines
  • Educate their teams on why these guidelines matter

Platforms such as DataRobot have helped organizations make considerable strides in this direction by delivering transparency into how an AI model arrives at its predictions.

Enter Explainable AI

Artificial intelligence models can often act like 'black boxes'; hence the need for more transparent AI models, ones that provide insight into the data, decision points and techniques behind an AI recommendation, is growing in importance. Explainable AI treats model interpretability as critical to optimizing AI and solving problems in the best possible way (a minimal sketch of one such technique follows the questions below).

Google has taken a significant step in this direction by announcing Explainable AI, with features like the What-If Tool and attribution modelling, to help businesses deploy AI with confidence and streamline model governance.

Explainable AI (XAI For Short) Will Need To Deliver Answers To Some Pressing Questions:

1. Why did the AI system make a specific prediction or choose a particular course of action?

2. Why didn't it choose another course of action?

3. When did the AI system succeed or fail?

4. When does an AI system provide enough confidence in a decision that you can trust it?

5. How can the AI system correct its errors?
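As a hedged illustration of the first question, one simple, model-agnostic technique is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The sketch below uses scikit-learn on synthetic data purely for illustration; production XAI tooling such as attribution modelling is far more sophisticated, but the underlying idea, attributing a prediction to its inputs, is similar.

```python
# A minimal sketch of one model-agnostic explainability technique:
# permutation importance ("which inputs drove the model's predictions?").
# Synthetic data and a generic classifier, purely for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops;
# a large drop means the model leaned heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: importance {result.importances_mean[i]:.3f}")
```

Features whose shuffling barely moves accuracy contributed little to the decision; large drops point to the inputs the model actually relied on, giving a first, rough answer to "why this prediction?".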

Explainable AI, along with streamlined governance, well-laid policies and education around ethics in AI, will help us as a society grapple with these growing concerns. It is an essential part of any strategy to arrive at an acceptable and harmonious response to the dangers that artificial intelligence technologies pose for society and businesses today.
