
The Responsible AI Standards: Highlighting the Path to Governance

The new Bing is running on GPT-4.
Big news! Have you tried it?

GPT-4 can process more than 25,000 words of text and is more creative than earlier models: it can generate, edit, and iterate with users on creative and technical writing. And why Bing, why now? Microsoft has confirmed that the GPT-powered versions of Bing were already running GPT-4 prior to its official release. No conspiracy there!

Amid rapid-fire changes in buyer preferences and competition, the CMO must lead the enterprise into uncertain territory through strategic collaboration and execution. With ChatGPT usage emerging quickly across generations, from Baby Boomers in education to Millennials producing content for their B2B companies, responsible AI standards are being worked on by various stakeholders. Who are the stakeholders? We will get to that.

The Responsible AI Standard is the product of a multi-year effort to define product development requirements for responsible AI. Any student of AI knows there are standards. As Web 3.0 starts to gel into everyday operations, user interfaces, search engines, legacy systems, communications, talent pools, operating expenses, bonus structures, the future of work, and pre- and post-digital transformation projects across industries in the USA and Europe, the standards should include the following:

  1. Transparency: Responsible AI systems should be transparent about how they make decisions. Users and stakeholders should be able to understand how an AI system arrived at a particular decision. A major motivation for AI transparency and explainability is that they can build trust in AI systems, giving users and other stakeholders greater confidence that the system is being used appropriately.
  2. Accountability: AI systems should be accountable for their decisions and actions. If an AI system makes a mistake or causes harm, there should be a mechanism to hold the system accountable.
  3. Fairness: AI systems should be fair and unbiased. They should not discriminate against individuals or groups based on their race, gender, religion, or other factors.
  4. Privacy: AI systems should respect the privacy of individuals. They should only collect and use data that is necessary for their functions, and they should have robust data security and privacy protections in place.
  5. Safety: AI systems should be safe for humans and the environment. They should be designed to minimize the risk of harm to people or the environment.
  6. Explainability: AI systems should be explainable. They should be able to provide a clear explanation of their decision-making process and the factors that influenced their decisions.
  7. Human oversight: AI systems should be subject to human oversight. Humans should have the ability to intervene or override the decisions of AI systems when necessary.

By adopting these responsible AI standards, organizations can ensure that their AI systems are trustworthy, ethical, and beneficial to the society they serve.
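
Several of these principles can be checked in code as well as in policy. The sketch below is a minimal, hypothetical example, not a complete fairness audit: it operationalizes the fairness principle as a demographic parity check on logged decisions, where the column names and the 10% tolerance are assumptions a governance team would set for itself.

```python
import pandas as pd

def demographic_parity_gap(decisions: pd.DataFrame,
                           group_col: str = "group",
                           outcome_col: str = "approved") -> float:
    """Return the largest difference in approval rates across groups."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical decision log: one row per applicant.
log = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   1,   0,   1],
})

gap = demographic_parity_gap(log)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.10:  # assumed tolerance, set by the organization's governance policy
    print("Flag for human review (fairness and human oversight principles).")
```

A check like this does not prove a system is fair, but it gives the human oversight principle something concrete to act on: a measurable signal that triggers review before decisions ship.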

Have the AI Providers Drunk the Kool-Aid?

The high-tech industry relies on Artificial Intelligence (AI) to augment and automate decision-making that would overwhelm human operators. It is estimated that by 2025 there will be more than 30 billion connections worldwide.

30 Billion Connections: Does That Mean 30 Billion Opportunities to Leak?

AI is vulnerable to attack because there are weaknesses in modern AI architectures. Some platforms can even be broken with nothing more than carefully worded text prompts. Back in February 2023, Reddit users found prompts that overrode some of OpenAI’s restrictions on ChatGPT. And in March 2023, a bug in an open-source library used by ChatGPT caused the chatbot to leak customers’ personal data, including some credit card information and the titles of some of the chats they had initiated.

ChatGPT opens up new avenues for breaches and leaks even where advanced cybersecurity software is in place. Artificial intelligence will significantly deepen the digital footprint while raising the cyber risk profile of AI platform providers and of any organization whose employees use these tools for work without access limitations. One of the biggest obstacles is the need for continuous monitoring of AI usage. Effective AI governance is critical to deployment success.
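
Continuous monitoring does not have to wait for a vendor feature. Below is a minimal, hypothetical sketch of an audit-logging wrapper around an AI call: the `call_model` function and the log destination are assumptions, not any provider's real API, but the pattern shows how a governance team could record who asked what, and when, without storing the raw prompt text.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_usage_audit.log", level=logging.INFO)

def call_model(prompt: str) -> str:
    """Placeholder for a real provider call; assumed for this sketch."""
    return "model response"

def audited_completion(user_id: str, prompt: str) -> str:
    """Write an auditable record for every prompt routed through the wrapper."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        # Hash the prompt so the audit trail itself does not leak sensitive text.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "prompt_chars": len(prompt),
    }
    response = call_model(prompt)
    record["response_chars"] = len(response)
    logging.info(json.dumps(record))
    return response

# Example: an employee's request is answered and audited in one step.
print(audited_completion("employee-042", "Summarize our Q3 churn numbers."))
```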

Fortunately, the use of AI, security analytics, and encryption has been shown to reduce the cost of a breach, saving brands between $1.25 million and $1.49 million compared with those not using such technology.

A 2021 report from IBM and the Ponemon Institute revealed that customer personal data (name, email, password) was the most common type of data exposed in data breaches, involved in 44% of them.

In 2022, data breaches dominated the headlines. Companies from Cash App to Microsoft were the victims of data breaches as cybercriminals continued disrupting business continuity and hindering business success. 

In January 2023, MailChimp disclosed that a threat actor gained access to its systems through a social engineering attack and was then able to access data attached to 133 MailChimp accounts. In February, Australian software company Atlassian suffered a serious data breach: a hacking group broke into the company’s systems and extracted staff data, including names, email addresses, the departments staff work in, and other information relating to their employment at Atlassian.

Responsible AI Standards

Laws and norms have not caught up with AI’s unique risks. The National Institute of Standards and Technology (NIST) is conducting research, engaging stakeholders, and producing reports on the characteristics of trustworthy AI. NIST has also partnered with the National Science Foundation (NSF) on the Institute for Trustworthy AI in Law & Society (TRAILS), led by the University of Maryland.

The Global Dialogue on Responsible AI Standards

The active global dialogue about creating principled and actionable norms for the responsible development and deployment of AI at companies adopting new technologies like ChatGPT involves discussions around ethical considerations and responsible AI practices. For example, a data holder is obligated to protect the data in an AI system. Privacy and security are an integral part of that system: personal data needs to be secured, and AI should be accessed in a way that doesn’t compromise an individual’s privacy or business networks. Remote work doesn’t make this any easier.
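
As one minimal, hypothetical sketch of that obligation (the patterns below are assumptions and nowhere near a complete PII filter), personal data can be redacted before a prompt ever leaves the business network:

```python
import re

# Assumed patterns for illustration only; real deployments need a vetted PII library.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace obvious personal data with placeholders before calling any AI service."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

original = "Email jane.doe@example.com or call 555-867-5309 about her renewal."
print(redact(original))
# -> Email [EMAIL REDACTED] or call [PHONE REDACTED] about her renewal.
```

A redaction pass like this is only one layer; it complements, rather than replaces, the access limitations and continuous monitoring discussed above.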

There is a rich and active global dialogue about how to create principled and actionable norms to ensure that organizations, from Amazon to Microsoft, develop and deploy AI responsibly. Manual means of assurance cannot scale to satisfy such digital demands.

The primary goal is to ensure that organizations develop AI solutions that are transparent, accountable, and aligned with societal values and norms. The dialogue includes discussions around development frameworks, ethical principles, and best practices that can guide ethical AI development.

Issues such as bias, privacy, and fairness are central to the dialogue, with stakeholders exploring ways to ensure that AI solutions do not perpetuate or amplify existing social inequalities, that they protect user privacy and data, and that they mitigate risk to society.

Furthermore, the dialogue involves various stakeholders, including policymakers, researchers, industry experts, and civil society organizations. This diverse group of actors contributes to the creation of comprehensive, inclusive, and robust guidelines for responsible AI development and deployment.

Who are these stakeholders?

Generally speaking, the stakeholders who contribute to the creation of comprehensive, inclusive, and robust guidelines for responsible AI development and deployment may include:

  1. Policymakers: National and international policymakers play a crucial role in setting regulations and legal frameworks for AI development and deployment. They make decisions and set guidelines for the industry to follow. Governments, regulators, and policy think tanks that create guidelines and policies for technology companies and society as a whole are often involved in this capacity.
  2. Researchers: Researchers in AI, computer science, ethics, and other related fields play a critical role in the development of responsible AI. Their work focuses on examining AI from a technical standpoint, as well as investigating the potential impact of AI on society, industry, and the environment.
  3. Industry experts: AI industry experts, including professionals responsible for the development, deployment, and management of AI systems, play an important role in creating responsible AI guidelines. They understand the technical implications of AI and the associated risks and benefits, and they can ensure the proper implementation of AI applications and technologies.
  4. Civil society organizations: Civil society organizations have an essential role in providing inclusive and equitable input into AI development and deployment, particularly for underrepresented communities. They highlight AI’s potential risks and benefits for society and work to create comprehensive and inclusive AI policies. They also monitor AI developments to ensure that they align with social and ethical principles.
  5. End-users: AI end-users, including businesses, governments, individuals, and other organizations, have a responsibility to ensure that their use of AI aligns with social and ethical principles. They must demand responsible AI technologies and continuously assess the potential positive and negative impacts of AI applications on society and industry.

Overall, the active global dialogue about creating principled and actionable norms to ensure responsible AI development and deployment aims to promote innovation that benefits society while mitigating risks and avoiding unintended negative consequences. It is expected that these standards will continue to evolve and become more refined as technology advances and society’s expectations of ethical AI use and development grow. It is likely that these standards will continue to be shaped by input from a diverse range of stakeholders, including industry experts, policymakers, and the public.

Some areas that may receive increased focus over the next few years include improving transparency in AI systems, addressing bias and fairness concerns, and promoting accountability and responsibility among AI developers and users. As AI becomes more pervasive in our daily lives and industries adopt these standards as part of their operations and strategies, the expectations for responsible AI will continue to rise.
