Building ethical and transparent global AI standards

By the Blueprint Team

Artificial intelligence (AI) has the potential to reshape our world in ways few can even imagine, and businesses are investing heavily to harness its power. According to a recent report, global investment in AI companies increased 115 percent over 2020 levels, reaching $77.5 billion USD. While the potential benefits are vast, they are matched by the potential risks to individuals and society. So, as the AI landscape hits new levels of growth, it’s only fitting that legislators take notice.

AI legislation is in its early stages, and companies would be wise to track proposed regulations, as they provide clear insight into the future of AI.

European Union

As with all things data-related, the EU is determined to set the global standard yet again. Just as we saw the EU create a global standard for privacy when it passed the EU General Data Protection Regulation, extending data subject rights and data protection requirements for companies handling personal information, it is now trying its hand at AI with its AI Act.

Set to be the first comprehensive regulation on AI by a major regulator anywhere, the EU’s AI Act is expected to begin the trialogue phase of the EU legislative process this spring and could be passed by the end of 2023. Introducing a risk-based approach, the Act separates AI applications and systems into three categories: unacceptable risk, which is banned outright (such as social scoring); high-risk applications, which will be heavily regulated; and a lower-risk category, which is largely unregulated. It also introduces transparency obligations for AI systems that interact with people but are not obvious to users. Though the EU now aims to set the global standard for AI, it is worth noting that China has had legislation in effect since March of 2022 that governs the use of algorithms in online recommendation systems, requiring that such services are moral, ethical, accountable, transparent, and “disseminate positive energy.”

United States

Not to be outdone, the US is also making moves when it comes to AI, albeit at a slower and more measured pace. The White House Office of Science and Technology Policy released a “Blueprint for an AI Bill of Rights” towards the end of 2022 “to help guide the design, use, and deployment of automated systems to protect the rights of the American public in the age of artificial intelligence.” This nonbinding white paper calls for greater transparency, accountability, and privacy to address the concern that AI can replicate and even deepen inequalities present in society. Specifically, the blueprint proposes five core principles that should be built into AI technology: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and alternative options.

While actions at the federal level are nonbinding, companies should take note of the patchwork of AI legislation currently in place and expected to come soon. This includes existing state privacy laws containing provisions governing “automated decision-making,” which includes technology that facilitates AI-powered decisions. Additionally, the National Conference of State Legislatures currently lists 17 states that have introduced bills or resolutions so far this year. Also worth noting is the Federal Trade Commission’s Advance Notice of Proposed Rulemaking aimed at addressing “commercial surveillance” and data security, as it contains a full section exploring rulemaking for “automated decision-making systems.” It is important to recognize that, even in the absence of federal legislation, changes at the state level and actions from the FTC will impact how businesses operate now and in the future.


Lawmakers worldwide are determined to prove that finding a uniform baseline for responsible and informed AI is possible, and the popularity of apps like ChatGPT has increased the urgency. EU Commissioner for the Internal Market Thierry Breton recently highlighted this, saying, “As showcased by ChatGPT, AI solutions can offer great opportunities for businesses and citizens, but can also pose risks.” He continued, “This is why we need a solid regulatory framework to ensure trustworthy AI based on high-quality data.”

2023 will likely see unprecedented moves to harness AI and bring uniformity to its application, with a focus on reducing bias, increasing accountability and transparency in AI systems, and ensuring the protection of individuals’ rights and freedoms. Companies would be well positioned to follow the regulatory conversations taking place, align themselves with emerging best practices, and brace for the inevitable regulation coming down the pike.
