Artificial Intelligence: Exploring Trust's Development

The world of artificial intelligence (AI) is witnessing a surge in regulation and guidelines aimed at ensuring its responsible use. This shift comes in response to growing concerns about ethics, transparency, human rights, and security.

One of the most significant moves in this direction is the European Union's (EU) AI Act, which entered into force in August 2024 and whose obligations phase in from 2025 onward. This comprehensive regulation aims to govern AI technologies, addressing their ethical, legal, and societal implications. Alongside the AI Act, the General-Purpose AI (GPAI) Code of Practice was published in July 2025. This voluntary code supports organisations in complying with the AI Act's requirements, focusing on transparency, accountability, and alignment with fundamental human rights[1].

In the United States, the landscape of AI regulation is more fragmented. The One Big Beautiful Bill Act, signed in July 2025, introduces stringent requirements on foreign influence, supply chains, and technology licensing for AI companies. The Act mandates strict sourcing rules and compliance obligations concerning "prohibited foreign entities," reflecting national security and economic interests[2]. At the state level, new laws are emerging, such as the Texas Responsible Artificial Intelligence Governance Act and rules proposed in Rhode Island to prevent AI-driven bias and deceptive practices[3][4].

Enforcement of these regulations typically falls to the governmental agencies with jurisdiction, supplemented in some cases by voluntary compliance pathways. In Europe, the European Commission is the primary body defining, implementing, and enforcing AI regulation, working alongside national regulators[1]. In the United States, enforcement is more fragmented: federal agencies shape overarching rules and tie funding to compliance, while state governments enact and enforce laws targeting local AI impacts[2][3][4].

These frameworks collectively aim to ensure responsible AI use by addressing ethical implications, transparency, bias, supply-chain integrity, and security risks. What institutional form AI governance will ultimately take, whether a dedicated council, a standards body, or an international framework, remains to be seen.

The evolving landscape of AI regulation today is characterised by a patchwork of binding laws and voluntary codes, shaped by legislators, regulators, and policymakers intent on balancing innovation with safeguards for society[1][2][3][4]. As AI continues to permeate various aspects of our lives, the need for clear, effective, and globally coordinated regulations becomes increasingly important.

Technology sits at the heart of these regulatory developments: the EU's AI Act emphasises transparency, accountability, and respect for human rights in governing AI systems, while the United States' One Big Beautiful Bill Act focuses on foreign influence, supply chains, and technology licensing for AI companies.
