Britain is facing "serious harm" from its lack of action on artificial intelligence risks, with influential MPs warning that consumers are being put at risk by the government's and Bank of England's failure to regulate AI use in the financial sector.
The UK's financial system relies heavily on artificial intelligence, with more than 75% of City firms, including insurers and international banks, now using AI. There is growing concern that these firms could exacerbate financial crises if they make similar decisions in response to economic shocks. However, in the absence of clear UK regulations, businesses are left to work out for themselves how existing guidelines apply to AI, leaving many worried about the consequences.
A report by MPs on the Treasury committee has highlighted the risks, including a lack of transparency around how AI could influence financial decisions and whether data providers, tech developers or financial firms would be held accountable if things went wrong. The use of AI also increases the likelihood of fraud, and the dissemination of unregulated and misleading financial advice.
Furthermore, rising AI use has increased firms' cybersecurity risks, leaving them overly reliant on a small number of US tech companies for essential services. This could amplify "herd behaviour", with businesses making similar financial decisions during economic shocks, risking a financial crisis.
MPs are now calling on regulators to take action, including the launch of new stress tests that would assess the City's readiness for AI-driven market shocks. They also want the Financial Conduct Authority (FCA) to publish practical guidance by the end of the year, clarifying how consumer protection rules apply to AI use and who would be held accountable if consumers suffer any harm.
The Treasury committee has warned that by taking a wait-and-see approach to AI in financial services, regulators are exposing consumers and the financial system to potentially serious harm.