On October 24, the CFPB issued Circular 2024-06, which warns companies using third-party consumer reports, particularly surveillance-based “black box” or AI-driven algorithmic scores, that they must comply with the Fair Credit Reporting Act with respect to the personal data of their workers. This guidance adds to the growing body of law protecting employees from potentially harmful uses of AI.
Colorado Enacts Nation’s First AI Discrimination Law
On May 17, Colorado’s governor signed the nation’s first artificial intelligence law designed to prevent algorithmic discrimination. The law is slated to go into effect on February 1, 2026.
Justice Department Hires First Chief AI Officer
On February 22, Attorney General Merrick B. Garland appointed Jonathan Mayer as the Justice Department’s inaugural Chief Science and Technology Advisor and Chief Artificial Intelligence Officer. Mayer will sit in the Justice Department’s Office of Legal Policy and lead the Department’s newly established Emerging Technologies Board, which coordinates and governs the use of AI and other emerging technologies across the Department. Mayer will also build a team of technical and policy experts in cybersecurity and AI. The Chief AI Officer position is required by President Biden’s Executive Order on AI. Mayer is an assistant professor of computer science and public affairs at Princeton University and previously served as technology law and policy advisor to then-Senator Kamala Harris and as chief technologist of the FCC’s Enforcement Bureau.
DOJ and SEC Officials Issue Harsh Warnings Concerning the Misuse of AI
In separate public speeches, senior officials from the Department of Justice and the Securities and Exchange Commission warned of harsh penalties for individuals and companies that misuse AI for fraudulent purposes.
FTC Opens Inquiry Into Generative AI Investments and Partnerships
On January 25, the FTC announced that it was issuing Section 6(b) orders to five Big Tech companies, requiring them to provide information regarding recent investments and partnerships involving generative artificial intelligence (AI) companies and major cloud service providers.
FTC Approves Compulsory Process for AI-related Products and Services
On November 21, the FTC voted 3-0 to approve an omnibus resolution authorizing the use of compulsory process in nonpublic investigations involving products and services that use, or claim to be produced using, artificial intelligence (AI), or that claim to detect its use. The resolution will make it easier for FTC staff to issue civil investigative demands (CIDs), a form of compulsory process similar to a subpoena, in investigations relating to AI, while retaining the Commission’s authority to determine when CIDs are issued. The resolution will remain in effect for 10 years.
Hsu Suggests Caution in Rollout of AI and Tokenization in Banking
On June 16, Michael Hsu, the Acting Comptroller of the Currency, gave remarks at the American Bankers Association’s Risk and Compliance Conference about the risks that tokenization and AI pose to the banking industry. While reiterating his skepticism of cryptocurrency (see our previous blog post here), Hsu cautioned that the decentralization and “trustlessness” associated with public blockchains will impose severe limitations on the scalability of tokenization and its associated benefits. Instead, Hsu advocated for the development of centralized and regulated “trusted blockchains” that, due to the security and safety they offer, are better positioned to facilitate the growth of tokenization at scale in a safe, sound, and fair manner.
CFPB Warns of Risks Related to AI Chatbots in Banking
On June 6, the CFPB released a new report on the adoption of chatbots by financial institutions, including advanced offerings such as generative chatbots and others marketed as “artificial intelligence.” According to the report, “In 2022, over 98 million users (approximately 37% of the U.S. population) engaged with a bank’s chatbot. This number is projected to grow to 110.9 million users by 2026.” The CFPB also notes that “financial institutions have begun experimenting with generative machine learning and other underlying technologies such as neural networks and natural language processing to automatically create chat responses using text and voices.” Chatbots are intended, in part, to help institutions reduce the costs of customer service agents.
You Don’t Need a Machine to Predict What the FTC Might Do About Unsupported AI Claims
The rapid rise of AI in advertising, marketing, and other consumer-facing applications has caused the FTC to continue to take notice and issue guidance. For example, the FTC is concerned about false or unsubstantiated claims about an AI product’s efficacy, and it has issued AI-related guidance in the past. The following is some recent FTC guidance to consider when referencing AI in your advertising. This guidance is not necessarily new, but the fact that it is being reiterated should be a signal that the FTC continues to focus on this area and that enforcement actions may be forthcoming. In fact, the recent guidance states: “AI is important, and so are the claims you make about it. You don’t need a machine to predict what the FTC might do when those claims are unsupported.”