On April 29, Acting Comptroller of the Currency Rodney Hood delivered pre-recorded remarks at the National Fair Housing Alliance’s Responsible AI Symposium. In his speech, Hood reiterated the OCC’s commitment to deploying AI responsibly within the banking sector and highlighted the agency’s broader initiatives to promote financial inclusion. Continue Reading OCC’s Hood Emphasized AI Oversight and Inclusion in Financial Services

On April 7, the White House issued a fact sheet outlining new steps to support the responsible use and procurement of AI across federal agencies. The initiative builds on the Biden Administration’s 2023 Executive Order on AI and is intended to reduce administrative hurdles, improve interagency coordination, and expand access to commercially available AI tools. Continue Reading White House Unveils Government-Wide Plan to Streamline AI Integration

On March 25, Virginia Governor Glenn Youngkin vetoed two bills that sought to impose new restrictions on “high-risk” artificial intelligence (AI) systems and fintech lending partnerships. The vetoes reflect the Governor’s continued emphasis on fostering innovation and economic growth over introducing new regulatory burdens. Continue Reading Virginia Governor Vetoes Rate Cap and AI Regulation Bills

On February 20, the Virginia General Assembly passed the High-Risk Artificial Intelligence Developer and Deployer Act. If the Act is signed into law, Virginia would become the second state, after Colorado, to enact comprehensive regulation of “high-risk” artificial intelligence systems used in critical consumer-facing contexts, such as employment, lending, housing, and insurance. Continue Reading Virginia Moves to Regulate High-Risk AI with New Compliance Mandates

On December 19, the U.S. Department of the Treasury released a report summarizing key findings from its 2024 Request for Information (RFI) on the uses, opportunities, and risks of Artificial Intelligence (AI) in financial services. The report notes the increasing prevalence of AI, including generative AI, and explores the opportunities and challenges associated with its use. Continue Reading Treasury Highlights AI’s Potential and Risks in Financial Services

On October 24, the CFPB issued Circular 2024-06, which warns companies using third-party consumer reports, particularly surveillance-based “black box” or AI algorithmic scores, that they must follow the Fair Credit Reporting Act with respect to the personal data of their workers. This guidance adds to the growing body of law that protects employees from potentially harmful uses of AI. Continue Reading CFPB Warns Employers Regarding FCRA Rules for AI-Driven Worker Surveillance

On February 22, Attorney General Merrick B. Garland appointed Jonathan Mayer as the Justice Department’s inaugural Chief Science and Technology Advisor and Chief Artificial Intelligence Officer. Mayer will sit in the Justice Department’s Office of Legal Policy and lead the Department’s newly established Emerging Technologies Board, which coordinates and governs AI and other emerging technologies across the Department. Mayer will also build a team of technical and policy experts in cybersecurity and AI. The Chief AI Officer position is a role required by President Biden’s Executive Order on AI. Mayer is an assistant professor of computer science and public affairs at Princeton University and served as the technology law and policy advisor to then-Senator Kamala Harris, as well as the chief technologist to the FCC’s Enforcement Bureau. Continue Reading Justice Department Hires First Chief AI Officer

On January 25, the FTC announced that it was issuing Section 6(b) orders against five Big Tech companies requiring them to provide information regarding recent investments and partnerships involving generative artificial intelligence (AI) companies and major cloud service providers. Continue Reading FTC Opens Inquiry Into Generative AI Investments and Partnerships

On November 21, the FTC voted 3-0 to approve an omnibus resolution authorizing the use of compulsory process in nonpublic investigations involving products and services that use artificial intelligence (AI), claim to be produced using AI, or claim to detect its use. The resolution will make it easier for FTC staff to issue civil investigative demands (CIDs), a form of compulsory process similar to a subpoena, in investigations relating to AI, while retaining the Commission’s authority to determine when CIDs are issued. The resolution will remain in effect for 10 years. Continue Reading FTC Approves Compulsory Process for AI-Related Products and Services