On July 24, the California Privacy Protection Agency (CPPA) approved a major rule package covering automated decision-making technology (ADMT), mandatory cybersecurity audits, and privacy risk assessments under the California Consumer Privacy Act (CCPA). The package narrows the definition of ADMT to tools that replace human decision making for significant decisions in areas like lending, housing, employment, education, and health care.

On July 10, Massachusetts Attorney General Andrea Joy Campbell announced a $2.5 million settlement with a student loan company to resolve allegations that its underwriting practices violated the Massachusetts Consumer Protection Act and the Equal Credit Opportunity Act, including through the use of artificial intelligence models that produced disparate impacts on protected groups.

On June 22, 2025, Texas Governor Greg Abbott signed into law House Bill 149, enacting the Texas Responsible Artificial Intelligence Governance Act (TRAIGA). The law establishes one of the nation’s most comprehensive state-level artificial intelligence regulatory frameworks. TRAIGA imposes disclosure, consent, and compliance requirements on developers, deployers, and governmental entities that use artificial intelligence (AI) systems. The law is set to take effect on January 1, 2026.

On April 29, Acting Comptroller of the Currency Rodney Hood delivered pre-recorded remarks at the National Fair Housing Alliance’s Responsible AI Symposium. In his speech, Hood reiterated the OCC’s commitment to deploying AI responsibly within the banking sector and highlighted the agency’s broader initiatives to promote financial inclusion.

On April 7, the White House issued a fact sheet outlining new steps to support the responsible use and procurement of AI across federal agencies. The initiative builds on the Biden Administration’s 2023 Executive Order on AI and is intended to reduce administrative hurdles, improve interagency coordination, and expand access to commercially available AI tools.

On March 25, Virginia Governor Glenn Youngkin vetoed two bills that sought to impose new restrictions on “high-risk” artificial intelligence (AI) systems and fintech lending partnerships. The vetoes reflect the Governor’s continued emphasis on fostering innovation and economic growth over introducing new regulatory burdens.

On February 20, the Virginia General Assembly passed the High-Risk Artificial Intelligence Developer and Deployer Act. If the bill is signed into law, Virginia will become the second state, after Colorado, to enact comprehensive regulation of “high-risk” artificial intelligence systems used in critical consumer-facing contexts, such as employment, lending, housing, and insurance.

On December 19, the U.S. Department of the Treasury released a report summarizing key findings from its 2024 Request for Information (RFI) on the uses, opportunities, and risks of artificial intelligence (AI) in financial services. The report notes the increasing prevalence of AI, including generative AI, and explores the opportunities and challenges associated with its use.

On October 24, the CFPB issued Circular 2024-06, which warns companies using third-party consumer reports, particularly surveillance-based “black box” or AI algorithmic scores, that they must follow the Fair Credit Reporting Act with respect to the personal data of their workers. This guidance adds to the growing body of law that protects employees from potentially harmful use of AI.

On February 22, Attorney General Merrick B. Garland appointed Jonathan Mayer as the Justice Department’s inaugural Chief Science and Technology Advisor and Chief Artificial Intelligence Officer. Mayer will sit in the Justice Department’s Office of Legal Policy and lead the Department’s newly established Emerging Technologies Board, which coordinates and governs AI and other emerging technologies across the Department. Mayer will also build a team of technical and policy experts in cybersecurity and AI. The Chief AI Officer position is a role required by President Biden’s Executive Order on AI. Mayer is an assistant professor of computer science and public affairs at Princeton University and previously served as the technology law and policy advisor to then-Senator Kamala Harris as well as the chief technologist to the FCC’s Enforcement Bureau.