U.S. Regulation

Cynthia Lummis Proposes Artificial Intelligence Bill Requiring AI Firms to Disclose Technical Details

Senator Cynthia Lummis (R-WY) has introduced the Responsible Innovation and Safe Expertise (RISE) Act of 2025, a legislative proposal designed to clarify liability frameworks for artificial intelligence (AI) used by professionals. The bill would require transparency from AI developers, while stopping short of requiring models to be open source.

In a press release, Lummis said the RISE Act would mean that professionals, such as physicians, attorneys, engineers, and financial advisors, remain legally responsible for the advice they provide, even when it is informed by AI systems. At the same time, the AI developers who create those systems can shield themselves from civil liability when things go awry only if they publicly release model cards.

The proposed bill defines model cards as detailed technical documents that disclose an AI system’s training data sources, intended use cases, performance metrics, known limitations, and potential failure modes. All of this is intended to help professionals assess whether the tool is appropriate for their work.

“Wyoming values both innovation and accountability; the RISE Act creates predictable standards that encourage safer AI development while preserving professional autonomy,” Lummis said in the press release. “This legislation doesn’t create blanket immunity for AI,” she continued. Indeed, the immunity granted under the Act has clear boundaries: it excludes protection for developers in cases of recklessness, willful misconduct, fraud, knowing misrepresentation, or actions that fall outside the defined scope of professional usage.

Additionally, developers face a duty of ongoing accountability under the RISE Act. AI documentation and specifications must be updated within 30 days of deploying new versions or discovering significant failure modes, reinforcing continuous transparency obligations. As currently written, the RISE Act stops short of mandating that AI models become fully open source. Developers can withhold proprietary information, but only if the redacted material is unrelated to safety and each omission is accompanied by a written justification explaining the trade secret exemption.

In a prior interview with CoinDesk, Simon Kim, CEO of Hashed, one of Korea’s leading VC funds, spoke about the danger of centralized, closed-source AI that is effectively a black box. “OpenAI is not open, and it is controlled by very few people, so it’s quite dangerous. Making this type of [closed source] foundational model is similar to making a ‘god’, but we don’t know how it works,” Kim said at the time.

Terron Gold
