The Federal Trade Commission (FTC) is intensifying its oversight of emerging technologies and persistent financial frauds.
The agency has launched a sweeping inquiry into AI chatbots that function as digital companions, while recently securing settlements that permanently ban operators of a notorious student loan forgiveness scam from the debt relief industry.
These moves underscore the FTC’s dual mission: safeguarding vulnerable consumers, particularly children and debt-burdened borrowers, while promoting innovation without letting risks go unchecked.
The first initiative targets the burgeoning field of AI-powered chatbots, which are increasingly designed to simulate human-like interactions and emotional bonds.
The FTC issued 6(b) orders to seven major companies—Alphabet Inc., Character Technologies Inc., Instagram LLC, Meta Platforms Inc., OpenAI OpCo LLC, Snap Inc., and xAI Corp.—requiring them to disclose detailed information about their practices.
These orders, authorized under Section 6(b) of the FTC Act, empower the agency to conduct broad studies without an immediate law enforcement aim, focusing instead on gathering data to inform future policies.
At the core of the inquiry are concerns over how these chatbots, leveraging generative AI, mimic human emotions, intentions, and relationships.
Often marketed as friendly confidants, they can foster deep trust among users, especially children and teens, potentially leading to negative psychological or developmental impacts.
The FTC wants to know how companies measure and monitor these risks pre- and post-deployment, including strategies to mitigate harm to minors, enforce age restrictions, and comply with the Children’s Online Privacy Protection Act (COPPA).
Responses must cover everything from monetization of user engagement—such as through ads or data sales—to how personal information from conversations is handled, shared, or used.
The orders demand transparency on character development, user input processing, disclosures about capabilities and risks, and advertising practices aimed at parents.
For instance, companies must reveal if they track violations of community guidelines or terms of service.
The Commission voted 3-0 to approve the orders, reflecting unanimous concern. FTC Chairman Andrew N. Ferguson emphasized the balance at stake:
“Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy. As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry.”
This probe arrives amid rapid AI adoption, with chatbots such as OpenAI’s ChatGPT and the companion features on Meta’s platforms becoming everyday tools for millions.
Critics argue that without robust safeguards, these tools could exacerbate isolation or expose young users to inappropriate content.
For consumers, the inquiry promises greater accountability, potentially leading to clearer warnings and age-appropriate designs.
Industry watchers see it as a call to action: firms must prioritize ethical AI development to avoid future scrutiny. The inquiry’s findings could shape global standards and influence investment in the $100 billion-plus AI sector.
Complementing this effort, the FTC is closing the book on a classic fraud: a student loan debt-relief scam that preyed on desperate borrowers.
Eric Caldwell and David Hernandez, operators of Nevada-based Superior Servicing and affiliated entities, have agreed to settlements that bar them permanently from the debt relief business.
The scheme duped consumers by falsely posing as affiliates of the Department of Education or loan servicers, charging illegal upfront fees—often hundreds of dollars—that were pocketed rather than applied to loan balances.
Borrowers were lured with promises of forgiveness or reductions that never materialized, exacerbating their financial woes amid rising student debt, which totals over $1.7 trillion nationwide.
The proposed orders, filed in the U.S. District Court for the District of Nevada and approved 3-0 by the Commission, impose sweeping prohibitions.
Caldwell and Hernandez cannot misrepresent affiliations with government entities, charge upfront fees, make false claims about services, or use deception to collect financial data.
Caldwell faces an additional telemarketing ban, while Hernandez must adhere to the Telemarketing Sales Rule.
Financially, a $45.9 million judgment looms, largely suspended due to inability to pay, but they must surrender over $560,000 in assets and pay more than $1.6 million immediately.
Non-compliance could trigger the full judgment.
Litigation continues against co-defendant Dennise Merdjanian and corporate entities.
The scam’s victims, many of them low-income or first-generation borrowers, collectively lost thousands of dollars, highlighting vulnerabilities in the opaque debt relief market.
The FTC’s action, building on a 2025 complaint amendment, serves as a deterrent, reinforcing bans on upfront fees under the Telemarketing Sales Rule.
It signals zero tolerance for exploitation in an industry rife with bad actors, urging legitimate providers to prioritize transparency.
Together, these FTC updates illustrate a regulator proactively adapting to modern threats.
By probing AI’s role as a digital companion and dismantling long-running scams, the agency aims to protect consumers from digital deception and financial predation while fostering trust in technology and markets.