The Federal Communications Commission (FCC) has proposed a hefty $6 million fine against a political consultant for allegedly using AI-generated voice cloning and caller ID spoofing to spread election-related misinformation.

“Political consultant Steve Kramer was responsible for the calls and now faces a $6 million proposed fine for perpetrating this illegal robocall campaign on January 21, 2024,” the FCC said in a statement.

The FCC alleged that Kramer orchestrated a robocall campaign featuring a deepfake of President Joe Biden’s voice that urged New Hampshire voters not to participate in the January primary and asked them to “save your vote for the November election.”

The campaign, conducted just two days before the presidential primary, violated the Truth in Caller ID Act, the FCC said. That law prohibits the transmission of false or misleading caller ID information with the intent to defraud, cause harm, or wrongfully obtain anything of value.

“We will act swiftly and decisively to ensure that bad actors cannot use U.S. telecommunications networks to facilitate the misuse of generative AI technology to interfere with elections, defraud consumers, or compromise sensitive data,” Loyaan A. Egal, chief of the Enforcement Bureau and chair of the Privacy and Data Protection Task Force at the FCC, said in the statement.

The FCC is also taking action against Lingo Telecom for its role in facilitating the illegal robocalls, the statement added. “Lingo Telecom transmitted these calls, incorrectly labeling them with the highest level of caller ID attestation, making it less likely that other providers could detect the calls as potentially spoofed. The Commission brought a separate enforcement action today against Lingo Telecom for apparent violations of STIR/SHAKEN for failing to utilize reasonable ‘Know Your Customer’ protocols to verify caller ID information in connection with Mr. Kramer’s illegal robocalls.”

The Commission has made clear that calls made with AI-generated voices are “artificial” under the Telephone Consumer Protection Act (TCPA), confirming that the FCC and state attorneys general have the tools they need to go after the bad actors behind these nefarious robocalls, the statement added. “In addition, the FCC launched a formal proceeding to gather information on the current state of AI use in calling and texting and ask questions about new threats, like robocalls.”

Echoes of a wider debate

This incident reignites concerns over the potential misuse of deepfakes, a technology that can create realistic and often undetectable audio and video forgeries.

Earlier this month, actress Scarlett Johansson raised similar concerns, alleging that OpenAI had used her voice without consent in its AI application. She said the voice behind its “Sky” voice chat sounded “eerily similar” to her own. OpenAI quickly denied the allegation.

“The voice of Sky is not Scarlett Johansson’s, and it was never intended to resemble hers,” OpenAI CEO Sam Altman said in a statement. “We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products.”
“Johansson’s case highlights broader ethical and legal challenges surrounding AI-generated content and the need for stringent regulations to protect individuals’ privacy and identities,” said Faisal Kawoosa, founder and chief analyst at the technology research and consulting firm Techarc.
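For context on the attestation the FCC cites in the Lingo Telecom action: under STIR/SHAKEN, the originating carrier signs each call with a PASSporT token (a JWT defined in RFC 8225, with SHAKEN claims in RFC 8588) whose “attest” claim states how thoroughly the carrier verified the caller and the number, with “A” being the highest level. The sketch below is a minimal illustration of how that claim is read, not part of any FCC filing; the function names are hypothetical, and it decodes the token’s claims without verifying the signature, a step a real verification service would never skip.

```python
import base64
import json

# SHAKEN attestation levels carried in the PASSporT "attest" claim (RFC 8588).
ATTESTATION_LEVELS = {
    "A": "Full attestation: carrier knows the customer and their right to use the number",
    "B": "Partial attestation: carrier knows the customer but not the number's provenance",
    "C": "Gateway attestation: carrier only knows where it received the call",
}

def decode_passport_claims(identity_jwt: str) -> dict:
    """Decode the payload of a STIR/SHAKEN PASSporT (a JWT) WITHOUT
    signature verification -- illustration only."""
    payload_b64 = identity_jwt.split(".")[1]          # header.payload.signature
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(padded))

def describe_attestation(claims: dict) -> str:
    """Map the 'attest' claim to a human-readable SHAKEN level."""
    return ATTESTATION_LEVELS.get(claims.get("attest", ""), "Unknown attestation level")
```

In practice, terminating providers rely on the token’s signature and certificate chain, not just the claim itself; the FCC’s point is that Lingo Telecom applied the highest “A”-level label to calls whose caller ID information it had not reasonably verified.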