OpenAI Unveils Voice-Cloning Tool

OpenAI unveiled a voice-cloning tool on Friday that it intends to keep tightly controlled until safeguards are in place to prevent audio fakes from deceiving listeners.

From a 15-second audio sample, a model named “Voice Engine” can produce a near-perfect copy of someone’s speech, according to an OpenAI blog post presenting the results of a small-scale test of the tool.

“We recognize that generating speech that resembles people’s voices has serious risks, which are especially top of mind in an election year,” the San Francisco-based company stated.

“We are engaging with US and international partners from across government, media, entertainment, education, civil society and beyond to ensure we are incorporating their feedback as we build,” it said.

Disinformation researchers fear widespread misuse of AI-powered applications in a pivotal election year, with voice-cloning tools that are cheap, easy to use, and hard to trace becoming broadly available.

Acknowledging these concerns, OpenAI said it was “taking a cautious and informed approach to a broader release due to the potential for synthetic voice misuse.”

The cautious unveiling came several months after a political consultant working for a long-shot Democratic rival of Joe Biden admitted he was behind a robocall impersonating the US president.

An operative working for Minnesota congressman Dean Phillips created the AI-generated call, which featured what sounded like Joe Biden’s voice urging people not to vote in January’s New Hampshire primary.

The incident alarmed experts, who fear a deluge of AI-powered deepfake disinformation in the 2024 White House race and in other key elections around the world this year.

OpenAI said partners testing Voice Engine have agreed to rules that include obtaining the explicit and informed consent of any person whose voice is duplicated with the tool.

The company also said it must be made clear to audiences when the voices they are hearing are AI-generated.

“We have implemented a set of safety measures, including watermarking to trace the origin of any audio generated by Voice Engine, as well as proactive monitoring of how it’s being used,” OpenAI said.

AFP
