
Sopro TTS: A 169M model with zero-shot voice cloning that runs on the CPU

That's cool and useful.

IMO, the best alternative is Chatterbox-TTS-Server [0] (slower, but quite high quality).

[0] https://github.com/devnen/Chatterbox-TTS-Server

a day ago | realityfactchex

I quite like IndexTTS2 personally, it does voice cloning and also lets you modulate emotion manually through emotion vectors which I've found quite a powerful tool. It's not necessarily something everyone needs, but it's really cool technology in my opinion.

It's been particularly useful for a model orchestration project I've been working on. I have an external emotion classification model driving both the LLM's persona and the TTS output so it stays relatively consistent. The affect system also influences which memories are retrieved; it's more likely to retrieve 'memories' created in the current affect state. IndexTTS2 was pretty much the only TTS that gives the level of control I felt was necessary.

11 hours ago | BoxOfRain

Wow, the IndexTTS2 demo is very good. Definitely going to check that out. Thanks.

[0] https://indextts2.org

4 hours ago | realityfactchex

Chatterbox-TTS has MUCH, MUCH better output quality though. The output from Sopro TTS (based on the video embedded on GitHub) is absolutely terrible and completely unusable for any serious application, while Chatterbox's outputs are incredible.

I have an RTX 5090, so not exactly what most consumers will have, but still accessible, and it's also very fast: around 2 seconds of audio per 1 second of generation.

Here's an example I just generated (first try, 22 seconds runtime, 14 seconds of generation): https://jumpshare.com/s/Vl92l7Rm0IhiIk0jGors

Here's another one, 20 seconds of generation, 30 seconds of runtime, which clones a voice from a YouTuber (I don't use it for nefarious reasons, it's just for the demo): https://jumpshare.com/s/Y61duHpqvkmNfKr4hGFs with the original source for the voice: https://www.youtube.com/@ArbitorIan
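
For anyone wanting to reproduce this, generation is only a few lines; here's a sketch based on the usage shown in the Chatterbox README (file paths are placeholders, and the exact API may differ between versions):

    import torchaudio as ta
    from chatterbox.tts import ChatterboxTTS

    # Load the pretrained model (CUDA here; CPU works too, just slower)
    model = ChatterboxTTS.from_pretrained(device="cuda")

    text = "Here's a quick voice cloning test."
    # audio_prompt_path points at a short reference clip of the target voice
    wav = model.generate(text, audio_prompt_path="reference_voice.wav")

    # Save the waveform at the model's native sample rate
    ta.save("output.wav", wav, model.sr)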

13 hours ago | iLoveOncall

You should try it! I wouldn't say it's the best, far from it. But I also wouldn't say it's terrible. If you have a 5090, then yes, you can run much more powerful models in real time. Chatterbox is a great model, though.

13 hours ago | sammyyyyyyy

> But also wouldn’t say it’s terrible.

But you included 3 samples on your GitHub video and they all sound extremely robotic and have very bad artifacts?

12 hours ago | iLoveOncall

[dead]

12 hours ago | samuel-vitorino

I've been using Higgs-Audio for a while now as my primary TTS system. How would you say Chatterbox compares to it, if you have experience with both?

13 hours ago | kkzz99

I haven't used it. I compared it with T5Gemma TTS, which came out recently, and Chatterbox is much better in all aspects, but especially in voice cloning, where T5Gemma basically did not work.

12 hours ago | iLoveOncall

Super nice! I've been using Kokoro locally, which is 82M parameters and runs (and sounds) amazing! https://huggingface.co/hexgrad/Kokoro-82M

16 hours ago | armcat

BTW, does anyone know of a good assistant voice stack that's open source? I used https://github.com/ricky0123/vad for voice activation (works well), then just the Web Speech API as that's the fastest, and then a commercial TTS for speed, as I couldn't find a good one.

15 hours ago | machiaweliczny

I tried Kokoro-JS, which I think runs in the browser, and it was way too slow, with high latency, and it also didn't support the language I wanted.

15 hours ago | machiaweliczny

I have a 5070 in my rig. What I'm running is Kokoro in a Python/FastAPI backend. I also use local quantized models (I swap between ministral-3 and Qwen3) as "the brains" (offloading to GPT-5.2, incl. web search, for "complex" tasks or those requiring the web). In the backend I use Kokoro to generate wav bytes that I send to the frontend. The frontend is just a simple HTML page with a textbox and a button, invoking a `fetch()`. I type, and it responds back in audio. The round-trip time is <1 second for me, unless it needs to call the OpenAI API for "complex" tasks. I have yet to integrate STT as well; then the cycle is complete. That's the stack, and it's not slow at all, but it depends on your HW.
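
A minimal sketch of that kind of endpoint, assuming the `kokoro` package's KPipeline API and a placeholder voice name (illustrative, not the actual code):

    import io

    import numpy as np
    import soundfile as sf
    from fastapi import FastAPI
    from fastapi.responses import Response
    from kokoro import KPipeline

    app = FastAPI()
    pipeline = KPipeline(lang_code="a")  # 'a' selects American English

    @app.post("/tts")
    async def tts(text: str):
        # KPipeline yields (graphemes, phonemes, audio) chunks; collect the audio
        chunks = [np.asarray(a) for _, _, a in pipeline(text, voice="af_heart")]
        wav = np.concatenate(chunks)

        # Kokoro outputs 24 kHz audio; encode it as WAV bytes for the frontend
        buf = io.BytesIO()
        sf.write(buf, wav, 24000, format="WAV")
        return Response(content=buf.getvalue(), media_type="audio/wav")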

8 hours ago | armcat

What measures are being taken to ensure that this model isn't used to lower the cost of fraudsters committing grandparent scams by mimicking the voices of grandchildren?

6 hours ago | bcrl

None, obviously, and the question is barking up the wrong tree. The genie is already out of the bottle: there are zillions of similar free services and software that do the same thing, and there are no quick-fix technological panaceas for social and legal problems. Legislatures in every locality need to create extremely harsh penalties for impersonating other people, and elders need to be educated to ask questions of their family members that only the real people would know the answers to.

4 hours ago | burnt-resistor

What is "zero-shot" supposed to mean?

21 hours ago | VerifiedReports

zero-shot is a single prompt (maybe with additional context in the form of files).

few-shot is providing a few examples to steer the LLM

multi-shot is a longer cycle of prompts and refinement

18 hours ago | onion2k

I've been calling good results from a single prompt "single-shot." Is this not right?

13 hours ago | mikkupikku

No, as you could have multiple examples of expected output in a single prompt. If you just "ask", that's zero-shot. If you "ask + provide examples", then you are in the n+1 shot realm. But I suppose the terminology is shifting away from this...

9 hours ago | flimflamm

This is one-shot.

12 hours ago | samtheprogram

if you had one-shot

or one opportunity

16 hours ago | moffkalast

to seize everything you ever wanted in one moment

16 hours ago | nake89

Would you capture it or just let it slip? Yo

10 hours ago | raccer

[flagged]

15 hours ago | altmanaltman

I believe in this case it means that you do not need to provide other voice samples to get a good clone.

20 hours ago | carteazy

It means there is zero training involved in getting from voice sample to voice duplicate. There used to be models that take a voice sample, run 5 or 10 training iterations (which of course takes 10 mins, or a few hours if you have hardware as shitty as mine), and only then duplicate the voice.

With this, you give the voice sample as part of the input, and it immediately tries to duplicate the voice.

19 hours ago | spwa4

Doesn’t NeuTTS work the same way?

18 hours ago | x3haloed

[dead]

21 hours ago | mikalauskas

Tried English. There are similarities. Really impressive for such a budget. Also incredibly easy to use, thanks for this.

a day ago | yamal4321

But it's English-only, so what else could you have tried? Asking because I'm interested in a German version :)

14 hours ago | xiconfjs

It's impressive given the constraints!

Would you consider releasing a more capable version that renders with fewer artifacts (and maybe requires a bit more processing power)?

Chatterbox is my go-to, this could be a nice alternative were it capable of high-fidelity results!

a day ago | btbuildem

This is my side “hobby”. And compute is quite expensive. But if the community's response is good, I will definitely think about it! Btw, Chatterbox is a great model and inspiration.

a day ago | sammyyyyyyy

Thanks! Can you share details about the compute economics you dealt with?

a day ago | bicepjai

Yeah, sure. The training was about $250, which is quite low by today's standards. And I spent a bit more on ablations and research.

a day ago | sammyyyyyyy

Very cool work, especially for a hobby project.

Do you have any plans to publish a blog post on how you did it? What training data, and how much? Your training and ablation methodology, etc.?

21 hours ago | littlestymaar

What does "zero-shot" mean in this context?

a day ago | SoftTalker

The *-shot jargon is just in-crowd nonsense that has been meaningless since day one (or zero). Like Big O notation but even more arbitrary (as evidenced by all the answers to your comment).

11 hours ago | throwaway132448

> Zero-shot learning (ZSL) is a problem setup in deep learning where, at test time, a learner observes samples from classes which were not observed during training, and needs to predict the class that they belong to. The name is a play on words based on the earlier concept of one-shot learning, in which classification can be learned from only one, or a few, examples.

https://en.wikipedia.org/wiki/Zero-shot_learning

edit: since there seems to be some degree of confusion regarding this definition, I'll break it down more simply:

We are modeling the conditional probability P(Audio|Voice). If the model samples from this distribution for a Voice class not observed during training, it is by definition zero-shot.

"Prediction" here is not a simple classification, but the estimation of this conditional probability distribution for a Voice class not observed during training.

Providing reference audio to a model at inference-time is no different than including an AGENTS.md when interacting with an LLM. You're providing context, not updating the model weights.
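
Spelling that out in notation (t = text, r = reference audio, θ = the fixed model weights; symbols are mine, purely to make the two regimes explicit):

    % Zero-shot: the reference r is inference-time context; weights stay fixed.
    \hat{a} \sim p_\theta(a \mid t, r)

    % One-shot *learning* would instead take a gradient step on r first:
    \theta' = \theta - \eta \nabla_\theta \mathcal{L}(\theta; r), \qquad
    \hat{a} \sim p_{\theta'}(a \mid t)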

a day ago | nateb2022

This generic answer from Wikipedia is not very helpful in this context. Zero-shot voice cloning in TTS usually means that data of the target speaker you want the generated speech to sound like does not need to be included in the training data used to train the TTS models. In other words, you can provide an audio sample of the target speaker together with the text to be spoken to generate the audio that sounds like it was spoken by that speaker.

a day ago | woodson

Why wouldn’t that be one-shot voice cloning? The concept of calling it zero shot doesn’t really make sense to me.

a day ago | coder543

Zero-shot means zero-retraining, so think along the lines of "Do you need to modify the weights? Or can you keep the weights fixed and you only need to supply an example?"

As with other replies, yes this is a silly name.

a day ago | ben_w

> Zero-shot means zero-retraining, so think along the lines of "Do you need to modify the weights? Or can you keep the weights fixed and you only need to supply an example?"

I would caution that using the term "example" suggests further learning happens at inference-time, which isn't the case.

For LLMs, the entire prompt is the input and conveys both the style and the content vectors. In zero-shot voice cloning, we provide the exact same input vectors, just decoupled. Providing reference audio is no different than including "Answer in the style of Sir Isaac Newton" in an LLM's prompt. The model doesn't 'learn' the voice; it simply applies the style vector to the content during the forward pass.

10 hours ago | nateb2022

Providing inference-time context (in this case, audio) is no different than giving a prompt to an LLM. Think of it as analogous to an AGENTS.md included in a prompt. You're not retraining the model, you're simply putting the rest of the prompt into context.

If you actually stopped and fine-tuned the model weights on that single clip, that would be one-shot learning.

a day ago | nateb2022

To me, a closer analogy is In Context Learning.

In the olden days of 2023, you didn’t just find instruct-tuned models sitting on every shelf.

You could use a base model that has only undergone pretraining and can only generate text continuations based on the input it receives. If you provided the model with several examples of a question followed by an answer, and then provided a new question followed by a blank for the next answer, the model understood from the context that it needed to answer the question. This is the most primitive use of ICL, and a very basic way to achieve limited instruction following behavior.

With this few-shot example, I would call that few-shot ICL. Not zero shot, even though the model weights are locked.

But, I am learning that it is technically called zero shot, and I will accept this, even if I think it is a confusingly named concept.

a day ago | coder543

I don't disagree, but that's what people started calling it. Zero-shot doesn't make sense anyway, as how would the model know what voice it should sound like (unless it's a celebrity voice or similar included in the training data where it's enough to specify a name).

a day ago | woodson

> Zero-shot doesn't make sense anyway, as how would the model know what voice it should sound like (unless it's a celebrity voice or similar included in the training data where it's enough to specify a name).

It makes perfect sense; you are simply confusing training samples with inference context. "Zero-shot" refers to zero gradient updates (retraining) required to handle a new class. It does not mean "zero input information."

> how would the model know what voice it should sound like

It uses the reference audio just like a text based model uses a prompt.

> unless it's a celebrity voice or similar included in the training data where it's enough to specify a name

If the voice is in the training data, that is literally the opposite of zero-shot. The entire point of zero-shot is that the model has never encountered the speaker before.

a day ago | nateb2022

With LLMs I've seen zero-shot used to describe scenarios where there's no example, e.g. "take this and output JSON", while one-shot has the prompt include an example, like "take this and output JSON; for this data the JSON should look like this".

Thus if you feed the model a target voice, i.e. an example of the desired output voice, it sure seems like it should be classified as one-shot.

However it seems the zero-shot in voice cloning is relative to learning, and in contrast to one-shot learning[1].

So it's a bit of an overloaded term causing confusion, from what I can gather.

[1]: https://en.wikipedia.org/wiki/One-shot_learning_(computer_vi...

a day ago | magicalhippo

The confusion clears up if you stop conflating contextual conditioning (prompting) with actual Learning (weight updates). For LLMs, "few-shot prompting" is technically a misnomer that stuck; you are just establishing a pattern in the context window, not training the model.

In voice cloning, the reference audio is simply the input, not a training example. You wouldn't say an image classifier is doing "one-shot learning" just because you fed it one image to classify. That image is the input. Similarly, the reference audio is the input that conditions the generation. It is zero-shot because the model's weights were never optimized for that specific speaker's manifold.
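
The distinction is easy to see in code. A toy PyTorch sketch (the model and loss are invented solely to show where a gradient step does or does not happen):

    import torch
    import torch.nn as nn

    # Toy stand-in for a cloning model: output is conditioned on a text
    # embedding plus a speaker embedding pooled from the reference audio.
    class ToyCloner(nn.Module):
        def __init__(self, dim: int = 16):
            super().__init__()
            self.ref_encoder = nn.Linear(1, dim)  # encodes reference samples
            self.decoder = nn.Linear(2 * dim, 1)  # emits "audio"

        def forward(self, text_emb, ref_wav):
            spk = self.ref_encoder(ref_wav.unsqueeze(-1)).mean(dim=0)
            return self.decoder(torch.cat([text_emb, spk]))

    model = ToyCloner()
    text_emb, ref_wav = torch.randn(16), torch.randn(24000)  # unseen speaker

    # Zero-shot: the reference clip is pure inference-time conditioning.
    with torch.no_grad():
        audio = model(text_emb, ref_wav)  # weights untouched

    # One-shot *learning* would instead update the weights on that clip:
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss = (model(text_emb, ref_wav) - ref_wav.mean()).pow(2).mean()
    loss.backward()
    opt.step()  # weights changed; no longer zero-shot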

a day ago | nateb2022
[deleted]
a day ago

So if you get your target to record (say) 1 hour of audio, that's a one-shot.

If you didn't do that (because you have 100 hours of other people talking), that's zero-shots, no?

a day ago | geocar

> So if you get your target to record (say) 1 hour of audio, that's a one-shot.

No, that would still be zero-shot. As noted above, providing inference-time context is not retraining; if you actually stopped and fine-tuned the model weights on that recording, that would be one-shot learning.

a day ago | nateb2022

> Providing inference-time context (in this case, audio) is no different than giving a prompt to an LLM.

Right... And you have 0-shot prompts ("give me a list of animals"), 1-shot prompts ("give me a list of animals, for example: a cat"), 2-shot prompts ("give me a list of animals, for example: a cat; a dog"), etc.

The "shot" refers to how many examples are provided to the LLM in the prompt, and have nothing to do with training or tuning, in every context I've ever seen.

12 hours ago | ImPostingOnHN

> Right... And you have 0-shot prompts ("give me a list of animals"), 1-shot prompts ("give me a list of animals, for example: a cat"), 2-shot prompts ("give me a list of animals, for example: a cat; a dog"), etc.

> The "shot" refers to how many examples are provided to the LLM in the prompt, and have nothing to do with training or tuning, in every context I've ever seen.

In formal ML, "shot" refers to the number of samples available for a specific class during the training phase. You're describing a colloquial usage of the term found only in prompt engineering.

You can't apply an LLMism to a voice cloning model where standard ML definitions apply.

10 hours ago | nateb2022

It’s nonsensical to call it “zero shot” when a sample of the voice is provided. The term “zero shot cloning” implies you have some representation of the voice from another domain, e.g. a text description of the voice. What they’re doing is ABSOLUTELY one-shot cloning. I don’t care if lots of TTS folks use the term this way, they’re wrong.

a day ago | oofbey

> This generic answer from Wikipedia is not very helpful in this context.

Actually, the general definition fits this context perfectly. In machine learning terms, a specific 'speaker' is simply a 'class.' Therefore, a model generating audio for a speaker it never saw during training is the exact definition of the Zero-Shot Learning problem setup: "a learner observes samples from classes which were not observed during training," as I quoted.

Your explanation just rephrases the very definition you dismissed.

a day ago | nateb2022

From your definition:

> a learner observes samples from classes which were not observed during training, and needs to predict the class that they belong to.

That's not what happens in zero-shot voice cloning, which is why I dismissed your definition copied from Wikipedia.

a day ago | woodson

> That's not what happens in zero-shot voice cloning

It is exactly what happens. You are confusing the task (classification vs. generation) with the learning paradigm (zero-shot).

In the voice cloning context, the class is the speaker's voice (not observed during training), samples of which are generated by the machine learning model.

The definition applies 1:1. During inference, it is predicting the conditional probability distribution of audio samples that belong to that unseen class. It is "predict[ing] the class that they belong to," which very same class was "not observed during training."

You're getting hung up on the semantics.

a day ago | nateb2022

Jeez, OP asked what it means in this context (zero-shot voice cloning), where you quoted a generic definition copied from Wikipedia. I defined it concretely for this context. Don't take it as a slight, there is no need to get all argumentative.

a day ago | woodson

I think the point is it's not zero-shot if a sample is needed. A system that requires one sample is usually considered one-shot, or few-shot if it needs a few, etc.

a day ago | numpad0

I don't understand the comments here at all. I played the audio and it sounds absolutely horrible, far worse than computer voices sounded fifteen years ago. Not even the most feeble-minded person would mistake it for a human. Am I not hearing the same thing everyone else is hearing? It sounds straight-up corrupted to me. Tested in different browsers, no difference.

a day ago | guerrilla

As I said, some reference voices can lead to bad voice quality. But if it sounds that bad, it’s probably not it. Would love to dig into it if you want

a day ago | sammyyyyyyy

I agree with the comment above. I have not logged into Hacker News in _years_ but did so today just to weigh in here. If people are saying that the audio sounds great, then there is definitely something going on with a subset of users where we are only hearing garbled words with a LOT of distortion. This does not sound like natural speech to me at all. It sounds more like a warped cassette tape. And I do not mean to slight your work at all. I am actually incredibly puzzled trying to understand why my perception of this is so radically different from others'!

a day ago | codefreakxff

Thank you for commenting. I wonder if this could be another situation like "the dress" (2015) or maybe something is wrong with our codecs...

a day ago | guerrilla

No, nothing wrong with your codecs. It sounds shitty. But given the small size and speed, it's still impressive.

It's like saying .kkrieger looks like a bad game, which it does, but then again .kkrieger is only 96kb or whatever.

20 hours ago | Mashimo

How big are TTS models like this usually?

.kkrieger looks like an amazing game for the mid-90s. It's incomprehensible that it's only 96kb.

19 hours ago | guerrilla

Here is an overview: https://www.inferless.com/learn/comparing-different-text-to-...

Also keep in mind the processing time. The article above used an NVIDIA L4 with 24 GB of VRAM. Sopro claims 7.5 seconds of processing time on CPU for 30 seconds of audio, i.e. a real-time factor of 0.25, or 4x faster than real time!

If you want to get real good quality TTS, you should check out elevenlabs.io

Different tools for different goals.

17 hours ago | Mashimo

I mean I'm talking about the mp4. How could people possibly be worried about scammers after listening to that?

a day ago | guerrilla

I didn’t specifically cherry-pick those examples. You can try it for yourself anyway. But thanks for the feedback.

a day ago | sammyyyyyyy

No shade on you. It's definitely impressive. I just didn't understand people's reactions.

a day ago | guerrilla

It sounds like someone using an electrolarynx to me.

15 hours ago | jrmg

I thought it was RFK

a day ago | foolserrandboy

spasmodic dysphonia as a service.

a day ago | serf

Yes, if this selected piece is the best that was available to use as a showcase, it's immediately off-putting in its distortion and mangling of pronunciation.

14 hours ago | wildmXranat

Same here. I tried a few different voices, including my kids' and my own; the generated audio is not similar at all. It's not even a proper voice.

21 hours ago | jackyysq

Thank you, I was scrolling and scrolling in utter disbelief. It sounds absolutely dreadful. Would drive me nuts to listen to for more than a minute.

12 hours ago | eigenvalue

Is there yet any model like this, but which works as a "speech plus speech to speech" voice modulator — i.e. taking a fixed audio sample (the prompt), plus a continuous audio stream (the input), and transforming any speech component of the input to have the tone and timbre of the voice in the prompt, resulting in a continuous audio output stream? (Ideally, while passing through non-speech parts of the input audio stream; but those could also be handled other ways, with traditional source separation techniques, microphone arrays, etc.)

Though I suppose, for the use-case I'm thinking of (v-tubers), you don't really need the ability to dynamically change the prompt; so you could also simplify this to a continuous single-stream "speech to speech" model, which gets its target vocal timbre burned into it during an expensive (but one-time) fine-tuning step.

a day ago | derefr

Chatterbox TTS does this in “voice cloning” mode but you have to implement the streaming part yourself.

There are two inputs: audio A (“style”) and B (“content”). The timbre is taken from A, and the content, pronunciation, prosody, accent, etc is taken from B.

Strictly speaking, voice cloning models like this and Chatterbox are not “TTS”; they’re better thought of as “S+STS”, that is, speech+style to speech.
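
If I recall the voice-conversion example in the Chatterbox repo correctly, it looks roughly like the sketch below; treat the class and argument names as approximate, since the API may have changed:

    import torchaudio as ta
    from chatterbox.vc import ChatterboxVC

    model = ChatterboxVC.from_pretrained(device="cuda")

    # First argument is input B ("content"): the speech to convert.
    # target_voice_path is input A ("style"): the timbre to apply.
    wav = model.generate("content_speech.wav", target_voice_path="style_ref.wav")
    ta.save("converted.wav", wav, model.sr)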

a day ago | gcr

Yes, check out RVC (Retrieval-based Voice Conversion), which I believe is the only good open-source voice changer. Currently there's a bit of a conflict between the original creator and the current developers, so don't use the main fork. I think you'll be able to find a more up-to-date fork that's in English.

a day ago | lumerios

There must be something out there that does this reliably as I often see/hear v-tubers doing it.

a day ago | qingcharles

This is very cool! And it'll only get better. I do wonder if, at least as a patch-up job, they could do some light audio processing to remove the raspiness from the voices.
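
One crude patch-up along those lines would be low-passing the output to tame the high-frequency rasp; a sketch with scipy (the 6 kHz cutoff is a guess, and this won't fix artifacts that are really mispronunciations):

    import soundfile as sf
    from scipy.signal import butter, sosfiltfilt

    wav, sr = sf.read("sopro_output.wav")

    # 4th-order Butterworth low-pass; filtfilt gives zero-phase filtering
    sos = butter(4, 6000, btype="low", fs=sr, output="sos")
    sf.write("smoothed.wav", sosfiltfilt(sos, wav, axis=0), sr)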

17 hours ago | LoveMortuus

I just had some amusing results using text with lots of exclamations and turning up the temperature. Good fun.

a day ago | krunck

Does the 169M include the ~90M params for the Mimi codec? Interesting approach using FiLM for speaker conditioning.
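
For context, FiLM conditioning is just a learned per-feature scale and shift predicted from the conditioning vector; a generic PyTorch sketch of the pattern (not Sopro's actual code):

    import torch
    import torch.nn as nn

    class FiLM(nn.Module):
        """Feature-wise Linear Modulation: scale (gamma) and shift (beta)
        hidden features using parameters predicted from a speaker embedding."""

        def __init__(self, cond_dim: int, feature_dim: int):
            super().__init__()
            # One projection predicts both gamma and beta
            self.proj = nn.Linear(cond_dim, 2 * feature_dim)

        def forward(self, h: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
            # h: (batch, time, feature_dim); cond: (batch, cond_dim)
            gamma, beta = self.proj(cond).chunk(2, dim=-1)
            return gamma.unsqueeze(1) * h + beta.unsqueeze(1)

    # Usage: modulate decoder features with a 192-dim speaker embedding
    film = FiLM(cond_dim=192, feature_dim=256)
    out = film(torch.randn(2, 100, 256), torch.randn(2, 192))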

a day ago | woodson

No, it doesn’t.

a day ago | sammyyyyyyy

Impressive! The cloning and voice affect are great. It has a slight warble in the voice on long vowels, but that's not a huge issue. I'll definitely check it out; we could use voice generation for alerting on one of our projects (no GPUs on the hardware).

a day ago | convivialdingo

Cool! Yeah the voice quality really depends on the reference audio. Also mess with the parameters. All the feedback is welcome

a day ago | sammyyyyyyy

Very cool. I'd love a slightly larger version with hopefully improved voice quality.

Nice work!

a day ago | lukebechtel

Thanks! Yeah, I kinda postponed publishing it until it was a bit better, but being a perfectionist, I'd have ended up never publishing it.

a day ago | sammyyyyyyy

understood! Glad you shipped.

a day ago | lukebechtel

Very nice to have done this by yourself, locally.

I wish there were an open/local TTS model with voice cloning as good as 11l (even for non-English languages).

a day ago | elaus

Yeah, we are not quite there, but I’m sure we are not far either

a day ago | sammyyyyyyy

What could possibly go wrong...

Don't you ever think about what the balance of good and bad is when you make something like this? What's the upside? What's the downside?

In this particular case I can only see downsides, if there are upsides I'd love to hear about them. All I see is my elderly family members getting 'me' on their phones asking for help, and falling for it.

I've gotten into the habit of waiting for the other person to speak first when I answer the phone now and the number is unknown to me.

a day ago | jacquesm

I am unhappy about the criminal dimension of voice cloning, too, but there are plenty of use cases.

e.g. If I could have a (local!) clone of my own voice, I could get lots of wait-on-the-phone chores done by typing on my desktop to VOIP while accomplishing other things.

a day ago | jbaber

But why do you need it to be a clone of your voice? A generic TTS like Siri or a vocaloid would be sufficient.

a day ago | anigbrowl

Yes, you are right. However, there are many upsides to this kind of technology. For example, it can restore the voices of people affected by various diseases.

a day ago | sammyyyyyyy

Ok, that's an interesting angle, I had not thought of that, but of course you'd still need a good sample of them from before that happened. Thank you for the explanation.

a day ago | jacquesm

are you under the impression that this is the first such tool? it's not. it's not even the hundredth. this Pandora's box was opened a long time ago.

a day ago | Alex2037

There is no such thing as bad technology.

a day ago | idiotsecant

That is simply not true. There is lots of bad technology.

a day ago | jacquesm

who gets to decide which technology must be banned? the same people who decide which books must be burned?

a day ago | Alex2037

Surely that would be you.

a day ago | jacquesm

Like what? There's no technology that simply by existing causes harm to the world, people do that part.

a day ago | idiotsecant

> Like what? There's no technology that simply by existing causes harm to the world, people do that part.

People create that technology, thereby imprinting their own lack of morals and ethics onto it. That's the part that most humans in the post-digital age seem to ignore, purposefully deflecting and absolving themselves of any responsibility.

Also, companies will always be controlled by humans who optimized their lives for greed, not by ones who specialized in philosophical implications.

The inventors of novichok or the nuclear bomb didn't have "world peace" in mind. They had "world peace through me enforcing my own will onto my enemies" in mind.

a day ago | cookiengineer

> The inventors of novichok or the nuclear bomb didn't have "world peace" in mind. They had "world peace through me enforcing my own will onto my enemies" in mind.

I don't know about Novichok, but nuclear bombs have stopped world wars, at least so far.

a day ago | CamperBob2

Like that chemical weapon that was specifically designed to react with gas mask absorbent materials, so as to activate on the protected side and circumvent filtration (long banned since the end of WWI).

a day ago | numpad0

> There is no such thing as bad technology.

If nothing else, it's a debate where we'd need to define our terms.

13 hours ago | CoastalCoder

I hate to admit it, but it's true. Technology is amoral and neutral rather than morally directed; it can be directed towards profits, control, and nefarious goals, sure. What technological advancement enables are the added externalities: lost jobs and suffering borne by many, and power gained by a few. The decision whether and how to use technology is where a moral crossroads exists, considered or ignored by the stakeholders involved. Substantive engineering ethics isn't much of a thing anymore as long as the TC is enough; performative complaints about napkins not being 100% recycled or insufficient planted trees are the ostensible substitutes.

4 hours ago | burnt-resistor

Emm...I played the sample audio and it was...horrible?

How is it voice cloning if even the sample doesn't sound like any human being...

a day ago | Gathering6678

I should have posted the reference audio used with the examples. Honestly, it doesn't sound so different from them. Voice cloning can be from a cartoon too; it doesn't have to be from a human being.

a day ago | sammyyyyyyy

A before / after with the reference and output seems useful to me, and maybe a range from more generic to more recognizable / celebrity voice samples so people can kinda see how it tackles different ones?

(Prominent politician or actor or somebody with a distinct speaking tone?)

a day ago | nemomarx

That is probably a good idea. I was so confused listening to the example.

a day ago | Gathering6678

Also, I didn’t want to use known voices as the example, so I ended up using generic ones from the datasets

a day ago | sammyyyyyyy

I'm sure it has its uses, but for anything with a higher requirement for quality, I think Vibe Voice is the only real OSS cloning option.

F2/E5 are also very good but have plenty of bad runs; you need to keep re-rolling until you get good outputs.

15 hours ago | jokethrowaway

It sounds a lot like RFK Jr! Does anyone have any more casual examples?

a day ago | sergiotapia

Very cool. Now the next challenge (for me) is how to convert this to Dart and run it on Android. :-)

a day ago | nunobrito

Thanks! When (and if) you do it, send me a PM!

a day ago | sammyyyyyyy

Sorry but the quality is too bad.

15 hours ago | jokethrowaway

A scammer's dream.

a day ago | brikym

That's exactly how I see it.

a day ago | jacquesm

Unfortunately, we have to prepare for a future where this kind of stuff is everywhere. We will have to rethink how trust is modeled online and offline.

a day ago | soulofmischief

I agree.

I'd be curious to hear why its advocates believe that this is a net win for society.

13 hours ago | CoastalCoder

it doesn't need to be. are video games a net win for society? is porn?

11 hours ago | Alex2037

Unfortunately, I think you're right; the cons massively outweigh the pros.

One constructive use would be making on-demand audiobooks.