MaskGCT is a very impressive project—great work!
I do have a question about the architecture: why does the model predict semantic tokens conditioned on prompt wavs? According to the AudioLM paper, speaker information is carried in the acoustic tokens. Would removing speaker information from the first (text-to-semantic) transformer help reduce timbre leakage? (I am not sure whether the model actually has this issue.)
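To make the concern concrete, here is a toy sketch of the leakage path I mean. This is not MaskGCT's actual code; the function and variable names are all hypothetical, and the arithmetic is just a stand-in for a learned model:

```python
from typing import List, Optional

def predict_semantic_tokens(text_tokens: List[int],
                            prompt_tokens: Optional[List[int]] = None) -> List[int]:
    """Toy stand-in for the first-stage (text-to-semantic) transformer.

    If prompt_tokens (tokens derived from the prompt wav) are supplied,
    they bias the prediction -- a stand-in for how speaker information
    carried by the prompt could leak into the semantic sequence.
    """
    bias = sum(prompt_tokens) % 7 if prompt_tokens else 0
    return [(t * 3 + bias) % 100 for t in text_tokens]

text = [5, 12, 42]
no_prompt = predict_semantic_tokens(text)
with_prompt = predict_semantic_tokens(text, prompt_tokens=[9, 31, 77])

# The same text yields different semantic tokens once the prompt is
# conditioned on -- the leakage path I am asking about.
print(no_prompt)    # [15, 36, 26]
print(with_prompt)  # [20, 41, 31]
```

If the semantic tokens were predicted from text alone, the first stage would be prompt-invariant by construction, and all timbre would have to come from the acoustic stage.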
Thanks for your help!