I have run into two problems.
-
I followed the instructions in the README.md to load a pre-trained model:
```python
import fairseq
import hubert_pretraining, hubert

ckpt_path = "/../../large_lrs3_iter5.pt"
models, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task([ckpt_path])
```
At this point, the following error occurred:
```
omegaconf.errors.ConfigAttributeError: Key 'required_seq_len_multiple' not in 'AVHubertConfig'
    full_key: required_seq_len_multiple
    object_type=AVHubertConfig
```
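From what I understand, this error means fairseq resolved the checkpoint against a config class that is missing the expected keys, which can happen when the AV-HuBERT user directory is not imported before loading, so its custom classes never get registered (it could also be a fairseq version mismatch). As far as I can tell, fairseq's user-dir mechanism just puts the directory's parent on `sys.path` and imports it so that its registration decorators run; a toy illustration of that mechanism (the module name `my_plugin` is made up, not AV-HuBERT's):

```python
import importlib
import os
import sys
import tempfile

# Build a toy "user dir" containing a package whose import has a side
# effect (standing in for fairseq's @register_model / @register_task).
parent = tempfile.mkdtemp()
pkg_dir = os.path.join(parent, "my_plugin")
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("REGISTERED = True\n")

# Roughly what fairseq does with common.user_dir before building configs:
sys.path.insert(0, parent)
plugin = importlib.import_module("my_plugin")
print(plugin.REGISTERED)  # prints: True
```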
-
I want to fine-tune an AV-HuBERT model with Seq2Seq, following the README.md:
```shell
$ cd avhubert
$ fairseq-hydra-train --config-dir /path/to/conf/finetune/ --config-name large_lrs3_433h.yaml \
  task.data=/path/to/data task.label_dir=/path/to/label \
  task.tokenizer_bpe_model=/path/to/tokenizer model.w2v_path=/path/to/checkpoint/large_lrs3_iter5.pt \
  hydra.run.dir=/path/to/experiment/finetune/ common.user_dir=`pwd`
```
The error is as follows:
```
omegaconf.errors.ConfigKeyError: Key 'input_modality' not in 'AVHubertPretrainingConfig'
    full_key: input_modality
    object_type=AVHubertPretrainingConfig
```
I don't know what caused these errors.
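One detail worth noting when copying the command: in the README, `common.user_dir` is set to `pwd` wrapped in backticks (shell command substitution), which markdown rendering can strip. The shell replaces the backticked expression with the absolute path of the current directory before fairseq ever sees it:

```shell
# Backtick command substitution: the shell runs pwd and splices its output in.
cd /tmp
echo common.user_dir=`pwd`
# prints: common.user_dir=/tmp
```

Passing a literal `pwd` instead would hand fairseq a relative path named "pwd", so the avhubert user directory would never be found.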