Model evaluation inconsistency between runs #23
I ran into inconsistent evaluation results while using TxGNN_Demo.ipynb. When I executed the evaluation immediately after training:
TxGNN.finetune(n_epoch=500,
               learning_rate=5e-4,
               train_print_per_n=5,
               valid_per_n=20)

result = TxEval.eval_disease_centric(disease_idxs='test_set',
                                     show_plot=False,
                                     verbose=True,
                                     save_result=True,
                                     return_raw=False)
I obtained different results than when I saved the model and reloaded it before evaluating:
TxGNN.finetune(n_epoch=500,
               learning_rate=5e-4,
               train_print_per_n=5,
               valid_per_n=20)

TxGNN.save_model('./model_ckpt')
TxGNN.load_pretrained('./model_ckpt')

result = TxEval.eval_disease_centric(disease_idxs='test_set',
                                     show_plot=False,
                                     verbose=True,
                                     save_result=True,
                                     return_raw=False)
Did I use load_pretrained() incorrectly, or is this an issue others have encountered as well?
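One way to narrow this down is to check whether the save/load round-trip actually preserves the weights. The sketch below is not the TxGNN API; it is a minimal, self-contained PyTorch example (TxGNN is built on PyTorch, and the model name and checkpoint path here are placeholders) that compares every parameter tensor before saving and after reloading. If any tensor differs, the checkpoint round-trip itself is the problem; if all match, the discrepancy more likely comes from something else, e.g. modules left in training mode (dropout) or nondeterministic ops during evaluation.

```python
import torch
import torch.nn as nn

# Placeholder model standing in for the trained TxGNN model.
model = nn.Linear(4, 2)
torch.save(model.state_dict(), 'model_ckpt.pt')

# Fresh instance with the same architecture, then load the checkpoint.
reloaded = nn.Linear(4, 2)
reloaded.load_state_dict(torch.load('model_ckpt.pt'))

# Collect the names of any parameters that are not bitwise identical.
mismatched = [
    name for name, p in model.state_dict().items()
    if not torch.equal(p, reloaded.state_dict()[name])
]
print('mismatched parameters:', mismatched)  # an empty list means the round-trip is lossless
```

If the weights match, the next thing to rule out is evaluation-mode state: call `model.eval()` (and `torch.no_grad()`) before both evaluation runs so that dropout and similar layers behave identically.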