AssertionError in step 3 get_pred_Hamiltonian of the inference part #41
Comments
This assert is used to check whether your trained DeepH models include all possible orbital combinations. Please review the …
Hi, best regards,
Yes, the number of orbitals for this element is 26. Increasing the number of orbitals will increase the training time per epoch, but not significantly. Alternatively, you can choose to train a new model with only the additional orbitals that the old model lacks. Then, when running …
OK, many thanks for your help.
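For anyone hitting the same failure: the assert at deeph/inference/pred_ham.py line 167 only checks that the predicted Hamiltonian contains no NaN entries. Below is a minimal diagnostic sketch (assuming you can grab the hamiltonian NumPy array right before the assert fires, e.g. by temporarily editing pred_ham.py; the helper name is mine, not part of DeepH) that reports how many entries are NaN and where:

import numpy as np

def report_nan_entries(hamiltonian: np.ndarray) -> None:
    # Same condition as the failing line in pred_ham.py:
    #   assert np.all(np.isnan(hamiltonian) == False)
    nan_mask = np.isnan(hamiltonian)
    if not nan_mask.any():
        print("No NaN entries - the assert would pass.")
        return
    print(f"{int(nan_mask.sum())} NaN entries out of {hamiltonian.size}")
    # Each index returned by argwhere marks one NaN entry; persistent NaNs
    # usually mean the trained model lacks some orbital combinations, as
    # explained in the reply above.
    print("NaN indices:")
    print(np.argwhere(nan_mask))

If whole blocks come out as NaN for a particular element pair, that points to the missing-orbital-combination situation described above, and retraining with the additional orbitals should resolve it.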
Hi there,
When I successfully obtained the trained model and the overlap (olp) matrix, I ran the inference part and met an error like this:
=> load best checkpoint (epoch 5969)
=> Atomic types: [52, 74], spinful: True, the number of atomic types: 2.
Load processed graph from /share/home/zhangtao/work/xxxx/xxxx/work_dir/inference/graph.pkl
Traceback (most recent call last):
File "/share/home/zhangtao/anaconda3/envs/ZT-py39/bin/deeph-inference", line 8, in
sys.exit(main())
File "/share/home/zhangtao/anaconda3/envs/ZT-py39/lib/python3.9/site-packages/deeph/scripts/inference.py", line 105, in main
predict(input_dir=work_dir, output_dir=work_dir, disable_cuda=disable_cuda, device=device,
File "/share/home/zhangtao/anaconda3/envs/ZT-py39/lib/python3.9/site-packages/deeph/inference/pred_ham.py", line 167, in predict assert np.all(np.isnan(hamiltonian) == False)
AssertionError
Here I also list the inference.ini settings:
[basic]
OLP_dir = /share/home/zhangtao/work/WTe2/train/data/WTe2/work_dir/olp
work_dir = /share/home/zhangtao/work/WTe2/train/data/WTe2/work_dir/inference
interface = openmx
structure_file_name = POSCAR
task = [1, 2, 3, 4, 5]
sparse_calc_config = /share/home/zhangtao/work/WTe2/train/data/WTe2/work_dir/inference/band.json
trained_model_dir = /share/home/zhangtao/work/WTe2/train/data/WTe2/work_dir/trained_model
restore_blocks_py = True
dense_calc = True
disable_cuda = False
device = cuda:0
huge_structure = True
[interpreter]
julia_interpreter = /share/home/zhangtao/software/julia-1.6.6/bin/julia
[graph]
radius = 9.0
create_from_DFT = True
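As a quick sanity check of this inference.ini (a small sketch of my own, not part of DeepH; it only assumes the option names listed above), you can verify that every path the config points to actually exists before re-running deeph-inference:

import configparser
import os

config = configparser.ConfigParser()
config.read("inference.ini")

# Path-like options from the [basic] and [interpreter] sections above.
paths = [
    ("basic", "OLP_dir"),
    ("basic", "work_dir"),
    ("basic", "sparse_calc_config"),
    ("basic", "trained_model_dir"),
    ("interpreter", "julia_interpreter"),
]
for section, option in paths:
    value = config.get(section, option)
    status = "exists" if os.path.exists(value) else "MISSING"
    print(f"[{section}] {option} = {value} -> {status}")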
The band.json settings:
{
"calc_job": "band",
"which_k": 0,
"fermi_level": 0,
"lowest_band": -10.3,
"max_iter": 300,
"num_band": 100,
"k_data": ["20 0.5000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 X Γ", "20 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.5000000000 0.0000000000 Γ Y", "20 0.0000000000 0.5000000000 0.0000000000 0.5000000000 0.5000000000 0.0000000000 Y M","20 0.5000000000 0.5000000000 0.0000000000 0.0000000000 0.0000000000 0.0000000000 M Γ"]
}
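For reference, each k_data entry above uses the layout: number of k-points, fractional coordinates of the segment's start and end points, then the two high-symmetry labels. A tiny parsing sketch (my own helper, assuming exactly that nine-field layout) to double-check the segments:

def parse_k_segment(entry):
    # "20 0.5 0.0 0.0 0.0 0.0 0.0 X Γ" -> (20, start, end, "X", "Γ")
    fields = entry.split()
    n_points = int(fields[0])
    k_start = tuple(float(x) for x in fields[1:4])
    k_end = tuple(float(x) for x in fields[4:7])
    return n_points, k_start, k_end, fields[7], fields[8]

print(parse_k_segment("20 0.5000000000 0.0000000000 0.0000000000 "
                      "0.0000000000 0.0000000000 0.0000000000 X Γ"))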
I have tried to find out the reason, but failed. I would greatly appreciate your kind help if you could give me some advice on this error.
Best regards,
Tao