RT-TDDFT GPU Acceleration (Phase 2): Adding needed BLAS and LAPACK support for Tensor on CPU and refactoring linear algebra operations in TDDFT #5773

Draft: wants to merge 16 commits into base branch develop

Conversation

@AsTonyshment (Collaborator) commented Dec 26, 2024

Phase 1: Rewriting existing code using Tensor (complete)

This is merely a draft and does not represent the final code. Since Tensor can effectively support heterogeneous computing, the goal of the first phase is to rewrite the existing algorithms using Tensor. Currently, all memory is still explicitly allocated on the CPU (the device parameter of the Tensor constructor is container::DeviceType::CpuDevice).
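
For illustration, a Phase 1 allocation might look like the sketch below. The include path and the exact constructor signature are assumptions based on the description above, not code taken from this PR:

```cpp
// Sketch only: a Phase 1 Tensor explicitly allocated on the CPU.
// Include path and constructor signature are assumptions.
#include "module_base/module_container/ATen/core/tensor.h"

void allocate_cpu_tensor(const int nlocal)
{
    // Complex double nlocal x nlocal matrix, allocated on the CPU device.
    container::Tensor tmp(container::DataType::DT_COMPLEX_DOUBLE,
                          container::DeviceType::CpuDevice,
                          container::TensorShape({nlocal, nlocal}));
    // The rewritten algorithms then operate on tmp's buffer, e.g. via
    // tmp.data<std::complex<double>>().
}
```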

Phase 2: Adding needed BLAS and LAPACK support for Tensor on CPU and refactoring linear algebra operations in TDDFT (complete)

Key Changes:

  • Added template structs lapack_getrf and lapack_getri in module_base/module_container/ATen/kernels/lapack.h to support matrix LU factorization (getrf) and matrix inversion (getri) operations on Tensor objects (see the sketch after this list).
  • Fixed the original LAPACK function declarations (zgetrf_ and zgetri_) in module_base/lapack_connector.h to comply with the standard calling conventions.
  • Fully implemented CPU-based BLAS and LAPACK support for Tensor operations in TDDFT. These linear algebra operations, which live in the container::kernels module under module_base/module_container/ATen, take a Device parameter, enabling seamless support for heterogeneous computing (GPU acceleration in future phases).
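
For orientation, a minimal sketch of what these pieces might look like follows. The functor signatures are assumptions inferred from the description above, not the exact code in this PR; the zgetrf_/zgetri_ prototypes follow the standard Fortran LAPACK interface (all arguments passed by pointer), which is the convention the lapack_connector.h fix restores:

```cpp
// Sketch only: kernel functors with a Device template parameter, in the
// spirit of module_base/module_container/ATen/kernels/lapack.h.
// Exact signatures in the PR may differ.
#include <complex>

// Standard Fortran LAPACK prototypes: every argument is a pointer.
extern "C" {
void zgetrf_(const int* m, const int* n, std::complex<double>* a,
             const int* lda, int* ipiv, int* info);
void zgetri_(const int* n, std::complex<double>* a, const int* lda,
             const int* ipiv, std::complex<double>* work,
             const int* lwork, int* info);
}

namespace container {
namespace kernels {

template <typename T, typename Device>
struct lapack_getrf {
    // LU factorization of an m-by-n column-major matrix, as in ?getrf.
    void operator()(const int m, const int n, T* mat, const int lda, int* ipiv);
};

template <typename T, typename Device>
struct lapack_getri {
    // Inversion of an n-by-n LU-factorized matrix (pivots from getrf),
    // as in ?getri.
    void operator()(const int n, T* mat, const int lda, const int* ipiv,
                    T* work, const int lwork);
};

} // namespace kernels
} // namespace container
```

Dispatching on the Device type this way lets the TDDFT call sites stay unchanged when a GPU specialization is added later.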

Phase 3: Adding needed GPU-based linear algebra operations supporting Tensor in the container::kernels module (in progress)

The objective of Phase 3 is to add GPU-based linear algebra operations (especially the LU factorization getrf and matrix inversion getri) supporting Tensor in the container::kernels module, backed by cuBLAS/cuSOLVER.
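
As a rough sketch of what such a GPU kernel could call into (not this PR's implementation): cuSOLVER provides getrf and getrs but no getri, so one common way to invert is to factorize and then solve A X = I. Error handling is omitted for brevity:

```cpp
// Sketch only: invert an n-by-n complex matrix on the GPU with cuSOLVER.
#include <cuda_runtime.h>
#include <cusolverDn.h>
#include <cuComplex.h>
#include <vector>

void invert_gpu(cuDoubleComplex* d_A, cuDoubleComplex* d_Ainv, const int n)
{
    cusolverDnHandle_t handle;
    cusolverDnCreate(&handle);

    // Workspace query and allocation for the LU factorization.
    int lwork = 0;
    cusolverDnZgetrf_bufferSize(handle, n, n, d_A, n, &lwork);
    cuDoubleComplex* d_work;
    int *d_ipiv, *d_info;
    cudaMalloc(&d_work, sizeof(cuDoubleComplex) * lwork);
    cudaMalloc(&d_ipiv, sizeof(int) * n);
    cudaMalloc(&d_info, sizeof(int));

    // In-place LU factorization: A = P * L * U.
    cusolverDnZgetrf(handle, n, n, d_A, n, d_work, d_ipiv, d_info);

    // No getri in cuSOLVER: solve A * X = I for X = A^{-1} instead.
    std::vector<cuDoubleComplex> h_I(n * n, make_cuDoubleComplex(0.0, 0.0));
    for (int i = 0; i < n; ++i) h_I[i * n + i] = make_cuDoubleComplex(1.0, 0.0);
    cudaMemcpy(d_Ainv, h_I.data(), sizeof(cuDoubleComplex) * n * n,
               cudaMemcpyHostToDevice);
    cusolverDnZgetrs(handle, CUBLAS_OP_N, n, n, d_A, n, d_ipiv, d_Ainv, n, d_info);

    cudaFree(d_work); cudaFree(d_ipiv); cudaFree(d_info);
    cusolverDnDestroy(handle);
}
```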

@AsTonyshment marked this pull request as draft December 27, 2024 02:30
@mohanchen added the GPU & DCU & HPC and Features Needed labels Dec 31, 2024
@AsTonyshment (Collaborator, Author) commented:

The current program has a bug that causes the data in psi to become all zeros after evolution. Debugging showed that the issue arises because the original deep copy of psi_k into tmp1 in source/module_hamilt_lcao/module_tddft/norm_psi.cpp was inadvertently replaced with Tensor's CopyFrom.

Useful information:

  1. CopyFrom method:
    CopyFrom currently performs a shallow copy: the source and destination tensors share the underlying data buffer. This can cause unintended side effects, since modifications to one tensor are reflected in the other (see the sketch below).

  2. Assignment operator (=):
    The assignment operator (=) correctly performs a deep copy, so the destination tensor gets its own independent copy of the data buffer, consistent with the expected deep-copy semantics.
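
A minimal sketch of the two behaviors (include path and constructor signature assumed from the Phase 1 description, not taken from the PR's test code):

```cpp
// Sketch only: shallow CopyFrom vs. deep operator= for container::Tensor.
#include "module_base/module_container/ATen/core/tensor.h"

void copy_semantics_demo(const int n)
{
    using container::DataType;
    using container::DeviceType;
    using container::Tensor;
    using container::TensorShape;

    Tensor psi_k(DataType::DT_COMPLEX_DOUBLE, DeviceType::CpuDevice,
                 TensorShape({n, n}));
    Tensor tmp_shallow(DataType::DT_COMPLEX_DOUBLE, DeviceType::CpuDevice,
                       TensorShape({n, n}));
    Tensor tmp_deep(DataType::DT_COMPLEX_DOUBLE, DeviceType::CpuDevice,
                    TensorShape({n, n}));

    // Shallow: tmp_shallow now aliases psi_k's buffer, so any later write
    // through it also mutates psi_k (the root cause of the zeroed psi).
    tmp_shallow.CopyFrom(psi_k);

    // Deep: tmp_deep gets its own independent copy of psi_k's data.
    tmp_deep = psi_k;
}
```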

@AsTonyshment (Collaborator, Author) commented:

After testing several parallel parameter combinations for Si-2 (a small system) and Si-64 (a large system), we conclude that the Tensor implementation on the CPU incurs almost no performance loss. In fact, it appears to be slightly faster than the previous implementation, especially for large systems. The test results are as follows:
[Benchmark results for Si-2 and Si-64 attached as an image.]

@AsTonyshment changed the title RT-TDDFT GPU Acceleration (Phase 1): Rewriting existing code using Tensor RT-TDDFT GPU Acceleration (Phase 2): Adding needed BLAS and LAPACK support for Tensor on CPU and refactoring linear algebra operations in TDDFT Jan 3, 2025