
Sparse Cholesky factorization algorithm for GPU [closed]

https://www.devze.com 2023-03-29 01:56 Source: web
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.

Closed 9 years ago.

  • Questions asking us to recommend or find a tool, library or favorite off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it.
  • Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist

Can anyone provide me with a parallel algorithm for calculating the sparse Cholesky factorization? It must be suitable for execution on a GPU. Any answers in CUDA, OpenCL, or even pseudo-code would be much appreciated.


Generally speaking, direct sparse methods are not a great fit for the GPU. While the best direct solvers (thinking about packages like CHOLMOD, SuperLU, and MUMPS here) use strategies to generate dense sub-blocks which can be processed using level-3 BLAS, the size and shape of those blocks don't tend to profit from a GPU BLAS for acceleration. It doesn't mean it can't be done, just that the performance improvements may not be worth the effort.

Seeing as you are asking about a sparse Cholesky factorization, I assume the matrix is symmetric positive definite. In that case you might consider using an iterative solver -- there are a number of good implementations of Conjugate Gradient and other Krylov subspace methods with simple preconditioners which might be of some use. The Cusp library for CUDA might be worth investigating if your problem is amenable to iterative methods. The ViennaCL library offers something similar if you are looking for OpenCL.
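To make the suggestion concrete, here is a minimal pure-Python sketch of the kind of preconditioned Conjugate Gradient iteration libraries like Cusp and ViennaCL provide (their actual APIs differ; this is illustrative pseudocode made runnable, not their code). It uses a simple Jacobi (diagonal) preconditioner and a toy dict-of-rows sparse format; every operation here (SpMV, axpy, dot) maps cleanly onto GPU kernels, which is why these methods fit the GPU better than direct factorization.

```python
# Illustrative sketch: Jacobi-preconditioned Conjugate Gradient for an SPD
# sparse matrix. Matrix format: rows[i] is a list of (column, value) pairs.

def spmv(rows, x):
    """Sparse matrix-vector product y = A*x."""
    return [sum(v * x[j] for j, v in rows[i]) for i in range(len(rows))]

def pcg(rows, b, tol=1e-10, max_iter=200):
    """Preconditioned CG; assumes A is symmetric positive definite."""
    n = len(b)
    # Jacobi preconditioner: M^{-1} = diag(A)^{-1}
    inv_diag = [1.0 / v for i in range(n) for j, v in rows[i] if j == i]
    x = [0.0] * n
    r = b[:]                                  # residual for x = 0
    z = [inv_diag[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = spmv(rows, p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [inv_diag[i] * r[i] for i in range(n)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# Small SPD test system: A = tridiag(-1, 2, -1), b chosen so x = [1,1,1,1].
rows = {0: [(0, 2.0), (1, -1.0)],
        1: [(0, -1.0), (1, 2.0), (2, -1.0)],
        2: [(1, -1.0), (2, 2.0), (3, -1.0)],
        3: [(2, -1.0), (3, 2.0)]}
x = pcg(rows, [1.0, 0.0, 0.0, 1.0])
```

In a GPU library the same loop runs with the matrix in CSR/ELL format on the device; whether it converges acceptably depends entirely on the conditioning of your problem and the preconditioner.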


The multifrontal algorithm seems to be a popular choice for parallel sparse factorisation. Check out the MUMPS package.

As I understand it, the code makes extensive use of level-3 BLAS calls (DGEMM etc.) to achieve high performance. I would investigate whether it's possible to link against a GPU-based BLAS implementation, such as CUBLAS or the like, if you're keen to use your GPU rather than the CPU's floating-point units.
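To show where those level-3 BLAS calls come from, here is a hedged, toy-sized sketch of a blocked right-looking dense Cholesky in plain Python (this is the dense kernel applied to each frontal matrix, not MUMPS's actual code). The trailing-matrix update loop at the bottom is exactly the work that a real code hands to DSYRK/DGEMM, and that a GPU BLAS could take over.

```python
# Hedged sketch: blocked right-looking Cholesky of a dense SPD matrix,
# the kind of kernel a multifrontal code applies to each frontal matrix.
# A is a list of lists; the lower triangle is overwritten with L in place.

def cholesky_blocked(A, nb=2):
    n = len(A)
    for k in range(0, n, nb):
        kb = min(nb, n - k)
        # 1. Factor the diagonal block (unblocked POTRF).
        for j in range(k, k + kb):
            for i in range(j, k + kb):
                s = A[i][j] - sum(A[i][p] * A[j][p] for p in range(k, j))
                A[i][j] = (s ** 0.5) if i == j else s / A[j][j]
        # 2. Triangular solve for the panel below it (TRSM).
        for i in range(k + kb, n):
            for j in range(k, k + kb):
                s = A[i][j] - sum(A[i][p] * A[j][p] for p in range(k, j))
                A[i][j] = s / A[j][j]
        # 3. Trailing-matrix update: C -= L21 * L21^T.
        #    This is the SYRK/GEMM call in a real code -- the level-3 BLAS
        #    work that dominates the flop count and could go to a GPU BLAS.
        for i in range(k + kb, n):
            for j in range(k + kb, i + 1):
                A[i][j] -= sum(A[i][p] * A[j][p] for p in range(k, k + kb))
    # Zero the strictly upper triangle so A holds L.
    for i in range(n):
        for j in range(i + 1, n):
            A[i][j] = 0.0
    return A
```

The catch the answer alludes to: in the sparse/multifrontal setting the dense blocks are often small and irregularly shaped, so each individual GEMM is too small to amortise the cost of shipping data to the GPU.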

Contrary to dense factorisation, sparse methods always involve a non-negligible amount of integer work in addition to the floating-point work (though the floating-point work still dominates). I'm no expert on GPUs, but wouldn't the CPU be better suited to the integer work than the GPU? That might be an argument against implementing the whole algorithm on the GPU...

Hope this helps.


Check out these articles, courtesy of the ACM (SC'08 and PPoPP '09 are excellent conferences).

V. Volkov, J. W. Demmel. Benchmarking GPUs to tune dense linear algebra. SC'08.

Jung, J.H., O’Leary, D.P. Cholesky Decomposition and Linear Programming on a GPU. Scholarly Paper, University of Maryland, 2006.

G. Quintana-Orti, F. D. Igual, E. S. Quintana-Orti, R. A. van de Geijn. Solving dense linear systems on platforms with multiple hardware accelerators. PPoPP '09.

If you don't have access to these through the ACM Portal/DL, they might be online somewhere. Otherwise... I can probably quote some of the most relevant sections, with citations, and have it be fair use.

EDIT:

Check this out maybe?

http://www.netlib.org/lapack/lawnspdf/lawn223.pdf

EDIT2: Missed the part about "sparse".

Looking around online and at the ACM/IEEE, I don't see a lot that jumps out at me. What I do see doesn't sound promising... this might not be a computation where you see a lot of benefit from using the GPU.


Sparse Cholesky factorization on a GPU is an open problem. Even the Linear Programming paper mentioned previously uses a dense algorithm, while most real problems are sparse. The commercial LP solver market is very competitive, but nobody has a product that makes much use of the GPU yet.


See UHM - Unassembled Hyper Matrix solver. It can calculate sparse Cholesky factorizations using multiple GPUs on one host.

