CUDA Sucks Customer Reviews and Feedback

From Everything.Sucks

CUDA (Compute Unified Device Architecture) is a parallel computing platform and application programming interface (API) model created by Nvidia. It allows software developers and software engineers to use a CUDA-enabled graphics processing unit (GPU) for general-purpose processing – an approach termed GPGPU (General-Purpose computing on Graphics Processing Units). The CUDA platform is a software layer that gives direct access to the GPU's virtual instruction set and parallel computational elements, for the execution of compute kernels.

An unsatisfied CUDA user wrote the following review on an online forum:

I do not think the CUDA framework is well designed or mature enough, which makes development costs very high and the platform hard to learn and use. The developer still has to specify the <<<blocks, threads per block>>> launch values explicitly, as in the vector-add example: VecAdd<<<1, N>>>(A, B, C). You have to state the vector length explicitly, and only then will CUDA distribute the work across N threads, each doing one mini-add. WHY can the CUDA Runtime NOT determine the vector length by itself??? IS IT SO HARD??? EVERYTHING in CUDA must be assigned by the developer. Is CUDA an infant, or are the CUDA designers infants?
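For context on the complaint above, here is a minimal sketch of the kind of vector-add program the review refers to. The names (VecAdd, hA, dA, etc.) and the sizes are illustrative, not from the original. It also shows why the runtime cannot infer N: the kernel receives raw device pointers, which carry no length information, so the caller must compute the grid dimensions – and the simple <<<1, N>>> form breaks once N exceeds the per-block thread limit (1024 on current hardware), which is one reason the explicit configuration exists.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// VecAdd: each thread handles one element. The bounds check is needed
// because the grid is rounded up and may launch more threads than N.
__global__ void VecAdd(const float *A, const float *B, float *C, int N) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < N) C[i] = A[i] + B[i];
}

int main() {
    const int N = 100000;            // too large for a single <<<1, N>>> block
    size_t bytes = N * sizeof(float);

    // Host buffers (hypothetical names, not from the original example).
    float *hA = (float *)malloc(bytes);
    float *hB = (float *)malloc(bytes);
    float *hC = (float *)malloc(bytes);
    for (int i = 0; i < N; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    // Device buffers: cudaMalloc returns raw pointers with no length attached,
    // which is why the runtime cannot deduce N on its own.
    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes); cudaMalloc(&dB, bytes); cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    // The launch configuration the review objects to: the caller picks a
    // block size and rounds the grid up to cover all N elements.
    int threadsPerBlock = 256;
    int blocks = (N + threadsPerBlock - 1) / threadsPerBlock;  // round up
    VecAdd<<<blocks, threadsPerBlock>>>(dA, dB, dC, N);
    cudaDeviceSynchronize();

    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    printf("C[0] = %.1f, C[N-1] = %.1f\n", hC[0], hC[N - 1]);

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(hA); free(hB); free(hC);
    return 0;
}
```

Compiled with nvcc on a CUDA-capable machine, every C[i] ends up as 3.0. The explicit grid computation is boilerplate, but it is what lets the same kernel scale from a handful of elements to arrays far larger than one block.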
