### Computing 2D Constrained Delaunay Triangulation Using the GPU

IEEE transactions on visualization and computer graphics | 28 Nov 2012

###### M Qi, T Cao and T Tan

##### Abstract

We propose the first GPU solution to compute the 2D constrained Delaunay triangulation (CDT) of a planar straight line graph (PSLG) consisting of points and edges. There are many existing CPU algorithms to solve the CDT problem in computational geometry, yet there has been no prior approach that solves this problem efficiently using the parallel computing power of the GPU. For the special case of the CDT problem where the PSLG consists of just points, which is simply the standard Delaunay triangulation problem, a hybrid approach using the GPU together with the CPU to partially speed up the computation has already been presented in the literature. Our work, on the other hand, accelerates the entire computation on the GPU. Our implementation using the CUDA programming model on NVIDIA GPUs is numerically robust, and runs up to an order of magnitude faster than the best sequential implementations on the CPU. This result is demonstrated in our experiments with both randomly generated PSLGs and real-world GIS data containing millions of points and edges.
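The core numerical test behind any Delaunay triangulation (constrained or not) is the *incircle* predicate: given a triangle (a, b, c) in counterclockwise order, decide whether a fourth point d lies inside its circumcircle. The sketch below is not the paper's GPU code; it is a minimal, plain-Python illustration of this standard determinant test, whose sign is what a numerically robust implementation (e.g. one using exact or adaptive-precision arithmetic) must evaluate reliably.

```python
def incircle(a, b, c, d):
    """Return a positive value if d lies strictly inside the circumcircle
    of the counterclockwise triangle (a, b, c), negative if outside,
    and zero if the four points are cocircular.

    This is the classic 3x3 determinant formulation, expanded by hand.
    Note: plain floating point is NOT robust near degeneracy; robust
    implementations use exact or adaptive-precision arithmetic.
    """
    adx, ady = a[0] - d[0], a[1] - d[1]
    bdx, bdy = b[0] - d[0], b[1] - d[1]
    cdx, cdy = c[0] - d[0], c[1] - d[1]
    ad2 = adx * adx + ady * ady
    bd2 = bdx * bdx + bdy * bdy
    cd2 = cdx * cdx + cdy * cdy
    return (adx * (bdy * cd2 - cdy * bd2)
            - ady * (bdx * cd2 - cdx * bd2)
            + ad2 * (bdx * cdy - cdx * bdy))


# The circumcircle of the triangle (0,0), (1,0), (0,1) is centered at
# (0.5, 0.5); its center is inside, while (2, 2) is well outside.
print(incircle((0, 0), (1, 0), (0, 1), (0.5, 0.5)))  # positive
print(incircle((0, 0), (1, 0), (0, 1), (2, 2)))      # negative
```

In a CDT, this test is additionally suppressed across constrained edges: a constrained edge is never flipped even if it fails the incircle test, which is what distinguishes the constrained problem from the plain Delaunay triangulation the earlier hybrid GPU/CPU work addressed.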

- Concepts
- Algorithm, Delaunay triangulation, Computational geometry, GPGPU, Computer science, Graphics processing unit, CUDA, Parallel computing
