Abstract

Visualization is a method of computing. It transforms the symbolic into the geometric, enabling researchers to observe their simulations and computations; it offers a way of seeing the unseen. Visualization enriches the process of scientific discovery and fosters profound and unexpected insights, and in many fields it is already revolutionizing the way scientists do science. The goal of visualization is to extend existing scientific methods by providing new scientific insight through visual means.

In the modeling of many scientific and engineering problems, vector fields are used to describe moving fluids or changing forces: a vector (i.e., a direction with magnitude) is assigned to each point in the space-time domain. Effective visualization of time-varying 3D vector fields is critical for understanding the complex phenomena and dynamic processes under investigation. A typical time-varying dataset from a Computational Fluid Dynamics (CFD) simulation can easily require hundreds of gigabytes, or even terabytes, of storage, which creates challenges for the subsequent data-analysis tasks.

In this research, new techniques for visualizing extremely large time-varying vector data using high-performance computing are presented. The high-level requirements that guided the formulation of the new techniques are (a) support for large dataset sizes, (b) support for temporal coherence of the vector data, (c) support for distributed-memory high-performance computing, and (d) optimal utilization of computing nodes with multi-core processors. The challenge is to design and implement techniques that meet these complex requirements and balance the conflicts between them. The fundamental innovation in this work is an efficient distributed visualization method for large time-varying vector data. Maximum performance was reached by parallelizing multiple processes across the cores of each computing node.
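To make the data model and the per-node parallelization concrete, the following is a minimal sketch, not the thesis implementation: a synthetic time-varying 3D vector field stores one 3-vector per grid point per time step, and independent time steps are analyzed in parallel by a pool of worker processes, one per core. The grid size, worker count, and analysis task (maximum vector magnitude) are illustrative assumptions.

```python
import numpy as np
from multiprocessing import Pool

def make_field(t, n=8):
    """Synthetic time-varying 3D vector field on an n^3 grid:
    one 3-vector (u, v, w) per grid point, varying with time t."""
    x, y, z = np.meshgrid(*[np.linspace(0, 1, n)] * 3, indexing="ij")
    u = np.sin(2 * np.pi * (x + 0.1 * t))
    v = np.cos(2 * np.pi * (y + 0.1 * t))
    w = np.full_like(z, 0.1 * t)
    return np.stack([u, v, w], axis=-1)  # shape (n, n, n, 3)

def max_magnitude(t):
    """Per-time-step analysis task: largest vector magnitude in the field."""
    field = make_field(t)
    return t, float(np.linalg.norm(field, axis=-1).max())

if __name__ == "__main__":
    # Time steps are independent, so they can be distributed
    # across worker processes (worker count is illustrative).
    with Pool(4) as pool:
        results = dict(pool.map(max_magnitude, range(10)))
    print(results)
```

Because each time step is processed independently, the same pattern extends from cores on one node to ranks on a distributed-memory cluster, which is the setting the thesis targets.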
The accuracy of the proposed techniques was confirmed against benchmark results, with an average difference of 5%. In addition, the proposed techniques exhibited acceptable speedup, reaching 700% across different data sizes, with better scalability for larger datasets. Finally, the utilization of the computing nodes was satisfactory for the considered test cases.
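The reported 700% speedup corresponds to a parallel run roughly seven times faster than the serial baseline. As a hedged arithmetic sketch of the standard definitions (the timings and core count below are illustrative, not measurements from the thesis):

```python
def speedup(t_serial, t_parallel):
    """Classical speedup: ratio of serial to parallel wall-clock time."""
    return t_serial / t_parallel

def efficiency(s, n_cores):
    """Fraction of ideal linear speedup actually achieved."""
    return s / n_cores

# Illustrative numbers only: a 7x (700%) speedup on a hypothetical 8-core node.
s = speedup(t_serial=560.0, t_parallel=80.0)
print(f"speedup = {s:.1f}x ({s * 100:.0f}%)")
print(f"efficiency on 8 cores = {efficiency(s, 8):.3f}")
```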