
Oswald, B. S.C.

Paper Title Page
TUPPP16 Integration of a Large-Scale Eigenmode Solver into the ANSYS© Workflow Environment 122
 
  • B. S.C. Oswald, A. Adelmann, M. Bopp, R. Geus
    PSI, Villigen
 
  The numerical computation of eigenfrequencies and eigenmodal fields of large accelerator cavities, based on full-wave, three-dimensional models, has attracted considerable interest in the recent past. In particular, it is of vital interest to know the performance characteristics of such devices, such as the resonance frequencies, quality factors and modal fields, prior to construction. Given that the physical fabrication of a cavity is expensive and time-consuming, a device that does not comply with its specifications cannot be tolerated; a robust and reliable digital prototyping methodology is therefore essential. Furthermore, modern cavity designs typically exhibit delicate and detailed geometrical features that must be resolved to obtain accurate results. At PSI a three-dimensional finite-element code, femaxx, has been developed to compute eigenvalues and eigenfields of accelerator cavities (*). While this code has been validated against experimentally measured cavity data, its usage has remained somewhat limited because it lacked an interface to industrial-grade modeling software. Such an interface allows advanced CAD geometries to be created and meshed in ANSYS, and the design then to be exported and analyzed in femaxx. We have therefore developed preprocessing software that imports meshes generated in ANSYS for a femaxx run, and a postprocessing step that generates a result file which can be imported back into ANSYS for further analysis. Thereby we have integrated femaxx into the ANSYS workflow, so that detailed cavity designs leading to large meshes can be analyzed with femaxx, taking advantage of its capability to address very large eigenvalue problems. Additionally, we have added parallel visualization functionality to femaxx. We present a practical application of the pre- and postprocessing codes, as sketched below, and compare the results against experimental values, where available, and against other numerical codes otherwise.

* P. Arbenz, M. Becka, R. Geus, U. L. Hetmaniuk, and T. Mengotti,
"On a Parallel Multilevel Preconditioned Maxwell Eigensolver".
Parallel Computing, 32(2): 157-165 (2006).
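
  As a purely hypothetical illustration of the preprocessing step, the C sketch below reads a tetrahedral mesh, previously exported from ANSYS, into node and element arrays for an eigensolver run. The "counts, then records" text layout, the file name and all identifiers are assumptions for illustration only; the actual femaxx exchange format is not documented in this abstract.

  /* Hypothetical mesh import: node coordinates followed by tetrahedra.
   * Error checking of fscanf return values is omitted for brevity. */
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      FILE *f = fopen("cavity_mesh.txt", "r");   /* illustrative file name */
      if (!f) return 1;

      long nnodes, ntets;
      fscanf(f, "%ld %ld", &nnodes, &ntets);

      double *xyz = malloc(3 * nnodes * sizeof *xyz);  /* node coordinates */
      long   *tet = malloc(4 * ntets  * sizeof *tet);  /* vertex indices   */

      for (long i = 0; i < nnodes; ++i)
          fscanf(f, "%lf %lf %lf", &xyz[3*i], &xyz[3*i+1], &xyz[3*i+2]);
      for (long i = 0; i < ntets; ++i)
          fscanf(f, "%ld %ld %ld %ld",
                 &tet[4*i], &tet[4*i+1], &tet[4*i+2], &tet[4*i+3]);

      /* ... hand xyz/tet to the eigensolver here ... */
      free(xyz); free(tet); fclose(f);
      return 0;
  }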

 
THM1MP01 H5Part: A Portable High Performance Parallel Data Interface for Electromagnetics Simulations 289
 
  • A. Adelmann, A. Gsell, B. S.C. Oswald
    PSI, Villigen
  • E. W. Bethel, J. M. Shalf, C. Siegerist
    LBNL, Berkeley, California
 
  The very largest parallel particle simulations, for problems involving six-dimensional phase space and field data, generate vast quantities of data. It is desirable to store such enormous datasets efficiently and to share them effortlessly with other programs and analysis tools. With H5Part we have defined a very simple file schema built on top of HDF5 (Hierarchical Data Format version 5), together with an API that simplifies reading and writing data in the HDF5 file format. The API, which is oriented towards the needs of the particle physics and cosmology communities, supports three common data types: particles, structured meshes and unstructured meshes. HDF5 offers a self-describing, machine-independent binary file format that supports scalable parallel I/O performance for MPI codes on computer systems ranging from laptops to supercomputers. Bindings are provided for C, C++, Fortran and Python. We demonstrate the ease of use and the performance of reading and writing terabytes of data on several parallel platforms. H5Part is distributed as Open Source under a BSD-like license.
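
  To give a flavor of the API, here is a minimal sketch of writing one timestep of particle data through the H5Part C interface; the file name, dataset names and array contents are placeholders, and error checking is omitted.

  #include <H5Part.h>

  /* Write n particle coordinates to timestep 0 of an H5Part file. */
  void write_step(const char *fname, double *x, double *y, double *z,
                  h5part_int64_t n)
  {
      H5PartFile *file = H5PartOpenFile(fname, H5PART_WRITE);
      /* MPI codes would instead call
         H5PartOpenFileParallel(fname, H5PART_WRITE, comm). */
      H5PartSetStep(file, 0);          /* select the current timestep     */
      H5PartSetNumParticles(file, n);  /* particles owned by this process */
      H5PartWriteDataFloat64(file, "x", x);
      H5PartWriteDataFloat64(file, "y", y);
      H5PartWriteDataFloat64(file, "z", z);
      H5PartCloseFile(file);
  }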