Paper | Title | Page |
---|---|---|
TUPPP16 | Integration of a Large-Scale Eigenmode Solver into the ANSYS(c) Workflow Environment | 122 |
The numerical computation of eigenfrequencies and eigenmodal fields of large accelerator cavities, based on full-wave, three-dimensional models, has attracted considerable interest in the recent past. In particular, it is of vital interest to know the performance characteristics of such devices, such as resonance frequencies, quality factors and modal fields, prior to construction: since the physical fabrication of a cavity is expensive and time-consuming, a device that does not comply with its specifications cannot be tolerated, and a robust and reliable digital prototyping methodology is therefore essential. Furthermore, modern cavity designs typically exhibit delicate and detailed geometrical features that must be resolved to obtain accurate results. At PSI a three-dimensional finite-element code, femaXX, has been developed to compute eigenvalues and eigenfields of accelerator cavities (*). While this code has been validated against experimentally measured cavity data, its usage has remained somewhat limited because it lacked an interface to industrial-grade modeling software. Such an interface allows advanced CAD geometries to be created and meshed in ANSYS, and the resulting design to be exported and analyzed in femaXX. We have therefore developed preprocessing software that imports meshes generated in ANSYS for a femaXX run, and a postprocessing step that generates a result file which can be imported back into ANSYS for further analysis there. Thereby, we have integrated femaXX into the ANSYS workflow such that detailed cavity designs leading to large meshes can be analyzed with femaXX, taking advantage of its capability to address very large eigenvalue problems. Additionally, we have added parallel visualization functionality to femaXX. We present a practical application of the pre- and postprocessing codes and compare the results against experimental values, where available, and against other numerical codes where no measured data exist for the model.
* P. Arbenz, M. Becka, R. Geus, U. L. Hetmaniuk, and T. Mengotti,
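The abstract does not specify the mesh exchange format, so the following is only a minimal sketch of what the preprocessing step could look like, assuming ANSYS exports plain whitespace-separated node and tetrahedron listings; the file names and column layouts are hypothetical and are not the actual femaXX or ANSYS formats.

```python
# Hypothetical preprocessing sketch: read a tetrahedral mesh exported from ANSYS
# (assumed here as plain node and element listings) into arrays that an
# eigenmode solver such as femaXX could consume. All formats are illustrative.
import numpy as np

def read_nodes(path):
    """Read 'node-id x y z' per line; return coordinates ordered by node id."""
    data = np.loadtxt(path)
    order = np.argsort(data[:, 0])           # sort rows by node id
    return data[order, 1:4]                   # (n_nodes, 3) coordinates

def read_tets(path):
    """Read 'element-id n1 n2 n3 n4' per line (first-order tetrahedra)."""
    data = np.loadtxt(path, dtype=int)
    return data[:, 1:5] - 1                   # 0-based connectivity, (n_tets, 4)

if __name__ == "__main__":
    nodes = read_nodes("nodes.lis")           # hypothetical ANSYS node listing
    tets = read_tets("elements.lis")          # hypothetical ANSYS element listing
    print(f"{len(nodes)} nodes, {len(tets)} tetrahedra")
```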
TUAPMP03 | Recent Progress on the MaryLie/IMPACT Beam Dynamics Code | 157 |
Funding: Supported in part by the US DOE, Office of Science, SciDAC program; Office of High Energy Physics; Office of Advanced Scientific Computing Research
MaryLie/IMPACT (ML/I) is a 3D parallel particle-in-cell code that combines the nonlinear optics capabilities of MaryLie 5.0 with the parallel particle-in-cell space-charge capability of IMPACT. In addition to combining the capabilities of these codes, ML/I has a number of powerful features, including a choice of Poisson solvers, a fifth-order rf cavity model, multiple reference particles for rf cavities, a library of soft-edge magnet models, representation of magnet systems in terms of coil stacks with possibly overlapping fields, and wakefield effects. The code allows for map production, map analysis, particle tracking, and 3D envelope tracking, all within a single, coherent user environment. ML/I has a front end that can read both MaryLie input and MAD lattice descriptions. The code can model beams with or without acceleration, and with or without space charge. Developed under a US DOE Scientific Discovery through Advanced Computing (SciDAC) project, ML/I is well suited to large-scale modeling; simulations have been performed with up to 100M macroparticles. ML/I uses the H5Part* library for parallel I/O. The code inherits the powerful fitting/optimizing capabilities of MaryLie, augmented for the new features of ML/I. The combination of soft-edge magnet models, high-order capability, and fitting/optimization makes it possible to simultaneously remove third-order aberrations while minimizing fifth-order aberrations in systems with overlapping, realistic magnetic fields. Several applications will be presented, including aberration correction in a magnetic lens for radiography, linac and beamline simulations of an e-cooling system for RHIC, design of a matching section across the transition of a superconducting linac, and space-charge tracking in the damping rings of the International Linear Collider.
*ICAP 2006 paper ID 1222, A. Adelmann et al., "H5Part: A Portable High Performance Parallel Data Interface for Electromagnetics Simulations"
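As a rough illustration of the map-based tracking described in the abstract (this is not the ML/I input language or API), the sketch below pushes a macroparticle bunch through a linear 6x6 transfer map with numpy; the drift map and beam parameters are arbitrary example values.

```python
# Generic illustration of map-based particle tracking: apply a linear 6x6
# transfer map (here a simple drift of length L in the transverse planes)
# to a bunch of macroparticles. Not the ML/I code or its input format.
import numpy as np

L = 2.0                                       # drift length in metres (example)
R = np.eye(6)
R[0, 1] = L                                   # x  += L * x'
R[2, 3] = L                                   # y  += L * y'

rng = np.random.default_rng(0)
bunch = rng.normal(scale=[1e-3, 1e-4, 1e-3, 1e-4, 1e-3, 1e-3],
                   size=(100_000, 6))         # (x, x', y, y', z, delta) per particle

bunch = bunch @ R.T                           # apply the map to all macroparticles
print("rms x after drift:", bunch[:, 0].std())
```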
WEA4IS04 | Comparison of h- and p- Refinement in a Finite Element Maxwell Time Domain Solver | 288 |
Two different frameworks are used and compared: FEMSTER [*] and Ng [**]. FEMSTER is a C++ class library of higher-order discrete differential forms. A discrete differential form is a finite element basis function with properties that mimic differential forms. The library consists of elements, basis functions, quadrature rules, and bilinear forms, i.e. the main building blocks for our FETD solver. Ng, on the other hand, is a software package that provides, among other things, basis functions for solving electromagnetic problems. The implemented higher-order shape functions for edge, face and inner element degrees of freedom were proposed by Schöberl et al. They form a local complete sequence, i.e. for each edge, face and element an individual polynomial order can be chosen. For the convergence studies, the electric field in a cubic and a cylindrical cavity was initialized randomly and integrated in time. The field was then analysed and compared with the analytic eigenfrequencies of the cavities. We show results of these convergence studies obtained by varying the mesh size and the polynomial order of the basis functions in the FETD solvers. [*] P. Castillo et al., Discrete differential forms: A novel methodology for robust computational electromagnetics, Technical Report UCRL-ID-151522, LLNL, 2003. [**] J. Schöberl, S. Zaglmayr, High order Nédélec elements with local complete sequence properties, COMPEL 24, 2, 374, 2005.
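The reference side of such a study is the closed-form spectrum of the cavity. The sketch below, with an illustrative 1 m cubic cavity, evaluates the standard rectangular-cavity formula f_mnp = (c/2)*sqrt((m/a)^2 + (n/b)^2 + (p/d)^2); these are the frequencies against which the FFT of a time-integrated field probe would be compared.

```python
# Analytic resonance frequencies of a perfectly conducting rectangular cavity,
# used as the reference for FETD convergence studies. Cavity size and mode
# index range are illustrative choices.
import numpy as np
from itertools import product

c0 = 299_792_458.0                            # speed of light in vacuum [m/s]

def cavity_frequencies(a, b, d, max_index=3):
    """f_mnp = c/2 * sqrt((m/a)^2 + (n/b)^2 + (p/d)^2), at most one index zero."""
    freqs = set()
    for m, n, p in product(range(max_index + 1), repeat=3):
        if (m, n, p).count(0) >= 2:           # no resonant mode if two indices vanish
            continue
        f = 0.5 * c0 * np.sqrt((m / a) ** 2 + (n / b) ** 2 + (p / d) ** 2)
        freqs.add(round(f, 3))
    return sorted(freqs)

# 1 m cubic cavity: lowest (degenerate) modes at roughly 212 MHz
print(cavity_frequencies(1.0, 1.0, 1.0)[:5])
```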
THM1MP01 | H5Part: A Portable High Performance Parallel Data Interface for Electromagnetics Simulations | 289 |
The very largest parallel particle simulations, for problems involving six-dimensional phase space and field data, generate vast quantities of data. It is desirable to store such enormous data sets efficiently and also to share data effortlessly with other programs and analysis tools. With H5Part we defined a very simple file schema built on top of HDF5 (Hierarchical Data Format version 5), as well as an API that simplifies reading and writing data in the HDF5 file format. Our API, which is oriented towards the needs of the particle physics and cosmology communities, provides support for three common data types: particles, structured meshes and unstructured meshes. HDF5 offers a self-describing, machine-independent binary file format that supports scalable parallel I/O performance for MPI codes on computer systems ranging from laptops to supercomputers. The following languages are supported: C, C++, Fortran and Python. We show the ease of use and the performance for reading and writing terabytes of data on several parallel platforms. H5Part is distributed as Open Source under a BSD-like license.
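As an illustration of the particle use case, here is a minimal sketch written with h5py against the simple H5Part file layout (one "Step#n" group per stored time step, one 1D dataset per particle quantity). The dataset names, file name and the omission of MPI-parallel details are simplifications of this sketch; the real H5Part API wraps this layout behind calls such as H5PartOpenFile and H5PartWriteDataFloat64.

```python
# Write and read an H5Part-style file with plain h5py: one group per time step,
# one 1D dataset per particle quantity. Serial sketch; H5Part itself also
# handles collective MPI-IO.
import numpy as np
import h5py

n = 1_000_000
rng = np.random.default_rng(1)

with h5py.File("particles.h5part", "w") as f:
    for step in range(3):                     # one group per stored time step
        g = f.create_group(f"Step#{step}")
        for name in ("x", "y", "z", "px", "py", "pz"):
            g.create_dataset(name, data=rng.normal(size=n))

# Read back a single quantity from one step:
with h5py.File("particles.h5part", "r") as f:
    x0 = f["Step#0/x"][...]
    print(x0.shape)
```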