Paper | Title | Other Keywords | Page
---|---|---|---
MOM1MP02 | The FPP and PTC Libraries | closed-orbit, survey, collective-effects, simulation | 17

In this short article we summarize the FPP package and the tracking code PTC, which is crucially based on FPP. PTC is notable for its use of beam structures that fully account for the three-dimensional structure of a lattice and its potential topological complexities, such as those found in colliders and recirculators.

MOM1MP03 | Resonance Driving Term Experiments: An Overview | resonance, betatron, sextupole, multipole | 22

The frequency analysis of the betatron motion is a valuable tool for characterizing the linear and non-linear motion of a particle beam in a storage ring. In recent years, several experiments have shown that resonance driving terms can be successfully measured from the spectral decomposition of turn-by-turn BPM data. The information on the driving terms can be used to correct unwanted resonances, to localize strong non-linear perturbations, and to construct the non-linear model of the real accelerator. In this paper we briefly introduce the theory and the computational tools, and we review the resonance driving term experiments performed on different circular machines.
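
As a minimal illustration of the spectral decomposition described above, the sketch below extracts the main betatron tune and the strongest secondary spectral line from synthetic turn-by-turn BPM data (plain numpy; the tune value, line amplitude, and windowing are invented for illustration). In a real measurement, the amplitude and phase of such secondary lines feed the resonance-driving-term analysis.

```python
import numpy as np

# Synthetic turn-by-turn BPM signal: main betatron line at tune Q plus a
# weak sextupole-like line at 2Q (all numbers invented for illustration).
N, Q = 1024, 0.22
n = np.arange(N)
x = np.cos(2 * np.pi * Q * n) + 0.02 * np.cos(2 * np.pi * 2 * Q * n + 0.4)

# Spectral decomposition: windowed FFT, tunes in [0, 0.5]
spec = np.abs(np.fft.rfft(x * np.hanning(N)))
freq = np.fft.rfftfreq(N)

main = freq[np.argmax(spec)]               # betatron tune estimate
masked = spec.copy()
masked[np.abs(freq - main) < 0.01] = 0.0   # mask the main peak
secondary = freq[np.argmax(masked)]        # e.g. the 2Q line driven by sextupoles

print(f"tune ~ {main:.4f}, secondary line at ~ {secondary:.4f}")
```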

TUAPMP01 | Rigorous Global Optimization for Parameter Estimates and Long-Term Stability Bounds | storage-ring | 152

Funding: DOE, NSF

The code COSY INFINITY supports rigorous computations with numerical verification based on Taylor models, a tool developed by us that extends differential algebraic methods to also determine rigorous bounds on the Taylor remainder. Such verified computation techniques can be utilized for global optimization tasks, resulting in a guarantee that the true optimum over a given domain is found. The Taylor model method has a high-order scaling property that suppresses the over-estimation from which reliable computational methods commonly suffer. We have applied the method to typical optimization tasks in accelerator physics, such as lattice design parameter optimization and Lyapunov-function-based long-term stability estimates for storage rings. The implementation of Taylor models in COSY INFINITY inherits all the advantageous features of the code's implementation of differential algebras, resulting in very efficient execution. COSY-GO, the Taylor-model-based rigorous global optimizer of COSY INFINITY, can run either on a single processor or in multi-processor mode based on MPI. We present various results of optimization problems run on more than 2,000 processors at NERSC, operated by the US Department of Energy. Specifically, we discuss rigorous long-term stability estimates of the Tevatron, as well as high-dimensional rigorous design optimization of RIA fragment separators.
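
The core idea, verified enclosures combined with branch-and-bound pruning, can be sketched in a few lines. The toy below uses naive interval arithmetic rather than Taylor models (which COSY-GO uses precisely to suppress the over-estimation this naive version suffers from), and the 1D objective function is invented; it illustrates the principle, not COSY INFINITY's implementation.

```python
import heapq

# Naive interval arithmetic: an interval is a (lo, hi) pair.
def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
def isq(a):      # tight enclosure of x**2, handling intervals containing 0
    lo, hi = a
    if lo <= 0.0 <= hi:
        return (0.0, max(lo * lo, hi * hi))
    return (min(lo * lo, hi * hi), max(lo * lo, hi * hi))
def iscale(a, c):
    p, q = a[0] * c, a[1] * c
    return (min(p, q), max(p, q))

def f(x):             # toy objective (invented): f(x) = x^4 - 3x^2 + x
    return x**4 - 3 * x**2 + x

def f_enclosure(x):   # the same f evaluated on an interval
    x2 = isq(x)
    return iadd(iadd(isq(x2), iscale(x2, -3.0)), x)

def verified_min(lo, hi, tol=1e-6):
    """Branch and bound: a rigorous enclosure of min f over [lo, hi].
    Boxes whose guaranteed lower bound exceeds the best sampled value
    cannot contain the optimum and are discarded."""
    upper = min(f(lo), f(hi))                  # any sample is an upper bound
    heap = [(f_enclosure((lo, hi))[0], lo, hi)]
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb > upper:
            continue
        if b - a < tol:
            return lb, upper                   # min f lies inside [lb, upper]
        m = 0.5 * (a + b)
        upper = min(upper, f(m))
        for box in ((a, m), (m, b)):
            l = f_enclosure(box)[0]
            if l <= upper:
                heapq.heappush(heap, (l, box[0], box[1]))
    return upper, upper

print(verified_min(-2.0, 2.0))
```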

TUAPMP02 | CHEF: A Framework for Accelerator Optics and Simulation | optics, simulation, quadrupole, site | 153

Funding: This manuscript has been authored by Universities Research Association, Inc. under contract No. DE-AC02-76CH03000 with the U. S. Department of Energy.

We describe CHEF, an application based on an extensive hierarchy of C++ class libraries. The objectives are (1) to provide a convenient, effective application for performing standard beam optics calculations and (2) to seamlessly support the development of both linear and non-linear simulations, for applications ranging from a simple beamline to an integrated system involving multiple machines. Sample applications are discussed.
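
As an example of the kind of "standard beam optics calculation" such a framework exposes, the sketch below recovers the Courant-Snyder parameters and tune from a 2x2 one-turn matrix. It uses only the textbook formulas, not CHEF's classes, and the sample matrix is invented.

```python
import numpy as np

def twiss_from_one_turn(M):
    """Courant-Snyder parameters from a 2x2 one-turn matrix M
    (textbook formulas; assumes stable, uncoupled linear motion)."""
    cos_mu = 0.5 * (M[0, 0] + M[1, 1])
    assert abs(cos_mu) < 1.0, "unstable one-turn matrix"
    sin_mu = np.sign(M[0, 1]) * np.sqrt(1.0 - cos_mu**2)
    beta = M[0, 1] / sin_mu
    alpha = (M[0, 0] - M[1, 1]) / (2.0 * sin_mu)
    tune = (np.arctan2(sin_mu, cos_mu) / (2.0 * np.pi)) % 1.0
    return beta, alpha, tune

# invented, symplectic (det = 1) example matrix
M = np.array([[0.80, 5.00],
              [-0.05, 0.9375]])
print(twiss_from_one_turn(M))
```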

TUAPMP03 | Recent Progress on the MaryLie/IMPACT Beam Dynamics Code | space-charge, optics, acceleration, simulation | 157

Funding: Supported in part by the US DOE, Office of Science, SciDAC program; Office of High Energy Physics; Office of Advanced Scientific Computing Research

MaryLie/IMPACT (ML/I) is a 3D parallel Particle-In-Cell code that combines the nonlinear optics capabilities of MaryLie 5.0 with the parallel particle-in-cell space-charge capability of IMPACT. In addition to combining the capabilities of these codes, ML/I has a number of powerful features, including a choice of Poisson solvers, a fifth-order rf cavity model, multiple reference particles for rf cavities, a library of soft-edge magnet models, representation of magnet systems in terms of coil stacks with possibly overlapping fields, and wakefield effects. The code allows for map production, map analysis, particle tracking, and 3D envelope tracking, all within a single, coherent user environment. ML/I has a front end that can read both MaryLie input and MAD lattice descriptions. The code can model beams with or without acceleration, and with or without space charge. Developed under a US DOE Scientific Discovery through Advanced Computing (SciDAC) project, ML/I is well suited to large-scale modeling; simulations have been performed with up to 100M macroparticles. ML/I uses the H5Part* library for parallel I/O. The code inherits the powerful fitting/optimizing capabilities of MaryLie, augmented for the new features of ML/I. The combination of soft-edge magnet models, high-order capability, and fitting/optimization makes it possible to remove third-order aberrations while simultaneously minimizing fifth-order aberrations in systems with realistic, overlapping magnetic fields. Several applications will be presented, including aberration correction in a magnetic lens for radiography, linac and beamline simulations of an e-cooling system for RHIC, design of a matching section across the transition of a superconducting linac, and space-charge tracking in the damping rings of the International Linear Collider.
*ICAP 2006 paper ID 1222, A. Adelmann et al., "H5Part: A Portable High Performance Parallel Data Interface for Electromagnetics Simulations"
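
The map-plus-space-charge structure of such codes can be caricatured in a few lines: propagate through a slice with the external-optics map, then apply a collective kick computed from the current distribution. The sketch below is a deliberately crude 1D version (a linear kick from a uniform-beam model, with invented parameter values), illustrating the split-operator idea rather than ML/I's algorithms.

```python
import numpy as np

rng = np.random.default_rng(1)
x  = rng.normal(0.0, 1e-3, 100_000)   # transverse position [m]
xp = rng.normal(0.0, 1e-4, 100_000)   # transverse angle [rad]
K, L = 1e-6, 0.1                      # perveance-like strength, slice length (invented)

def drift(x, xp, L):
    # external-optics map for the slice (here: an exact paraxial drift)
    return x + L * xp, xp

def sc_kick(x, xp, K, L):
    # collective kick: linear defocusing of a uniform round beam,
    # using 2*rms as an effective edge radius
    rb = 2.0 * x.std()
    return x, xp + K * L * x / rb**2

# split-operator step: half map, full space-charge kick, half map
for _ in range(100):
    x, xp = drift(x, xp, 0.5 * L)
    x, xp = sc_kick(x, xp, K, L)
    x, xp = drift(x, xp, 0.5 * L)

print("rms beam size after 10 m:", x.std())
```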

WEPPP01 | Recent Developments in IMPACT and Application to Future Light Sources | simulation, linac, electron, space-charge | 182

The Integrated Map and Particle Accelerator Tracking (IMPACT) code suite was originally developed to model beam dynamics in ion linear accelerators. It has been greatly enhanced and now includes a linac design code, a 3D rms envelope code, and two parallel particle-in-cell (PIC) codes: IMPACT-T, a time-based code, and IMPACT-Z, a z-coordinate-based code. Recently, the code suite has been increasingly used in simulations of high-brightness electron beams for future light sources. These simulations, performed using up to 100 million macroparticles, include effects related to nonlinear magnetic optics, rf structure wakefields, 3D self-consistent space charge, and coherent synchrotron radiation (at present a 1D model). We illustrate its application with a simulation of the microbunching instability, and conclude with plans for further development pertinent to future light sources.
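
The first step of any PIC space-charge solve, depositing macroparticle charge onto a grid, is easy to sketch. Below is a 1D cloud-in-cell (linear weighting) deposit in plain numpy, with invented grid parameters, standing in for the 3D solvers such codes actually use.

```python
import numpy as np

def deposit_cic(x, q, lo, dx, n_nodes):
    """1D cloud-in-cell deposit: each macroparticle of charge q spreads
    linearly over its two neighbouring grid nodes."""
    rho = np.zeros(n_nodes)
    s = (x - lo) / dx                # fractional grid coordinate
    i = np.floor(s).astype(int)      # left node index
    w = s - i                        # weight going to the right node
    np.add.at(rho, i, q * (1.0 - w))
    np.add.at(rho, i + 1, q * w)
    return rho / dx                  # line charge density

x = np.random.default_rng(0).normal(0.0, 1.0, 1_000_000)
x = np.clip(x, -4.999, 4.999)        # keep rare outliers on the grid
rho = deposit_cic(x, q=1.0 / x.size, lo=-5.0, dx=10.0 / 128, n_nodes=129)
print(rho.sum() * (10.0 / 128))      # total charge is conserved (= 1)
```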

WEPPP04 | The FPP Documentation | site, resonance, linac, beam-transport | 191

FPP is the FORTRAN90 library which overloads Berz's DA-package and Forest's Lielib. It is also the library which implements a Taylor polymorphic type. This library is essential to the code PTC, the Polymorphic Tracking Code. Knowledge of the tools of FPP permits the computation of perturbative quantities in any code which uses FPP, such as PTC/MAD-XP. We present here the available HTML documentation.
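
The idea behind a "Taylor polymorphic type" is that ordinary tracking code, written once, can run either on plain numbers or on truncated Taylor series, yielding maps and derivatives for free. The sketch below is a first-order caricature in Python via operator overloading (FPP does this in FORTRAN90 to arbitrary order); all names are illustrative.

```python
class Poly:
    """A scalar that is polymorphic between 'real' and 'first-order
    Taylor series': a value plus derivatives with respect to the
    chosen independent variables."""
    def __init__(self, val, grad):
        self.val, self.grad = val, list(grad)

    @staticmethod
    def var(val, ivar, nvars):
        g = [0.0] * nvars
        g[ivar] = 1.0
        return Poly(val, g)

    def _lift(self, o):
        return o if isinstance(o, Poly) else Poly(o, [0.0] * len(self.grad))

    def __add__(self, o):
        o = self._lift(o)
        return Poly(self.val + o.val, [a + b for a, b in zip(self.grad, o.grad)])
    __radd__ = __add__

    def __sub__(self, o):
        o = self._lift(o)
        return Poly(self.val - o.val, [a - b for a, b in zip(self.grad, o.grad)])

    def __mul__(self, o):
        o = self._lift(o)
        return Poly(self.val * o.val,
                    [self.val * b + o.val * a for a, b in zip(self.grad, o.grad)])
    __rmul__ = __mul__

def thin_quad(x, px, k):
    # ordinary tracking code: works on floats and on Poly alike
    return x, px - k * x

x0, px0 = Poly.var(1e-3, 0, 2), Poly.var(0.0, 1, 2)
x1, px1 = thin_quad(x0, px0, k=0.5)
print(px1.val, px1.grad)   # kick value and the map row (dpx/dx0, dpx/dpx0)
```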

WEPPP12 | New Developments of MAD-X Using PTC | closed-orbit, controls, linac, quadrupole | 209

For the last few years the MAD-X program has made use of the Polymorphic Tracking Code (PTC) to perform calculations related to beam dynamics in the non-linear regime. This solution has provided a powerful tool with a friendly and comfortable user interface. Its apparent success has generated demand for further extensions. We present the newest features, developed in particular to fulfill the needs of the Compact Linear Collider (CLIC) studies. A traveling-wave cavity element has been implemented, enabling simulations of accelerating lines. An important new feature is the extension of the matching module to allow fitting of non-linear parameters to any order. Moreover, calculations can be performed with parameter dependence defined in the MAD-X input. In addition, the user can access the PTC routines for placing a magnet with arbitrary position and orientation, which facilitates the design of non-standard lattices. Lastly, three-dimensional visualization of lattices, tracked rays in global coordinates, and beam envelopes is now available.
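
Placing a magnet with arbitrary position and orientation amounts to composing rotations and a translation into the global frame. The sketch below shows the bare geometry in numpy; the angle conventions are an assumption for illustration, as MAD-X and PTC define their own conventions and routine names.

```python
import numpy as np

def element_frame(yaw, pitch, roll, origin):
    """Global orientation matrix and origin of an element placed with
    arbitrary angles (rotations about y, x, and the beam axis z;
    conventions assumed for illustration)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cx, sx = np.cos(pitch), np.sin(pitch)
    cz, sz = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    return Ry @ Rx @ Rz, np.asarray(origin, float)

def to_global(R, origin, r_local):
    # a point in the element's local frame, expressed in global coordinates
    return origin + R @ np.asarray(r_local, float)

R, o = element_frame(yaw=0.01, pitch=0.0, roll=0.002, origin=[0.0, 0.0, 12.5])
print(to_global(R, o, [0.0, 0.0, 1.0]))   # exit face of a 1 m long element
```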

THM2IS01 | Accelerator Description Formats | simulation, controls, background, quadrupole | 297

Being an integral part of accelerator software, accelerator description aims to provide an external representation of an accelerator's internal model and associated effects. As a result, the choice of description formats is driven by the scope of accelerator applications and is usually implemented as a tradeoff between various requirements: completeness and extensibility, user and developer orientation, and others. Moreover, an optimal solution does not remain static but instead evolves with new project tasks and computer technologies. This talk presents an overview of several approaches, the evolution of accelerator description formats, and a comparison with similar efforts in the neighboring high-energy physics domain. Following the UAL Accelerator-Algorithm-Probe pattern, we conclude with the next logical specification, the Accelerator Propagator Description Format (APDF), which provides a flexible approach for associating physical elements with the evolution algorithms most appropriate for the immediate tasks.
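
The Accelerator-Algorithm-Probe pattern separates what an element is from how a given probe is propagated through it; an APDF-style file then merely selects the bindings. The registry sketch below illustrates the pattern in Python; all class and function names are invented and are not the UAL or APDF API.

```python
# Registry binding (element type, probe type) pairs to propagation
# algorithms, so the same lattice can be walked by different algorithm
# sets without changing the element descriptions.
PROPAGATORS = {}

def register(element_type, probe_type):
    def wrap(fn):
        PROPAGATORS[(element_type, probe_type)] = fn
        return fn
    return wrap

class Drift:
    def __init__(self, L):
        self.L = L

class Particle:
    def __init__(self, x, px):
        self.x, self.px = x, px

@register(Drift, Particle)
def drift_particle(elem, p):
    p.x += elem.L * p.px          # exact paraxial drift map

def propagate(lattice, probe):
    for elem in lattice:
        PROPAGATORS[(type(elem), type(probe))](elem, probe)
    return probe

p = propagate([Drift(1.0), Drift(2.0)], Particle(1e-3, 1e-4))
print(p.x)
```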

THM2IS02 | The Universal Accelerator Parser | linac, controls, quadrupole, sextupole | 303

The Universal Accelerator Parser (UAP) is a library for reading and translating between lattice input formats. The UAP was primarily implemented to allow programs to parse Accelerator Markup Language (AML) formatted files [D. Sagan et al., "The Accelerator Markup Language and the Universal Accelerator Parser", Proc. 2006 Europ. Part. Acc. Conf.]. Currently, the UAP also supports the MAD lattice format. The UAP provides an extensible framework for reading and translating between different lattice formats. Included are routines for expression evaluation and beam line expansion. The use of a common library among accelerator codes will greatly improve interoperability between different lattice file formats and ease the development and maintenance needed to support these formats in programs. The UAP is written in C++ and compiles on most Unix, Linux, and Windows platforms. A Java port is maintained for platform independence. Software developers can easily integrate the library into existing code by using the provided hooks.
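
Beam line expansion, one of the services mentioned, is the recursive flattening of nested line definitions (as in MAD's LINE statement) into a plain element sequence. A minimal sketch with an invented lattice, unrelated to the UAP's actual interfaces:

```python
# nested line definitions, as a parser might hold them after reading a lattice
lines = {
    "cell": ["qf", "d1", "qd", "d1"],
    "arc":  ["cell", "cell", "cell"],
    "ring": ["arc", "rfcav", "arc"],
}

def expand(name, lines):
    """Recursively flatten a line definition into a sequence of elements."""
    if name not in lines:            # a leaf element, e.g. a quadrupole
        return [name]
    out = []
    for item in lines[name]:
        out.extend(expand(item, lines))
    return out

print(expand("ring", lines))
# ['qf', 'd1', 'qd', 'd1', 'qf', ..., 'rfcav', ..., 'd1']
```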

THMPMP02 | Adaptive 2-D Vlasov Simulation of Particle Beams | simulation, heavy-ion, emittance, focusing | 310

In order to address the noise problems occurring in Particle-In-Cell (PIC) simulations of intense particle beams, we have been investigating numerical methods based on the solution of the Vlasov equation on a phase-space grid. However, especially for high-intensity beam simulations in periodic or alternating-gradient focusing fields, where particles are localized in phase space, adaptive strategies are required to obtain computationally efficient codes based on this method. To this end, we have been developing fully adaptive techniques based on interpolating wavelets, in which the computational grid is changed at each time step according to the variations of the particle distribution function. Up to now we only had an adaptive axisymmetric code. In this talk we present a new adaptive code solving the paraxial Vlasov equation on the full 4D transverse phase space, which can handle genuinely two-dimensional problems such as alternating-gradient focusing. In order to implement this code efficiently, we introduce a hierarchical sparse data structure, which enabled us to considerably reduce not only the computation time but also the required memory. All computations and diagnostics are performed on the sparse data structure, so that the complexity becomes proportional to the number of points needed to describe the particle distribution function.
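
The grid-based alternative to PIC rests on the semi-Lagrangian step: follow each grid point's characteristic backwards in time and interpolate the distribution function at its foot. The 1D free-streaming sketch below shows that building block on a dense uniform grid; the paper's contribution is performing it on a wavelet-adapted sparse grid in 4D phase space. Parameters are invented.

```python
import numpy as np

nx, v, dt = 256, 1.0, 1e-3
x = np.linspace(0.0, 1.0, nx, endpoint=False)
f = np.exp(-((x - 0.5) / 0.05) ** 2)       # initial distribution function

def semi_lagrangian_step(f, v, dt):
    # foot of the characteristic through each grid point (periodic domain)
    feet = (x - v * dt) % 1.0
    # interpolate f at the feet: these are the new grid-point values
    return np.interp(feet, x, f, period=1.0)

for _ in range(500):
    f = semi_lagrangian_step(f, v, dt)

# mass is conserved up to interpolation error
print(f.sum() / nx)
```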