Paper | Title | Other Keywords | Page
---|---|---|---
MOMPMP01 | Computational Beam Dynamics for SNS Commissioning and Operation | proton, space-charge, electron, linac | 1
Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U. S. Department of Energy.
The computational approach is providing essential guidance and analysis for the commissioning and operation of SNS. Computational models are becoming sufficiently realistic that it is now possible to study detailed beam dynamics issues quantitatively. Increasingly, we are seeing that the biggest challenge in performing successful analyses is that of knowing and describing the machine and beam state accurately. Even so, successful benchmarks with both theoretical predictions and experimental results are leading to increased confidence in the capability of these models. With this confidence, computer codes are being employed in a predictive manner to guide the machine operations. We will illustrate these points with various examples taken from the SNS linac and ring.
MOMPMP02 | Computational Needs for the ILC | luminosity, emittance, damping, feedback | 7
Funding: This work is supported by the Commission of the European Communities under the 6th Framework Programme, contract number RIDS-011899.
The ILC requires detailed studies of the beam transport and of individual components of the transport system. The main challenges are the generation and preservation of the low emittance beams, the protection of the machine from excessive beam loss and the provision of good experimental conditions. The studies of these effects lead to specifications for the different accelerator components and hence can significantly impact the cost.
MOM1MP01 | Massive Tracking on Heterogeneous Platforms | dynamic-aperture, controls, collider, hadron | 13
The LHC@home project uses public resource computing to simulate circulating protons in the future Large Hadron Collider (LHC). As the physics simulated may become chaotic, checking the integrity of the computation distributed over a heterogeneous network requires perfectly identical (or homogeneous) floating-point behaviour, regardless of the model of computer used. This article defines an acceptable homogeneous behaviour based on existing standards, and explains how to obtain it. This involves processor, operating system, programming language and compiler issues. In the LHC@home project, imposing this homogeneous behaviour entailed less than 10% performance degradation per processor, and almost doubled the number of processors which could be usefully exploited.
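As an illustration of the integrity checking such a distributed setup needs (a minimal sketch of my own, not code from the paper; `track` is a hypothetical stand-in for the real tracking kernel, and IEEE-754 double semantics are assumed), two hosts can compare a bit-exact fingerprint of their final phase-space coordinates:

```python
import hashlib
import struct
from math import sin

def track(x, xp, k=1.2, turns=1000):
    """Toy nonlinear one-turn map, standing in for the real tracking kernel."""
    for _ in range(turns):
        xp += k * sin(x)   # kick
        x += xp            # drift/rotation between kicks
    return x, xp

def state_digest(*coords):
    """Bit-exact fingerprint: two hosts agree only if every floating-point
    bit of the result agrees, which is the homogeneity the paper demands."""
    raw = b"".join(struct.pack("<d", c) for c in coords)
    return hashlib.sha256(raw).hexdigest()

print(state_digest(*track(0.1, 0.0)))
```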
MOM1MP02 | The FPP and PTC Libraries | lattice, closed-orbit, survey, collective-effects | 17
In this short article we summarize the FPP package and the tracking code PTC, which is crucially based on FPP. PTC is remarkable for its use of beam structures which take fully into account the three-dimensional structure of a lattice and its potential topological complexities, such as those found in colliders and recirculators.
MOM2IS02 | Large Scale Parallel Wake Field Computations for 3D-Accelerator Structures with the PBCI Code | vacuum, electromagnetic-fields, diagnostics, electron | 29
Funding: This work was partially funded by EUROTeV (RIDS-011899), EUROFEL (RIDS-011935), DFG (1239/22-3) and DESY Hamburg.
The X-FEL project and the ILC require a high quality beam with ultra-short electron bunches. In order to predict the beam quality in terms of both single-bunch energy spread and emittance, an accurate estimation of the short range wake fields in the TESLA cryomodules, collimators and other geometrically complex accelerator components is necessary. We have presented earlier wake field computations for short bunches in rotationally symmetric components with the code ECHO. Most of the wake field effects in the accelerator, however, are due to geometrical discontinuities appearing in fully three dimensional structures. For the purpose of simulating such structures, we have developed the Parallel Beam Cavity Interaction (PBCI) code. The new code is based on the full field solution of the Maxwell equations in the time domain, for ultra-relativistic current sources. Using a specialized directional-splitting technique, PBCI produces particularly accurate results in wake field computations, due to the dispersion-free integration of the discrete equations in the direction of bunch motion. One of the major challenges to deal with when simulating fully three dimensional accelerator components is the huge computational effort needed for resolving both the geometrical details and the bunch extension on the computational grid. For this reason, PBCI implements massive parallelization in a distributed memory environment, based on a flexible domain decomposition method. In addition, PBCI uses the moving window technique, which is particularly well suited for wake potential computations in very long structures. As a particular example of such a structure, simulation results for a complete module of TESLA cavities with eight cells each, for a µm-scale bunch, will be given.
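The moving-window idea mentioned above can be caricatured in a few lines (a toy sketch of my own, not PBCI code): only a slab of cells around the ultra-relativistic bunch is kept in memory, and the window is shifted one cell per step, since no field left behind can catch up with a bunch travelling at essentially c.

```python
import numpy as np

window = np.zeros((64, 32, 32))            # (z, x, y) field slab around the bunch
for step in range(1000):
    # ... field update and wake accumulation inside the window go here ...
    window = np.roll(window, -1, axis=0)   # advance the window by one cell in z
    window[-1] = 0.0                       # fresh, field-free slice ahead of the bunch
```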
MOM2IS03 | Low-Dispersion Wake Field Calculation Tools | electromagnetic-fields, linac, vacuum, linear-collider | 35
Funding: This work was partially funded by EUROTeV (RIDS-011899), DFG (1239/22-3) and DESY Hamburg.
Extremely short bunches are used in future linear colliders, such as the International Linear Collider (ILC). Accurate and computationally efficient numerical methods are needed to resolve the bunch and to accurately model the geometry. In very long accelerator structures, computational efficiency necessitates the use of a moving window in order to save memory. On the other hand, parallelization is desirable to decrease the simulation times. Explicit schemes are usually more convenient to parallelize than implicit schemes, since the implementation of a separate, potentially time-consuming linear solver can thus be avoided. Explicit numerical methods without numerical dispersion in the direction of beam propagation are presented for fully 3D wake field simulations and for the special case of axially symmetric structures. The introduced schemes are validated by comparing with analytical results and by providing numerical examples for practical accelerator structures. Conformal techniques to enhance the convergence rate are presented and the advantages of the conformal schemes are verified by numerical examples.
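The key property such schemes exploit can already be seen in one dimension (an illustrative sketch of my own, not one of the paper's 3D schemes): run the standard leapfrog update exactly at the Courant limit c·Δt = Δz, and the numerical dispersion along the propagation direction vanishes, so a pulse is advected one cell per step without distortion.

```python
import numpy as np

nz, steps = 400, 200
e = np.zeros(nz)
e[50:100] = np.exp(-0.5 * ((np.arange(50) - 25) / 6.0) ** 2)  # Gaussian pulse
h = np.roll(e, -1)      # staggered companion field: purely right-moving wave
for _ in range(steps):  # normalized units, Courant number exactly 1
    h[:-1] -= e[1:] - e[:-1]
    e[1:] -= h[1:] - h[:-1]
print(np.argmax(e))     # 75 + 200 = 275: one cell per step, shape intact
```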
MOA1MP01 | EM Field Simulation Based on Volume Discretization: Finite Integration and Related Approaches | electromagnetic-fields | 41
Today's design and analysis demands for accelerator components call for reliable, accurate, and flexible simulation tools for electromagnetic fields. Among the most widespread approaches is the Finite Integration Technique (FIT), which has been used in electro- and magnetostatics, eddy-current problems, wave-propagation problems, as well as in PIC codes. FIT belongs to the class of local approaches in the sense that the discrete equations are derived cell by cell by transforming the continuous Maxwell equations onto the computational grid. Other representatives of local approaches are Finite Differences (FD), Finite Volumes (FV), Finite Elements (FE), and the Cell Method (CM). All these approaches are based on a volume discretization defined by the three-dimensional mesh. Whereas the close relations between FIT and FD have been known since the beginnings of both approaches in the seventies, recent research has revealed that, under certain circumstances, FIT and FE also have many important properties in common. In light of the forthcoming 30-year anniversary of the first FIT publication in 1977, this contribution reviews these properties as well as some still existing important differences and their consequences for the use of the methods in practice. It is shown that the differences between the main representatives of the so-called "geometrical methods" (FIT, FD, FE, CM) are surprisingly small. Some of the recent research on this topic is presented, which has led to new theoretical insights in computational electromagnetics. Finally, the possible impact of these results on the derivation of new simulation methods is discussed.
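One of the shared "geometrical" properties alluded to above can be demonstrated in a few lines (my own illustration, not from the paper): with the discrete gradient G (node-to-edge incidence) and discrete curl C (edge-to-cell incidence) built on a small 2D grid, the identity C·G = 0 holds exactly, by topology alone, before any metric or material information enters.

```python
import numpy as np

def grad_and_curl(nx, ny):
    """Incidence matrices on an nx-by-ny node grid: G maps node values to
    edge differences (discrete grad); C sums signed edge values around
    each cell (discrete curl, z-component)."""
    def nid(i, j): return i * ny + j
    ex = [(nid(i, j), nid(i + 1, j)) for i in range(nx - 1) for j in range(ny)]
    ey = [(nid(i, j), nid(i, j + 1)) for i in range(nx) for j in range(ny - 1)]
    edges = ex + ey
    G = np.zeros((len(edges), nx * ny))
    for k, (a, b) in enumerate(edges):
        G[k, a], G[k, b] = -1.0, 1.0
    def exid(i, j): return i * ny + j
    def eyid(i, j): return len(ex) + i * (ny - 1) + j
    cells = [(i, j) for i in range(nx - 1) for j in range(ny - 1)]
    C = np.zeros((len(cells), len(edges)))
    for k, (i, j) in enumerate(cells):   # counter-clockwise cell boundary
        C[k, exid(i, j)] += 1.0          # bottom edge, forward
        C[k, eyid(i + 1, j)] += 1.0      # right edge, forward
        C[k, exid(i, j + 1)] -= 1.0      # top edge, backward
        C[k, eyid(i, j)] -= 1.0          # left edge, backward
    return G, C

G, C = grad_and_curl(4, 3)
print(np.abs(C @ G).max())               # exactly 0.0: discrete curl(grad) = 0
```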
MOA2IS01 | The ORBIT Simulation Code: Benchmarking and Applications | electron, proton, space-charge, impedance | 53
Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U. S. Department of Energy.
The contents, structure, implementation, benchmarking, and applications of ORBIT as an accelerator simulation code are described. Physics approaches, algorithms, and limitations for space charge, impedances, and electron cloud effects are discussed. The ORBIT code is a parallel computer code, and the scalabilities of the implementations of parallel algorithms for different physics modules are shown. ORBIT has a long history of benchmarking with analytical exactly solvable problems and experimental data. The results of this benchmarking and the current usage of ORBIT are presented.
MOA2IS02 | Simulations of Single Bunch Collective Effects Using HEADTAIL | electron, impedance, space-charge, single-bunch | 59
The HEADTAIL code is a very versatile tool that can be used for simulations of electron cloud induced instabilities as well as for Transverse Mode Coupling Instability and space charge studies. The effect of electron cloud and/or a conventional impedance (resonator or resistive wall) on a single bunch is modeled using a wake field approach. The code naturally allows either for dedicated studies of one single effect or for more complex studies of the interplay between different effects. Sample results from electron cloud studies (coherent and incoherent effects) and TMCI studies (e.g., for the PS and SPS) will be discussed in detail and compared, where possible, with results from other codes having similar features and/or with existing machine data.
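The wake-field approach mentioned in the abstract can be caricatured in a few lines (a toy of my own with a made-up wake function; HEADTAIL's models are far more complete): the bunch is cut into longitudinal slices, and each slice receives a dipolar kick driven by the transverse offsets of the slices that passed through the structure before it.

```python
import numpy as np

def wake(dz, amp=1.0, k=2.0, alpha=0.5):
    """Toy broadband-resonator-like transverse wake for dz >= 0 (arbitrary units)."""
    return amp * np.exp(-alpha * dz) * np.sin(k * dz)

def wake_kicks(z, mean_x, charge):
    """Transverse kick on each slice from all slices ahead of it."""
    kicks = np.zeros_like(mean_x)
    for i, zi in enumerate(z):
        for j, zj in enumerate(z):
            if zj > zi:                    # slice j traverses the structure first
                kicks[i] += charge[j] * mean_x[j] * wake(zj - zi)
    return kicks

z = np.linspace(-1.0, 1.0, 20)             # slice centres along the bunch
x = 1e-3 * np.random.default_rng(0).standard_normal(20)   # slice offsets
q = np.exp(-z**2 / 0.18); q /= q.sum()     # Gaussian line density
print(wake_kicks(z, x, q))
```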
MOA2IS03 | Towards the Description of Long Term Self Consistent Effects in Space Charge Induced Resonance Trapping | beam-losses, space-charge, resonance, emittance | 65
In recent studies the effect of space charge induced resonance trapping has been shown to be relevant for the long term storage of bunches. There, the mechanisms of emittance growth and beam loss have been studied for a frozen bunch particle distribution. However, when beam loss or halo density becomes large enough, this approximation has to be reconsidered. We present here a first study of the effect of self-consistency in frozen models as an intermediate step towards full 2.5D and 3D simulations.
MOAPMP01 | Coupled Transient Thermal and Electromagnetic Finite Element Simulation of Quench in Superconducting Magnets | superconducting-magnet, electromagnetic-fields, target, controls | 70
Resistive normal zones may propagate through low-temperature superconducting coils. The rise in temperature in the windings and the internal voltages developed during this quench process are a critical issue for magnet safety; in addition, the eddy currents induced in support structures during a quench may result in large Lorentz forces that can cause damage. Approximate adiabatic models have been used to achieve good results for the time profile of the current decay*. More accurate methods based on finite element simulations have also been used to obtain temperature and voltage distributions**. This paper describes transient, closely coupled thermal, electromagnetic finite element and circuit simulations developed to model quenching magnets. The program was designed to be efficient for this calculation. It uses nodal finite elements for the transient thermal simulation and edge elements for the electromagnetic simulation. The two simulations can be performed on different symmetry groups so that the model size can be minimized. Circuit models are coupled to the electromagnetic simulator either using filamentary edge loops or with a full volume mesh in the coils. Accurately meshing the coils increases the model size, but it is essential if accurate fields and time derivatives of the field are required. The main source of heat in the coils during a quench is resistive loss in the normal zone. However, rate-dependent losses caused by the changing magnetic field may cause heating and therefore trigger a quench in other coils. Having closely coupled thermal and electromagnetic simulations makes it easy to include these effects and hence greatly improves the reliability of the simulation. Calculated and measured results for a four-coil superconducting polarized target magnet will be presented. In this system the quench spreads to another coil as a result of rate-dependent losses; the calculated results change dramatically if these losses are not included.
* M. N. Wilson, Superconducting Magnets, p. 217ff.
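For orientation, the adiabatic class of model cited above reduces to two coupled ODEs (a deliberately crude sketch with invented parameters, nothing like the paper's coupled finite element model): the magnet inductance discharges into the growing normal-zone resistance, whose temperature is driven by the Joule heating.

```python
# Lumped adiabatic quench toy: L dI/dt = -R(T) I,  C dT/dt = R(T) I^2.
# All numbers are illustrative placeholders, not magnet data.
L_mag, C_heat, dt = 1.0, 500.0, 1e-3     # H, J/K, s
I, T = 1000.0, 12.0                      # A; quench already triggered at 12 K

def R_of_T(T):
    """Crude normal-zone resistance, growing once T exceeds ~10 K."""
    return 1e-4 * max(0.0, T - 10.0)     # ohm

for _ in range(200000):                  # 200 s of decay
    R = R_of_T(T)
    I += dt * (-R * I / L_mag)           # circuit equation
    T += dt * (R * I * I / C_heat)       # adiabatic heat balance
print(round(I, 1), round(T, 1))          # decayed current, hot-spot temperature
```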
MOAPMP03 | Geometrical Methods in Computational Electromagnetism | controls, background | 75
For almost a century, it has been known that the vector fields E, H, D, B, etc., in the Maxwell equations are just "proxies" for more fundamental objects, the differential forms e, h, d, b, etc., which, when integrated on lines or surfaces, as the case may be, yield physically meaningful quantities such as emf's, mmf's, fluxes, etc. This viewpoint helps separate the "non-metric" part of the equations (Faraday and Ampère), fully covariant, from the "metric" one (the constitutive laws), with more restricted (Lorentz) covariance. The usefulness of this viewpoint in computational issues has been realized more recently, and will be the main topic addressed in this survey. It makes the association of degrees of freedom with mesh elements such as edges, facets, etc. (instead of nodes as in traditional finite element techniques) look natural, whereas the very notion of "edge element" seemed exotic twenty years ago. It explains why all numerical schemes treat Faraday and Ampère the same way, and only differ in the manner they discretize metric-dependent features, i.e., constitutive laws. What finite elements, finite volumes, and finite differences have in common is thus clearly seen. Moreover, this seems to be the right way to advance the "mimetic discretization" or "discrete differential calculus" research programs which many dream about: a kind of functorial transformation of the partial differential equations of physics into discrete models, when the space-time continuum is replaced by a discrete structure such as a lattice, a simplicial complex, etc. Though total fulfillment of this dream is still ahead, we already have something that engineers, especially programmers keen on object-oriented methods, should find valuable: a discretization toolkit, offering ready-to-use, natural "discrete" counterparts to virtually all "continuous" objects discernible in the equations: fields, differential operators, v × B force fields, the Maxwell tensor, etc.
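In the usual differential-form notation, the split the abstract describes reads as follows (a standard textbook statement, added here only for orientation):

```latex
% Metric-free balance laws (Faraday, Ampere) versus metric-dependent
% constitutive laws, written for the differential forms e, h, d, b, j:
\begin{aligned}
  \mathrm{d}\,e &= -\partial_t b, \qquad & \mathrm{d}\,h &= j + \partial_t d
  && \text{(no metric enters)} \\
  d &= \epsilon \star e, \qquad & b &= \mu \star h
  && \text{(the Hodge star $\star$ carries the metric)}
\end{aligned}
```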
TUMPMP01 | Simple Maps in Accelerator Simulations | electron, ion, proton, vacuum | 81
Difference systems (described by maps) exhibit much richer dynamical behavior than differential systems, because of the emphasis they place on occasional "high-frequency" transient kicks. Thus, the standard map (with pulsed gravity) displays chaos, while the gravity pendulum does not. Maps also speed up simulations enormously, by summarizing complex dynamics in short form. A new example of richer behavior, and of dramatic speed-up, comes from the representation of interacting electron clouds and ion clouds. Coupled maps are capable of demonstrating the first order phase transitions (from cloud "off" to "on") that are sometimes observed in practice, and enable the extension of electron cloud simulation to include much more slowly evolving ion clouds.
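The pulsed-gravity pendulum mentioned above is the Chirikov standard map; a minimal implementation (textbook form, not code from the paper) makes the contrast with the smooth pendulum concrete:

```python
import math

def standard_map(theta, p, K=1.2, turns=1000):
    """Chirikov standard map: a pendulum whose gravity acts as periodic
    kicks. Chaos appears for K above roughly 1, while the continuously
    driven pendulum remains integrable."""
    for _ in range(turns):
        p += K * math.sin(theta)             # impulsive kick
        theta = (theta + p) % (2 * math.pi)  # free rotation between kicks
    return theta, p

print(standard_map(1.0, 0.5))
```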
TUMPMP02 | Magnetodynamic Formulation Resolving Eddy-Current Effects in the Yoke and the Superconductive Cable of the FAIR Dipole Magnets | superconductivity, dipole, synchrotron, coupling | 90
Funding: This work was supported by the Gesellschaft für Schwerionenforschung (GSI), Darmstadt.
Transient 3D simulations are carried out for two types of superconductive dipole magnets. Eddy-current effects in the yoke are treated by homogenising the laminated iron composite, whereas interstrand eddy-current effects are resolved by either a cable magnetization model or a cable eddy-current model. The simulations reveal the Joule losses in the magnets.
TUPPP09 | Modeling High-Current Instabilities in Particle Accelerators | damping, radiation, storage-ring, beam-transport | 110
Funding: This work has been partially supported by the EU commission in the sixth framework programme, contract no. 011935 EUROFEL.
Methods employing integration techniques of a Lie algebraic nature have been successfully employed in the past to develop charged beam transport codes for different types of accelerators. These methods have so far been applied to the transverse motion dynamics, while the longitudinal part has been treated using standard tracking codes. In this contribution we extend the symplectic technique to the analysis of longitudinal and coupled longitudinal-transverse motion in charged beam transport, with the inclusion of the nonlinear dynamics due to wake field effects. We use the method to model different types of instabilities due to high current. We consider in particular the case of coherent synchrotron instabilities and their implications for the design and performance of high-current accelerators. We discuss both single-pass and recirculated devices. For this last case, we also include the effects due to quantum noise and damping.
TUPPP10 | Design and Modeling of Field-Emitter Arrays for a High Brilliance Electron Source | emittance, electron, cathode, space-charge | 114
The realization of compact Angstrom-wavelength free electron lasers depends critically on the brilliance of their electron sources. Field emitters are attractive given their small emission surface and consequent high current density. The low emittance gun project (LEG) at PSI focuses on developing suitable field emitter arrays (FEA) with a dual gate structure emitting a total current of 5.5 A out of a diameter of 500 microns with an emittance on the order of 50 nm rad. Simulations for idealized emitters show that, despite micron-scale variations of the charge density, a low emittance can be obtained by putting the FEA in a pulsed DC diode at 250 MV/m. The challenge lies in modelling all real-world effects in the individual field emitter and assembling these into a global emission model. Field emission is often labeled a cold emission process; nevertheless, quantum physical effects lead to a baseline energy spread on the order of 150 meV FWHM for the emitted electrons. Replenishing the conduction band with electrons from deep layers gives a further increase in the momentum spread. For the metallic field emitters used, surface roughness has an important influence on the emission properties. It typically gives an additional field enhancement factor of 2.5 to 3, resulting in lower required gate voltages, but it also has a detrimental effect on the transverse momentum spread. Work is in progress on obtaining numerical estimates for these effects using, among other things, secondary electron microscopy measurements. Furthermore, the extraction and focusing gates both give rise to nonlinear defocusing and focusing forces, which have to be minimized by a careful geometric optimization. Combining all these effects gives a reliable parametrization of the individual emitters, which, together with a stochastic spatial distribution of emitter properties, is used in the global emission model.
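Cold field emission of the kind described above is commonly modelled with the Fowler-Nordheim law; the sketch below (my own illustration with generic constants, not the paper's emitter model) shows how a surface-roughness field enhancement factor enters as a multiplier on the applied field:

```python
import math

def fowler_nordheim_J(E, phi=4.5, beta=2.7):
    """Emitted current density (A/m^2) for applied field E (V/m), work
    function phi (eV) and field enhancement factor beta; simplified
    Fowler-Nordheim prefactors, no image-charge corrections."""
    a, b = 1.54e-6, 6.83e9
    F = beta * E                      # locally enhanced surface field
    return (a * F ** 2 / phi) * math.exp(-b * phi ** 1.5 / F)

for E in (2e9, 4e9, 6e9):             # GV/m-scale fields near the tip
    print(E, fowler_nordheim_J(E))
```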
TUPPP24 | Transverse Coupling Impedance of a Ferrite Kicker Magnet: Comparison between Simulations and Measurements | impedance, kicker, coupling, electromagnetic-fields | 128
Funding: This work was partially funded by DIRACsecondary-Beams (RIDS-515873).
The driving terms of instabilities in particle accelerators depend on the beam surroundings, which are conveniently described by coupling impedances. In the case of critical components, for which analytical calculations are not available, direct measurements of the coupling impedances on a prototype are usually needed. However, this obvious drawback in the design of particle accelerators can be overcome by electromagnetic field simulations within the framework of the Finite Integration Technique. Here we show results from numerical evaluations of the transverse coupling impedance of a ferrite kicker. In order to excite the electromagnetic fields in the device we numerically implement the conventional twin-wire method. A good agreement with experimental measurements is observed, showing a promising way to determine coupling impedances of particle accelerator components before construction.
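For context, wire-method post-processing typically rests on textbook conversion formulas like those sketched below (standard relations, and an assumption on my part that the paper uses something similar): the longitudinal impedance follows from the transmission through the device relative to a reference pipe, and the twin-wire (dipole) measurement yields the transverse impedance via the Nassibian-Sacherer relation.

```python
import numpy as np

C_LIGHT = 299792458.0

def z_longitudinal(s21_dut, s21_ref, Zc=300.0):
    """'Improved log formula'; Zc is the line impedance of the wire setup."""
    return -2.0 * Zc * np.log(s21_dut / s21_ref)

def z_transverse(z_dipole, freq, delta):
    """Nassibian-Sacherer: delta is the half-distance between the two wires (m)."""
    return z_dipole * C_LIGHT / (2.0 * np.pi * freq * delta ** 2)

zl = z_longitudinal(0.96 * np.exp(-0.02j), 0.99)   # made-up S-parameters
print(z_transverse(zl, freq=50e6, delta=0.02))
```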
TUPPP26 | A Time-Adaptive Mesh Approach for the Self-Consistent Simulation of Particle Beams | gun, cathode, emittance, vacuum | 132
Funding: This work was partially funded by HGF (VH-FZ-005) and DESY Hamburg.
In many applications the self-consistent simulation of charged particle beams is necessary. Especially in low-energy sections such as injectors, the interaction between particles and fields has to be taken into account with all effects included. Well-known programs like the MAFIA TS modules typically use the Particle-In-Cell (PIC) method for beam dynamics simulations. Since they use a fixed computational grid which has to resolve the bunch adequately, they suffer from enormous memory consumption. Therefore, and especially in the 3D case, only rather short sections can be simulated. This may be avoided using adaptive mesh refinement (AMR) techniques. Since their application in finite-difference time-domain methods is critical concerning instabilities, usually problem-matched but static meshes are used. In this paper a code working on the basis of a fully dynamic Cartesian grid is presented, allowing for simulations capturing both a high spatial resolution in the vicinity of the bunch and the possibility of simulating structures up to a length of several meters. The code is tested and validated using the RF electron gun of the Photoinjector Test Facility at DESY Zeuthen (PITZ) as an example. The evolution of various beam parameters along the gun is compared with the results obtained by different beam dynamics programs.
TUPPP28 | New 3D Space Charge Routines in the Tracking Code ASTRA | space-charge, electron, brightness, cathode | 136
Funding: DESY Hamburg
Precise and fast 3D space-charge calculations for bunches of charged particles are still of growing importance in recent accelerator designs. A widespread approach is the particle-mesh method, computing the potential of a bunch in the rest frame by means of Poisson's equation. Recently, new algorithms for solving Poisson's equation have been implemented in the tracking code ASTRA. These Poisson solvers are iterative algorithms solving a linear system of equations that results from the finite difference discretization of the Poisson equation. The implementation is based on the software package MOEVE (Multigrid Poisson Solver for Non-Equidistant Tensor Product Meshes) developed by G. Pöplau. The package contains a state-of-the-art multigrid Poisson solver adapted to space charge calculations. In this paper the basic concept of iterative Poisson solvers is described. It is compared to the established 3D FFT Poisson solver, which is a widely used method for space charge calculations and is also implemented in ASTRA. Advantages and disadvantages are discussed. Further, the similarities and differences of both approaches are demonstrated with numerical examples.
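As a pocket illustration of the iterative solver family discussed here (my own sketch, not MOEVE code): damped Jacobi sweeps on the finite-difference Poisson problem are exactly the kind of smoother that a multigrid scheme accelerates by correcting the remaining smooth error on coarser grids.

```python
import numpy as np

n = 64; h = 1.0 / (n + 1)                        # unit square, Dirichlet phi = 0
rho = np.zeros((n, n))
rho[n//2 - 4:n//2 + 4, n//2 - 4:n//2 + 4] = 1.0  # block of charge
phi = np.zeros((n, n))
for _ in range(2000):                            # damped Jacobi iteration
    nb = np.zeros_like(phi)                      # sum of the 4 neighbours
    nb[1:, :] += phi[:-1, :]; nb[:-1, :] += phi[1:, :]
    nb[:, 1:] += phi[:, :-1]; nb[:, :-1] += phi[:, 1:]
    phi = 0.67 * (nb + h * h * rho) / 4.0 + 0.33 * phi
print(phi.max())                                 # converges slowly: multigrid's job
```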
TUPPP29 | Charge Conservation for Split-Operator Methods in Beam Dynamics Simulations | electron, focusing, space-charge, electromagnetic-fields | 140
Funding: DFG (1239/22-3) and DESY Hamburg
For devices in which the bunch dimensions are much smaller than the dimensions of the structure, the numerical field solution is typically hampered by spurious oscillations. The reason for these oscillations is the large numerical dispersion error of conventional schemes along the beam axis. Recently, several numerical schemes have been proposed which apply operator splitting to reduce and, under certain circumstances, eliminate the dispersion error in the direction of the bunch motion. However, in comparison to the standard Yee scheme, the methods based on operator splitting do not conserve the standard discrete Gauss law. This contribution is dedicated to the construction of conserved discrete Gauss laws and conservative current interpolation for some of the split-operator methods. Finally, the application of the methods in PIC simulations is shown.
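The property at stake can be demonstrated with a 1D toy (my own construction, not one of the paper's split-operator schemes): if the deposited current satisfies the discrete continuity equation, the discrete Gauss law remains satisfied to round-off under the Ampère update.

```python
import numpy as np

n, dx, dt, q = 64, 1.0, 0.5, 1.0
E = np.zeros(n + 1)                      # E on cell faces, units with eps0 = 1
x, v = 20.3, 0.8                         # particle position and velocity

def density(x):
    """Linear (cloud-in-cell) charge deposition onto the n cells."""
    rho = np.zeros(n); i = int(x); w = x - i
    rho[i] += q * (1 - w) / dx; rho[(i + 1) % n] += q * w / dx
    return rho

rho0 = density(x)                        # Gauss law holds relative to t = 0
for _ in range(40):
    x_new = x + v * dt
    drho = density(x_new) - density(x)
    J = np.zeros(n + 1)                  # face currents solving the discrete
    J[1:] = -np.cumsum(drho) * dx / dt   # continuity equation exactly
    E -= dt * J                          # discrete Ampere update
    x = x_new
residual = (E[1:] - E[:-1]) / dx - (density(x) - rho0)
print(np.abs(residual).max())            # ~1e-16: discrete Gauss law preserved
```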
TUPPP30 | ROCOCO - A Zero Dispersion Algorithm for Calculating Wake Potentials | gun, linac, collective-effects, collider | 144
Funding: This work was partially funded by EUROTeV (RIDS-011899) and DESY Hamburg.
Wake fields are a limiting factor due to their collective effects. In colliders and the high energy accelerators used in FEL projects, short bunches excite high frequency fields which make the computation of near-range wake fields inaccurate. Additionally, the length of modern accelerating structures limits the capabilities of certain codes such as TBCI or MAFIA. Both limiting factors, i.e. short bunches and the length of accelerating structures (a multiscale problem), can be dealt with in the following way. Using certain zero dispersion directions of a usual Cartesian grid leads to a decrease of the overall dispersion which usually arises from having discrete field values. Combined with a conformal modelling technique, the full time step limited by the Courant criterion is used and a moving window is applied. Thus simulations of short bunches in long structures are possible; dispersion and memory problems are avoided. In this work ROCOCO (ROtated mesh and COnformal COde) is presented. The zero dispersion algorithm uses a new discretization scheme based on a rotated mesh combined with the established USC scheme and the moving window technique mentioned above. The advantage of an explicit algorithm is joined with zero dispersion along the beam's propagation direction. A dispersion analysis for the 2D version of the code is shown, as well as some results for common accelerator structures such as collimators and the TESLA 9-cell structure.
TUPPP31 | Eigenmode Expansion Method in the Indirect Calculation of Wake Potential in 3D Structures | radio-frequency, higher-order-mode, linear-collider, collider | 148
Funding: EUROFEL (RIDS-011935), DESY Hamburg
The eigenmode expansion method was used in the early 1980s for calculating the wake potential of 2D rotationally symmetric structures. In this paper it is extended to general 3D cases. The wake potential is computed as the sum of two parts, a direct and an indirect one. The direct wake potential is obtained by an integral of field components from a full wave solution, which stops just at the end of the structure. The indirect wake potential is then calculated analytically through the eigenmode expansion method. This avoids the full wave modeling of a very long outgoing beam pipe, which is computationally expensive. In our work, the Finite Integration Technique (FIT) with a moving mesh window is used to model the structure. The fields are recorded at the truncation boundary as a function of time. These fields are then expanded in terms of the discrete eigenmodes of the outgoing pipe, and the eigenmode coefficients are determined at each time step. The coefficients are then transformed into the frequency domain, and the integral of the wake fields along a path to infinity is computed analytically. In the case that the moving mesh window is narrow, appropriate extrapolation of the time-domain coefficients is necessary. Numerical tests show that the proposed method provides accurate results with as few as three modes for a collimator structure.
TUAPMP02 | CHEF: A Framework for Accelerator Optics and Simulation | lattice, optics, quadrupole, site | 153
Funding: This manuscript has been authored by Universities Research Association, Inc. under contract No. DE-AC02-76CH03000 with the U. S. Department of Energy.
We describe CHEF, an application based on an extensive hierarchy of C++ class libraries. The objectives are (1) to provide a convenient, effective application for performing standard beam optics calculations and (2) to seamlessly support the development of both linear and non-linear simulations, for applications ranging from a simple beamline to an integrated system involving multiple machines. Sample applications are discussed.
TUAPMP03 | Recent Progress on the MaryLie/IMPACT Beam Dynamics Code | space-charge, lattice, optics, acceleration | 157
Funding: Supported in part by the US DOE, Office of Science, SciDAC program; Office of High Energy Physics; Office of Advanced Scientific Computing Research
MaryLie/IMPACT (ML/I) is a 3D parallel Particle-In-Cell code that combines the nonlinear optics capabilities of MaryLie 5.0 with the parallel particle-in-cell space-charge capability of IMPACT. In addition to combining the capabilities of these codes, ML/I has a number of powerful features, including a choice of Poisson solvers, a fifth-order rf cavity model, multiple reference particles for rf cavities, a library of soft-edge magnet models, representation of magnet systems in terms of coil stacks with possibly overlapping fields, and wakefield effects. The code allows for map production, map analysis, particle tracking, and 3D envelope tracking, all within a single, coherent user environment. ML/I has a front end that can read both MaryLie input and MAD lattice descriptions. The code can model beams with or without acceleration, and with or without space charge. Developed under a US DOE Scientific Discovery through Advanced Computing (SciDAC) project, ML/I is well suited to large-scale modeling, simulations having been performed with up to 100M macroparticles. ML/I uses the H5Part* library for parallel I/O. The code inherits the powerful fitting/optimizing capabilities of MaryLie, augmented for the new features of ML/I. The combination of soft-edge magnet models, high-order capability, and fitting/optimization makes it possible to simultaneously remove third-order aberrations while minimizing fifth-order ones, in systems with overlapping, realistic magnetic fields. Several applications will be presented, including aberration correction in a magnetic lens for radiography, linac and beamline simulations of an e-cooling system for RHIC, design of a matching section across the transition of a superconducting linac, and space-charge tracking in the damping rings of the International Linear Collider.
*ICAP 2006 paper ID 1222, A. Adelmann et al., "H5Part: A Portable High Performance Parallel Data Interface for Electromagnetics Simulations"
TUAPMP04 | Simulation of Secondary Electron Emission with CST Particle Studio(TM) | electron, electromagnetic-fields, vacuum, scattering | 160
In accelerator physics and high power vacuum electronics, secondary electron emission (SEE) has in many cases an important influence on the physical behavior of the device. Since its analytical prediction is extremely cumbersome even for simple geometries, numerical simulation is essential to gain a better understanding of the possible effects and ideas for changing the design. The current paper introduces the implementation of SEE within the code CST Particle Studio(TM), which is an easy-to-use three-dimensional tool for the simulation of electromagnetic fields and charged particles. There are three basic types of secondary electrons: the elastically reflected, the rediffused and the true secondary ones. The implemented SEE model is based on a probabilistic, mathematically self-consistent model developed by Furman and includes the three kinds of secondary electrons mentioned above. The paper presents simulation results with a focus on the SEE contribution to the absorbed power within an electron collector of a high power tube. As a second example, the secondary emission process is studied within the superconducting TESLA cavity, which gives some hints for the understanding of multipactor effects in such cavity and filter structures.
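For reference, the true-secondary component in Furman-style models follows a universal yield curve of the form below (parameters illustrative, not the values implemented in CST Particle Studio(TM)):

```python
def sey_true(E, delta_max=1.8, E_max=300.0, s=1.54):
    """Universal true-secondary yield curve used in Furman-Pivi-type
    models: peak yield delta_max at impact energy E_max (eV)."""
    x = E / E_max
    return delta_max * s * x / (s - 1.0 + x ** s)

for E in (50, 150, 300, 600, 1200):      # impact energies in eV
    print(E, round(sey_true(E), 3))
```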
WEMPMP01 | Computational Needs for XFELs | undulator, electron, emittance, space-charge | 164
X-ray Free Electron Lasers (FELs) make use of the principle of Self-Amplified Spontaneous Emission (SASE), where electron bunches interact in an undulator with their own co-propagating radiation. They do not require optical resonators, and their frequency is therefore not limited by material properties such as the reflectivity of mirrors. The performance of X-ray SASE FELs depends exponentially on the beam quality of the electron bunch. Therefore, effects in the beamline before the undulator are as important as the particle-field interactions of the SASE FEL process. Critical components are the low emittance electron source, the accelerating sections, the bunch compression system and the undulator. Due to the high peak currents and small beam dimensions, space charge (SC) effects have to be considered up to energies in the GeV range. Coherent synchrotron radiation (CSR) not only drives the FEL but is also emitted in dispersive sections such as bunch compressors. SC, CSR, and wake fields significantly affect the longitudinal beam parameters (peak current, correlated and uncorrelated energy spread) and the transverse emittance. Start-to-end simulations use a sequence of various tracking codes (with or without SC, CSR and wake fields) and FEL programs. Usually the particle or phase space information has to be carefully converted at each transition from one tool to another. Parameter studies need many simulations of the complete system or a part of it and, beyond that, calculations with several random seeds are necessary to account for the stochastic nature of the SASE FEL process.
WEMPMP02 | Wish-List for Large-Scale Simulations for Future Radioactive Beam Facilities | ion, heavy-ion, linac, diagnostics | 170
Funding: This work is supported by the U. S. Department of Energy under contract W-31-109-Eng-38.
As accelerator facilities become more complex and demanding and computational capabilities become ever more powerful, there is the opportunity to develop and apply very large-scale simulations to dramatically increase the speed and effectiveness of many aspects of the design, commissioning, and finally the operational stages of future projects. Next-generation radioactive beam facilities are particularly demanding and stand to benefit greatly from large-scale, integrated simulations of essentially all aspects or components. These demands stem from things like the increased complexity of the facilities that will involve, for example, multiple-charge-state heavy ion acceleration, stringent limits on beam halos and losses from high power beams, thermal problems due to high power densities in targets and beam dumps, and radiological issues associated with component activation and radiation damage. Currently, many of the simulations that are necessary for design optimization are done by different codes, and even separate physics groups, so that the process proceeds iteratively for the different aspects. There is a strong need, for example, to couple the beam dynamics simulation codes with the radiological and shielding codes so that an integrated picture of their interactions emerges seamlessly and trouble spots in the design are identified easily. This integration is especially important in magnetic devices such as heavy ion fragment separators that are subject to radiation and thermal damage. For complex, high-power accelerators there is also the need to fully integrate the control system and beam diagnostics devices to a real-time beam dynamics simulation to keep the tunes optimized without the need for continuous operator feedback. This will most likely require on-line peta-scale computer simulations. The ultimate goal is to optimize performance while increasing the cost-effectiveness and efficiency of both the design and operational stages of future facilities.
WEMPMP03 | Parallel Higher-Order Finite Element Method for Accurate Field Computations in Wakefield and PIC Simulations | gun, space-charge, emittance, plasma | 176
Funding: Work supported by US DOE contract DE-AC02-76SF00515.
Under the US DOE SciDAC project, SLAC has developed a suite of 3D (2D) Parallel Higher-order Finite Element (FE) codes, T3P (T2P) and PIC3P (PIC2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in RF cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency-domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. PIC3P (PIC2P) extends T3P (T2P) to treat charged particle dynamics self-consistently using the PIC approach, the first such implementation on the FE grid. Examples from applications to the ILC, LCLS and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.
WEPPP01 | Recent Developments in IMPACT and Application to Future Light Sources | linac, electron, lattice, space-charge | 182
The Integrated Map and Particle Accelerator Tracking (IMPACT) code suite was originally developed to model beam dynamics in ion linear accelerators. It has been greatly enhanced and now includes a linac design code, a 3D rms envelope code and two parallel particle-in-cell (PIC) codes: IMPACT-T, a time-based code, and IMPACT-Z, a z-coordinate based code. Presently, the code suite is increasingly used in simulations of high brightness electron beams for future light sources. These simulations, performed using up to 100 million macroparticles, include effects related to nonlinear magnetic optics, rf structure wake fields, 3D self-consistent space charge, and coherent synchrotron radiation (at present a 1D model). An illustrative application to the simulation of the microbunching instability is given. We conclude with plans for further developments pertinent to future light sources.
WEPPP02 | Recent Improvements to the IMPACT-T Parallel Particle Tracking Code | space-charge, electromagnetic-fields, cathode, linac | 185
Funding: Supported in part by the US DOE, Office of Science, SciDAC program; Office of High Energy Physics; Office of Advanced Scientific Computing Research
The IMPACT-T code is a parallel three-dimensional quasi-static beam dynamics code for modeling high brightness beams in photoinjectors and RF linacs. Developed under the US DOE Scientific Discovery through Advanced Computing (SciDAC) program, it includes several key features: a self-consistent calculation of 3D space-charge forces using a shifted and integrated Green function method, multiple energy bins for beams with large energy spread, and models for treating RF standing wave and traveling wave structures. In this paper, we report on recent improvements to the IMPACT-T code, including short-range transverse and longitudinal wakefield models and a longitudinal CSR wakefield model. Some applications will be presented, including simulation of the photoinjector for the Linac Coherent Light Source (LCLS) and beam generation from a nano-needle photocathode.
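The Green-function space-charge solve that the shifted/integrated method refines looks, in its plainest (Hockney) form, like the sketch below (my own toy; IMPACT-T replaces the point values of G with shifted, analytically integrated cell averages, which is not shown here):

```python
import numpy as np

n = 32
rho = np.zeros((n, n, n))                       # gridded bunch charge
rho[12:20, 12:20, 14:18] = 1.0
i = np.arange(2 * n)
d = np.minimum(i, 2 * n - i)                    # distances on the doubled grid
X, Y, Z = np.meshgrid(d, d, d, indexing="ij")
r = np.sqrt(X**2 + Y**2 + Z**2); r[0, 0, 0] = 1.0
G = 1.0 / r                                     # open-boundary point Green function
pad = np.zeros((2 * n,) * 3); pad[:n, :n, :n] = rho
phi = np.fft.ifftn(np.fft.fftn(pad) * np.fft.fftn(G)).real[:n, :n, :n]
print(phi.max())                                # potential, up to physical constants
```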
WEPPP03 | Recent Improvements of PLACET | ground-motion, collider, linac, emittance | 188
The tracking code PLACET simulates beam transport and orbit correction in linear colliders from the damping ring to the interaction point and beyond. It is fully programmable and modular software, thanks to a Tcl interface and external modules based on shared libraries. Recent improvements of the code are presented, including the possibility to simulate bunch compressors and to use parallel computer systems.
WEPPP07 | Phase Space Tomography Diagnostics at the PITZ Facility | emittance, diagnostics, electron, quadrupole | 194
Funding: This work has partly been supported by the European Community, contract 011935 (EUROFEL).
A high phase-space density of the electron beam is obligatory for the successful operation of a Self-Amplified Spontaneous Emission Free Electron Laser (SASE FEL). Detailed knowledge of the phase-space density distribution is thus very important for characterizing the performance of the electron sources used. The Photo Injector Test Facility at DESY in Zeuthen (PITZ) was built to develop, operate and optimize electron sources for FELs. Currently a tomography module for PITZ is under design as part of the ongoing upgrade of the facility. This contribution studies the performance of the tomography module. Errors in the beam size measurements and their contribution to the calculated emittance will be studied using simulated data. As a practical application, the Maximum Entropy Algorithm (MENT) will be used to reconstruct data generated by an ASTRA simulation.
WEPPP10 | Implementation of the DYNAMION Code to the End-To-End Beam Dynamics Simulations for the GSI Proton and Heavy Ion Linear Accelerators | rfq, linac, ion, emittance | 201
The advanced multi-particle code DYNAMION is capable of calculating beam dynamics in linear accelerators and transport lines under space charge conditions with high accuracy. Special features like the consideration of field measurements, misalignment and fabrication errors, and data from the real topology of RFQ electrodes, drift tubes and quadrupole lenses lead to reliable results of the beam dynamics simulations. End-to-end simulations for the whole linac (from ion source extraction to the synchrotron entrance) allow for the investigation and optimization of the overall machine performance as well as for the calculation of the expected impact of different upgrade measures proposed to improve beam brilliance. Recently the DYNAMION code has been applied to investigate the beam dynamics of the different GSI linacs: the heavy-ion high-current UNILAC, the high-current proton linac for the future Facility for Antiproton and Ion Research at Darmstadt (FAIR), and the light-ion accelerator for cancer therapy (HICAT), to be commissioned in Heidelberg (Germany) in the near future. Recent results of the beam dynamics simulations by means of the DYNAMION code are presented. The proposed upgrade measures as well as tuning and optimization of the linacs are discussed.
WEPPP11 | Comparison of the Beam Dynamics Designs for the FAIR High Current Proton LINAC-RFQ | rfq, emittance, proton, linac | 205
The antiproton physics program for the future Facility for Antiproton and Ion Research (FAIR) at Darmstadt is based on a rate of 7·10¹⁰ cooled antiprotons per hour. To provide sufficient primary proton intensities, a new proton linac is planned. The proposed linac comprises an Electron Cyclotron Resonance (ECR) proton source, a Radio Frequency Quadrupole (RFQ), and Crossed-bar H-cavities (CH). Its operation frequency of 352 MHz allows for an efficient acceleration up to 70 MeV using normal conducting CH-DTLs. The beam pulses, with a length of 32 µs, a current of 70 mA, and total transverse emittances of 7 µm, will allow filling the existing GSI synchrotron SIS 18 within one multi-turn injection up to its space charge limit of 7·10¹² protons. Conceptual designs for two different RFQ types are proposed in parallel: an RFQ of the 4-rod type from the University of Frankfurt and a 4-window type RFQ from the Institute for Theoretical and Experimental Physics (ITEP) and the Moscow Radio-Technical Institute (MRTI). Studies of the beam dynamics in both RFQs have been done with the versatile multi-particle code DYNAMION. The topology of the RFQ tanks and electrodes is used "as to be fabricated" to provide realistic calculations of the external electrical field. The simulations are done under space charge conditions and include the influence of possible misalignments and fabrication errors. Simulated results for both designs will be discussed, as well as their pros and cons. A comparison of the DYNAMION results with simulations done by means of PARMTEQM and LIDOS (dedicated codes for RFQ design) is presented.
WEPPP15 | Simulations of Pellet Target Effects with Program PETAG01 | target, electron, storage-ring, antiproton | 216
New internal targets play an important role in modern nuclear and high energy physics research. One such target is the pellet target, a variant of a micro-particle internal target. This target has a number of very attractive features when used in a storage ring. The software package PETAG01 has been developed for modelling the pellet target, and it can be used for numerical calculations of the interaction of a circulating beam with the target in a storage ring. We present numerical calculations studying the beam dynamics of the ions in a storage ring where strong cooling techniques are applied in combination with the pellet target. Some important effects due to the target in combination with electron cooling, and their influence on the beam parameters, have been considered.
WEPPP17 | Tracking Code with 3D Space Charge Calculations Taking into Account the Elliptical Shape of the Beam Pipe | space-charge, electron, damping, positron | 220
Funding: Work supported by DESY, Hamburg
The determination of electron cloud instability thresholds is a task with high priority in the ILC damping rings research and development objectives. Simulations of electron cloud instabilities are therefore essential. In this paper a new particle tracking program is presented which includes the Poisson solver MOEVE for space charge calculations. Recently, perfectly electrically conducting beam pipes with arbitrary elliptical shapes have been implemented as boundary conditions in the Poisson solver package MOEVE. The 3D space charge algorithm taking into account a beam pipe of elliptical shape will be presented along with numerical test cases. The routine is also implemented in the program code ASTRA; in addition, we compare the tracking with both routines.
WEPPP21 | Efficient Time Integration for Beam Dynamics Simulations Based on the Moment Method | emittance, beam-transport, space-charge, multipole | 224
Funding: This work was partially funded by EUROFEL (RIDS-011935) and DESY Hamburg.
The moment method model has proven to be a valuable tool for numerical simulations of charged particle beam transport, both in accelerator design studies and in the optimization of operating parameters for an already existing beam line. On the basis of the Vlasov equation, which describes a collision-less kinetic approach, the time evolution of integral quantities like the mean or rms dimensions, the mean or rms kinetic momenta, and the total energy or energy spread of a bunched beam can be described by a set of first order non-autonomous ordinary differential equations. Application of a proper time integrator to such a system of ordinary differential equations then enables the determination of the time evolution of all involved ensemble parameters under consistent initial conditions. From the vast number of available time integration methods, different versions have to be implemented and evaluated to select a proper algorithm. The computational efficiency in terms of effort and accuracy serves as a selection criterion. Among the possible candidates of suitable time integrators for the given set of moment equations are the explicit Runge-Kutta methods, the implicit theta methods, and the linearly implicit Rosenbrock methods. Various algorithms have been implemented and tested under real-world conditions. In the paper the evaluation process is documented.
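The flavour of such a comparison can be conveyed with a small stand-in system (an illustrative 1D rms-envelope-type equation of my own choosing, far smaller than the paper's moment system), pitting an explicit Runge-Kutta method against an implicit one at equal tolerances and counting right-hand-side evaluations as the effort measure:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, k=2.0, ksc=0.05, eps=1e-2):
    """Toy envelope/moment ODEs: sigma'' = -k sigma + ksc/sigma + eps^2/sigma^3."""
    sigma, sigma_p = y
    return [sigma_p, -k * sigma + ksc / sigma + eps ** 2 / sigma ** 3]

for method in ("RK45", "Radau"):        # explicit vs. implicit candidate
    sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0], method=method,
                    rtol=1e-8, atol=1e-10)
    print(method, sol.y[:, -1], "rhs evaluations:", sol.nfev)
```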
WESEPP01 | CST's Commercial Beam-Physics Codes | controls, emittance, impedance, electromagnetic-fields | 228
During the past decades particle accelerators have grown to higher and higher complexity and cost, so that a careful analysis and understanding of the machines' behaviour becomes more and more important. CST offers user-friendly numerical simulation tools for the accurate analysis of electromagnetic fields in combination with charged particles, including basic thermal analysis. The CST STUDIO SUITE code family is the direct successor of the code MAFIA, combining the numerical accuracy of the Finite Integration Technique and the Perfect Boundary Approximation within an intuitive, easy-to-use CAD environment. Automatic parameter sweeping and optimization are available to achieve and control the design goals. In this paper various solver modules of CST PARTICLE STUDIO, CST EM STUDIO and CST MICROWAVE STUDIO will be presented along with accelerator-relevant examples, such as:
WESEPP03 | High-Order Algorithms for Simulation of Laser Wakefield Accelerators | laser, emittance, electron | 230
Funding: This work is funded by the US DOE Office of Science, Office of High Energy Physics, including use of NERSC.
Electromagnetic particle-in-cell (PIC) simulations of laser wakefield accelerator (LWFA) experiments have shown great success recently, qualitatively capturing many exciting features, like the production of ~1 GeV electron beams with significant charge, moderate energy spread and remarkably small emittance. Such simulations require large clusters or supercomputers for full-scale 3D runs, and all state-of-the-art codes use similar algorithms, with 2nd-order accuracy in space and time. Very high grid resolution and, hence, a very large number of time steps are required to obtain converged results. We present preliminary results from the implementation and testing of 4th-order algorithms, which hold promise for dramatically improving the accuracy of future LWFA simulations.
WESEPP04 | The ORBIT Simulation Code: Benchmarking and Applications | space-charge, electron | 231
The contents, structure, implementation, benchmarking, and applications of ORBIT as an accelerator simulation code are described. Physics approaches, algorithms, and limitations for space charge, impedances, and electron cloud effects are discussed. The ORBIT code is a parallel computer code, and the scalabilities of the implementations of parallel algorithms for different physics modules are shown. ORBIT has a long history of benchmarking with analytical exactly solvable problems and experimental data. The results of this benchmarking and the current usage of ORBIT are presented.
WEA1MP01 | Parallel Simulation of Coulomb Collisions for High-Energy Electron Cooling Systems | electron, ion, luminosity, heavy-ion | 233
Funding: This work is funded by the US DOE Office of Science, Office of Nuclear Physics.
High-energy electron cooling requires co-propagation of relativistic electrons over many meters with the recirculating bunches of an ion collider ring. The expected increase of ion beam luminosity makes such systems a key component for proposed efforts like the RHIC luminosity upgrade* and the FAIR project**. Correctly simulating the dynamical friction of heavy ions, during brief interactions with low-density electron populations, in the presence of arbitrary electric and magnetic fields, requires a molecular dynamics approach that resolves close Coulomb collisions. Effective use of clusters and supercomputers is required to make such computations practical. Previous work*** will be reviewed. Recent algorithmic developments**** and future plans will be emphasized.
* http://www.bnl.gov/cad/ecooling
WEA3MP01 | Strong-Strong Beam-Beam Simulations | beam-beam-effects, damping, coupling, collider | 250
During the collision of two charged beams, each beam is perturbed by the strong nonlinear electromagnetic fields of the other. This effect is called the beam-beam interaction. Of particular interest in present and future machines are studies of the behaviour of two equally strong and intense beams, the so-called strong-strong beam-beam interaction. After a careful definition of strong-strong beam-beam effects, I describe the applications where such studies are required. A major issue for strong-strong simulations is the set of computational challenges involved, which are discussed. Finally I shall describe some of the modern techniques and procedures used to address them.
WEA3MP02 | Self-Consistent Simulations of High-Intensity Beams and E-Clouds with WARP POSINST | electron, ion, plasma, collider | 256
Funding: Supported by U. S. Department of Energy under Contracts No. DE-AC02-05CH11231 and No. W-7405-Eng-48 and by the US-LHC accelerator research program (LARP).
We have developed a new, comprehensive set of simulation tools aimed at modeling the interaction of intense ion beams and electron clouds (e-clouds). The set contains the 3-D accelerator PIC code WARP and the 2-D "slice" e-cloud code POSINST, as well as a merger of the two, augmented by new modules for impact ionization and neutral gas generation. The new capability runs on workstations or parallel supercomputers and contains advanced features such as mesh refinement, disparate adaptive time stepping, and a new "drift-Lorentz" particle mover for tracking charged particles in magnetic fields using large time steps. It is being applied to the modeling of ion beams (1 MeV, 180 mA, K+) for heavy ion inertial fusion and warm dense matter studies, as they interact with electron clouds in the High-Current Experiment (HCX). We describe the capabilities and present recent simulation results with detailed comparisons against the HCX experiment, as well as their application (in a different regime) to the modeling of e-clouds in the Large Hadron Collider (LHC).
WEA3MP03 | Benchmarking of Space Charge Codes Against UMER Experiments | space-charge, electron, diagnostics, cathode | 263
Funding: This work is funded by the US Dept. of Energy and by the US Dept. of Defense Office of Naval Research.
The University of Maryland Electron Ring (UMER) is a scaled electron recirculator using low-energy, 10 keV electrons to maximize the space charge forces for beam dynamics studies. We have recently circulated in UMER the highest-space-charge beam in a ring to date, achieving a breakthrough both in the number of turns and in the amount of current propagated. As of the time of submission, we have propagated 5 mA for at least 10 turns and, with some loss, for over 50 turns, meaning that about 0.5 nC of electrons survive for 10 microseconds. This makes UMER an attractive candidate for benchmarking space charge codes in regimes of extreme space charge. This talk will review the UMER design and available diagnostics, provide examples of benchmarking the particle-in-cell code WARP on UMER data, and give an overview of the detailed information on our website. An open dialogue with interested code developers is solicited.
WEA3MP04 | Implementation and Validation of Space Charge and Impedance Kicks in the Code PATRIC for Studies of Transverse Coherent Instabilities in the FAIR Rings | space-charge, impedance, dipole, damping | 267
Funding: Work supported by the EU design study (contract 515873, DIRACsecondary-Beams).
Simulation studies of the transverse stability of the FAIR synchrotrons have been started. The simulation code PATRIC has been developed in order to predict coherent instability thresholds with space charge and different impedance sources. Some examples of code validation using the numerical Schottky noise and analytical stability boundaries will be discussed.
WEA4IS01 | Superconducting Cavity Design for the International Linear Collider | damping, dipole, linear-collider, collider | 271
The International Linear Collider (ILC) is the highest priority future accelerator project in High Energy Physics, whose R&D is presently the focus of the Global Design Effort (GDE). SLAC's Advanced Computations Department (ACD) is involved in the accelerating cavity design for the ILC main linac using the advanced tools developed under the US DOE SciDAC initiative. The codes utilize higher-order finite elements for increased accuracy and are in production mode on distributed memory supercomputers at NERSC and NCCS to perform the large-scale simulations needed by the ILC cavity design. Presently the code suite includes the eigensolver Omega3P for calculating mode damping, the time-domain solver T3P for computing wakefields, and the particle tracking code Track3P for simulating multipacting and dark current. This talk will provide an overview of their applications to the baseline TDR cavity design and the alternate Low-Loss and Ichiro designs. Numerical results on HOM damping, cavity deformations, multipacting, and trapped modes in multi-cavity structures will be presented. Design issues with the input coupler and the HOM notch filter will also be addressed.
|
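For context on the mode-damping computation: a complex eigensolver such as Omega3P returns complex resonant frequencies, from which the quality factor and damping time follow by the standard cavity relation (general formula with an e^{i\omega t} time convention, not specific to this paper):

    \omega = \omega_r + i\,\omega_i, \qquad
    Q = \frac{\omega_r}{2\,\omega_i}, \qquad
    \tau = \frac{1}{\omega_i} = \frac{2Q}{\omega_r}.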
|
|
Slides
|
|
|
||
WEA4IS02 | Numerical Computation of Kicker Impedances: Towards a Complete Database for the GSI SIS100/300 Kickers | kicker, impedance, coupling, extraction | 277 | |||||
|
Funding: Work supported by the GSI and the DFG under contract GK 410/3. |
Fast kicker modules represent a potential source of beam instabilities in the planned Facility for Antiproton and Ion Research (FAIR) at the Gesellschaft für Schwerionenforschung (GSI), Darmstadt. Containing approximately six tons of lossy ferrite material, the more than forty kicker modules to be installed in the SIS-100 and SIS-300 synchrotrons are expected to have a considerable parasitic influence on the high-current beam dynamics. In order to take these effects into account in the kicker design, dedicated electromagnetic field software for the calculation of coupling impedances has been developed. Here we present our numerical results on the longitudinal and transverse kicker coupling impedances for the planned components and point out ways of optimization. Besides the inductive coupling of the beam to the external network (relevant below 100 MHz), particular attention is paid to the impact of ferrite losses up to the beam-pipe cutoff frequency. |
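The computed quantities are the standard beam coupling impedances, defined (up to sign and normalization conventions, which vary in the literature) as weighted integrals of the beam-induced fields along the structure:

    Z_\parallel(\omega) = -\frac{1}{q}\int_{-\infty}^{\infty}
        E_z(z,\omega)\, e^{\,i\omega z/(\beta c)}\, dz, \qquad
    Z_\perp(\omega) = \frac{i}{q\,\Delta}\int_{-\infty}^{\infty}
        \big[\vec{E} + \vec{v}\times\vec{B}\big]_\perp\,
        e^{\,i\omega z/(\beta c)}\, dz,

where q is the charge of the exciting source and Delta its transverse offset.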
|
Slides
|
|
|
||
WEA4IS03 | 2-D Electromagnetic Model of Fast-Ramping Superconducting Magnets | coupling, dipole, induction, shielding | 283 | |||||
|
The simulation of pulsed superconducting magnets has gained importance with the advent of fast-ramping synchrotron projects. The ROXIE program has been devised for the design and optimization of superconducting magnets. The 2-D electromagnetic model of a fast-ramping magnet in ROXIE consists of |
|
|
Slides
|
|
|
||
THM1MP01 | H5Part: A Portable High Performance Parallel Data Interface for Electromagnetics Simulations | 289 | ||||||
|
The very largest parallel particle simulations, for problems involving six-dimensional phase space and field data, generate vast quantities of data. It is desirable to store such enormous datasets efficiently and to share them effortlessly with other programs and analysis tools. With H5Part we defined a very simple file schema built on top of HDF5 (Hierarchical Data Format version 5), as well as an API that simplifies the reading and writing of data in the HDF5 file format. Our API, which is oriented towards the needs of the particle physics and cosmology communities, provides support for three common data types: particles, structured meshes and unstructured meshes. HDF5 offers a self-describing, machine-independent binary file format that supports scalable parallel I/O performance for MPI codes on computer systems ranging from laptops to supercomputers. The following languages are supported: C, C++, Fortran and Python. We show the ease of use and the performance for reading and writing terabytes of data on several parallel platforms. H5Part is distributed as Open Source under a BSD-like license.
|
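To illustrate the "very simple file schema", a minimal serial write with the H5Part C API might look as follows. The call names are taken from the H5Part distribution, but versions differ, so treat the exact signatures as assumptions and check the API documentation of the release you use.

    // Minimal serial H5Part write: one time step, three particle arrays.
    // Call names follow the H5Part C API; verify signatures against the
    // installed version's documentation.
    #include <H5Part.h>
    #include <vector>

    int main() {
        const h5part_int64_t n = 1024;
        std::vector<double> x(n, 0.0), y(n, 0.0), z(n, 0.0);

        H5PartFile* file = H5PartOpenFile("particles.h5", H5PART_WRITE);
        H5PartSetStep(file, 0);           // select time step 0
        H5PartSetNumParticles(file, n);   // declare the particle count
        H5PartWriteDataFloat64(file, "x", x.data());
        H5PartWriteDataFloat64(file, "y", y.data());
        H5PartWriteDataFloat64(file, "z", z.data());
        H5PartCloseFile(file);
        return 0;
    }

The parallel MPI case follows the same pattern, using the MPI-aware file-open variant so that each rank writes its local particles collectively.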
|
|
Slides
|
|
|
||
THM1MP02 | Parallel Particle-In-Cell (PIC) Codes | electron, diagnostics, emittance, laser | 290 | |||||
|
Funding: This work has been partially supported by DESY Hamburg. |
Particle-In-Cell (PIC) simulations are commonly used in the field of computational accelerator physics for modelling the interaction of electromagnetic fields and charged particle beams in complex accelerator geometries. However, the practicability of the method for real-world simulations is often limited by the huge size of accelerator devices and by the large number of computational particles needed to obtain accurate simulation results. Parallelization of the computations therefore becomes necessary to solve such problems in a reasonable time. Different algorithms are presented that enable efficient parallel simulation by keeping the computational workload equally distributed across the processes while minimizing interprocess communication. These include some already known approaches based on domain decomposition techniques as well as novel schemes; a generic sketch of the load-balancing idea follows below. The performance of the algorithms is studied in different computational environments with simulation examples, including a full 3D simulation of the PITZ injector [*].
* A. Oppelt et al., "Status and First Results from the Upgraded PITZ Facility," Proc. FEL 2005. |
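As a generic illustration of the load-balancing idea discussed above (not one of the paper's specific schemes): given a histogram of particle counts along the split axis, slab boundaries can be chosen so that each process receives roughly the same number of particles.

    // Generic 1-D load-balanced slab decomposition: choose cut positions
    // so that each of 'nproc' slabs holds roughly equal particle counts.
    // Illustrative only; the paper's actual schemes are more elaborate.
    #include <cstddef>
    #include <vector>

    std::vector<std::size_t> balance_slabs(const std::vector<long>& hist,
                                           int nproc) {
        long total = 0;
        for (long h : hist) total += h;
        std::vector<std::size_t> cuts;   // cell index where each slab ends
        long acc = 0;
        int next = 1;
        for (std::size_t i = 0; i < hist.size() && next < nproc; ++i) {
            acc += hist[i];
            if (acc >= next * total / nproc) {
                cuts.push_back(i + 1);   // slab 'next-1' ends after cell i
                ++next;
            }
        }
        cuts.push_back(hist.size());     // last slab ends at the grid edge
        return cuts;
    }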
|
Slides
|
|
|
||
THM2IS01 | Accelerator Description Formats | lattice, controls, background, quadrupole | 297 | |||||
|
Being an integral part of accelerator software, an accelerator description aims to provide an external representation of an accelerator's internal model and associated effects. As a result, the choice of description formats is driven by the scope of accelerator applications and is usually implemented as a tradeoff between various requirements: completeness and extensibility, user and developer orientation, and others. Moreover, an optimal solution does not remain static but instead evolves with new project tasks and computer technologies. This talk presents an overview of several approaches, the evolution of accelerator description formats, and a comparison with similar efforts in the neighboring high-energy physics domain. Following the UAL Accelerator-Algorithm-Probe pattern, we conclude with the next logical specification, the Accelerator Propagator Description Format (APDF), which provides a flexible approach for associating physical elements with the evolution algorithms most appropriate for the immediate tasks.
|
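To make the Accelerator-Algorithm-Probe separation concrete, here is a hedged C++ sketch of the three roles; all class names are invented for illustration and are not UAL's actual interfaces.

    // Schematic rendering of the Accelerator-Algorithm-Probe separation:
    // the lattice element (Accelerator), the tracking method (Algorithm /
    // Propagator), and the tracked state (Probe) are kept independent, so
    // a description format can bind elements to propagators per task.
    #include <memory>
    #include <string>
    #include <vector>

    struct Probe {                       // tracked state, e.g. 6-D coords
        std::vector<double> coords;      // {x, x', y, y', ...}
    };

    struct Element {                     // physical lattice element
        std::string name;
        double length;
    };

    struct Propagator {                  // evolution algorithm
        virtual void propagate(const Element& e, Probe& p) const = 0;
        virtual ~Propagator() = default;
    };

    struct DriftLinearMap : Propagator { // one possible binding
        void propagate(const Element& e, Probe& p) const override {
            p.coords[0] += e.length * p.coords[1];   // x  += L * x'
            p.coords[2] += e.length * p.coords[3];   // y  += L * y'
        }
    };

    // An APDF-like association: element name -> chosen propagator.
    struct Binding {
        std::string element;
        std::shared_ptr<Propagator> algo;
    };

An APDF-like document would then simply enumerate such bindings, letting the same lattice be tracked with linear maps, symplectic integrators or Taylor maps depending on the task at hand.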
|
|
Slides
|
|
|
||
THM2IS03 | CST's Commercial Beam-Physics Codes | controls, emittance, impedance, electromagnetic-fields | 308 | |||||
|
During the past decades, particle accelerators have grown to higher and higher complexity and cost, so that a careful analysis and understanding of the machines' behaviour becomes more and more important. CST offers user-friendly numerical simulation tools for the accurate analysis of electromagnetic fields in combination with charged particles, including basic thermal analysis. The CST STUDIO SUITE code family is the direct successor of the code MAFIA, combining the numerical accuracy of the Finite Integration Technique and the Perfect Boundary Approximation within an intuitive, easy-to-use CAD environment. Automatic parameter sweeping and optimization are available to achieve and control the design goals. In this paper, various solver modules of CST PARTICLE STUDIO, CST EM STUDIO and CST MICROWAVE STUDIO will be presented along with accelerator-relevant examples, such as: |
|
|
Slides
|
|
|
||
THMPMP02 | Adaptive 2-D Vlasov Simulation of Particle Beams | heavy-ion, emittance, focusing, lattice | 310 | |||||
|
In order to address the noise problems occurring in Particle-In-Cell (PIC) simulations of intense particle beams, we have been investigating numerical methods based on the solution of the Vlasov equation on a phase-space grid. However, especially for high-intensity beam simulations in periodic or alternating-gradient focusing fields, where particles are localized in phase space, adaptive strategies are required to obtain computationally efficient codes based on this method. To this end, we have been developing fully adaptive techniques based on interpolating wavelets, where the computational grid is changed at each time step according to the variations of the particle distribution function. Until now, only an adaptive axisymmetric code was available. In this talk, we present a new adaptive code solving the paraxial Vlasov equation on the full 4D transverse phase space, which can handle truly two-dimensional problems such as alternating-gradient focusing. In order to implement this code efficiently, we introduce a hierarchical sparse data structure, which enabled us to considerably reduce not only the computation time but also the required memory. All computations and diagnostics are performed on the sparse data structure, so that the complexity becomes proportional to the number of points needed to describe the particle distribution function.
|
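One possible realization of such a hierarchical sparse structure is a hash map keyed by refinement level and cell index, as sketched below in C++. This is an illustrative assumption; the authors' actual data structure may differ.

    // Minimal sparse hierarchical grid: store only the wavelet/grid
    // coefficients that survive thresholding, keyed by (level, index).
    // One possible realization, for illustration only.
    #include <cstdint>
    #include <unordered_map>

    struct Key {
        std::uint32_t level;             // refinement level
        std::uint64_t index;             // flattened 4-D cell index
        bool operator==(const Key& o) const {
            return level == o.level && index == o.index;
        }
    };

    struct KeyHash {
        std::size_t operator()(const Key& k) const {
            return std::hash<std::uint64_t>()(
                k.index ^ (std::uint64_t(k.level) << 56));
        }
    };

    using SparseGrid = std::unordered_map<Key, double, KeyHash>;

    // Keep a coefficient only if it exceeds the wavelet threshold eps, so
    // storage scales with the support of the distribution function.
    inline void insert_if_significant(SparseGrid& g, Key k, double coeff,
                                      double eps) {
        if (coeff > eps || coeff < -eps) g[k] = coeff;
    }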
|
|
Slides
|
|
|
||
THMPMP03 | Accelerator Modeling under SciDAC: Meeting the Challenges of Next-Generation Accelerator Design, Analysis, and Optimization. | electron, plasma, space-charge, booster | 315 | |||||
|
Under the US DOE Scientific Discovery through Advanced Computing (SciDAC) initiative, a new generation of parallel simulation codes has been developed to meet the most demanding accelerator modeling problems of the DOE Office of Science (DOE/SC). Originally sponsored by DOE/SC's Office of High Energy Physics in collaboration with the Office of Advanced Scientific Computing Research, the new simulation capabilities have also been applied to other DOE projects and to international projects as well. The new software has been applied to many projects, including the Tevatron, PEP-II, the LHC, the ILC, the Fermilab Booster, SNS, the J-PARC project, the CERN SPL, many photoinjectors, and the FERMI@Elettra project. Codes have also been developed to model laser wakefield accelerators and plasma wakefield accelerators; these codes are being used both in support of advanced accelerator experiments and to provide insight into the physics of ultra-high-gradient accelerators. In this talk I will provide an overview of the computational capabilities that have been developed under our SciDAC project and describe our plans for code development under the next phase of SciDAC.
|
|
|
Slides
|
|
|