
Keyword: diagnostics

Paper Title Other Keywords Page
MOM2IS02 Large Scale Parallel Wake Field Computations for 3D-Accelerator Structures with the PBCI Code simulation, vacuum, electromagnetic-fields, electron 29
 
  • E. Gjonaj, X. Dong, R. Hampel, M. Kärkkäinen, T. Lau, W. F.O. Müller, T. Weiland
    TEMF, Darmstadt
  Funding: This work was partially funded by EUROTeV (RIDS-011899), EUROFEL (RIDS-011935), DFG (1239/22-3) and DESY Hamburg

The X-FEL project and the ILC require a high-quality beam with ultra-short electron bunches. In order to predict the beam quality in terms of both single-bunch energy spread and emittance, an accurate estimation of the short-range wake fields in the TESLA cryomodules, collimators and other geometrically complex accelerator components is necessary. We have previously presented wake field computations for short bunches in rotationally symmetric components with the code ECHO. Most of the wake field effects in the accelerator, however, are due to geometrical discontinuities appearing in fully three-dimensional structures. For the purpose of simulating such structures, we have developed the Parallel Beam Cavity Interaction (PBCI) code. The new code is based on the full field solution of Maxwell's equations in the time domain for ultra-relativistic current sources. Using a specialized directional-splitting technique, PBCI produces particularly accurate results in wake field computations, due to the dispersion-free integration of the discrete equations in the direction of bunch motion. One of the major challenges when simulating fully three-dimensional accelerator components is the huge computational effort needed to resolve both the geometrical details and the bunch dimensions on the computational grid. For this reason, PBCI implements massive parallelization in a distributed memory environment, based on a flexible domain decomposition method. In addition, PBCI uses the moving-window technique, which is particularly well suited for wake potential computations in very long structures. As a particular example of such a structure, simulation results for a complete module of TESLA cavities with eight cells each, excited by a µm bunch, will be given.
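
As a concrete picture of the two ingredients named above, the following toy sketch (not PBCI code; the grid sizes and bunch profile are invented for illustration) shows how the "magic" time step c*dt = dz makes the longitudinal update dispersion-free, and how a moving window stores only the cells around the bunch:

```python
# Toy sketch of a co-moving window with the "magic" time step c*dt = dz.
# All parameters are illustrative, not PBCI's actual discretization.
import numpy as np

c = 299_792_458.0
dz = 1.0e-3                  # longitudinal cell size [m]
dt = dz / c                  # magic time step: one cell per step (CFL = 1)

n_cells = 400                # only these cells around the bunch are stored
s = np.arange(n_cells) * dz  # window coordinate along the beam axis
# Rigid (v ~ c) bunch profile, fixed in window coordinates (illustrative width):
bunch = np.exp(-0.5 * ((s - s.mean()) / (10 * dz)) ** 2)

field = 0.1 * bunch          # stand-in for some radiated wake pattern

def advance(f):
    """One step in window coordinates with c*dt = dz: anything slower than
    the bunch slips back by exactly one cell, so the shift-by-one update is
    exact, i.e. free of numerical dispersion along the direction of motion."""
    out = np.empty_like(f)
    out[:-1] = f[1:]         # slip backward by one cell
    out[-1] = 0.0            # fresh, field-free cell enters at the window head
    return out

for _ in range(1000):        # the window advances 1 m through the structure
    field = advance(field)
```

With CFL = 1 the shift-by-one update introduces no numerical dispersion, which mirrors the property the directional splitting is designed to preserve in 3D.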

 
 
WEMPMP02 Wish-List for Large-Scale Simulations for Future Radioactive Beam Facilities simulation, ion, heavy-ion, linac 170
 
  • J. A. Nolen
    ANL, Argonne, Illinois
  Funding: This work is supported by the U. S. Department of Energy under contract W-31-109-Eng-38.

As accelerator facilities become more complex and demanding and computational capabilities become ever more powerful, there is the opportunity to develop and apply very large-scale simulations to dramatically increase the speed and effectiveness of many aspects of the design, commissioning, and finally the operational stages of future projects. Next-generation radioactive beam facilities are particularly demanding and stand to benefit greatly from large-scale, integrated simulations of essentially all aspects and components. These demands stem from the increased complexity of the facilities, which will involve, for example, multiple-charge-state heavy-ion acceleration, stringent limits on beam halo and losses from high-power beams, thermal problems due to high power densities in targets and beam dumps, and radiological issues associated with component activation and radiation damage. Currently, many of the simulations that are necessary for design optimization are done by different codes, and even separate physics groups, so that the process proceeds iteratively for the different aspects. There is a strong need, for example, to couple the beam dynamics simulation codes with the radiological and shielding codes so that an integrated picture of their interactions emerges seamlessly and trouble spots in the design are identified easily. This integration is especially important in magnetic devices such as heavy-ion fragment separators that are subject to radiation and thermal damage. For complex, high-power accelerators there is also the need to fully integrate the control system and beam diagnostics devices with a real-time beam dynamics simulation to keep the tunes optimized without the need for continuous operator feedback. This will most likely require on-line peta-scale computer simulations. The ultimate goal is to optimize performance while increasing the cost-effectiveness and efficiency of both the design and operational stages of future facilities.
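
The closed loop envisioned in the last point, diagnostics feeding an online model that steers the machine, can be pictured with a toy sketch. Everything below (the linear tune response, the gain, and the noise model) is invented for illustration and merely stands in for a real machine and control system:

```python
# Toy model-based tune feedback loop: measure the tune, ask a (here trivial)
# online model for a quadrupole correction, apply it, repeat. Hypothetical
# numbers throughout; a real loop would use live diagnostics and a full model.
import random

TARGET_TUNE = 6.15     # illustrative design tune
GAIN = 0.5             # proportional feedback gain
DTUNE_DK = 2.0         # model's tune response to quad strength (illustrative)

quad_k = 0.0           # corrector quadrupole strength (toy machine state)

def read_measured_tune():
    # Toy diagnostics: the "machine" tune has a 0.02 drift plus noise and
    # responds linearly to the corrector, standing in for real BPM data.
    return TARGET_TUNE + 0.02 + DTUNE_DK * quad_k + random.gauss(0, 1e-3)

for step in range(20):
    error = read_measured_tune() - TARGET_TUNE
    quad_k -= GAIN * error / DTUNE_DK     # model-based correction
    print(f"step {step:2d}: tune error = {error:+.4f}")
```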

 
slides icon Slides  
 
WEPPP07 Phase Space Tomography Diagnostics at the PITZ Facility emittance, simulation, electron, quadrupole 194
 
  • G. Asova, S. Khodyachykh, M. Krasilnikov, F. Stephan
    DESY Zeuthen, Zeuthen
  • P. Castro, F. Loehl
    DESY, Hamburg
  Funding: This work has partly been supported by the European Community, contract 011935 (EUROFEL)

A high phase-space density of the electron beam is obligatory for the successful operation of a Self-Amplified Spontaneous Emission Free Electron Laser (SASE FEL). Detailed knowledge of the phase-space density distribution is thus very important for characterizing the performance of the electron sources used. The Photo Injector Test Facility at DESY in Zeuthen (PITZ) is built to develop, operate and optimize electron sources for FELs. Currently a tomography module for PITZ is under design as part of the ongoing upgrade of the facility. This contribution studies the performance of the tomography module. Errors in the beam size measurements and their contribution to the calculated emittance will be studied using simulated data. As a practical application, the Maximum Entropy Algorithm (MENT) will be used to reconstruct the data generated by an ASTRA simulation.
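
The geometric idea behind such a tomography module can be sketched simply: quadrupole scans rotate the transverse phase space, and measured 1D profiles at several rotation angles are combined into a 2D (x, x') density. The toy below uses plain, unfiltered back-projection purely to show this geometry; MENT replaces the back-projection step with an entropy-maximizing iteration. The grid, angles, and test beam are all illustrative:

```python
# Toy phase-space tomography: project a known 2D density onto rotated axes
# (the "measured profiles"), then back-project to get a crude reconstruction.
import numpy as np

N = 64
ax = np.linspace(-3, 3, N)
X, XP = np.meshgrid(ax, ax)                          # (x, x') phase-space grid
true_rho = np.exp(-0.5 * (X**2 + (XP - 0.5 * X)**2))  # tilted test beam

angles = np.deg2rad(np.arange(0, 180, 15))           # 12 projection angles
bins = np.linspace(-4.5, 4.5, N + 1)
recon = np.zeros_like(true_rho)

for theta in angles:
    u = X * np.cos(theta) + XP * np.sin(theta)       # rotated coordinate
    # 1D "beam profile" along the rotated axis:
    profile, _ = np.histogram(u.ravel(), bins=bins, weights=true_rho.ravel())
    # Smear (back-project) that profile uniformly along its projection rays:
    idx = np.clip(np.digitize(u, bins) - 1, 0, N - 1)
    recon += profile[idx]

recon /= recon.sum()                                 # normalized estimate
```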

 
 
WEA2IS01 Status and Future Developments in Large Accelerator Control Systems controls, collider, site, linear-collider 239
 
  • K. S. White
    Jefferson Lab, Newport News, Virginia
  Funding: This work was supported by DOE contract DE-AC05-06OR23177, under which Jefferson Science Associates, LLC operates Jefferson Lab.

Over the years, accelerator control systems have evolved from small hardwired systems to complex computer-controlled systems with many types of graphical user interfaces and electronic data processing. Today's control systems often include multiple software layers, hundreds of distributed processors, and hundreds of thousands of lines of code. While it is clear that the next generation of accelerators will require much bigger control systems, it will also need better ones. Advances in technology will be needed to ensure that the network bandwidth and CPU power can provide reasonable update rates and support the requisite timing systems. Beyond the scaling problem, next-generation systems face additional challenges due to growing cyber security threats and the likelihood that some degree of remote development and operation will be required. With a large number of components, the need for high reliability increases, and commercial solutions can play a key role towards this goal. Future control systems will operate more complex machines and need to present a well-integrated, interoperable set of tools with a high degree of automation. Consistency of data presentation and exception handling will contribute to efficient operations. From the development perspective, engineers will need to provide integrated data management at the beginning of the project and build adaptive software components around a central data repository. This will make the system maintainable and ensure consistency throughout the inevitable changes during the machine lifetime. Additionally, such a large project will require professional project management and disciplined use of well-defined engineering processes. Distributed project teams will make the use of standards, formal requirements, and design and configuration control vital. Success in building the control system of the future may hinge on how well we integrate commercial components and learn from best practices used in other industries.
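
A minimal sketch of the "central data repository" idea mentioned above, with a hypothetical device database and display component (all names and fields are invented): every software layer derives its configuration from a single repository, so a change there propagates consistently to all tools.

```python
# Hypothetical central device repository: components configure themselves
# from one source of truth instead of hard-coding labels, units, or limits.
DEVICE_DB = {
    "QUAD:INJ:01": {"type": "quadrupole", "units": "T/m", "max": 25.0},
    "BPM:INJ:01":  {"type": "bpm",        "units": "mm",  "max": 10.0},
}

class AdaptiveDisplay:
    """A display/archiver component that derives labels, units and alarm
    limits from the repository, so it adapts when the repository changes."""
    def __init__(self, name):
        self.name = name
        self.meta = DEVICE_DB[name]          # single source of truth

    def render(self, value):
        limit = " OVER LIMIT" if value > self.meta["max"] else ""
        return f"{self.name} = {value} {self.meta['units']}{limit}"

print(AdaptiveDisplay("QUAD:INJ:01").render(26.3))
```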

 
 
WEA3MP03 Benchmarking of Space Charge Codes Against UMER Experiments space-charge, simulation, electron, cathode 263
 
  • R. A. Kishek, G. Bai, B. L. Beaudoin, S. Bernal, D. W. Feldman, R. B. Fiorito, T. F. Godlove, I. Haber, P. G. O'Shea, C. Papadopoulos, B. Quinn, M. Reiser, D. Stratakis, D. F. Sutter, J. C.T. Thangaraj, K. Tian, M. Walter, C. Wu
    IREAP, College Park, Maryland
  Funding: This work is funded by US Dept. of Energy and by the US Dept. of Defense Office of Naval Research.

The University of Maryland Electron Ring (UMER) is a scaled electron recirculator using low-energy (10 keV) electrons to maximize the space charge forces for beam dynamics studies. We have recently circulated in UMER the highest-space-charge beam in a ring to date, achieving a breakthrough both in the number of turns and in the amount of current propagated. As of the time of submission, we have propagated 5 mA for at least 10 turns, and, with some loss, for over 50 turns, meaning about 0.5 nC of electrons survive for 10 microseconds. This makes UMER an attractive candidate for benchmarking space charge codes in regimes of extreme space charge. This talk will review the UMER design and available diagnostics, provide examples of benchmarking the particle-in-cell code WARP on UMER data, and give an overview of the detailed information on our website. An open dialogue with interested code developers is solicited.
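
The quoted figures are mutually consistent, as the short check below shows. The 11.52 m circumference and the roughly 100 ns pulse length are typical published UMER parameters, assumed here because the abstract does not state them:

```python
# Consistency check of the abstract's numbers (assumed UMER parameters noted).
import math

c = 299_792_458.0
E_kin_eV, m_e_eV = 10e3, 510_998.95          # 10 keV electrons
gamma = 1 + E_kin_eV / m_e_eV
beta = math.sqrt(1 - 1 / gamma**2)           # ~0.195

circumference = 11.52                        # [m], assumed UMER value
T_rev = circumference / (beta * c)           # ~197 ns per turn
print(f"50 turns ~ {50 * T_rev * 1e6:.1f} us")        # ~9.8 us, i.e. "10 us"

I_beam, t_pulse = 5e-3, 100e-9               # 5 mA, ~100 ns pulse (assumed)
print(f"bunch charge ~ {I_beam * t_pulse * 1e9:.2f} nC")  # ~0.5 nC
```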

 
 
THM1MP02 Parallel Particle-In-Cell (PIC) Codes simulation, electron, emittance, laser 290
 
  • F. Wolfheimer, E. Gjonaj, T. Weiland
    TEMF, Darmstadt
  Funding: This work has been partially supported by DESY Hamburg.

Particle-In-Cell (PIC) simulations are commonly used in the field of computational accelerator physics for modelling the interaction of electromagnetic fields and charged particle beams in complex accelerator geometries. However, the practicability of the method for real-world simulations is often limited by the huge size of accelerator devices and by the large number of computational particles needed to obtain accurate simulation results. Thus, parallelization of the computations becomes necessary to permit the solution of such problems in a reasonable time. Different algorithms that allow for efficient parallel simulation by preserving an equal distribution of the computational workload across the processes while minimizing interprocess communication are presented. These include some already known approaches based on a domain decomposition technique as well as novel schemes. The performance of the algorithms is studied in different computational environments with simulation examples including a full 3D simulation of the PITZ injector [*].

* A. Oppelt et al., "Status and First Results from the Upgraded PITZ Facility," Proc. FEL 2005.
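
The core of the workload-balancing idea alluded to above can be sketched generically: split the grid so that each process owns roughly the same number of particles (the dominant PIC cost) rather than the same number of cells. The sketch below illustrates that idea under those assumptions; it is not code from the paper:

```python
# Workload-balanced 1D slab decomposition for PIC: choose cut positions so
# cumulative particle counts, not cell counts, are equal across processes.
import numpy as np

def balanced_cuts(particles_per_cell, n_procs):
    """Return slab boundaries such that the cumulative particle count is
    as equal as possible across the n_procs processes."""
    cum = np.cumsum(particles_per_cell, dtype=float)
    targets = cum[-1] * np.arange(1, n_procs) / n_procs
    cuts = np.searchsorted(cum, targets) + 1   # first cell after each cut
    return [0, *cuts.tolist(), len(particles_per_cell)]

# Example: a beam concentrated in the middle of a 1000-cell domain, where a
# naive equal-cell split would leave two processes nearly idle.
load = np.exp(-0.5 * ((np.arange(1000) - 500) / 60.0) ** 2)
bounds = balanced_cuts(load, n_procs=4)
for p in range(4):
    lo, hi = bounds[p], bounds[p + 1]
    print(f"rank {p}: cells [{lo:4d}, {hi:4d})  load = {load[lo:hi].sum():.1f}")
```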

 