| Paper | Title | Other Keywords | Page |
|---|---|---|---|
| WCO205 | Upgrade of SACLA DAQ System Adapts to Multi-Beamline Operation | controls, experiment, operation, laser | 22 |
|
We report on the data acquisition (DAQ) system for user experiments at SACLA (the SPring-8 Angstrom Compact Free Electron Laser). The system has provided a standardized experimental framework to a variety of XFEL users since March 2012. It is required to store shot-by-shot information synchronized with the XFEL beam at a maximum repetition rate of 60 Hz. The data throughput reaches 6 Gbps with TOF waveforms and/or images (e.g. X-ray diffraction images) from experiments. The data are stored in a hierarchical storage system with a capacity of more than 6 PB at the last stage. The DAQ system incorporates prompt data processing, performed by a 14 TFlops PC cluster, as well as on-line monitoring. In 2014, SACLA will introduce a third beamline to increase its experimental capacity. On the DAQ side, operating multiple experiments simultaneously is a challenge. The control and data streams will be duplicated and separated per beamline. A new central server that manages the conditions of each beamline in one place will help increase the efficiency of setup procedures and reduce the risk of mishandling between beamlines.

Slides WCO205 [1.472 MB]
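The throughput and repetition-rate figures above imply a fixed per-shot data budget, which is easy to verify. This is a back-of-the-envelope check with variable names of our own, not code from the paper:

```python
# Per-shot data budget implied by the abstract's figures:
# 6 Gbit/s sustained throughput at a 60 Hz shot rate.
THROUGHPUT_BPS = 6e9   # bits per second
SHOT_RATE_HZ = 60      # XFEL maximum repetition rate

bits_per_shot = THROUGHPUT_BPS / SHOT_RATE_HZ
megabytes_per_shot = bits_per_shot / 8 / 1e6

print(megabytes_per_shot)  # 12.5 MB available per shot for waveforms/images
```

So every shot-tagged event (waveforms plus images) must fit, on average, in about 12.5 MB to sustain the quoted 6 Gbps stream.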
| WPO017 | IFMIF EVEDA RFQ Local Control System to Power Tests | controls, EPICS, rfq, software | 69 |
|
In the IFMIF EVEDA project, a normal-conducting Radio Frequency Quadrupole (RFQ) is used to bunch and accelerate a 130 mA steady beam to 5 MeV. The RFQ cavity is divided into three structures, named super-modules. Each super-module is divided into 6 modules, for a total of 18 modules in the overall structure. The final three modules have to be tested at high power to validate the most critical RF components of the RFQ cavity and the control system itself. The last three modules were chosen because they will operate in the most demanding conditions in terms of power density (100 kW/m) and surface electric field (1.8*Ekp). The Experimental Physics and Industrial Control System (EPICS) environment [1] provides the framework to control any equipment connected to it. This paper reports the application of this framework to the RFQ power tests at the Legnaro National Laboratories [2].
[1] http://www.aps.anl.gov/epics [2] http://www.lnl.infn.it/~epics
| WPO021 | Renovation of PC-based Console System for J-PARC Main Ring | operation, controls, EPICS, GUI | 81 |
|
The console system for the J-PARC Main Ring (MR) was designed in 2007 and has been used for accelerator commissioning and operation since then. It was composed of 20 diskless thin clients and 10 terminal servers, both PC-based computers running Scientific Linux (SL) as their operating system. Based on operational experience with the thin clients, migration to ordinary fat clients was planned in 2013, triggered by the update from SL4 to SL6. The Intel NUC was selected as a result of a preliminary investigation, and its evaluation was carried out successfully during MR commissioning. Presently, 10 thin clients have been replaced by fat clients. The migration scenario and techniques for managing the fat clients are discussed.
| WPO027 | The Measurement and Monitoring of Spectrum and Wavelength of Coherent Radiation at Novosibirsk Free Electron Laser | radiation, FEL, controls, operation | 96 |
|
The architecture and capabilities of the radiation spectrum measurement system of the Novosibirsk free electron laser are described in detail in this paper. The measurements are performed with a monochromator and a stepper motor carrying a radiation power sensor. As the result of a measurement, the radiation spectrum curve is transmitted to the control computer. Since this subsystem is fully integrated into the common FEL control system, the measurement results (the spectrum graph, the average wavelength and the calculated radiation power) can be transmitted to any other computer on the FEL control local area network, as well as to user station computers.

Poster WPO027 [2.250 MB]
| WPO032 | Magnet Measurement System Upgrade at PSI | controls, EPICS, software, operation | 111 |
|
The magnet measurement system at the Paul Scherrer Institute (PSI) was significantly upgraded in the last few years. At the moment, it consists of automated Hall probe, rotating wire, and vibrating wire setups, which form a very efficient magnet measurement facility. The paper concentrates on the automation hardware and software implementation, which has made it possible not only to significantly increase the performance of the magnet measurement facility at PSI, but also to simplify magnet measurement data handling and processing.

Poster WPO032 [1.313 MB]
| WPO034 | Network Architecture at Taiwan Photon Source of NSRRC | controls, EPICS, monitoring, photon | 117 |
|
A robust, secure and high-throughput network is necessary for the 3 GeV Taiwan Photon Source (TPS) at NSRRC. The NSRRC network is divided into several subnets according to functionality: CS-LAN, ACC-LAN, SCI-LAN, NSRRC-LAN and INFO-LAN, serving instrument control, accelerator subsystems, beamline users, office users and the information-office servers, respectively. Each LAN is connected through the core switch by a routing protocol to avoid traffic interference. Subsystem subnets connect to the control system via EPICS-based channel-access gateways that forward data. Outside traffic is blocked by a firewall to ensure the independence of the control system network (CS-LAN). Various network management tools and machines are used for maintenance and troubleshooting. The network architecture, cabling topology and maintainability are described in this report.

Poster WPO034 [1.847 MB]
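Channel-access gateways of the kind mentioned above are typically configured with a PV list that restricts which process variables a subnet may see. A minimal sketch of such a file, in the style of the EPICS CA Gateway's `pvlist` format, might look as follows; the `TPS:` prefix and the specific rules are our illustration, not NSRRC's actual configuration:

```
# gateway.pvlist -- deny everything, then allow the TPS control PVs
EVALUATION ORDER DENY, ALLOW
.*        DENY
TPS:.*    ALLOW
```

The effect is that clients on the subsystem subnet can only reach PVs whose names match the allowed prefix, which is how the gateway isolates CS-LAN traffic while still forwarding data.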
| WPO038 | A Modular Personnel Safety System for VELA based on Commercial Safety Network Controllers | operation, controls, electron, laser | 123 |
|
STFC Daresbury Laboratory has recently commissioned VELA (Versatile Electron Linear Accelerator), a high-performance electron beam test facility. It will be used to deliver high-quality, short-pulse electron beams to industrial users to aid in the development of new products in the fields of health care, security, energy and waste processing, and also to develop and test novel compact accelerator technologies. In the early stages of the design it was decided to use commercial Safety Network Controllers and I/O to implement the Personnel Safety System, in place of the electro-mechanical relay-based systems used on previous projects. This provides a high-integrity, low-cost solution while also allowing the design to be modular, programmable and easily expandable. This paper describes the design and realisation of the VELA Personnel Safety System and considers its future development. In addition, the application of the system to the protection of high-power laser systems and medical accelerators will also be discussed.
| TCO102 | Eplanner Software for Machine Activities Management | software, operation, database, synchrotron | 129 |
|
For Indus-2, a 2.5 GeV synchrotron radiation source operational at Indore, India, the need was felt for software to manage the various related activities easily, avoiding communication gaps among the crew members and clearly bringing out the communications important for machine operation. Typical requirements were to enter and display daily, weekly and longer-term operational calendars, to convey system-specific and machine-operation-related standing instructions, and to log and track the faults occurring during operations and the follow-up actions on the faults logged. Overall, the need was for a system to easily manage the many jobs related to planning the day-to-day operations of a national facility. The paper describes such a web-based system, which has been developed, is in regular use and has been found extremely useful.

Slides TCO102 [5.439 MB]
| TCO304 | Launching the FAIR Timing System with CRYRING | timing, controls, software, hardware | 155 |
|
During the past two years, significant progress has been made on the development of the General Machine Timing system for the upcoming FAIR facility at GSI. The prime features are time synchronization of 2000-3000 nodes using the White Rabbit Precision Time Protocol (WR-PTP), distribution of International Atomic Time (TAI) timestamps, and synchronized command and control of FAIR control system equipment. A White Rabbit network has been set up connecting parts of the existing facility, and a next version of the Timing Master has been developed. Timing Receiver nodes in the form factors Scalable Control Unit (the standard front-end controller for FAIR), VME, PCIe and standalone have been developed. CRYRING is the first machine on the GSI/FAIR campus to be operated with this new timing system and serves as a test ground for the complete control system. Installation of equipment starts in late spring, followed by commissioning in summer 2014.

Slides TCO304 [7.818 MB]
| FPO001 | InfiniBand Interconnects for High-Throughput Data Acquisition in a TANGO Environment | TANGO, controls, interface, software | 164 |
|
Advances in computational performance allow for fast image-based control. To realize efficient control loops in a distributed experiment setup, large amounts of data need to be transferred, requiring high-throughput networks with low latencies. In the European synchrotron community, TANGO has become one of the prevalent tools to remotely control hardware and processes. In order to improve the data bandwidth and latency in a TANGO network, we realized a secondary data channel based on native InfiniBand communication. This data channel is implemented as part of a TANGO device and by itself is independent of the main TANGO network communication. TANGO mechanisms are used for configuration, thus the data channel can be used by any TANGO-based software that implements the corresponding interfaces. First results show that we can achieve a maximum bandwidth of 30 Gb/s, which is close to the theoretical maximum of 32 Gb/s possible with our 4xQDR InfiniBand test network, with average latencies as low as 6 μs. This means that we are able to surpass the limitations of standard TCP/IP networks while retaining the TANGO control schemes, enabling high data throughput in a TANGO environment.

Slides FPO001 [0.511 MB]

Poster FPO001 [3.767 MB]
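The 32 Gb/s theoretical maximum quoted in the abstract follows directly from the link parameters: a 4x QDR InfiniBand link runs four lanes at 10 Gb/s signalling with 8b/10b line coding. A quick check, with names of our own choosing:

```python
# Where the abstract's 32 Gb/s "theoretical maximum" comes from:
# 4x QDR InfiniBand = 4 lanes at 10 Gb/s signalling, 8b/10b line coding.
LANES = 4
SIGNALLING_GBPS = 10            # QDR: 10 Gb/s per lane on the wire
LINE_CODE_EFFICIENCY = 8 / 10   # 8b/10b encoding: 8 data bits per 10 line bits

effective_gbps = LANES * SIGNALLING_GBPS * LINE_CODE_EFFICIENCY
print(effective_gbps)  # 32.0 Gb/s of usable data bandwidth
```

The measured 30 Gb/s is therefore about 94% of what the physical link can carry after line-coding overhead.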
| FPO014 | New Data Archive System for SPES Project Based on EPICS RDB Archiver with PostgreSQL Backend | EPICS, controls, database, hardware | 191 |
|
The SPES project [1] is an ISOL facility under construction at INFN, Laboratori Nazionali di Legnaro, which requires the integration of the accelerator systems currently in use with the new line composed of the primary beam and the ISOL target. As a consequence, a migration from the present control system to a new one based on EPICS [2] is mandatory to realize a distributed control network for the new facility. One of the first services implemented for this purpose is the archiver system, an important service required for experiments. Drawing on information and experience provided by other laboratories, an EPICS Archive System [3] based on PostgreSQL was implemented to provide this service. Preliminary tests were performed on dedicated hardware, following the project requirements. After these tests, which were used to determine a good configuration for the database and the EPICS application, the system is going to be moved into production, where it will be integrated with the first subsystem upgraded to EPICS. Dedicated customizations were made to the application to provide a simple user experience in managing and interacting with the archiver system.
[1] https://web.infn.it/spes [2] http://www.aps.anl.gov/epics [3] http://sourceforge.net/apps/trac/cs-studio/wiki/RDBArchive
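The storage pattern of an RDB archiver, one row per archived sample keyed by channel and timestamp, can be sketched with an in-memory database. The table layout, PV name and sample values below are simplified illustrations of the idea, not the actual CS-Studio RDB schema:

```python
import sqlite3

# Illustrative sketch of an RDB-archiver storage pattern: a channel table
# mapping PV names to ids, and a sample table holding timestamped values.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE channel (channel_id INTEGER PRIMARY KEY,
                                    name TEXT UNIQUE)""")
db.execute("""CREATE TABLE sample (channel_id INTEGER,
                                   smpl_time REAL,
                                   float_val REAL)""")
db.execute("INSERT INTO channel VALUES (1, 'SPES:RFQ:Pressure')")
db.executemany("INSERT INTO sample VALUES (?, ?, ?)",
               [(1, t, 1e-7 * (1 + 0.01 * t)) for t in range(10)])

# Typical retrieval: the last archived value of a channel, looked up by name.
row = db.execute("""SELECT s.smpl_time, s.float_val
                    FROM sample s JOIN channel c USING (channel_id)
                    WHERE c.name = 'SPES:RFQ:Pressure'
                    ORDER BY s.smpl_time DESC LIMIT 1""").fetchone()
print(row)
```

In production the backend is PostgreSQL rather than SQLite, but the query pattern (join channel to sample, filter by name, order by time) is the same one the archiver's retrieval tools rely on.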
| FPO015 | Device Control Database Tool (DCDB) | EPICS, database, controls, Linux | 194 |
|
Funding: This project has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement no 289485.

In a physics facility containing numerous instruments, it is advantageous to reduce the amount of effort and repetitive work needed for changing the control system (CS) configuration: adding new devices, moving instruments from beamline to beamline, etc. We have developed a CS configuration tool, which provides an easy-to-use interface for quick configuration of the entire facility. It uses Microsoft Excel as the front-end application and allows the user to quickly generate and deploy IOC configurations (EPICS start-up scripts, alarm and archive configuration) onto IOCs, and to start, stop and restart IOCs, alarm servers, archive engines, etc. The DCDB tool utilizes a relational database, which stores information about all the elements of the accelerator. The communication between the client, the database and the IOCs is realized by a REST server written in Python. The key feature of the DCDB tool is that the user does not need to recompile the source code. This is achieved by using a dynamic library loader, which automatically loads and links device support libraries. The DCDB tool is compliant with ITER CODAC (used at ITER and ESS), but can also be used in any other EPICS environment.

Poster FPO015 [0.522 MB]
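A core step of what such a tool automates, rendering device records from the relational database into an EPICS start-up script, can be sketched as follows. The device rows, template names and PV prefixes are invented for illustration and are not DCDB's actual schema:

```python
# Sketch of start-up-script generation: device rows (as they might come
# back from the relational database) are rendered into an EPICS st.cmd.
devices = [
    {"name": "BPM01", "template": "bpm.db", "prefix": "LAB:BL1:BPM01"},
    {"name": "VAC01", "template": "vac.db", "prefix": "LAB:BL1:VAC01"},
]

def render_st_cmd(devices):
    """Render a minimal EPICS IOC start-up script for the given devices."""
    lines = ['dbLoadDatabase("dbd/ioc.dbd")']
    for dev in devices:
        # One dbLoadRecords call per device, parameterized by its PV prefix.
        lines.append('dbLoadRecords("db/%s", "P=%s")'
                     % (dev["template"], dev["prefix"]))
    lines.append("iocInit()")
    return "\n".join(lines)

script = render_st_cmd(devices)
print(script)
```

Regenerating the script from the database whenever a device is added or moved is what removes the repetitive hand-editing the abstract describes.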
| FPO022 | New Developments on the FAIR Data Master | controls, operation, timing, FPGA | 207 |
|
During the last year, a small-scale timing system was built with a first version of the Data Master. In this paper, we describe field-test progress as well as new design concepts and implementation details of the new prototype to be tested with the CRYRING accelerator timing system. The message management layer has been introduced as a hardware acceleration module for the timely dispatch of control messages. It consists of a priority queue for outgoing messages, combined with a scheduler and network load balancing. This noticeably loosens the real-time constraints on the CPUs composing the control messages, making the control firmware deterministic and much easier to construct. It further opens perspectives beyond the current virtual-machine-like implementation towards a specialized programming language for accelerator control. In addition, a streamlined and better-fitting model for beam production chains and cycles has been devised for use in the Data Master firmware. The worst-case processing execution time becomes completely calculable, enabling fixed time slices for safe multiplexing of cycles in all of the CPUs.

Slides FPO022 [0.890 MB]
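The priority-queue idea described above, dispatching queued control messages in deadline order regardless of the order in which the CPUs produced them, can be sketched in a few lines. The deadlines and message names are invented, and the real module is hardware, not Python:

```python
import heapq

# Outgoing control messages are queued as (dispatch_deadline, payload);
# the heap always yields the message with the earliest deadline, so the
# CPUs composing messages need not emit them in real-time order.
queue = []
heapq.heappush(queue, (120, "kicker trigger"))
heapq.heappush(queue, (40, "RF ramp start"))
heapq.heappush(queue, (80, "magnet setpoint"))

dispatched = [heapq.heappop(queue)[1] for _ in range(len(queue))]
print(dispatched)  # earliest deadline first
```

Because ordering is enforced at dispatch time, the firmware composing the messages only has to meet a much looser deadline, which is exactly the relaxation of real-time constraints the abstract claims.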
| FPO024 | First Idea on Bunch to Bucket Transfer for FAIR | timing, target, synchrotron, flattop | 210 |
|
The FAIR facility makes use of the General Machine Timing (GMT) system and the Bunch phase Timing System (BuTiS) to realize the synchronization of two machines. In order to realize the bunch-to-bucket transfer, the source machine first slightly detunes its RF frequency at its RF flattop. Second, the source and target machines exchange packets over the timing network shortly before the transfer and use the RF frequency-beat method to realize the synchronization between both machines with an accuracy better than 1°. The data in the packet include the RF frequency, the timestamp of the zero-crossing point of the RF signal, the harmonic number and the bunch/bucket position. Finally, both machines have all the information about each other and can calculate the coarse window and create announce signals for triggering the kickers.

Poster FPO024 [2.077 MB]
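The frequency-beat method exploits the fact that a small detuning makes the relative phase of the two ring RF signals slip through a full turn once per beat period, so phase coincidences suitable for transfer recur at a predictable rate. A sketch of that relation with invented example frequencies (the actual FAIR RF parameters are not given in the abstract):

```python
# With the source ring slightly detuned relative to the target ring, the
# relative RF phase slips at |f_source - f_target| turns per second, so a
# phase coincidence (a transfer opportunity) recurs once per beat period.
f_source = 1_200_000.0  # Hz, detuned source-ring RF (invented example)
f_target = 1_199_000.0  # Hz, target-ring RF (invented example)

beat_period = 1.0 / abs(f_source - f_target)  # time between coincidences
print(beat_period)  # 0.001 s: a coincidence every millisecond
```

The exchanged packet (frequency, zero-crossing timestamp, harmonic number, bucket position) gives each machine the phase reference it needs to predict where in the beat period the next coincidence falls, which is the coarse window for the kicker triggers.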
| FCO202 | OpenGL-Based Data Analysis in Virtualized Self-Service Environments | software, GPU, hardware, synchrotron | 237 |
|
Funding: Federal Ministry of Education and Research, Germany

Modern data analysis applications for 2D/3D data samples employ complex visual output features which are often based on OpenGL, a multi-platform API for rendering vector graphics. They demand special computing workstations with corresponding CPU/GPU power, enough main memory and fast network interconnects for performant remote data access. For this reason, users depend heavily on the availability of free workstations, both temporally and locally. The provision of virtual machines (VMs) accessible via a remote connection could avoid this inflexibility. However, the automatic deployment, operation and remote access of OpenGL-capable VMs with professional visualization applications is a non-trivial task. In this paper, we discuss a concept for a flexible analysis infrastructure that will be part of the ASTOR project ("Arthropod Structure revealed by ultra-fast Tomography and Online Reconstruction"). We present an Analysis-as-a-Service (AaaS) approach based on the on-demand allocation of VMs with dedicated GPU cores and a corresponding analysis environment to provide a cloud-like analysis service for scientific users.

Slides FCO202 [1.126 MB]