| Paper | Title | Other Keywords | Page |
|---|---|---|---|
| WCO101 | Drivers and Software for MicroTCA.4 | interface, Linux, hardware, LLRF | 1 |
The MicroTCA.4 crate standard provides a powerful electronic platform for digital and analogue signal processing. Besides excellent hardware modularity, it is the software reliability and flexibility, as well as the easy integration into existing software infrastructures, that will drive the widespread adoption of the new standard. The DESY MicroTCA.4 User Tool Kit (MTCA4U) comprises three main components: a Linux device driver, a C++ API for accessing the MicroTCA.4 devices, and a control system interface layer. The main focus of the tool kit is flexibility to enable fast development. The universal, expandable PCI Express driver and a register mapping library allow out-of-the-box operation of all MicroTCA.4 devices which carry firmware developed with the DESY FPGA board support package. The control system adapter provides callback functions to decouple the application code from the middleware layer. In this way the same business logic can be used at different facilities without further modification.
Slides WCO101 [0.760 MB]
| WCO102 | Controls Middleware for FAIR | framework, CORBA, software, operation | 4 |
With the FAIR complex, the control systems at GSI will face new scalability challenges due to the significant amount of new hardware coming with the new facility. Although the old systems have proven sustainable and reliable, they are based on technologies which became obsolete years ago. During the FAIR construction time and the associated shutdown, GSI will replace multiple components of the control system. The success in integrating CERN's FESA and LSA frameworks has moved GSI to extend the cooperation to the controls middleware, especially the Remote Device Access (RDA) and Java API for Parameter Control (JAPC) frameworks. However, the current version of RDA is based on CORBA technology, which can itself be considered obsolete. Consequently, it will be replaced by a newer version (RDA3), which will be based on ZeroMQ and will offer a new, improved API based on experience from previous usage. The collaboration between GSI and CERN shows that the new RDA is capable of complying with the requirements of both environments. In this paper we present the general architecture of the new RDA and describe its integration into the GSI control system.
Slides WCO102 [0.323 MB]
| WCO103 | Integration of New Power Supply Controllers in the Existing Elettra Control System | TANGO, power-supply, operation, interface | 7 |
The Elettra control system has been running since 1993. The controllers of the storage ring power supplies, still the original ones, have become obsolete and are no longer serviceable. A renewal to overcome these limitations is foreseen. A prototype of the new controllers, based on the BeagleBone embedded board and an in-house designed ADC/DAC carrier board, has been installed and tested in Elettra. A Tango device server running in the BeagleBone is in charge of controlling the power supply. In order to transparently integrate the new Tango-controlled power supplies with the existing Remote Procedure Call (RPC) based control system, a number of software tools have been developed, mostly in the form of Tango devices and protocol bridges. This approach allows us to keep using legacy machine physics programs while integrating the new Tango-based controllers and to carry out the upgrade gradually, with less impact on the machine operation schedule.
Slides WCO103 [1.228 MB]
| WCO201 | Computing Infrastructure for Online Monitoring and Control of High-throughput DAQ Electronics | detector, GPU, hardware, software | 10 |
New imaging stations with high-resolution pixel detectors and other synchrotron instrumentation have ever increasing sampling rates and put strong demands on the complete signal processing chain. Key to successful systems is a high-throughput computing platform consisting of DAQ electronics, PC hardware components, a communication layer, and system and data processing software components. Based on our experience building a high-throughput platform for real-time control of X-ray imaging experiments, we have designed a generalized architecture enabling rapid deployment of data acquisition systems. We have evaluated various technologies and arrived at a solution which can easily be scaled up to several gigabytes per second of aggregated bandwidth while utilizing reasonably priced mass-market products. The core components of our system are an FPGA platform for ultra-fast data acquisition, InfiniBand interconnects, and GPU computing units. The presentation gives an overview of the hardware, interconnects, and the system-level software serving as the foundation for this high-throughput DAQ platform. This infrastructure is already successfully used at KIT's synchrotron ANKA.
Slides WCO201 [2.948 MB]
| WCO202 | Data Management at the Synchrotron Radiation Facility ANKA | database, experiment, data-management, data-analysis | 13 |
The complete chain from submitting a proposal, collecting metadata and performing an experiment, to the analysis of these data and finally long-term archiving will be described. During this process a few obstacles have to be tackled. The workflow should be transparent to the user as well as to the beamline scientists. The final data will be stored in the NeXus-compatible HDF5 container format. Because the transfer of one large file is more efficient than transferring many small files, container formats enable a faster transfer of experiment data. At the same time, HDF5 supports storing metadata together with the experiment data. For large data sets, another concern is the download performance. Furthermore, the analysis software might not be available at each home institution; as a result, it should be an option to access the experiment data on site. The metadata makes it possible to find, analyse, preserve and curate the data in a long-term archive, which will become a requirement fairly soon.
Slides WCO202 [2.380 MB]
| WCO203 | Profibus in Process Controls | diagnostics, cryogenics, operation, EPICS | 16 |
The cryogenic installations on the DESY campus are widely distributed. The liquid helium (LHe) is produced in a central building, where three cryogenic plants are installed. One is in operation for FLASH; the other two are currently in the commissioning phase and will be used for the European XFEL. Thousands of I/O channels are spread over the campus this way. The majority of the I/O devices are standard devices used in process control. The de facto standard for distributed I/O in process controls in Germany is Profibus, so it is an obvious choice for cryogenic controls as well. We subsequently developed special electronics to attach temperature and level readouts to this field bus. Special diagnostic tools are available and permanently attached to the bus. Condition monitoring tools provide diagnostics which enable preventive maintenance planning. Specific tools were developed in Control System Studio (CSS), which is *the* standard tool for configuration, diagnostics and controls for all cryogenic plants. We will describe our experience with this infrastructure over the last years.
Slides WCO203 [1.116 MB]
| WCO204 | A Prototype Data Acquisition System of Abnormal RF Waveform at SACLA | database, operation, LLRF, GUI | 19 |
At SACLA, an event-synchronized data acquisition system has been installed. The system collects shot-by-shot data, such as representative point data of the phase and amplitude of the rf cavity pickup signals, in synchronization with the beam operation cycle. In addition, rf waveform data is collected every 10 minutes. However, a collection cycle of several minutes cannot capture an abnormal rf waveform that occurs suddenly. To overcome this problem, we have developed a system to capture waveforms when an abnormal event occurs. The system consists of VMEbus systems, a DAQ server, and a NoSQL database system, Cassandra. The VMEbus system detects an abnormal rf waveform, collects all related waveforms of the same shot and sends them to the DAQ server. All waveforms are stored in Cassandra via the DAQ server. The DAQ server keeps the most recent 2 seconds of data in memory to complement Cassandra's eventual consistency model. We constructed a prototype DAQ system with a minimum configuration and checked its performance. We report the requirements and structure of the DAQ system and the test results in this paper.
Slides WCO204 [1.426 MB]
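The two-second in-memory complement to Cassandra described in the WCO204 abstract can be illustrated with a generic time-windowed buffer. This is a hypothetical sketch of the idea, not SACLA's actual implementation; all names are invented for illustration.

```python
import time
from collections import deque

class RecentShotBuffer:
    """Keep the last `window` seconds of shot records in memory so reads
    can be served locally while an eventually consistent database
    (e.g. Cassandra) catches up.  Hypothetical sketch, not SACLA code."""

    def __init__(self, window=2.0):
        self.window = window
        self._buf = deque()  # (timestamp, shot_id, waveform) tuples

    def store(self, shot_id, waveform, now=None):
        now = time.time() if now is None else now
        self._buf.append((now, shot_id, waveform))
        # Drop entries older than the retention window.
        while self._buf and now - self._buf[0][0] > self.window:
            self._buf.popleft()

    def lookup(self, shot_id):
        """Return the waveform if the shot is still inside the window,
        else None (the caller then falls back to the database)."""
        for _, sid, wf in reversed(self._buf):
            if sid == shot_id:
                return wf
        return None
```

A reader would consult the buffer first and fall back to Cassandra only on a miss, bridging the interval in which an eventually consistent store may not yet return the latest shot.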
| WCO205 | Upgrade of SACLA DAQ System Adapts to Multi-Beamline Operation | experiment, operation, network, laser | 22 |
We report on the data acquisition (DAQ) system for user experiments at SACLA (the SPring-8 Angstrom Compact Free Electron Laser). The system has provided a standardized experimental framework to various XFEL users since March 2012. It is required to store shot-by-shot information synchronized with the XFEL beam at a maximum repetition rate of 60 Hz. The data throughput goes up to 6 Gbps with TOF waveforms and/or images (e.g. X-ray diffraction images) from experiments. The data are stored in a hierarchical storage system with a capacity of more than 6 PB at the last stage. The DAQ system incorporates prompt data processing, performed by a 14 TFlops PC cluster, as well as on-line monitoring. In 2014, SACLA will introduce a third beamline to increase the capacity for experiments. On the DAQ side, it is a challenge to operate multiple experiments simultaneously. The control and data streams will be duplicated and separated per beamline. A new central server managing each beamline's condition in one place will help increase the efficiency of the setup procedure and reduce the risk of mishandling between beamlines.
Slides WCO205 [1.472 MB]
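As a rough consistency check of the WCO205 figures (60 Hz, 6 Gbps, 6 PB), the per-shot data budget and the time to fill the storage tier follow from simple arithmetic. The decimal unit conventions below are an assumption, not stated in the abstract.

```python
# Back-of-the-envelope budget for the SACLA DAQ figures quoted above.
rate_hz = 60            # maximum XFEL repetition rate
throughput_gbps = 6     # quoted peak data throughput
storage_pb = 6          # quoted storage capacity

# Per-shot data budget at peak throughput (decimal units assumed).
bytes_per_shot = throughput_gbps * 1e9 / 8 / rate_hz
print(f"{bytes_per_shot / 1e6:.1f} MB per shot at peak")   # about 12.5 MB

# Time to fill the storage tier at sustained peak rate.
seconds = storage_pb * 1e15 / (throughput_gbps * 1e9 / 8)
print(f"{seconds / 86400:.0f} days to fill the storage")   # about 93 days
```

So even at sustained peak rate the 6 PB tier lasts on the order of three months, which is consistent with it being the last stage of a hierarchical storage system rather than a short-lived buffer.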
| WCO206 | Sardana – A Python Based Software Package for Building Scientific Scada Applications | TANGO, interface, GUI, framework | 25 |
Sardana is a software suite for Supervision, Control and Data Acquisition in scientific installations. It aims to reduce the cost and time of design, development and support of control and data acquisition systems [1]. Sardana, thanks to the Taurus library [2], allows the user to build modern and generic interfaces to laboratory instruments. It also delivers a flexible Python-based macro environment, via its MacroServer, which allows custom procedures to be plugged in and provides a turnkey set of standard macros, e.g. generic scans. Thanks to the Device Pool, heterogeneous hardware can easily be plugged in, based on common and dynamic interfaces. The Sardana development started at Alba, where it is extensively used to operate all beamlines, the accelerators and auxiliary laboratories. In the meantime, Sardana has attracted the interest of other laboratories, where it is used with success in various configurations. An international community of users and developers [3] was formed and now maintains the package. Modern data acquisition approaches guide and stimulate the current developments in Sardana. This article describes how the Sardana community approaches some of its challenging projects.
[1] "Sardana: The Software for Building SCADAS in Scientific Environments", T.M. Coutinho et al., ICALEPCS 2011 [2] www.taurus-scada.org [3] www.sourceforge.net/projects/sardana
Slides WCO206 [11.925 MB]
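The MacroServer idea described in the WCO206 abstract, where custom procedures are registered and then dispatched by name, can be sketched generically. This is not Sardana's actual API, merely the plug-in pattern, and all names below are invented.

```python
# Generic illustration of a plug-in macro environment: user procedures
# register themselves and are later looked up and run by name, the way a
# macro server dispatches macros.  Hypothetical API, not Sardana's.
MACROS = {}

def macro(func):
    """Decorator registering a user procedure as a named macro."""
    MACROS[func.__name__] = func
    return func

@macro
def ascan(motor, start, stop, npoints):
    """Stand-in for a generic absolute scan: return the visited positions."""
    step = (stop - start) / (npoints - 1)
    return [start + i * step for i in range(npoints)]

def run_macro(name, *args):
    """Dispatch a registered macro by name, as a macro server would."""
    if name not in MACROS:
        raise KeyError(f"unknown macro: {name}")
    return MACROS[name](*args)
```

In Sardana itself, macros additionally gain access to the Device Pool's motors and measurement channels through the MacroServer; the registry above only shows the dispatch mechanism.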
| WCO207 | A New Data Acquisition Software and Analysis for Accurate Magnetic Field Integral Measurement at BNL Insertion Devices Laboratory | software, data-acquisition, insertion-device, EPICS | 28 |
New data acquisition software has been developed in LabVIEW to measure the first and second magnetic field integral distributions of Insertion Devices (IDs). The main characteristics of the control system and the control interface program are presented. The new system has the advantage of making automatic and synchronized measurements as a function of the gap and/or phase of an ID. The automatic gap and phase control relies on real-time communication based on EPICS, and the eight servomotors of the measurement system are controlled using a Delta Tau GeoBrick PMAC-2. The methods and the measurement techniques are described, and the performance of the system together with recent results will be discussed.
Slides WCO207 [8.786 MB]
| WPO001 | Integrating Siemens PLCs and EPICS over Ethernet at the Canadian Light Source | PLC, EPICS, Ethernet, interface | 31 |
The Canadian Light Source (CLS) is a 3rd-generation synchrotron light source on the University of Saskatchewan campus in Saskatoon, SK, Canada. The control system is based on the Experimental Physics and Industrial Control System (EPICS) toolkit. A number of systems delivered to the CLS arrived with Siemens PLC-based automation. EPICS integration was initially accomplished circa 2003 using application-specific hardware communicating over Profibus. The EPICS driver and IOC application software were developed at the CLS. The hardware has since been discontinued. To minimize reliance on specialized components, the CLS moved to a more generic solution, using readily available Siemens Ethernet modules, CLS-generated PLC code, and an IOC using the Swiss Light Source (SLS) Siemens/EPICS driver. This paper provides details on the implementation of that interface. It covers the detailed functionality of the PLC programming and the custom tools used to streamline configuration, deployment and maintenance of the interface. It also describes the handshaking between the devices and lessons learned, and concludes by identifying where further development and improvement may be realized.
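Handshaking between an IOC and a PLC, as mentioned in the WPO001 abstract, is commonly done with a heartbeat counter that one side increments and the other monitors for staleness. The sketch below is a generic illustration of that idea, not the actual CLS or SLS driver logic; the class and its parameters are invented.

```python
class HeartbeatMonitor:
    """Watch a counter the peer (e.g. a PLC) increments every cycle; if it
    stops changing for too many consecutive polls, declare the link dead.
    Generic sketch of the handshake idea, not the CLS/SLS driver."""

    def __init__(self, max_stale_polls=3):
        self.max_stale = max_stale_polls
        self.last_value = None
        self.stale_polls = 0

    def poll(self, counter):
        """Feed the latest counter read; return True while the link is alive."""
        if counter == self.last_value:
            self.stale_polls += 1     # counter frozen: peer may be down
        else:
            self.last_value = counter
            self.stale_polls = 0      # counter moved: link is healthy
        return self.stale_polls < self.max_stale
```

In a real deployment each side typically runs such a monitor on the other's counter, so either end can detect a broken Ethernet link or a stopped program and drive its outputs to a safe state.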
| WPO003 | Setup of a History Storage Engine Based on a Non-Relational Database at ELSA | database, operation, interface, software | 34 |
The electron stretcher facility ELSA provides a beam of unpolarized and polarized electrons of up to 3.2 GeV energy to external hadron physics experiments. Its in-house developed, distributed computer control system provides real-time beam diagnostics as well as steering tasks in one homogeneous environment. Recently it was ported from HP-UX, running on three HP workstations, to a single Linux personal computer. This upgrade to powerful PC hardware opened up the way for the development of a new archive engine with a NoSQL database backend based on Hypertable. The system is capable of recording every parameter change at any given time. Besides visualization in a newly developed graphical history data browser, the data can be exported to several programs, for example a diff-like tool to compare and recall settings of the accelerator. This contribution gives details on recent improvements of the control system and the setup of the history storage engine.
| WPO004 | News from the FAIR Control System under Development | timing, software, framework, ion | 37 |
The control system for the FAIR (Facility for Antiproton and Ion Research) accelerator facility is presently under development and implementation. The FAIR accelerators will extend the present GSI accelerator chain, which will then be used as injector, and provide antiproton, ion, and rare isotope beams with unprecedented intensity and quality for a variety of research programs. This paper briefly summarizes the general status of the FAIR project and focuses on the progress of the control system design and its implementation. The poster presents the general system architecture and updates on the status of major building blocks of the control system. We highlight the control system implementation efforts for CRYRING, a new accelerator presently under recommissioning at GSI, which will serve as a test ground for the complete control system stack and for evaluation of the new controls concepts.
Slides WPO004 [1.039 MB]
| WPO005 | Progress and Challenges during the Development of the Settings Management System for FAIR | operation, framework, database, ion | 40 |
A few years into the development of the new control system for FAIR (Facility for Antiproton and Ion Research), a first version of the new settings management system is available. As a basis, the CERN LSA framework (LHC Software Architecture) is being used and enhanced in a collaboration between GSI and CERN. New aspects, like flexible cycle lengths, have already been introduced, while concepts for other requirements, like parallel beam operation at FAIR, are being developed. At SIS18, LSA settings management is currently being utilized for testing new machine models and operation modes relevant for FAIR. Based upon the experience with SIS18, a generic model for ring accelerators has been created that will be used throughout the new facility. It will also be deployed for the commissioning and operation of CRYRING by the end of 2014. During development, new challenges came up. To ease collaboration, the LSA code base has been split into common and institute-specific modules; an equivalent solution at the database level is still to be found. Besides technical issues, a data-driven system like LSA requires high-quality data. To ensure this, organizational processes need to be put in place at GSI.
Poster WPO005 [1.049 MB]
| WPO006 | FESA3 Integration in GSI for FAIR | site, software, framework, timing | 43 |
GSI decided to use FESA (Front-End Software Architecture) as the front-end software toolkit for the FAIR accelerator complex. FESA was originally developed at CERN. Since 2010, FESA3, a revised version of FESA, has been developed in the frame of an international collaboration between CERN and GSI. During the development of FESA3, emphasis was placed on the possibility of flexible customization for different environments and on providing site-specific extensions to allow adaptation by the contributors. GSI is the first institute other than CERN to integrate FESA3 into its control system environment. Some of the necessary preparations to establish FESA3 at GSI have already been performed, for example RPM packaging for multiple installations, support for site-specific properties and data types, and a first integration of the White Rabbit based timing system. Further developments, such as the integration of a site-specific database or the full integration of GSI's beam process concept for FAIR, will follow.
| WPO007 | The FAIR R3B Prototype Cryogenics Control System | cryogenics, framework, PLC, database | 46 |
Funding: GSI Helmholtzzentrum für Schwerionenforschung
The superconducting GLAD magnet is one of the major parts of the R3B experiment at FAIR. R3B stands for Reactions with Relativistic Radioactive Beams. The cryogenic operation will be ensured by a fully refurbished TCF 50 cold box and oil removal system. One of the major design goals for its control system is to operate as independently as possible from the magnet controls, acting as a first prototype for the later cryogenic installations in the FAIR facility. The operation of the compressor, oil removal system, and gas management was tested in January 2014. We have followed a staged implementation of the controls, first implementing all processes in a S7-319F with PROFIBUS and PROFINET I/O modules, using WinCC OA as SCADA. In a second step, a migration to the CERN UNICOS framework will be done for the first time at GSI. This can be seen as preparatory work for novel industrial control systems to be established for the FAIR facility. A first cool-down of the refurbished cold box is foreseen for late spring 2014. Once the magnet is delivered, the magnet and the cryogenics controls will be commissioned together.
| WPO008 | An Extensible Equipment Control Library for Hardware Interfacing in the FAIR Control System | power-supply, software, hardware, framework | 49 |
In the FAIR control system, the SCU (Scalable Control Unit, an industrial PC with a bus system for interfacing electronics) is the standard front-end controller for power supplies. The FESA framework is used to implement front-end software in a standardized way, to give the user a unified view of the installed equipment. As we were dealing with different power converters, and thus with different SCU slave card configurations, we had two main goals in mind: first, we wanted to be able to use common FESA classes for different types of power supplies, regardless of how they are operated or which interfacing hardware they use. Second, code dealing with the equipment specifics should not be buried in the FESA classes but instead be reusable for the implementation of other programs. To achieve this, we built up a set of libraries which interface the whole SCU functionality as well as the different types of power supplies in the field. It is now possible to easily integrate new power converters, and the SCU slave cards controlling them, into the existing equipment software and to build test programs quickly.
| WPO009 | An Optics-Suite and -Server for the European XFEL | optics, interface, software, emittance | 52 |
A software library for optics calculations was developed for the European XFEL project, with ELEGANT as the calculation backend. The new software is available as a shared library as well as a standalone server in the control system. It creates and analyses all input and output files and allows different optics to be used at the same time. The lattice is derived from an Excel file which is also used for machine installation purposes. Access from the control system uses a TINE interface; a MATLAB object offers an easy programming interface.
Poster WPO009 [0.417 MB]
| WPO010 | A Unified Matlab API for TINE and DOOCS Control Systems at DESY | interface, operation, background, data-management | 55 |
At the European XFEL, MATLAB will play an important role as a programming language for high-level controls. We present a standard MATLAB API which provides a unified interface for the TINE and DOOCS control systems. It supports a wide variety of data types as well as synchronous and asynchronous communication modes.
Poster WPO010 [0.266 MB]
| WPO011 | Vacuum Interlock Control System for EMBL Beamlines at PETRA III | vacuum, PLC, ion, interface | 57 |
A vacuum interlock system has been developed for the EMBL beamlines at the PETRA III facility. It runs on a Beckhoff PLC and protects instruments by closing the corresponding vacuum valves and beam shutters when the pressure exceeds a safety threshold. Communication with the PETRA III interlock system is implemented via digital I/O connections. The system is integrated into the EMBL beamline controls via TINE and supplies data to the archive and alarm subsystems. A LabVIEW client, operating in the TINE environment, provides a graphical user interface for vacuum interlock system control and data representation.
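The protective behaviour described in the WPO011 abstract, closing valves once pressure exceeds a threshold and keeping them closed, is typically implemented as a latching comparator. The sketch below illustrates that pattern only; it is not the Beckhoff PLC code, and the class, units and threshold value are assumptions.

```python
class VacuumInterlock:
    """Latching pressure interlock: once the threshold is exceeded, the
    valve command stays 'closed' until an explicit reset, mirroring the
    fail-safe behaviour of a PLC interlock.  Illustrative sketch only."""

    def __init__(self, threshold_mbar=1e-6):
        self.threshold = threshold_mbar
        self.tripped = False

    def update(self, pressure_mbar):
        """Return True if the valve may stay open for this pressure reading."""
        if pressure_mbar > self.threshold:
            self.tripped = True      # latch the trip: a transient spike
                                     # must not reopen the valve by itself
        return not self.tripped

    def reset(self, pressure_mbar):
        """Allow re-opening only if the pressure is back below threshold."""
        if pressure_mbar <= self.threshold:
            self.tripped = False
        return not self.tripped
```

The latch is the essential safety feature: without it, a pressure spike that briefly crossed the threshold would reopen the valve as soon as the reading recovered, defeating the protection.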
| WPO012 | The EMBL Beamline Control Framework BICFROCK | software, PLC, LabView, operation | 60 |
The EMBL hosts three beamlines at the PETRA III synchrotron at DESY. The control of the beamlines is based on a LabVIEW/TINE framework. Working examples of the layered structure of the control software and of the signal transport to the fieldbus-based control electronics using EtherCAT will be presented, as well as the layout of the synchronization implementation for all beamline elements.
Slides WPO012 [0.877 MB]
| WPO013 | Status of the FLUTE Control System | EPICS, electron, Ethernet, linac | 63 |
The accelerator test facility FLUTE (Ferninfrarot, Linac- Und Test-Experiment) is under construction near ANKA at the Karlsruhe Institute of Technology (KIT). FLUTE is a linac-based accelerator facility for generating coherent THz radiation. One of the goals of the FLUTE project is the development and fundamental examination of new concepts and technologies for the generation of intense and ultra-broadband THz pulses fed by femtosecond electron bunches. In order to study the various mechanisms influencing the final THz pulses, data acquisition and storage systems are required that allow for the correlation of beam parameters on a per-pulse basis. In parallel to the construction of the accelerator and the THz beamline, a modern, EPICS-based control system is being developed. This control system combines well-established techniques (like S7 PLCs, Ethernet, and EPICS) with rather new components (like MicroTCA, Control System Studio, and NoSQL databases) in order to provide a robust, stable system that meets the performance requirements. We present the design concept behind the FLUTE control system and report on the status of the commissioning process.
Slides WPO013 [1.313 MB]
| WPO016 | Magnet Power Supply Control Mockup for the SPES Project | EPICS, interface, GUI, embedded | 66 |
The Legnaro National Laboratories employ about 100 Magnet Power Supplies (MPSs). The existing control infrastructure is a star architecture with a central coordinator and Ethernet/serial multiplexers. In the context of the ongoing SPES project, a new magnet control system is being designed with EPICS-based software [1, 2] and low-cost embedded hardware. A mockup has been set up as a test stand for validation. The paper reports a description of the prototype, together with first results.
[1] http://www.aps.anl.gov/epics [2] http://www.lnl.infn.it/~epics
| WPO017 | IFMIF EVEDA RFQ Local Control System to Power Tests | EPICS, rfq, network, software | 69 |
In the IFMIF EVEDA project, a normal conducting Radio Frequency Quadrupole (RFQ) is used to bunch and accelerate a 130 mA steady beam to 5 MeV. The RFQ cavity is divided into three structures, named super-modules. Each super-module is divided into 6 modules, for a total of 18 modules for the overall structure. The final three modules have to be tested at high power to validate the most critical RF components of the RFQ cavity and the control system itself. The choice of the last three modules is due to the fact that they will operate in the most demanding conditions in terms of power density (100 kW/m) and surface electric field (1.8*Ekp). The Experimental Physics and Industrial Control System (EPICS) environment [1] provides the framework to control any equipment connected to it. This paper reports on the use of this framework for the RFQ power tests at the Legnaro National Laboratories [2].
[1] http://www.aps.anl.gov/epics [2] http://www.lnl.infn.it/~epics
| WPO018 | Upgrade of Beam Diagnostics System of ALPI-PIAVE Accelerator's Complex at LNL | diagnostics, EPICS, software, interface | 72 |
The beam diagnostics system of the ALPI-PIAVE accelerators has recently been upgraded by migrating the control software to EPICS. The system is based on 40 modules, each one including a Faraday cup and a beam profiler made of a pair of wire grids. The insertion of the devices is controlled by stepper motors in ALPI and by pneumatic valves in PIAVE. To reduce the upgrade costs, the existing VME hardware used for data acquisition has been left unchanged, while only the motor controllers have been replaced by new units developed in house. The control software has been rebuilt from scratch using EPICS tools. The operator interface is based on CSS; a Channel Archiver based on .. has been installed to support the analysis of transport setup during tests of new beams. The ALPI-PIAVE control system is also a test bench for the new beam diagnostics under development for the SPES facility, whose installation is foreseen for mid-2015.
| WPO019 | STARS: Current Development Status | GUI, interface, hardware, status | 75 |
STARS (Simple Transmission and Retrieval System) [1] is extremely simple and useful software for small-scale control systems, and it runs on various operating systems. STARS consists of client programs (STARS clients) and a server program (STARS server). Each client is connected to the server via a TCP/IP socket, and each client and the server communicate with text-based messages. STARS is used for various systems at the KEK Photon Factory (beamline control system, experimental hall access control system, key handling system, etc.), and the development of STARS (many kinds of STARS clients, interconnection of Web2c [2] and STARS, etc.) is still ongoing. We will describe the current development status of STARS.
[1] http://stars.kek.jp/ [2] http://adweb.desy.de/mcs/web2cToolkit/web2chome.htm
Slides WPO019 [2.604 MB]
| WPO020 | Development and Application of the STARS-based Beamline Control System and Softwares at the KEK Photon Factory | software, status, undulator, detector | 78 |
STARS is message-transferring software for small-scale control systems, originally developed at the Photon Factory. It has a server-client architecture using TCP/IP sockets and can work on various types of operating systems. Since the Photon Factory adopted STARS as its common beamline control software, we have developed a beamline control system which controls optical devices (mirrors, monochromators, etc.). We have also developed various systems and software for the Photon Factory beamlines, such as an information delivery system for the Photon Factory ring status based on STARS and TINE, and measurement software based on STARS. Many kinds of useful STARS applications (device clients, simple data acquisition, user interfaces, etc.) are now available. We will describe the development and installation status of the STARS-based beamline system and software.
| WPO021 | Renovation of PC-based Console System for J-PARC Main Ring | operation, EPICS, network, GUI | 81 |
The console system for the J-PARC Main Ring (MR) was designed in 2007 and has been used for accelerator commissioning and operation since then. It was composed of 20 diskless thin clients and 10 terminal servers, both PC-based computers running Scientific Linux (SL) as their operating system. A migration to ordinary fat clients was planned in 2013, triggered by the update from SL4 to SL6 and based on experience using those thin clients. The Intel NUC was selected as a result of a preliminary investigation, and its evaluation was carried out successfully during commissioning of the MR. Presently, 10 thin clients have been replaced by fat clients. The migration scenario and the techniques for managing the fat clients are discussed.
| WPO022 | Control System of Two Superconducting Wigglers and Compensation Magnets in The SAGA Light Source | wiggler, PLC, quadrupole, storage-ring | 84 |
The SAGA Light Source is a synchrotron radiation facility consisting of a 255 MeV injector linac and a 1.4 GeV electron storage ring. Three insertion devices: a superconducting wiggler, an APPLE-II undulator, and a planar undulator, are used for synchrotron radiation experiments. To meet the demand for hard X-ray experiments, we are planning to install a second superconducting wiggler in the electron storage ring. We are developing the control system for this next superconducting wiggler using conventional PLCs and PCs. To compensate the closed orbit distortion, tune shift and chromaticity change induced by the excitation of the superconducting wiggler, the control systems of the dipole, quadrupole and sextupole magnet power supplies are also being upgraded. The PLCs are linked by optical fiber cable to synchronize the power supplies. We present the control system of the superconducting wigglers and the compensation magnets using PLCs and PCs at this meeting.
| WPO023 | Personnel Safety System in SESAME | booster, interlocks, PLC, microtron | 87 |
Funding: International Atomic Energy Agency (IAEA)
SESAME (Synchrotron-light for Experimental Science and Applications in the Middle East) is a “third-generation” synchrotron light source under construction in Allan, Jordan. The Personnel Safety System (PSS) at SESAME restricts and controls access to forbidden areas of radiation. The PSS is an independent system built on safety PLCs. In order to achieve the desired Safety Integrity Level, SIL-3 as defined in IEC 61508, several interlocks and access procedures have been implemented in the system, fulfilling characteristics such as fail-safe behaviour, redundancy and diversity. A system for monitoring and diagnostics of the PSS has also been built, based on EPICS and HMI. The PSS PLCs which implement the interlock logic send all input and output bits and PLC status information to an EPICS IOC, which is not an integral function of PSS operation. This IOC will be connected to the other control system IOCs to send informative signals describing the status of the PSS to the main control system at SESAME. In addition, 5 combined gamma-neutron radiation monitors distributed around and above the booster area send interlock signals to the personnel safety system.
|||
| WPO024 | Clients Development of SESAME's Control System based on CSS | EPICS, Windows, booster, interface | 90 |
|
|||
| SESAME is a third-generation synchrotron light source under construction near Amman (Jordan), expected to begin operation in 2016. SESAME's injector (Microtron) and pre-injector (Booster Ring) have been commissioned; commissioning of the storage ring is expected in 2015. The control system at SESAME is based on EPICS. EPICS IOCs are used for the servers, and Control System Studio (CSS) is used for the clients. The CSS BEAST alarm handler is used to identify all critical alarms of the machine, including their configuration and visualization. This paper presents the architecture and design of the CSS BOY graphical user interfaces (GUIs) and the CSS BEAST alarm handler for the different subsystems, as well as the standards followed in the development of SESAME's clients. SESAME will use an archiving tool based on CSS to access process variable history. | |||
|
Poster WPO024 [0.251 MB] | ||
| WPO026 | The Applications of OPC UA Technology in Motion Control System | interface, status, detector, HOM | 93 |
|
|||
| OPC UA (Unified Architecture) technology allows richer data models to be established and offers good platform independence and high reliability; it has therefore become a new direction in the field of industrial control data exchange. In this paper, a motion control model based on a redundant ring network is built using an NI 3110 industrial controller and servo motors, and the data structures used for parallel communication between the host computer and multiple terminal motors are designed using OPC UA technology. This provides a better solution to the problem of data exchange between the RT system of the lower-level controller and the Windows system of the upper-level computer. | |||
|
Poster WPO026 [0.508 MB] | ||
| WPO027 | The Measurement and Monitoring of Spectrum and Wavelength of Coherent Radiation at Novosibirsk Free Electron Laser | radiation, FEL, operation, network | 96 |
|
|||
| The architecture and capabilities of the free electron laser radiation spectrum measurement system are described in detail in this paper. The measurements are performed using a monochromator and a stepper motor together with a radiation power sensor. As the result of a measurement, the radiation spectrum curve is transmitted to the control computer. Since this subsystem is fully integrated into the common FEL control system, the measurement results (spectrum graph, average wavelength and calculated radiation power) can be transmitted to any other computer on the FEL control local area network as well as to user station computers. | |||
|
Poster WPO027 [2.250 MB] | ||
| WPO028 | EPICS BEAST Alarm System Happily Purrs at ANKA Synchrotron Light Source | operation, status, EPICS, monitoring | 99 |
|
|||
|
Funding: ANKA Synchrotron Light Source, KIT, Karlsruhe The control system of the ANKA synchrotron radiation source at KIT (Karlsruhe Institute of Technology) is adopting new devices, and converting old ones, into an EPICS control system. New GUI panels are developed in Control System Studio (CSS). EPICS alarming capabilities in connection with the BEAST alarm server toolkit from the CSS bundle are used as the alarming solution. To accommodate ANKA's future requirements as well as ANKA legacy solutions, we have decided to extend the basic functionality of BEAST with additional features in order to manage alarming for the different machine operation states. Since the database of alarm sources has been populated from scratch, we have been able to take a fresh approach to the management and creation of alarm sources when building up the alarm trees. The new alarm system has been in use, tested, refined and further developed in the production environment since the end of 2013. |
|||
|
Poster WPO028 [1.344 MB] | ||
| WPO030 | Vacuum Pumping Group Controls Based on PLC | vacuum, PLC, status, software | 105 |
|
|||
| In CERN accelerators, high vacuum is needed in the beam pipes and for the thermal isolation of cryogenic equipment. The first element in the chain of vacuum production is the pumping group, composed of a primary pump, a turbo-molecular pump and a few isolation and intermediate valves; optional devices include vacuum gauges, venting valves and leak detection valves. In CERN accelerators, the pumping group controllers exist in several hardware configurations, depending on the environment and on the vacuum system used; all of them are based on PLCs, communicate over a field bus, and are controlled by the same flexible and portable software. They are remotely accessed through a SCADA application and can be locally controlled via the same mobile touch-panel. More than 250 pumping groups are permanently installed in the Large Hadron Collider, the Linacs and the North Area Experiments. | |||
|
Poster WPO030 [1.849 MB] | ||
| WPO031 | Diagnostics Test Stand Setup at PSI and its Controls in Light of the Future SwissFEL | software, hardware, interface, diagnostics | 108 |
|
|||
| In order to provide high-quality electron beams, the future SwissFEL machine needs very precise and reliable beam diagnostics tools. At the Paul Scherrer Institute (PSI), the development of such tools is based on the SwissFEL Injector Test Facility and a dedicated automated diagnostics test stand. The test stand is equipped not only with the major SwissFEL beam diagnostics elements (cameras, beam loss monitors, beam current monitors, etc.) but also with their controls and data processing hardware and software. The paper describes the diagnostics test stand control software components, which were designed in view of the future SwissFEL operational requirements. | |||
|
Poster WPO031 [0.637 MB] | ||
| WPO032 | Magnet Measurement System Upgrade at PSI | EPICS, software, network, operation | 111 |
|
|||
| The magnet measurement system at the Paul Scherrer Institute (PSI) was significantly upgraded in the last few years. At the moment, it consists of automated Hall probe, rotating wire, and vibrating wire setups, which form a very efficient magnet measurement facility. The paper concentrates on the automation hardware and software implementation, which has made it possible not only to significantly increase the performance of the magnet measurement facility at PSI, but also to simplify magnet measurement data handling and processing. | |||
|
Poster WPO032 [1.313 MB] | ||
| WPO033 | Status of Control System for the TPS Commissioning | EPICS, power-supply, interface, Ethernet | 114 |
|
|||
| The control system for the Taiwan Photon Source (TPS) project has been implemented. Commissioning of the accelerator system began in the third quarter of 2014, and final integration tests of each subsystem will follow. EPICS was chosen as the TPS control system framework. The subsystem control interfaces include an event-based timing system, Ethernet-based power supply control, corrector power supply control, PLC-based pulse magnet power supply control and machine protection, insertion device motion control, various diagnostics, etc. The standard hardware components have been installed and integrated, and various IOCs (Input Output Controllers) have been implemented as control platforms for the subsystems. Development and testing of the high-level and low-level software systems are in the final phase. These efforts are summarized in this report. | |||
| WPO034 | Network Architecture at Taiwan Photon Source of NSRRC | network, EPICS, monitoring, photon | 117 |
|
|||
| A robust, secure and high-throughput network is necessary for the 3 GeV Taiwan Photon Source (TPS) at NSRRC. The NSRRC network is divided into several subnets according to functionality: CS-LAN, ACC-LAN, SCI-LAN, NSRRC-LAN and INFO-LAN for instrument control, accelerator subsystems, beamline users, office users and information office servers, respectively. Each LAN is connected via the core switch using routing protocols to avoid traffic interference. Subsystem subnets connect to the control system via EPICS-based channel access gateways, which forward data. Outside traffic is blocked by a firewall to ensure the independence of the control system network (CS-LAN). Various network management tools and machines are used for maintenance and troubleshooting. The network system architecture, cabling topology and maintainability will be described in this report. | |||
|
Poster WPO034 [1.847 MB] | ||
| WPO038 | A Modular Personnel Safety System for VELA based on Commercial Safety Network Controllers | network, operation, electron, laser | 123 |
|
|||
| STFC Daresbury Laboratory has recently commissioned VELA (Versatile Electron Linear Accelerator), a high performance electron beam test facility. It will be used to deliver high quality, short pulse electron beams to industrial users to aid in the development of new products in the fields of health care, security, energy and waste processing and also to develop and test novel compact accelerator technologies. In the early stages of the design it was decided to use commercial Safety Network Controllers and I/O to implement the Personnel Safety System in place of the electro-mechanical relay-based system used on previous projects. This provides a high integrity, low cost solution while also allowing the design to be modular, programmable and easily expandable. This paper describes the design and realisation of the VELA Personnel Safety System and considers its future development. In addition, the application of the system to the protection of high-power laser systems and medical accelerators will also be discussed. | |||
| TCO101 | Benefits, Drawbacks and Challenges During a Collaborative Development of a Settings Management System for CERN and GSI | operation, software, framework, feedback | 126 |
|
|||
| The settings management system LSA (LHC Software Architecture) was originally developed for the LHC (Large Hadron Collider). For FAIR (Facility for Antiproton and Ion Research) a renovation of the GSI control system was necessary. When it was decided in 2008 to use the LSA system for settings management for FAIR, the middle management of the two institutes agreed on a collaborative development. This paper highlights the insights gained during the collaboration, from three different perspectives: organizational aspects of the collaboration, like roles that have been established, planned procedures, the preparation of a formal contract and social aspects to keep people working as a team across institutes. It also shows technical benefits and drawbacks that arise from the collaboration for both institutes as well as challenges that are encountered during development. Furthermore, it provides an insight into aspects of the collaboration which were easy to establish and which still take time. | |||
|
Slides TCO101 [0.728 MB] | ||
| TCO103 | Recent Highlights from Cosylab | software, TANGO, project-management, EPICS | 132 |
|
|||
| Cosylab was established 13 years ago by a group of regular attendees of PCaPAC. In the meantime, it has grown into a company of 90 employees that covers the majority of accelerator control projects. In this talk, I will present the most interesting developments that we have carried out in the past two years on a very diverse range of projects, and I will show how we had to get organized in order to be able to manage them all. The developments were made for labs such as KIT, ITER, PSI, EBG-MedAustron, European Spallation Source, Maxlab, SLAC, ORNL and GSI/FAIR, but also generally for community software like EPICS, TANGO, Control System Studio, White Rabbit, etc. They range from electronics development to high-level software: electric signal conditioning and interfacing, timing systems, machine protection systems, fibre-optic communication, Linux driver development, core EPICS development, packaging, high-performance networks, medical device integration, database development, all the way up to turnkey systems. Efficient organisation comprises a matrix structure of teams and groups versus projects and accounts, supported by rigorous reporting, measurements and drill-down analyses. | |||
|
Slides TCO103 [13.372 MB] | ||
| TCO201 | Managing the FAIR Control System Development | ion, project-management, storage-ring, hardware | 135 |
|
|||
| After years of careful preparation and planning, construction and implementation work for the new international accelerator complex FAIR (Facility for Antiproton and Ion Research) at GSI has started in earnest. The FAIR accelerators will extend the present GSI accelerator chain, which will then be used as injector, and provide anti-proton, ion, and rare isotope beams with unprecedented intensity and quality for a variety of research programs. The accelerator control system for the FAIR complex is presently being designed and developed by the GSI Controls group with a team of about 50 software and hardware developers, complemented by an international in-kind contribution from the FAIR member state Slovenia. This paper presents the requirements and constraints of a large, international project and focusses on the organizational and project management strategies and tools for the control system subproject. This includes project communication, design methodology, release cycle planning, testing strategies and ensuring the technical integrity and coherence of the whole system during the full project phase. | |||
|
Slides TCO201 [2.781 MB] | ||
| TCO202 | Status of Indus-2 Control System | operation, feedback, status, diagnostics | 138 |
|
|||
| Indus-2 is a 2.5 GeV Synchrotron Radiation Source at Indore, India. With six beamlines commissioned, several more under installation and commissioning, and five insertion devices planned, the machine is operated round the clock. With the implementation of orbit, tune and bunch feedback systems and many new systems in planning, the machine is constantly evolving, and so is the control system. The control system software is based on the PVSS SCADA running on Windows PCs and also integrates other software modules in LabVIEW and MATLAB. The control hardware is a combination of VME-based control stations interconnected over Ethernet and Profibus. Recent system enhancements include parameter deviation alarms, a transient data capture system, database improvements and web services. This paper takes stock of the control system and its evolution, with new systems in the offing. | |||
|
Slides TCO202 [6.833 MB] | ||
| TCO205 | Conceptual Design of the Control System for SPring-8-II | storage-ring, database, framework, operation | 144 |
|
|||
| The SPring-8 storage ring was inaugurated 17 years ago in 1997. The storage ring is an 8-GeV synchrotron that functions as a third-generation light source, providing brilliant X-ray beams to a large number of experimental users from all over the world. In recent years, discussions have been held on the necessity of upgrading the current ring to create a diffraction-limited storage ring at the same location. Now, a plan to upgrade the storage ring, called SPring-8-II, has been launched. First, new beam optics capable of storing beams of 6 GeV was designed using a five-bend magnet system to obtain smaller electron beam emittance that would produce coherent X-rays that are brighter than those produced by the current ring. The design of a control system that would meet the performance requirements of the new ring has also started. Equipment control devices are based on factory automation technologies such as PLC and VME, whereas digital data handling with high bandwidths is realized using telecommunication technologies such as xTCA. In this paper, we report on the conceptual design of the control system for SPring-8-II on the basis of the conceptual design report proposed by RIKEN. | |||
|
Slides TCO205 [7.572 MB] | ||
| TCO207 | Common Device Interface 2.0 | database, hardware, device-server, interface | 147 |
|
|||
|
The Common Device Interface (CDI) [1] is a popular device layer in TINE control systems [2]. Indeed, a de-facto device server (more specifically a 'property server') can be instantiated merely by supplying a hardware address database, somewhat reminiscent of an EPICS IOC. It has in fact become quite popular among users to do precisely this, although the original design intent anticipated embedding CDI as a hardware layer within a dedicated device server. When control system client applications and central services communicate directly with a CDI server, the burden of providing usable, viewable data (and in an efficient manner) falls squarely on CDI and its address database. In the initial release, any modifications to this hardware database had to be made on the file system used by the CDI device server. In this report we describe some of the many new features of CDI release 2.0, which have drawn on the user and developer experience of the past eight years.
[1] 'Using the Common Device Interface in TINE', Duval and Wu, PCaPAC 2006 [2] http://tine.desy.de |
|||
|
Slides TCO207 [1.616 MB] | ||
| TCO301 | Inexpensive Scheduling in FPGAs | FPGA, hardware, distributed, interface | 150 |
|
|||
| In the new scheme for machine control used within the FAIR project, actions are distributed to front-end controllers (FEC) with absolute execution timestamps. The execution time must be both precise to the nanosecond and scheduled faster than a microsecond, requiring a hardware solution. Although the actions are scheduled at the FEC out of order, they must be executed in sorted order. The typical hardware approaches to implementing a priority queue (CAMs, shift-registers, etc.) work well in ASIC designs, but must be implemented in expensive FPGA core logic. Conversely, the typical software approaches (heaps, calendar queues, etc.) are either too slow or too memory intensive. We present an approach which exploits the time-ordered nature of our problem to sort in constant-time using only a few memory blocks. | |||
|
Slides TCO301 [1.370 MB] | ||
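The constant-time approach described in this abstract can be illustrated in software as a time wheel, a degenerate calendar queue: because every action carries an execution timestamp within a bounded horizon, the timestamp itself selects the storage slot and no comparison-based sorting is needed. This is only an illustrative model of the idea (the paper's implementation uses FPGA memory blocks); the class and method names below are invented.

```python
# Illustrative time-wheel scheduler: O(1) insert, actions executed in
# timestamp order by advancing the wheel one slot per tick.
class TimeWheel:
    def __init__(self, horizon_slots):
        self.slots = [[] for _ in range(horizon_slots)]
        self.horizon = horizon_slots
        self.now = 0  # current slot index, advances once per tick

    def schedule(self, timestamp, action):
        # Constant-time insert: the slot is derived directly from the
        # timestamp, which must lie within the scheduling horizon.
        assert self.now <= timestamp < self.now + self.horizon
        self.slots[timestamp % self.horizon].append(action)

    def tick(self):
        # Drain everything due in the current slot, then advance time.
        slot = self.now % self.horizon
        due, self.slots[slot] = self.slots[slot], []
        self.now += 1
        return due
```

Out-of-order inserts then come back out in sorted time order simply by draining the wheel slot by slot.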
| TCO303 | TestBed - Automated Hardware-in-the-Loop Test Framework | EPICS, Linux, framework, data-acquisition | 153 |
|
|||
|
Funding: This project has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement no 289485. The control systems in big physics facilities may be updated several times a year. Ideally, prior to each release all components of the control system would be tested. One common control system component is a DAQ driver, which is generally tested manually according to a predefined test plan. In order to simplify this process, we have developed the TestBed suite, a test framework that executes tests automatically. TestBed is a PXI chassis which contains an embedded controller running the control system on Scientific Linux and a DAQ board capable of generating and acquiring analog and digital signals. TestBed provides an easy-to-use framework written in Python and allows for the quick development and execution of automatic test scripts. From a hardware perspective, each system under test is physically connected to TestBed with a connector board using a predefined pin configuration. Both the system under test and TestBed are connected to the network. The resulting test framework makes it possible for the automatic tests to be executed with each new release of the control system, thus liberating human resources and ensuring complete consistency and repeatability in the testing protocol. |
|||
|
Slides TCO303 [0.703 MB] | ||
| TCO304 | Launching the FAIR Timing System with CRYRING | timing, software, network, hardware | 155 |
|
|||
| During the past two years, significant progress has been made on the development of the General Machine Timing system for the upcoming FAIR facility at GSI. The prime features are time-synchronization of 2000-3000 nodes using the White Rabbit Precision-Time-Protocol (WR-PTP), distribution of International Atomic Time (TAI) time stamps and synchronized command and control of FAIR control system equipment. A White Rabbit network has been set up connecting parts of the existing facility and a next version of the Timing Master has been developed. Timing Receiver nodes in form factors Scalable Control Unit (standard front-end controller for FAIR), VME, PCIe and standalone have been developed. CRYRING is the first machine on the GSI/FAIR campus to be operated with this new timing system and serves as a test-ground for the complete control system. Installation of equipment starts in late spring followed by commissioning of equipment in summer 2014. | |||
|
Slides TCO304 [7.818 MB] | ||
| TCO305 | TCP/IP Control System Interface Development Using Microchip* Brand Microcontrollers | interface, hardware, Ethernet, electronics | 158 |
|
|||
|
Funding: This work was supported by the U.S. Department of Energy, Office of Nuclear Physics, under Contract No. DE-AC02-06CH11357. Even as the diversity and capabilities of Single-Board Computers (SBCs) like the Raspberry Pi and BeagleBoard continue to increase, low-level microprocessor solutions also offer the possibility of robust distributed control system interfaces. Since they can be smaller and cheaper than even the least expensive SBC, they are easily integrated directly onto printed circuit boards, either via direct mount or pre-installed headers. Ever-increasing flash-memory capacities and processing clock speeds have enabled these types of microprocessors to handle even relatively complex tasks such as the management of a full TCP/IP software and hardware stack. The purpose of this work is to demonstrate several different implementation scenarios wherein a computer control system can communicate directly with an off-the-shelf Microchip brand microcontroller and its associated peripherals. The microprocessor can act as a Hardware-to-Ethernet communication bridge and provide services such as distributed reading and writing of analog and digital values, webpage serving, simple network monitoring and others to any custom electronics solution. * Microchip Technology Inc., www.microchip.com |
|||
|
Slides TCO305 [3.904 MB] | ||
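To make the Hardware-to-Ethernet bridge idea concrete, the sketch below shows one possible binary framing for a "read analog channel" exchange with such a microcontroller. The opcode, field layout and 10-bit ADC scaling are purely hypothetical illustrations, not an actual Microchip protocol.

```python
import struct

# Hypothetical 4-byte frame: opcode (1 byte), channel (1 byte),
# big-endian 16-bit payload. In a reply, the payload carries raw
# ADC counts which the host converts to volts.
READ_ADC = 0x01

def encode_read(channel):
    # Request frame: payload is unused (zero) for a read command.
    return struct.pack(">BBH", READ_ADC, channel, 0)

def decode_reply(frame, vref=3.3, bits=10):
    # Unpack the reply and scale raw ADC counts to volts.
    opcode, channel, raw = struct.unpack(">BBH", frame)
    return channel, raw * vref / ((1 << bits) - 1)
```

A host-side control system would send `encode_read(...)` over a TCP socket to the microcontroller and feed the returned bytes to `decode_reply(...)`.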
| FCO106 | The Role of the CEBAF Element Database in Commissioning the 12 GeV Accelerator Upgrade | database, hardware, interface, software | 161 |
|
|||
|
Funding: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177. The U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to this manuscript. The CEBAF Element Database (CED) was first developed in 2010 as a resource to support model-driven configuration of the Jefferson Lab Continuous Electron Beam Accelerator (CEBAF). Since that time, its uniquely flexible schema design, robust programming interface, and support for multiple concurrent versions have permitted it to evolve into a more broadly useful operational and control system tool. The CED played a critical role before and during the 2013 startup and commissioning of CEBAF following its 18-month-long shutdown and upgrade. Information in the CED about hardware components and their relations to one another facilitated a thorough Hot Checkout process involving more than 18,000 system checks. New software relies on the CED to generate EDM screens for operators on demand, thereby ensuring that the information on those screens is correct and up-to-date. The CED also continues to fulfill its original mission of supporting model-driven accelerator setup. Using the new ced2elegant and eDT (elegant Download Tool), accelerator physicists have proven able to compute and apply energy-dependent set points with greater efficiency than ever before. |
|||
|
Slides FCO106 [2.698 MB] | ||
| FPO001 | InfiniBand interconnects for high-throughput data acquisition in a TANGO environment | TANGO, interface, network, software | 164 |
|
|||
| Advances in computational performance allow for fast image-based control. To realize efficient control loops in a distributed experiment setup, large amounts of data need to be transferred, requiring high-throughput networks with low latencies. In the European synchrotron community, TANGO has become one of the prevalent tools to remotely control hardware and processes. In order to improve the data bandwidth and latency in a TANGO network, we realized a secondary data channel based on native InfiniBand communication. This data channel is implemented as part of a TANGO device and by itself is independent of the main TANGO network communication. TANGO mechanisms are used for configuration, thus the data channel can be used by any TANGO-based software that implements the corresponding interfaces. First results show that we can achieve a maximum bandwidth of 30 Gb/s which is close to the theoretical maximum of 32 Gb/s, possible with our 4xQDR InfiniBand test network, with average latencies as low as 6 μs. This means that we are able to surpass the limitations of standard TCP/IP networks while retaining the TANGO control schemes, enabling high data throughput in a TANGO environment. | |||
|
Slides FPO001 [0.511 MB] | ||
|
Poster FPO001 [3.767 MB] | ||
| FPO006 | Integration of Independent Radiation Monitoring System with Main Accelerator Control | radiation, monitoring, operation, hadron | 170 |
|
|||
| The radiation monitoring system of J-PARC was constructed as part of the safety facilities and has therefore been operated independently from the main accelerator control system. The radiation monitoring system consists of two subsystems. The first subsystem, developed by JAEA, covers the Linac and the RCS ring and is PLC-based. We added an FL-net module to this subsystem to enable one-way data transfer to the accelerator control system; FL-net is a device-level communication network using UDP/IP, defined by a Japanese consortium. The second subsystem, developed by KEK, covers the MR ring and is a CAMAC-based DAQ system. Since this subsystem was difficult to extend, we made signal branches from the radiation monitors and fed them to a new PLC-based DAQ system. As with the first subsystem, an FL-net module is used for one-way data transfer. In 2013-2014, the integration of the two subsystems was carried out, and the radiation monitors can now be supervised from the accelerator control system. As a result, accelerator operators can check radiation levels much more easily than before. We consider this a significant improvement towards safer operation of the J-PARC accelerators. | |||
| FPO009 | HLS Power Supply Control System Based on Virtual Machine | power-supply, feedback, interface, software | 176 |
|
|||
| The Hefei Light Source (HLS) is a VUV synchrotron radiation light source, recently upgraded to improve its performance. The power supply control system is part of the HLS upgrade project. Five soft IOC applications running on virtual machines are used to control 190 power supplies via MOXA serial-to-Ethernet device servers. The power supply control system has been in operation since November 2013, and the operational results show that it is reliable and can satisfy the demands of slow orbit feedback at a frequency of 1 Hz. | |||
| FPO010 | The Software Tools and Capabilities of Diagnostic System for Stability of Magnet Power Supplies at Novosibirsk Free Electron Laser | power-supply, operation, diagnostics, FEL | 179 |
|
|||
| The magnetic system of the Novosibirsk free electron laser contains a large number of magnetic elements fed by power supplies of different types. The time stability of the output current of these power supplies directly influences the coherent radiation parameters and the operation of the whole FEL facility. Therefore, a system for diagnosing the state of the power supplies, integrated into the common FEL control system, was developed. The main task of this system is to analyze the output current of a power supply and determine its time stability. The system is also able to determine the amplitude and frequency of output current ripples, where these are present for a particular power supply, and display the obtained results. The main architecture, further capabilities, and results of usage of this system are described in this paper. | |||
|
Poster FPO010 [2.527 MB] | ||
| FPO011 | PyPLC, a Versatile PLC-to-PC Python Interface | PLC, TANGO, device-server, interface | 182 |
|
|||
|
The PyPLC [1] Tango Device Server provides a developer-friendly dynamic interface to any Modbus-based control device. Raw data structures from the PLC are obtained efficiently and converted into highly customized attributes using the Python programming language. The device server allows attributes to be added or modified dynamically using single-line Python statements. The compact Python dialect used is enhanced with Modbus commands and methods to prototype, simulate and implement complex behaviours. As a generic device, PyPLC has been versatile enough to interact with the PLC systems used in the ALBA [2] accelerators as well as with our beamline SCADA (Sardana [3]). This article describes the mechanisms that enable this versatility and how the dynamic attribute syntax has sped up the transition from PLC to user interfaces.
[1] www.tango-controls.org [2] www.cells.es [3] www.sardana-controls.org |
|||
|
Poster FPO011 [1.603 MB] | ||
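The single-line dynamic-attribute idea can be sketched in plain Python: each attribute is a one-line expression, compiled once and evaluated against the raw Modbus register block on every read. The `Regs` helper and the attribute syntax below are illustrative stand-ins, not PyPLC's actual API.

```python
# Sketch of dynamic attributes defined by single-line expressions.
class DynamicAttributes:
    def __init__(self):
        self.formulas = {}

    def add(self, name, expression):
        # Compile the one-line expression once; re-evaluated per read.
        self.formulas[name] = compile(expression, name, "eval")

    def read(self, name, regs):
        # Regs(i) returns raw register i; the expression converts it.
        return eval(self.formulas[name], {"Regs": lambda i: regs[i]})

attrs = DynamicAttributes()
attrs.add("Pressure", "Regs(0) * 0.1")        # scaled analog value
attrs.add("PumpOn", "bool(Regs(1) & 0x01)")   # status bit extraction
```

Adding or changing an attribute is then just a matter of supplying a new expression string, without recompiling the device server.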
| FPO012 | A Real-Time Data Logger for the MICE Superconducting Magnets | LabView, FPGA, real-time, EPICS | 185 |
|
|||
| The Muon Ionisation Cooling Experiment (MICE) being constructed at STFC’s Rutherford Appleton Laboratory will allow scientists to gain working experience of the design, construction and operation of a muon cooling channel. Among the key components are a number of superconducting solenoid and focus coil magnets specially designed for the MICE project and built by industrial partners. During testing it became apparent that fast, real-time logging of magnet performance before, during and after a quench was required to diagnose unexpected magnet behaviour. To this end a National Instruments Compact RIO (cRIO) data logger system was created, making it possible to see how a quench propagates through the magnet. The software was written in Real-Time LabVIEW and makes full use of the cRIO built-in FPGA to obtain synchronised, multi-channel data logging at rates of up to 10 kHz. This paper explains the design and capabilities of the created system, how it has helped to better understand the internal behaviour of the magnets during a quench, and the additional development to allow simultaneous logging of multiple magnets and integration into the existing EPICS control system. | |||
| FPO014 | New Data Archive System for SPES Project Based on EPICS RDB Archiver with PostgreSQL Backend | EPICS, database, hardware, network | 191 |
|
|||
|
The SPES project [1] is an ISOL facility under construction at INFN, Laboratori Nazionali di Legnaro, which requires integration between the accelerator systems currently in use and the new line composed of the primary beam and the ISOL target. As a consequence, a migration from the present control system to a new one based on EPICS [2] is mandatory in order to realize a distributed control network for the new facility. One of the first implementations realized for this purpose is the archiver system, an important service required for experiments. Based on information and experience provided by other laboratories, an EPICS Archive System [3] with a PostgreSQL backend has been implemented to provide this service. Preliminary tests were done with dedicated hardware, following the project requirements. After these tests, which were used to determine a good configuration for the database and the EPICS application, the system is going to be moved into production, where it will be integrated with the first subsystem upgraded to EPICS. Dedicated customizations have been made to the application to provide a simple user experience in managing and interacting with the archiver system.
[1] https://web.infn.it/spes [2] http://www.aps.anl.gov/epics [3] http://sourceforge.net/apps/trac/cs-studio/wiki/RDBArchive |
|||
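The relational layout behind such an RDB archiver can be pictured with a small sketch. The following uses Python's built-in sqlite3 as a stand-in for the PostgreSQL backend; the table and column names are illustrative assumptions, not the actual CS-Studio RDB Archiver schema.

```python
import sqlite3

# In-memory stand-in for the PostgreSQL backend; the tables below are
# illustrative only, not the exact CS-Studio RDB Archiver schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE channel (channel_id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE sample  (channel_id INTEGER REFERENCES channel,
                      smpl_time  TEXT, float_val REAL);
""")

def archive(name, t, value):
    """Insert one sample, creating the channel row on first use."""
    conn.execute("INSERT OR IGNORE INTO channel (name) VALUES (?)", (name,))
    (cid,) = conn.execute("SELECT channel_id FROM channel WHERE name = ?",
                          (name,)).fetchone()
    conn.execute("INSERT INTO sample VALUES (?, ?, ?)", (cid, t, value))

def history(name):
    """Return the (time, value) history of one process variable."""
    return conn.execute("""SELECT s.smpl_time, s.float_val
                           FROM sample s JOIN channel c USING (channel_id)
                           WHERE c.name = ? ORDER BY s.smpl_time""",
                        (name,)).fetchall()

archive("SPES:RFQ:VoltRdbk", "2014-10-14T10:00:00", 79.8)  # hypothetical PV name
archive("SPES:RFQ:VoltRdbk", "2014-10-14T10:00:01", 80.1)
print(history("SPES:RFQ:VoltRdbk"))
```

The normalized channel/sample split is what keeps such an archive compact: the PV name is stored once, each reading is one narrow row.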
| FPO015 | Device Control Database Tool (DCDB) | EPICS, database, Linux, network | 194 |
|
|||
|
Funding: This project has received funding from the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement no 289485. In a physics facility containing numerous instruments, it is advantageous to reduce the amount of effort and repetitive work needed to change the control system (CS) configuration: adding new devices, moving instruments from beamline to beamline, etc. We have developed a CS configuration tool which provides an easy-to-use interface for quick configuration of the entire facility. It uses Microsoft Excel as the front-end application and allows the user to quickly generate and deploy IOC configurations (EPICS start-up scripts, alarm and archive configuration) onto IOCs, and to start, stop and restart IOCs, alarm servers, archive engines, etc. The DCDB tool utilizes a relational database which stores information about all the elements of the accelerator. Communication between the client, the database and the IOCs is realized by a REST server written in Python. The key feature of the DCDB tool is that the user does not need to recompile the source code. This is achieved by using a dynamic library loader, which automatically loads and links device support libraries. The DCDB tool is compliant with ITER CODAC (used at ITER and ESS), but can also be used in any other EPICS environment. |
|||
|
Poster FPO015 [0.522 MB] | ||
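The dynamic-loader idea described above, binding device support libraries at run time so the host application never needs recompiling, can be sketched with Python's ctypes. The helper below is hypothetical (DCDB's real loader targets EPICS device support libraries and resolves names from its database); the C math library stands in for a device library.

```python
import ctypes, ctypes.util

def load_support(libname, symbol, restype=ctypes.c_double,
                 argtypes=(ctypes.c_double,)):
    """dlopen a shared library at run time and bind one entry point,
    so the host application never needs recompiling. Hypothetical
    helper, not DCDB's actual API."""
    path = ctypes.util.find_library(libname)
    # Fall back to symbols already loaded into the process if the
    # library cannot be located by name.
    lib = ctypes.CDLL(path) if path else ctypes.CDLL(None)
    fn = getattr(lib, symbol)
    fn.restype, fn.argtypes = restype, list(argtypes)
    return fn

# The C math library stands in for a device support library; DCDB
# would resolve library and entry-point names from its database.
cos = load_support("m", "cos")
print(cos(0.0))   # -> 1.0
```

The same pattern, applied to the EPICS build products, is what lets a configuration change take effect without touching the compiler.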
| FPO017 | Managing Multiple Function Generators for FAIR | Linux, software, real-time, FPGA | 199 |
|
|||
| In the FAIR control system, equipment which needs to be controlled with ramped nominal values (e.g. power converters) is controlled by a standard front-end controller called the scalable control unit (SCU). An SCU combines a COM Express board with an Intel CPU and an FPGA baseboard, and acts as bus master on the SCU host bus. Up to 12 function generators can be implemented in slave-board FPGAs and controlled from one SCU. The real-time data supply for the generators demands a special software/hardware approach. Direct control of the generators with a FESA (Front End Software Architecture) class, running on an Intel Atom CPU with Linux, does not meet the timing requirements. Therefore an extra layer with an LM32 soft-core CPU is added to the FPGA. Communication between Linux and the LM32 is done via shared memory and a circular-buffer data structure. The LM32 supplies the function generators with new parameter sets when it is triggered by interrupts. This two-step approach decouples the Linux CPU from the hard real-time requirements. For synchronous starting and coherent clocking of all function generators, special pins on the SCU backplane are used to avoid bus latencies. | |||
|
Poster FPO017 [1.098 MB] | ||
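The shared-memory handshake between the Linux CPU and the LM32 can be pictured as a single-producer/single-consumer circular buffer. The sketch below is an illustrative Python model (the real implementation is C firmware on the LM32 side); the slot layout and field names are assumptions, not the actual SCU data structure.

```python
import mmap, struct

# Single-producer/single-consumer ring buffer over a shared memory
# region, modelling the Linux <-> LM32 handshake. Slot layout and
# field meanings (start value, slope, duration) are assumptions.
SLOTS, SLOT_FMT = 8, "<ddI"
SLOT_SIZE = struct.calcsize(SLOT_FMT)
HDR = struct.calcsize("<II")          # write index, read index

shm = mmap.mmap(-1, HDR + SLOTS * SLOT_SIZE)

def push(param_set):
    """Producer side (Linux): enqueue one parameter set if there is room."""
    w, r = struct.unpack_from("<II", shm, 0)
    if (w + 1) % SLOTS == r:
        return False                  # buffer full, Linux must retry
    struct.pack_into(SLOT_FMT, shm, HDR + w * SLOT_SIZE, *param_set)
    struct.pack_into("<I", shm, 0, (w + 1) % SLOTS)
    return True

def pop():
    """Consumer side (LM32, on interrupt): dequeue one parameter set."""
    w, r = struct.unpack_from("<II", shm, 0)
    if r == w:
        return None                   # buffer empty
    item = struct.unpack_from(SLOT_FMT, shm, HDR + r * SLOT_SIZE)
    struct.pack_into("<I", shm, 4, (r + 1) % SLOTS)
    return item

push((0.0, 1.5, 1000))
print(pop())                          # -> (0.0, 1.5, 1000)
```

Because each side only writes its own index, the queue needs no locks; this is what keeps the Linux side free of hard real-time constraints while the LM32 drains the buffer on interrupt.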
| FPO018 | Setup and Diagnostics of Motion Control at ANKA Beamlines | software, TANGO, interface, hardware | 201 |
|
|||
| Precise motion control at high resolution is one of the necessary conditions for making high-quality measurements at beamline experiments. At a typical ANKA beamline, up to one hundred actuator axes work together to align and shape the beam, to select the beam energy and to position probes. Some experiments need additional motion axes, supported by transportable controllers temporarily plugged into the local beamline control system. In terms of process control, all the analog and digital signals from the different sources have to be verified, levelled and interfaced to the motion controllers. They have to be matched and calibrated to real physical quantities in the control system's configuration files, which provide the input for further data processing. A set of hardware and software tools and methods developed at ANKA over the years is presented in this paper. | |||
|
Poster FPO018 [1.608 MB] | ||
| FPO019 | FPGA Utilization in the Accelerator Interlock System (About the MPS Development in the LIPAc) | FPGA, interface, status, neutron | 204 |
|
|||
| The development of IFMIF (International Fusion Materials Irradiation Facility), to generate a 14 MeV source of neutrons with the spectrum of DT fusion reactions, is indispensable for qualifying suitable materials for the first wall of the nuclear vessel in fusion power plants. As part of the IFMIF validation activities, the LIPAc (Linear IFMIF Prototype Accelerator) facility, currently under installation at Rokkasho (Japan), will accelerate a 125 mA CW, 9 MeV deuteron beam with a total beam power of 1.125 MW. The Machine Protection System (MPS) of LIPAc provides the essential interlock function of stopping the beam in case of anomalous beam loss or other hazardous situations. High-speed processing is necessary to properly achieve the MPS's main goal. This high-speed processing of the signals, distributed along the accelerator facility, is based on FPGA technology. This paper describes the basis of FPGA use in the accelerator interlock system through the development of LIPAc's MPS, with a comparison to the use of FPGAs in other accelerator control systems. | |||
| FPO022 | New developments on the FAIR Data Master | operation, network, timing, FPGA | 207 |
|
|||
| During the last year, a small-scale timing system has been built with a first version of the Data Master. In this paper we describe field-test progress as well as new design concepts and implementation details of the new prototype to be tested with the CRYRING accelerator timing system. A message management layer has been introduced as a hardware acceleration module for the timely dispatch of control messages. It consists of a priority queue for outgoing messages, combined with a scheduler and network load balancing. This noticeably loosens the real-time constraints on the CPUs composing the control messages, making the control firmware deterministic and much easier to construct. It also opens a perspective for moving away from the current virtual-machine-like implementation towards a specialized programming language for accelerator control. In addition, a streamlined and better-fitting model for beam production chains and cycles has been devised for use in the Data Master firmware. The worst-case execution time of the processing becomes completely calculable, enabling fixed time slices for safe multiplexing of cycles in all of the CPUs. | |||
|
Slides FPO022 [0.890 MB] | ||
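The priority-queue-plus-scheduler idea can be sketched as follows. The class name, message payloads and the lead-time figure are illustrative assumptions, not the FAIR Data Master's actual parameters.

```python
import heapq

# Sketch: release timing messages ahead of their scheduled execution
# time so the network can absorb them. The 500 us lead time is an
# assumed figure, not the Data Master's real dispatch margin.
LEAD = 500_000  # ns

class MessageQueue:
    def __init__(self):
        self._heap = []

    def post(self, due_ns, payload):
        """Enqueue a control message due at absolute time due_ns."""
        heapq.heappush(self._heap, (due_ns, payload))

    def dispatch(self, now_ns):
        """Pop every message whose send deadline (due - LEAD) has
        been reached, earliest first."""
        out = []
        while self._heap and self._heap[0][0] - LEAD <= now_ns:
            out.append(heapq.heappop(self._heap)[1])
        return out

q = MessageQueue()
q.post(2_000_000, "ramp magnet")     # hypothetical payloads
q.post(1_000_000, "open chopper")
print(q.dispatch(now_ns=600_000))    # -> ['open chopper']
```

Because the heap, not the composing CPU, decides the send order, the firmware only needs to get messages into the queue before their deadline, which is the relaxation of real-time constraints the abstract describes.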
| FPO026 | ADEI and Tango Archiving System – A Convenient Way to Archive and Represent Data | TANGO, interface, database, experiment | 213 |
|
|||
| Tango offers an efficient and powerful mechanism for archiving Tango attributes in a MySQL database, and the tool Mambo allows easy configuration of all data to be archived. This proven archiving concept was successfully introduced at ANKA (Angströmquelle Karlsruhe). To provide an efficient and intuitive web-based interface instead of complex database queries, the Tango archiving system was integrated into the Advanced Data Extraction Infrastructure (ADEI). ADEI is intended to manage data from distributed heterogeneous devices in large-scale physics experiments. It provides internal pre-processing, data quality checks and an intuitive web interface that guarantees fast access to, and visualization of, the huge data sets stored in the attached data sources such as MySQL databases or data files. ADEI and the Tango archiving system have been successfully tested at ANKA's imaging beamlines, and it is intended to deploy both at all ANKA beamlines. | |||
|
Poster FPO026 [0.938 MB] | ||
| FPO029 | Redesign of Alarm Monitoring System Application "BeamlineAlarminfoClient" at DESY | device-server, GUI, monitoring, software | 219 |
|
|||
| The alarm monitoring system “BeamlineAlarminfoClient” is a very useful technical-service application at DESY, as it visually renders the locations of important alarms in certain sections (e.g. fire or other emergencies). The aim of redesigning this application is to improve the software architecture and to allow easy integration of new observable areas, including a new user-interface design. The redesign also requires changes on the server side, where alarms are handled and the necessary alarm information is prepared for display. Currently the client manages alarm data from 17 different servers; this number will increase dramatically in 2014 when new beamlines come into play, so creating templates that simplify the addition of new sections makes sense for both the server and the client. Client and server are based on the TINE control system and make use of the TINE Studio utilities, the Alarm Viewer and the Archive Viewer. This paper presents how the redesign was arranged in close collaboration with the customers. | |||
|
Poster FPO029 [0.164 MB] | ||
| FPO030 | Control System Software Environment and Integration for the TPS | EPICS, operation, interface, toolkit | 222 |
|
|||
| The TPS (Taiwan Photon Source) is a latest-generation 3 GeV synchrotron light source, whose commissioning starts in the third quarter of 2014. EPICS is adopted as the control system framework for the TPS, and various EPICS IOCs have been implemented for each subsystem. The control system software environment has been established and integrated specifically for TPS commissioning. Various purpose-built operation interfaces have been created, mainly covering the functions of setting, reading, saving and restoring. Database-related applications have been built, including the archive system, alarm system, logbook and Web applications. The high-level applications, which depend upon the properties of each subsystem, have been developed and are in the test phase. These efforts are summarized in this report. | |||
|
Slides FPO030 [1.533 MB] | ||
| FPO031 | Power Supplies Transient Recorders for Post-Mortem Analysis of BPM Orbit Dumps at Petra-III | synchrotron, emittance, power-supply, operation | 225 |
|
|||
| PETRA III is a 3rd-generation synchrotron light source serving users at 14 beamlines with 30 instruments, and the storage ring is presently being modified to add 12 beamlines. PETRA III has been operated with several filling modes, such as 40, 60, 480 and 960 bunches, with a total current of 100 mA at an electron beam energy of 6 GeV. The horizontal beam emittance is 1 nm rad, while a coupling of 1% amounts to a vertical emittance of 10 pm rad. During a user run the Machine Protection System (MPS) may trigger an unscheduled beam dump if transients in the current of magnet power supplies are detected which are above permissible limits. The MPS trigger stops the ring buffers of the 226 BPM electronics, in which the last 16384 turns just before the dump are stored. These data, together with the transient recorder data of the magnet power supply controllers, are available for post-mortem analysis. Here we discuss in detail the functionality of a Java GUI used to investigate the transient behaviour of the differences between set and readout values of the power supplies, in order to identify the responsible supply that might have led to emittance growth, orbit fluctuations or beam dumps seen in a post-mortem analysis. | |||
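The core of such a set-versus-readout comparison can be sketched in a few lines; the data layout and the tolerance figure below are illustrative assumptions, not the actual algorithm of the Java GUI.

```python
# Rank power supplies by their largest relative deviation between set
# and readout values; anything above the (assumed) tolerance is a
# candidate cause of the dump.
TOLERANCE = 0.01  # assumed permissible relative deviation

def suspects(records):
    """records: {supply name: [(set value, readout value), ...]}.
    Return supplies whose worst deviation exceeds the limit,
    worst offender first."""
    worst = {}
    for name, samples in records.items():
        worst[name] = max(abs(s - r) / max(abs(s), 1e-9)
                          for s, r in samples)
    return sorted((n for n, d in worst.items() if d > TOLERANCE),
                  key=worst.get, reverse=True)

# Hypothetical transient recorder data for two supplies.
data = {"QF1": [(100.0, 100.0), (100.0, 99.9)],
        "QD2": [(80.0, 80.0), (80.0, 75.0)]}   # QD2 deviates by ~6%
print(suspects(data))   # -> ['QD2']
```

Running this over the recorded window before the dump narrows hundreds of supplies down to the few whose transients exceeded their limits.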
| FPO032 | TPS Screen Monitor User Control Interface | GUI, EPICS, interface, linac | 228 |
|
|||
| The Taiwan Photon Source (TPS) has been constructed on the campus of the NSRRC (National Synchrotron Radiation Research Center) and is in commissioning. For beam commissioning, a screen monitor system for beam profile acquisition, analysis and display was designed and implemented. A CCD camera with a Gigabit Ethernet interface (GigE Vision) is the standard device for image acquisition, which is handled by an EPICS IOC via PV channels; beam profile display and analysis are performed with a Matlab tool. The design and functionality of the GUI are presented in this report. | |||
| FPO034 | Beamline Data Management at the Synchrotron ANKA | data-management, synchrotron, interface, database | 231 |
|
|||
| We present an architecture consisting of measurement devices, beamline data management and a data repository to enable data management at the synchrotron facility ANKA. The operators perform some data management tasks manually and individually for each measurement method. In order to provide the functionality of a data repository, it is necessary to collect the data, aggregate metadata and perform the ingests into the data repository. The data management layer between the measurement devices and the data repository is referred to as beamline data management (BLDM); it performs data collection, metadata aggregation and data ingest. Shared libraries containing functionality such as migration, ingest and metadata aggregation form the basis of the BLDM. The workflows and their current state of execution are persisted to enable monitoring and error handling. After data ingest into the data repository, implemented with the KIT Data Manager, archiving, content preservation and bit preservation services are provided for the ingested data. BLDM can connect the existing infrastructure with the data repository without major changes to routine processes, building a data repository for a synchrotron. | |||
| FCO201 | Renovating and Upgrading the Web2cToolkit Suite: A Status Report | interface, toolkit, TANGO, EPICS | 234 |
|
|||
| Web2cToolkit is a collection of Web services. It enables scientists, operators and service technicians to supervise and operate accelerators and beamlines through the World Wide Web. In addition, it provides users with a platform for communication and for logging data and actions. Recently a novel service, especially designed for mobile devices, has been added; besides standard mouse-based interaction it provides a touch- and voice-based user interface. Web2cToolkit is currently undergoing an extensive renovation and upgrade. True WYSIWYG editors are now available to generate and configure synoptic and history displays, and an interface based on 3D motion and gesture recognition has been implemented. The multi-language support and the security of the communication between Web client and server have also been improved substantially. This paper reports the full status of this work and outlines upcoming developments. | |||
|
Slides FCO201 [1.318 MB] | ||
| FCO203 | Making it all Work for Operators | operation, EPICS, GUI, injection | 240 |
|
|||
|
Funding: ANKA Synchrotron Light Source, KIT, Karlsruhe As the control system of the ANKA synchrotron radiation source at KIT (Karlsruhe Institute of Technology) is slowly upgraded, it can at key stages temporarily become a mosaic of old and new panels while the operators learn to move across to the new system. With the development of general-purpose tools, and careful planning of both the final and the transition GUIs, we have been able to actually simplify the working environment for machine operators. In this paper we explain the concepts, guidelines and tools with which GUIs for operators are developed and deployed at ANKA. |
|||
|
Slides FCO203 [0.663 MB] | ||
| FCO204 | How the COMETE Framework Enables the Development of GUI Applications Connected to Multiple Data Sources | TANGO, target, GUI, framework | 243 |
|
|||
|
Today at SOLEIL, our end users require GUI applications to display data coming from various sources: live data from the Tango [1] control system, archived data stored in the Tango archiving databases, and scientific measurement data stored in HDF5 files. Moreover, they would like to use the same collection of widgets regardless of the data source being accessed. On the other side, for GUI application developers, the complexity of data-source handling has to be hidden. The COMETE [2] framework has been developed to fulfil these requirements, allowing GUI developers to build high-quality, modular and reusable science-oriented GUI applications with a consistent look and feel for end users. COMETE offers some key features to software developers: a data connection mechanism to link a widget to its data source; a smart refreshing service; an easy-to-use and succinct API; and components implemented in AWT, SWT and Swing flavours. This paper will present the work organization, software architecture and design of the whole system. We will also introduce the COMETE ecosystem and the available applications for data visualisation.
[1] TANGO http://www.tango-controls.org [2] COMETE ICALPEPCS 2011 WEMAU012 |
|||
|
Slides FCO204 [1.048 MB] | ||
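The data connection mechanism that hides the source from the widget can be pictured as a simple publish/subscribe binding: the widget exposes one narrow interface, and any source (live, archive, file) pushes through it. All class and method names below are hypothetical illustrations, not COMETE's API (which is Java).

```python
# Sketch of a source-agnostic widget binding: a widget never knows
# whether its data comes from a live control system, an archive
# database or a file. Names are illustrative, not COMETE's API.
class DataSource:
    def __init__(self):
        self._targets = []

    def connect(self, target):
        """Bind any object exposing set_data() to this source."""
        self._targets.append(target)

    def publish(self, value):
        """Push a new value to every connected widget."""
        for t in self._targets:
            t.set_data(value)

class LabelWidget:
    def __init__(self):
        self.text = ""

    def set_data(self, value):   # the single method a source requires
        self.text = str(value)

live, label = DataSource(), LabelWidget()
live.connect(label)
live.publish(3.14)   # an archive or HDF5-backed source would work identically
print(label.text)    # -> 3.14
```

Keeping the widget-facing interface this narrow is what allows the same collection of widgets to serve all three data sources named in the abstract.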
| FCO206 | PANIC, a Suite for Visualization, Logging and Notification of Incidents | TANGO, database, device-server, PLC | 246 |
|
|||
|
PANIC is a suite of Python applications focused on the visualization, logging and notification of events occurring in the ALBA [1] Synchrotron control system. Built on top of the PyAlarm Tango [2] device server, it provides an API and a set of graphical tools to visualize the status of the declared alarms, create new alarm processes and enable notification services such as SMS, email, data recording, sound or the execution of Tango commands. The user interface provides visual debugging of complex alarm behaviours, which can be declared using single-line Python expressions. This article describes the architecture of the PANIC suite, the alarm declaration syntax and the integration of alarm widgets in Taurus [3] user interfaces.
[1] www.cells.es [2] www.tango-controls.org [3] www.taurus-scada.org |
|||
|
Slides FCO206 [1.875 MB] | ||
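Evaluating a single-line Python alarm expression against current attribute readings might look like the sketch below. The expression syntax and the restricted namespace are assumptions for illustration, not PyAlarm's actual implementation.

```python
import math

# Sketch: evaluate a one-line boolean alarm expression where free
# names map to current attribute readings. The whitelisted helper
# functions are an assumption, not PyAlarm's real namespace.
def evaluate_alarm(expression, attributes):
    """Return True if the alarm condition holds; builtins are
    disabled so the expression can only touch the given names."""
    namespace = {"__builtins__": {}, "abs": abs, "max": max,
                 "min": min, "sqrt": math.sqrt}
    namespace.update(attributes)
    return bool(eval(expression, namespace))

# Hypothetical readings and alarm declaration.
readings = {"vacuum_pressure": 2.3e-8, "magnet_temp": 46.0}
alarm = "vacuum_pressure > 1e-7 or magnet_temp > 45"
print(evaluate_alarm(alarm, readings))   # -> True
```

Declaring conditions as plain expressions is what makes the visual debugging described above feasible: each sub-expression can be re-evaluated and displayed independently.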