Keyword: database
Paper Title Other Keywords Page
WCO202 Data Management at the Synchrotron Radiation Facility ANKA experiment, data-management, data-analysis, controls 13
 
  • D. Ressmann, A. Kopmann, V. Mauch, W. Mexner, A. Vondrous
    KIT, Eggenstein-Leopoldshafen, Germany
 
  The complete chain from submitting a proposal, collecting metadata and performing an experiment to analysing these data and finally long-term archival will be described. During this process, a few obstacles have to be tackled. The workflow should be transparent to the user as well as to the beamline scientists. The final data will be stored in the NeXus-compatible HDF5 container format. Because the transfer of one large file is more efficient than transferring many small files, container formats enable a faster transfer of experiment data. At the same time, HDF5 supports storing metadata together with the experiment data. For large data sets, another concern is the download performance. Furthermore, the analysis software might not be available at every home institution; as a result, it should be an option to access the experiment data on site. The metadata makes it possible to find, analyse, preserve and curate the data in a long-term archive, which will become a requirement fairly soon.  
slides icon Slides WCO202 [2.380 MB]  
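The metadata-alongside-data idea from the abstract can be sketched with h5py. This is an illustrative sketch, not ANKA's actual ingest code: beyond the NeXus NX_class convention, the file name, group layout and attribute values are assumptions.

```python
import h5py
import numpy as np

# Create a NeXus-style HDF5 container: experiment data and metadata
# travel together in one file, so a single large transfer replaces
# many small ones.
with h5py.File("scan_0001.nxs", "w") as f:
    entry = f.create_group("entry")
    entry.attrs["NX_class"] = "NXentry"       # NeXus base-class marker
    entry.attrs["proposal_id"] = "2014-0042"  # illustrative metadata
    entry.attrs["beamline"] = "imaging"

    data = entry.create_group("data")
    data.attrs["NX_class"] = "NXdata"
    # The detector frames themselves, stored next to their metadata.
    data.create_dataset("frames",
                        data=np.zeros((10, 64, 64), dtype="uint16"),
                        compression="gzip")

# Metadata can later be read back without touching the bulk data.
with h5py.File("scan_0001.nxs", "r") as f:
    print(f["entry"].attrs["proposal_id"])
```

Because the metadata lives in the same container as the frames, a long-term archive can index and curate the file from its attributes alone.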
 
WCO204 A Prototype Data Acquisition System of Abnormal RF Waveform at SACLA operation, LLRF, controls, GUI 19
 
  • M. Ishii, M. Kago
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Fukui
    RIKEN SPring-8 Center, Innovative Light Sources Division, Hyogo, Japan
  • T. Maruyama
    RIKEN/SPring-8, Hyogo, Japan
  • T. Ohshima
    RIKEN SPring-8 Center, Sayo-cho, Sayo-gun, Hyogo, Japan
  • M. Yoshioka
    SES, Hyogo-pref., Japan
 
  At SACLA, an event-synchronized data acquisition system has been installed. The system collects shot-by-shot data, such as representative point data of the phase and amplitude of the rf cavity pickup signals, in synchronization with the beam operation cycle. In addition, rf waveform data is collected every 10 minutes. However, a collection cycle of several minutes cannot catch an abnormal rf waveform that occurs suddenly. To overcome this problem, we have developed a system to capture waveforms when an abnormal event occurs. The system consists of VMEbus systems, a DAQ server, and a NoSQL database system, Cassandra. The VMEbus system detects an abnormal rf waveform, collects all related waveforms from the same shot and sends them to a DAQ server. All waveforms are stored in Cassandra via the DAQ server. The DAQ server keeps the most recent 2 seconds of data in memory to complement Cassandra’s eventual consistency model. We constructed a prototype DAQ system with a minimum configuration and checked its performance. We report the requirements and structure of the DAQ system and the test results in this paper.  
slides icon Slides WCO204 [1.426 MB]  
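The two-second in-memory complement to Cassandra's eventual consistency can be illustrated with a small sketch. All names here are hypothetical rather than taken from the SACLA implementation, and the Cassandra calls are stubbed out.

```python
import time
from collections import deque

class RecentWaveformBuffer:
    """Keep the last few seconds of waveforms in memory so that reads
    issued before Cassandra reaches consistency can still be served."""

    def __init__(self, window_s=2.0):
        self.window_s = window_s
        self.buffer = deque()          # (timestamp, shot_id, waveform)

    def store(self, shot_id, waveform, now=None):
        now = time.time() if now is None else now
        self.buffer.append((now, shot_id, waveform))
        self._expire(now)
        self._write_to_cassandra(shot_id, waveform)   # stubbed below

    def read(self, shot_id, now=None):
        now = time.time() if now is None else now
        self._expire(now)
        # Serve recent shots from memory first ...
        for _, sid, wf in reversed(self.buffer):
            if sid == shot_id:
                return wf
        # ... and fall back to the (eventually consistent) database.
        return self._read_from_cassandra(shot_id)

    def _expire(self, now):
        while self.buffer and now - self.buffer[0][0] > self.window_s:
            self.buffer.popleft()

    def _write_to_cassandra(self, shot_id, waveform):
        pass                           # placeholder for the real driver call

    def _read_from_cassandra(self, shot_id):
        return None                    # placeholder

buf = RecentWaveformBuffer()
buf.store(shot_id=1, waveform=[0.1, 0.2, 0.3], now=100.0)
print(buf.read(1, now=101.0))          # served from memory
```

Once a shot is older than the window, readers are expected to find it in Cassandra, which by then should have converged.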
 
WPO003 Setup of a History Storage Engine Based on a Non-Relational Database at ELSA controls, operation, interface, software 34
 
  • D. Proft, F. Frommberger, W. Hillert
    ELSA, Bonn, Germany
 
  The electron stretcher facility ELSA provides a beam of unpolarized and polarized electrons of up to 3.2 GeV energy to external hadron physics experiments. Its in-house developed distributed computer control system is able to provide real-time beam diagnostics as well as steering tasks in one homogeneous environment. Recently it was ported from HP-UX running on three HP workstations to a single Linux personal computer. This upgrade to powerful PC hardware opened up the way for the development of a new archive engine with a NoSQL database backend based on Hypertable. The system is capable of recording every parameter change at any given time. Besides the visualization in a newly developed graphical history data browser, the data can be exported to several programs - for example a diff-like tool to compare and recall settings of the accelerator. This contribution will give details on recent improvements of the control system and the setup of the history storage engine.  
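The diff-like comparison of accelerator settings mentioned above can be sketched in a few lines. This is a generic illustration, not ELSA's actual tool; the parameter names and values are invented.

```python
def diff_settings(old, new):
    """Compare two accelerator settings snapshots (parameter -> value)
    and return the changes, diff-style: '-' removed, '+' added,
    '~' changed (with old and new value)."""
    changes = []
    for name in sorted(set(old) | set(new)):
        if name not in new:
            changes.append(("-", name, old[name]))
        elif name not in old:
            changes.append(("+", name, new[name]))
        elif old[name] != new[name]:
            changes.append(("~", name, (old[name], new[name])))
    return changes

# A saved snapshot from the history store vs. the live settings.
saved   = {"dipole_current": 1200.0, "rf_voltage": 1.8, "tune_x": 4.612}
current = {"dipole_current": 1195.5, "rf_voltage": 1.8, "septum_on": True}
for mark, name, value in diff_settings(saved, current):
    print(mark, name, value)
```

Recalling a setting then amounts to writing the saved values of the changed parameters back to the control system.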
 
WPO005 Progress and Challenges during the Development of the Settings Management System for FAIR operation, controls, framework, ion 40
 
  • H.C. Hüther, J. Fitzek, R. Müller, D. Ondreka
    GSI, Darmstadt, Germany
 
  A few years into development of the new control system for FAIR (Facility for Antiproton and Ion Research), a first version of the new settings management system is available. As a basis, the CERN LSA framework (LHC Software Architecture) is being used and enhanced in a collaboration between GSI and CERN. New aspects, like flexible cycle lengths, have already been introduced, while concepts for other requirements, like parallel beam operation at FAIR, are being developed. At SIS18, LSA settings management is currently being utilized for testing new machine models and operation modes relevant for FAIR. Based upon experience with SIS18, a generic model for ring accelerators has been created that will be used throughout the new facility. It will also be deployed for the commissioning and operation of CRYRING by the end of 2014. During development, new challenges came up. To ease collaboration, the LSA code base has been split into common and institute-specific modules. An equivalent solution at the database level is still to be found. Besides technical issues, a data-driven system like LSA requires high-quality data. To ensure this, organizational processes need to be put in place at GSI.  
poster icon Poster WPO005 [1.049 MB]  
 
WPO007 The FAIR R3B Prototype Cryogenics Control System controls, cryogenics, framework, PLC 46
 
  • C. Betz, T. Hackler, E. Momper, D. Sanchez Valdepenas, C.S. Schweizer, H. Simon, M. Stern, M. Zaera-Sanz
    GSI, Darmstadt, Germany
 
  Funding: GSI Helmholtzzentrum für Schwerionenforschung
The superconducting GLAD magnet is one of the major parts of the R3B experiment at FAIR. R3B stands for Reactions with Relativistic Radioactive Beams. Cryogenic operation will be ensured by a fully refurbished TCF 50 cold box and oil removal system. One of the major design goals for its control system is to operate as independently as possible from the magnet controls, acting as a first prototype for the later cryogenic installations in the FAIR facility. The operation of the compressor, oil removal system, and gas management was tested in Jan. 2014. We have followed a staged implementation of the controls, first implementing all processes in a S7-319F with PROFIBUS and PROFINET I/O modules using WinCC OA as SCADA. In a second step, a migration to the CERN UNICOS framework will be carried out for the first time at GSI. This can be seen as preparatory work for novel industrial control systems to be established for the FAIR facility. A first cool-down of the refurbished cold box is foreseen for late spring 2014. Once the magnet is delivered, the magnet and the cryogenics controls will be commissioned together.
 
 
TCO102 Eplanner Software for Machine Activities Management software, operation, network, synchrotron 129
 
  • B.S.K. Srivastava, R.K. Agrawal
    RRCAT, Indore (M.P.), India
  • P. Fatnani
    Raja Ramanna Centre For Advanced Technology, Indore, India
 
  For Indus-2, a 2.5 GeV synchrotron radiation source operational at Indore, India, the need was felt for software to easily manage various operation-related activities, avoiding communication gaps among the crew members and clearly bringing out the important communications for machine operation. Typical requirements were to enter and display daily, weekly and longer operational calendars, to convey system-specific and machine-operation-related standing instructions, and to log and track the faults occurring during operations and the follow-up actions on the faults logged. Overall, the need was for a system to easily manage the many jobs related to planning the day-to-day operations of a national facility. The paper describes such a web-based system, developed, in regular use and found extremely useful.  
slides icon Slides TCO102 [5.439 MB]  
 
TCO205 Conceptual Design of the Control System for SPring-8-II controls, storage-ring, framework, operation 144
 
  • R. Tanaka, T. Matsushita, T. Sugimoto
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Fukui
    RIKEN SPring-8 Center, Innovative Light Sources Division, Hyogo, Japan
 
  The SPring-8 storage ring was inaugurated 17 years ago in 1997. The storage ring is an 8-GeV synchrotron that functions as a third-generation light source, providing brilliant X-ray beams to a large number of experimental users from all over the world. In recent years, discussions have been held on the necessity of upgrading the current ring to create a diffraction-limited storage ring at the same location. Now, a plan to upgrade the storage ring, called SPring-8-II, has been launched. First, new beam optics capable of storing beams of 6 GeV was designed using a five-bend magnet system to obtain smaller electron beam emittance that would produce coherent X-rays that are brighter than those produced by the current ring. The design of a control system that would meet the performance requirements of the new ring has also started. Equipment control devices are based on factory automation technologies such as PLC and VME, whereas digital data handling with high bandwidths is realized using telecommunication technologies such as xTCA. In this paper, we report on the conceptual design of the control system for SPring-8-II on the basis of the conceptual design report proposed by RIKEN.  
slides icon Slides TCO205 [7.572 MB]  
 
TCO207 Common Device Interface 2.0 hardware, device-server, controls, interface 147
 
  • P. Duval, H. Wu
    DESY, Hamburg, Germany
  • J. Bobnar
    Cosylab, Ljubljana, Slovenia
 
  The Common Device Interface (CDI) [1] is a popular device layer in TINE control systems [2]. Indeed, a de-facto device server (more specifically a 'property server') can be instantiated merely by supplying a hardware address database, somewhat reminiscent of an EPICS IOC. It has in fact become quite popular among users to do precisely this, although the original design intent anticipated embedding CDI as a hardware layer within a dedicated device server. When control system client applications and central services communicate directly with a CDI server, this places the burden of providing usable, viewable data (and in an efficient manner) squarely on CDI and its address database. In its initial release, any modifications to this hardware database had to be made on the file system used by the CDI device server. In this report we describe some of the many new features of CDI release 2.0, which have drawn on user and developer experience over the past eight years.
[1] 'Using the Common Device Interface in TINE', Duval and Wu, PCaPAC 2006
[2] http://tine.desy.de
 
slides icon Slides TCO207 [1.616 MB]  
 
FCO106 The Role of the CEBAF Element Database in Commissioning the 12 GeV Accelerator Upgrade hardware, controls, interface, software 161
 
  • T. L. Larrieu, M.E. Joyce, M. Keesee, C.J. Slominski, D.L. Turner
    JLab, Newport News, Virginia, USA
 
  Funding: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177. The U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to this manuscript.
The CEBAF Element Database (CED) was first developed in 2010 as a resource to support model-driven configuration of the Jefferson Lab Continuous Electron Beam Accelerator (CEBAF). Since that time, its uniquely flexible schema design, robust programming interface, and support for multiple concurrent versions have permitted it to evolve into a more broadly useful operational and control system tool. The CED played a critical role before and during the 2013 startup and commissioning of CEBAF following its 18-month long shutdown and upgrade. Information in the CED about hardware components and their relations to one another facilitated a thorough Hot Checkout process involving more than 18,000 system checks. New software relies on the CED to generate EDM screens for operators on demand, thereby ensuring that the information on those screens is correct and up to date. The CED also continues to fulfill its original mission of supporting model-driven accelerator setup. Using the new ced2elegant and eDT (elegant Download Tool), accelerator physicists have proven able to compute and apply energy-dependent set points with greater efficiency than ever before.
 
slides icon Slides FCO106 [2.698 MB]  
 
FPO013 Beam Data Logging System Based on NoSQL Database at SSRF storage-ring, injection, synchrotron, hardware 188
 
  • Y.B. Yan, Y.B. Leng
    SINAP, Shanghai, People's Republic of China
  • Z.C. Chen, H.L. Geng, L.W. Lai
    SSRF, Shanghai, People's Republic of China
 
  Funding: Supported by the Knowledge Innovation Program of Chinese Academy of Sciences
To improve accelerator reliability and stability, a beam data logging system based on the NoSQL database Couchbase was built at SSRF. Couchbase is open-source software that can be used either as a document database or as a pure key-value store. The logging system stores beam parameters under predefined conditions. It is mainly used for fault diagnosis, beam parameter tracking and automatic report generation. The details of the data logging system will be reported in this paper.
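Logging beam parameters under a predefined condition into a key-value store can be sketched as follows. A plain dict stands in for the Couchbase bucket, and the trigger condition, key format and parameter names are illustrative, not SSRF's actual schema.

```python
import json
import time

store = {}   # stands in for a Couchbase bucket (key -> JSON document)

def log_if_triggered(params, now=None):
    """Store a snapshot of beam parameters when a predefined condition
    holds, e.g. a sudden beam-current drop indicating a fault."""
    if params["beam_current_mA"] < 100.0:          # predefined condition
        ts = time.time() if now is None else now
        key = "beamlog::%d" % int(ts * 1000)       # millisecond timestamp key
        store[key] = json.dumps(params)            # document-style value
        return key
    return None

# A current drop below the threshold triggers a snapshot ...
key = log_if_triggered({"beam_current_mA": 42.0, "lifetime_h": 0.0}, now=1.0)
print(key, store[key])
```

Because each snapshot is a self-contained JSON document, the same data serves fault diagnosis, parameter tracking and report generation without a fixed relational schema.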
 
 
FPO014 New Data Archive System for SPES Project Based on EPICS RDB Archiver with PostgreSQL Backend EPICS, controls, hardware, network 191
 
  • M. Montis, S. Fantinel, M.G. Giacchini
    INFN/LNL, Legnaro (PD), Italy
  • M.A. Bellato
    INFN- Sez. di Padova, Padova, Italy
 
  The SPES project [1] is an ISOL facility under construction at INFN, Laboratori Nazionali di Legnaro, which requires the integration of the accelerator systems currently in use with the new line composed of the primary beam and the ISOL target. As a consequence, a migration from the present control system to a new one based on EPICS [2] is mandatory to realize a distributed control network for the new facility. One of the first implementations realized for this purpose is the archiver system, an important service required for experiments. Drawing on information and experience provided by other laboratories, an EPICS archive system [3] based on PostgreSQL was implemented to provide this service. Preliminary tests were done with dedicated hardware, following the project requirements. After these tests, used to determine a good configuration for the database and the EPICS application, the system is going to be moved into production, where it will be integrated with the first subsystem upgraded to EPICS. Dedicated customizations have been made to the application to provide a simple user experience in managing and interacting with the archiver system.
[1] https://web.infn.it/spes
[2] http://www.aps.anl.gov/epics
[3] http://sourceforge.net/apps/trac/cs-studio/wiki/RDBArchive
 
 
FPO015 Device Control Database Tool (DCDB) EPICS, controls, Linux, network 194
 
  • P.A. Maslov, M. Komel, K. Žagar
    Cosylab, Ljubljana, Slovenia
 
  Funding: This project has received funding from the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement no 289485.
In a physics facility containing numerous instruments, it is advantageous to reduce the amount of effort and repetitive work needed for changing the control system (CS) configuration: adding new devices, moving instruments from beamline to beamline, etc. We have developed a CS configuration tool, which provides an easy-to-use interface for quick configuration of the entire facility. It uses Microsoft Excel as the front-end application and allows the user to quickly generate and deploy IOC configuration (EPICS start-up scripts, alarms and archive configuration) onto IOCs; start, stop and restart IOCs, alarm servers and archive engines; etc. The DCDB tool utilizes a relational database, which stores information about all the elements of the accelerator. The communication between the client, database and IOCs is realized by a REST server written in Python. The key feature of the DCDB tool is that the user does not need to recompile the source code. This is achieved by using a dynamic library loader, which automatically loads and links device support libraries. The DCDB tool is compliant with ITER CODAC (used at ITER and ESS), but can also be used in any other EPICS environment.
 
poster icon Poster FPO015 [0.522 MB]  
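The core idea of database-driven IOC configuration can be sketched as follows: device rows, as a relational database might hold them, are turned into an EPICS start-up script. The table contents, template names and macro names are illustrative, not DCDB's actual schema or generated output.

```python
# Device rows as they might come out of the relational database.
devices = [
    {"name": "PS1", "template": "powerSupply.db", "addr": "192.168.1.10"},
    {"name": "PS2", "template": "powerSupply.db", "addr": "192.168.1.11"},
]

def generate_startup(devices):
    """Render an EPICS IOC st.cmd from device rows: one dbLoadRecords
    call per device, with the row fields passed as macros."""
    lines = ['dbLoadDatabase("dbd/ioc.dbd")']
    for dev in devices:
        lines.append('dbLoadRecords("db/%s", "DEV=%s,ADDR=%s")'
                     % (dev["template"], dev["name"], dev["addr"]))
    lines.append("iocInit()")
    return "\n".join(lines)

print(generate_startup(devices))
```

Moving an instrument to another beamline then means editing a row, not a script: the start-up file is regenerated and redeployed from the database.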
 
FPO016 Status of Operation Data Archiving System Using Hadoop/HBase for J-PARC distributed, operation, status, EPICS 196
 
  • N. Kikuzawa, Y. Kato, A. Yoshii
    JAEA/J-PARC, Tokai-Mura, Naka-Gun, Ibaraki-Ken, Japan
  • H. Ikeda, N. Ouchi
    JAEA, Ibaraki-ken, Japan
 
  J-PARC (Japan Proton Accelerator Research Complex) consists of a large amount of equipment. In the linac and the 3 GeV rapid cycling synchrotron (RCS), data from over 64,000 EPICS records for this equipment has been collected. The data volume is about 2 TB per year, and the stored total data volume is about 10 TB. The data have been stored in a relational database (RDB) system using PostgreSQL, but it is not sufficient in availability, performance, and the capability to flexibly handle the increasing data volume. Hadoop/HBase, which is known as a distributed, scalable big data store, has been proposed for our next-generation archive system to solve these problems. A test system was built and verified with respect to data migration and database utilization. This report shows the current status of the new archive system, and its advantages and problems which have been identified through our verification.  
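A row-key layout commonly used for time-series data in HBase can be sketched like this (a sorted dict stands in for the HBase table; the key format and PV names are illustrative, not necessarily J-PARC's actual schema):

```python
# "<pv name>|<zero-padded timestamp>" keeps each PV's samples contiguous
# and time-ordered in HBase's sorted key space, so a time-range query
# becomes a cheap prefix scan instead of a full-table filter.

table = {}

def put(pv, ts_ms, value):
    table["%s|%013d" % (pv, ts_ms)] = value

def scan(pv, start_ms, end_ms):
    """Return this PV's samples in [start_ms, end_ms), in key order,
    mimicking an HBase range scan over the sorted row keys."""
    lo = "%s|%013d" % (pv, start_ms)
    hi = "%s|%013d" % (pv, end_ms)
    return [(k, table[k]) for k in sorted(table) if lo <= k < hi]

put("LI:BM:CT01", 1000, 25.1)
put("LI:BM:CT01", 2000, 25.3)
put("RCS:RF:V01", 1500, 0.8)
print(scan("LI:BM:CT01", 0, 3000))   # only this PV's samples, in order
```

Because HBase distributes regions of the key space across nodes, this layout lets the archive grow horizontally, which is exactly the flexibility the single-node RDB lacked.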
 
FPO026 ADEI and Tango Archiving System – A Convenient Way to Archive and Represent Data TANGO, interface, controls, experiment 213
 
  • D. Haas, S.A. Chilingaryan, A. Kopmann, W. Mexner, D. Ressmann
    KIT, Eggenstein-Leopoldshafen, Germany
 
  Tango offers an efficient and powerful mechanism for archiving Tango attributes in a MySQL database. The tool Mambo allows an easy configuration of all data to be archived. This proven archiving concept was successfully introduced at ANKA (Angströmquelle Karlsruhe). To provide an efficient and intuitive web-based interface instead of complex database queries, the Tango archiving system was integrated into the Advanced Data Extraction Infrastructure (ADEI). ADEI is intended to manage data of distributed heterogeneous devices in large-scale physics experiments. ADEI contains internal pre-processing, data quality checks and an intuitive web interface that guarantees fast access and visualization of the huge data sets stored in the attached data sources, such as MySQL databases or data files. ADEI and the Tango archiving system have been successfully tested at ANKA's imaging beamlines. It is intended to deploy both at all ANKA beamlines.  
poster icon Poster FPO026 [0.938 MB]  
 
FPO034 Beamline Data Management at the Synchrotron ANKA data-management, synchrotron, interface, controls 231
 
  • A. Vondrous, T. Jejkal, W. Mexner, D. Ressmann, R. Stotzka
    KIT, Eggenstein-Leopoldshafen, Germany
 
  We present an architecture consisting of measurement devices, beamline data management and a data repository to enable data management at the synchrotron facility ANKA. The operators perform some data management tasks manually and individually for each measurement method. In order to provide the functionality of a data repository, it is necessary to collect the data, aggregate metadata and perform the ingest into the data repository. The data management layer between the measurement devices and the data repository is referred to as beamline data management (BLDM); it performs data collection, metadata aggregation and data ingest. Shared libraries contain functionality such as migration, ingest and metadata aggregation and form the basis of the BLDM. The workflows and their current state of execution are persisted to enable monitoring and error handling. After data ingest into the data repository, implemented with the KIT Data Manager, archiving, content preservation and bit preservation services are provided for the ingested data. The BLDM can connect the existing infrastructure with the data repository without major changes to routine processes to build a data repository for a synchrotron.  
 
FCO206 PANIC, a Suite for Visualization, Logging and Notification of Incidents TANGO, controls, device-server, PLC 246
 
  • S. Rubio-Manrique, F. Becheri, G. Cuní, D. Fernandez-Carreiras, C. Pascual-Izarra, Z. Reszela
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
 
  PANIC is a suite of Python applications focused on the visualization, logging and notification of events occurring in the ALBA [1] Synchrotron control system. Built on top of the PyAlarm Tango [2] device server, it provides an API and a set of graphic tools to visualize the status of the declared alarms, create new alarm processes and enable notification services like SMS, email, data recording, sound or the execution of Tango commands. The user interface provides visual debugging of complex alarm behaviors, which can be declared using single-line Python expressions. This article describes the architecture of the PANIC suite, the alarm declaration syntax and the integration of alarm widgets in Taurus [3] user interfaces.
[1] www.cells.es
[2] www.tango-controls.org
[3] www.taurus-scada.org
 
slides icon Slides FCO206 [1.875 MB]
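The idea of single-line Python alarm expressions evaluated against current attribute values can be sketched as follows. The evaluation mechanism, attribute names and alarm names here are illustrative, not PyAlarm's actual API.

```python
def evaluate_alarm(expression, attributes):
    """Evaluate a one-line boolean alarm expression with attribute
    names available as variables; builtins are disabled to keep the
    evaluation contained."""
    return bool(eval(expression, {"__builtins__": {}}, dict(attributes)))

# Current attribute readings (illustrative names and values).
attrs = {"vacuum_pressure": 3.2e-8, "magnet_temp": 47.5, "beam_on": True}

# Alarms declared as single-line Python expressions.
alarms = {
    "VACUUM_HIGH": "vacuum_pressure > 1e-7",
    "MAGNET_HOT":  "beam_on and magnet_temp > 45.0",
}
for name, expr in alarms.items():
    if evaluate_alarm(expr, attrs):
        print("ALARM:", name)   # would fan out to SMS/email/sound here
```

Declaring alarm logic as data rather than code is what lets a GUI create, inspect and visually debug complex alarm behaviors at runtime.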