Keyword: database
Paper Title Other Keywords Page
MOBPP03 Fault Tolerant, Scalable Middleware Services Based on Spring Boot, REST, H2 and Infinispan distributed, controls, operation, network 33
 
  • W. Sliwinski, K. Kaczkowski, W. Zadlo
    CERN, Geneva, Switzerland
 
  Control systems require several core services for work coordination and everyday operation. One example is the Directory Service, a central registry of all access points and their physical location in the network. Another is the Authentication Service, which verifies a caller's identity and issues a signed token representing the caller in distributed communication. Both are real-life examples of middleware services that have to be always available and scalable. The paper discusses the design decisions and technical background behind these two central services used at CERN. Both services were designed using current technology standards, namely Spring Boot and REST. Moreover, they had to comply with demanding requirements for fault tolerance and scalability, so additional extensions were necessary, such as a distributed in-memory cache (using Apache Infinispan) and local mirroring of the Oracle database using the H2 database. Additionally, the paper explains the trade-offs of the different approaches to providing high-availability features and lessons learnt from operational usage.
Slides: MOBPP03 [6.846 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOBPP03  
About • paper received ※ 27 September 2019       paper accepted ※ 08 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
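The high-availability pattern this abstract describes, a central registry backed by a local mirror that keeps answering lookups when the primary database is unreachable, can be sketched as follows. This is an illustrative sketch only, under assumed names (`DirectoryService`, `PrimaryUnavailable`), not the actual CERN Spring Boot/Infinispan implementation.

```python
# Illustrative sketch (hypothetical names): a directory lookup that serves
# from a local in-memory mirror when the primary database is down, in the
# spirit of the paper's Infinispan cache / H2 mirror approach.

class PrimaryUnavailable(Exception):
    """Raised when the central database cannot be reached."""

class DirectoryService:
    def __init__(self, primary_lookup):
        self._primary_lookup = primary_lookup  # e.g. a query to the central DB
        self._mirror = {}                      # local mirror (H2-like role)

    def resolve(self, device_name):
        """Return the network endpoint registered for a device."""
        try:
            endpoint = self._primary_lookup(device_name)
            self._mirror[device_name] = endpoint   # keep the mirror fresh
            return endpoint
        except PrimaryUnavailable:
            # Fall back to the last known mapping so callers keep working.
            return self._mirror[device_name]
```

In the real services the mirror role is played by an Infinispan distributed cache or an H2 copy of the Oracle data; the fallback logic is the same cache-aside idea.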
 
MOBPP05 Dynamic Control Systems: Advantages and Challenges controls, TANGO, experiment, interface 46
 
  • S. Rubio-Manrique, G. Cuní
    ALBA-CELLS Synchrotron, Cerdanyola del Vallès, Spain
 
  The evolution of software control systems introduced the usage of dynamically typed languages, such as Python or Ruby, which helped accelerator scientists develop their own control algorithms on top of the standard control system. This new high-level layer of scientist-developed code is prone to continuous change and is no longer restricted to the fixed types and data structures that low-level control systems used to impose. This provides great advantages for scientists but also big challenges for control engineers, who must integrate these dynamic developments into existing systems such as user interfaces, archiving or alarms.
Slides: MOBPP05 [2.267 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOBPP05  
About • paper received ※ 30 September 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
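One concrete challenge the abstract alludes to is bridging dynamically typed, scientist-written values into statically structured systems such as archivers. A minimal sketch, assuming a hypothetical `describe` helper (not from the paper), of runtime type inference before storage:

```python
# Illustrative sketch: one way a static archiving layer can cope with
# dynamically typed, scientist-defined attributes, by inferring a storable
# type descriptor at runtime. The descriptor format is invented.

def describe(value):
    """Map an arbitrary Python value to an archiver-friendly descriptor."""
    if isinstance(value, bool):          # check bool before int
        return ("scalar", "bool")
    if isinstance(value, (int, float)):
        return ("scalar", "float")
    if isinstance(value, str):
        return ("scalar", "string")
    if isinstance(value, (list, tuple)) and all(
            isinstance(v, (int, float)) for v in value):
        return ("array", "float")
    raise TypeError(f"unsupported dynamic value: {value!r}")
```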
 
MOCPL05 Software Framework QAClient for Measurement/Automation In Proton Therapy Centers controls, LabView, proton, framework 86
 
  • A. Mayor, O. Actis, D. Meer, B. Rohrer
    PSI, Villigen PSI, Switzerland
 
  PSI operates a proton therapy center for cancer treatment consisting of the treatment areas Gantry 2, Gantry 3 and OPTIS2. For calibration measurements and quality assurance procedures, which have to be executed frequently and involve different systems and software products, a software framework (QAClient) was developed at PSI. QAClient provides a configurable and extensible framework communicating with PSI control systems, measurement devices, databases and commercial products such as LabVIEW and MATLAB. It supports automation of test protocols with user interaction, data analysis and data storage, as well as the generation of reports. It runs on Java on different operating system platforms and offers an intuitive graphical user interface. It is used for clinical checks, calibration and tuning measurements, system integration tests and patient table calibrations. New tasks can be configured from standard tasks, without programming effort. QAClient is used for the Gantry 2 Daily Check, which reduces the execution time by 70% and simplifies measurements so that less specialized staff can execute them. QA reports are generated automatically, and the data are archived and can be used for trend analysis.
Slides: MOCPL05 [2.453 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOCPL05  
About • paper received ※ 27 September 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
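The idea that "new tasks can be configured from standard tasks, without programming effort" can be illustrated with a small configuration-driven runner: a protocol is a list of step configurations, each dispatched to a reusable standard task. All names below are hypothetical; this is not QAClient code (QAClient itself runs on Java).

```python
# Illustrative sketch of a configuration-driven test protocol runner.
# Standard tasks are reusable building blocks; a protocol is pure data.

STANDARD_TASKS = {
    "measure": lambda cfg, results: results.append((cfg["name"], "measured")),
    "analyse": lambda cfg, results: results.append((cfg["name"], "analysed")),
    "report":  lambda cfg, results: results.append((cfg["name"], "reported")),
}

def run_protocol(protocol):
    """Execute each configured step with its matching standard task."""
    results = []
    for step in protocol:
        STANDARD_TASKS[step["task"]](step, results)
    return results
```

Adding a new check then means writing a new protocol (configuration), not new code.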
 
MOCPR03 Planning of Interventions With the Atlas Expert System simulation, detector, experiment, controls 106
 
  • I. Asensi Tortajada, A. Rummler, C.A. Solans Sanchez
    CERN, Geneva, Switzerland
  • J.G. Torres Pais
    Valencia University, Burjassot, Spain
 
  The ATLAS Technical Coordination Expert System is a tool for the simulation of the ATLAS experiment infrastructure that combines information from diverse areas such as detector control (DCS) and safety (DSS) systems, gas, water, cooling, ventilation, cryogenics, and electricity distribution. It allows the planning of interventions during technical stops and maintenance periods, and it is being used during LS2 to provide an additional source of information for the planning of interventions. This contribution describes the status of the Expert System and how it is used to provide information on the impact of an intervention, based on the risk assessment models of fault tree analysis and principal component analysis.
Slides: MOCPR03 [9.062 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOCPR03  
About • paper received ※ 27 September 2019       paper accepted ※ 11 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
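Fault tree analysis, one of the two risk assessment models mentioned, evaluates a top event from basic component failures through AND/OR gates. A minimal sketch with a made-up cooling example (not ATLAS infrastructure):

```python
# Minimal fault tree evaluation: basic events feed AND/OR gates, and the
# top event tells whether a planned intervention would cause a fault.
# The tree below is an invented example.

def evaluate(node, failed):
    """Return True if `node` is in a failed state given failed basic events."""
    kind = node[0]
    if kind == "event":
        return node[1] in failed
    children = [evaluate(child, failed) for child in node[1]]
    return all(children) if kind == "and" else any(children)

# Cooling fails if the pump fails OR both redundant power feeds fail.
cooling = ("or", [("event", "pump"),
                  ("and", [("event", "feed_a"), ("event", "feed_b")])])
```

Simulating an intervention then amounts to marking the affected equipment as failed and checking which top events trip.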
 
MOMPR008 SharePoint for HEPS Technical Systems and Project Management site, project-management, MMI, interface 175
 
  • X.H. Wu, L. Bai, C.P. Chu
    IHEP, Beijing, People’s Republic of China
 
  High Energy Photon Source (HEPS) is the latest synchrotron light source planned in China, designed for ultra-low emittance and high brightness. The accelerator and beamlines contain tens of thousands of devices, which require systematic management. It is also necessary to capture project management information systematically. HEPS chose Microsoft SharePoint as the document tool for the project and all technical systems. Additionally, Microsoft Project Server, running on top of SharePoint, is used for project management. Utilizing the SharePoint and Project software facilitates much of the daily work of the HEPS project. This paper describes the SharePoint and Project setup and the various applications developed so far.
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOMPR008  
About • paper received ※ 01 October 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MOPHA015 Reverse Engineering the Amplifier Slab Tool at the National Ignition Facility target, optics, simulation, operation 228
 
  • A. Bhasker, R.D. Clark, J.E. Dorham
    LLNL, Livermore, California, USA
 
  Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344
This paper discusses the challenges and steps required to convert a stand-alone legacy Microsoft Access-based application, in the absence of original requirements, to a web-based application with an Oracle backend and Oracle Application Express/JavaScript/JQuery frontend. The Amplifier Slab Selection (ASL) Tool provides a means to manage and track Amplifier Slabs on National Ignition Facility (NIF) beamlines. ASL generates simulations and parameter visualization charts of seated Amplifier Slabs as well as available replacement candidates to help optics designers make beamline configuration decisions. The migration process, undertaken by the NIF Shot Data Systems (SDS) team at Lawrence Livermore National Laboratory (LLNL), included reverse-engineering functional requirements due to evolving processes and changing NIF usage patterns.
 
Poster: MOPHA015 [0.525 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA015  
About • paper received ※ 27 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MOPHA023 Applications of an EPICS Embedded and Credit-card Sized Waveform Acquisition EPICS, controls, operation, status 242
 
  • Y.-S. Cheng, K.T. Hsu, K.H. Hu, D. Lee, C.Y. Liao, C.Y. Wu
    NSRRC, Hsinchu, Taiwan
 
  To eliminate long-distance cabling and improve signal quality, remote waveform access support has been developed for the TPS (Taiwan Photon Source) and TLS (Taiwan Light Source) control systems for routine operation. In the previous mechanism, a dedicated EPICS IOC communicated with Ethernet-based oscilloscopes to acquire the waveform data. To obtain higher operational reliability and lower power consumption, an FPGA- and SoC (System-on-Chip)-based waveform acquisition with an embedded EPICS IOC has been adopted to capture the waveform signals and process them into EPICS PVs (Process Variables). For specific purposes, different graphical applications have been designed and integrated into the existing operation interfaces. These make it convenient to observe waveform status and to analyse the captured data on the control consoles. The efforts are described in this paper.
Poster: MOPHA023 [5.076 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA023  
About • paper received ※ 30 September 2019       paper accepted ※ 08 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MOPHA028 High Energy Photon Source Control System Design controls, EPICS, timing, experiment 249
 
  • C.P. Chu, D.P. Jin, G. Lei, G. Li, C.H. Wang, G.L. Xu, L.X. Zhu
    IHEP, Beijing, People’s Republic of China
 
  A 6 GeV high energy synchrotron radiation light source is being built near Beijing, China. The accelerator part contains a linac, a booster and a 1360 m circumference storage ring, with fourteen production beamlines for phase one. The control systems are EPICS based, with integrated application and data platforms for the accelerators and beamlines. The number of devices and the complexity of operation for such a machine are extremely high; therefore, a modern system design is vital for efficient operation of the machine. This paper reports the design, preliminary development and planned near-future work, especially the databases for quality assurance and the application software platforms for high-level applications.
Poster: MOPHA028 [2.257 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA028  
About • paper received ※ 30 September 2019       paper accepted ※ 08 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MOPHA032 Big Data Architectures for Logging and Monitoring Large Scale Telescope Arrays software, monitoring, controls, operation 268
 
  • A. Costa, U. Becciani, P. Bruno, A.S. Calanducci, A. Grillo, S. Riggi, E. Sciacca, F. Vitello
    INAF-OACT, Catania, Italy
  • V. Conforti, F. Gianotti
    INAF, Bologna, Italy
  • J. Schwarz
    INAF-Osservatorio Astronomico di Brera, Merate, Italy
  • G. Tosti
    Università degli Studi di Perugia, Perugia, Italy
 
  Funding: This work was partially supported by the ASTRI "Flagship Project" financed by the Italian Ministry of Education, University, and Research and led by the Italian National Institute of Astrophysics.
Large volumes of technical and logging data result from the operation of large-scale astrophysical infrastructures. In the last few years, several "Big Data" technologies have been developed to deal with huge amounts of data, e.g. in the Internet of Things (IoT) framework. We compare different stacks of Big Data/IoT architectures, including high-performance distributed messaging systems, time series databases, streaming systems and interactive data visualization. The main aim is to classify these technologies based on a set of use cases typically related to data produced in the astronomical environment, with the objective of a system that can be updated, maintained and customized with minimal programming effort. We present the preliminary results obtained using different Big Data stack solutions to manage use cases related to the quasi real-time collection, processing and storage of the technical data, logging and technical alerts produced by the array of nine ASTRI telescopes under development by INAF as a pathfinder array for Cherenkov astronomy in the TeV energy range.
*ASTRI Project: http://www.brera.inaf.it/~astri/wordpress/
**CTA Project: https://www.cta-observatory.org/
 
Poster: MOPHA032 [1.327 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA032  
About • paper received ※ 02 October 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
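A typical use case in such comparisons is quasi real-time aggregation of telemetry before persistence in a time series database. The stdlib sketch below buckets samples per signal and time window and keeps only per-window statistics; it is illustrative, not part of the ASTRI stack.

```python
# Illustrative sketch: per-signal, per-window aggregation of telemetry,
# the kind of reduction a time series database pipeline performs.

def aggregate(samples, window_s=60):
    """samples: iterable of (signal, unix_ts, value) -> per-window stats."""
    buckets = {}
    for signal, ts, value in samples:
        key = (signal, int(ts // window_s) * window_s)  # window start time
        buckets.setdefault(key, []).append(value)
    return {key: {"n": len(vals), "min": min(vals),
                  "max": max(vals), "mean": sum(vals) / len(vals)}
            for key, vals in buckets.items()}
```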
 
MOPHA042 Evaluating VISTA and EPICS With Regard to Future Control Systems Development at ISIS controls, EPICS, hardware, software 291
 
  • I.D. Finch
    STFC/RAL/ISIS, Chilton, Didcot, Oxon, United Kingdom
 
  The ISIS Muon and Neutron Source has been in operation for more than 30 years and has already seen one complete replacement of its control system software. ISIS currently uses the Vista control system software suite. I present our work in implementing a new EPICS control system for the Front End Test Stand (FETS), which currently runs Vista. This new EPICS system is being used to evaluate a possible larger-scale migration from Vista to EPICS at ISIS. I present my experience of the initial implementation of EPICS, considerations on using a phased transition during which the two systems run in parallel, and our future plans for developing control systems in an established, decades-old accelerator with heterogeneous systems.
Poster: MOPHA042 [0.396 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA042  
About • paper received ※ 30 September 2019       paper accepted ※ 08 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MOPHA043 Accelerator Control Data Mining with WEKA controls, target, network, GUI 293
 
  • W. Fu, K.A. Brown, T. D’Ottavio, P.S. Dyer, S. Nemesure
    BNL, Upton, New York, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
Accelerator control systems generate and store many time-series data related to the performance of an accelerator and its support systems. Many of these time series have detectable trends and patterns. Being able to detect and recognize these trends and patterns in a timely manner, and to analyse and predict future data changes, can provide intelligent ways to improve the control system with proactive feedback/feed-forward actions. With the help of advanced data mining and machine learning technology, these types of analyses become easier to produce. As machine learning technology matures, with powerful model algorithms, data processing tools and visualization libraries available in different programming languages (e.g. Python, R, Java), it becomes relatively easy for developers to learn and apply machine learning technology to online accelerator control system data. This paper explores time-series data analysis and forecasting in the Relativistic Heavy Ion Collider (RHIC) control systems with the Waikato Environment for Knowledge Analysis (WEKA) system and its Java data mining APIs.
 
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA043  
About • paper received ※ 20 September 2019       paper accepted ※ 08 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
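The paper works with WEKA's Java data mining APIs; as a language-neutral illustration of the simplest forecasting step involved, the sketch below fits a least-squares linear trend to equally spaced samples and predicts the next value, using only the standard library.

```python
# Illustrative sketch: one-step-ahead forecast of an equally spaced time
# series via an ordinary least-squares fit of y = a + b*t. This is a
# stand-in for the idea, not WEKA's forecasting machinery.

def forecast_next(values):
    """Fit y = a + b*t to samples at t = 0..n-1 and predict y at t = n."""
    n = len(values)
    t_mean = (n - 1) / 2
    y_mean = sum(values) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(values))
    den = sum((t - t_mean) ** 2 for t in range(n))
    b = num / den              # slope
    a = y_mean - b * t_mean    # intercept
    return a + b * n
```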
 
MOPHA047 CERN Secondary Beamlines Software Migration Project software, controls, experiment, optics 312
 
  • A. Gerbershagen, D. Banerjee, J. Bernhard, M. Brugger, N. Charitonidis, L. Gatignon, E. Montbarbon, B. Rae, M.S. Rosenthal, M.W.U. Van Dijk
    CERN, Meyrin, Switzerland
  • G. D’Alessandro
    JAI, Egham, Surrey, United Kingdom
  • I. Peres
    Technion, Haifa, Israel
 
  The Experimental Areas group of the CERN Engineering Department operates a number of beamlines for fixed-target experiments, irradiation facilities and test beams. The software currently used for the simulation of the beamline layout (BEATCH), beam optics (TRANSPORT), particle tracking (TURTLE) and muon halo calculation (HALO) was developed in FORTRAN in the 1980s and requires an update in order to ensure long-term continuity. The ongoing Software Migration Project transfers the beamline descriptions to a set of newer, commonly used software codes, such as MADX, FLUKA, G4Beamline and BDSIM. This contribution summarizes the goals and scope of the project. It discusses the implementation of the beamlines in the new codes, their integration into the CERN layout database, and the interfaces to the software codes used by other CERN groups. This includes the CERN secondary beamlines control system CESAR, which is used for the readout of beam diagnostics and for control of the beam via settings of the magnets, collimators, filters, etc. The proposed interface is designed to allow a comparison between the measured beam parameters and those calculated with beam optics software.
Poster: MOPHA047 [1.220 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA047  
About • paper received ※ 25 September 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MOPHA048 The IRRAD Data Manager (IDM) radiation, experiment, software, operation 318
 
  • B. Gkotse, G. Pezzullo, F. Ravotti
    CERN, Meyrin, Switzerland
  • B. Gkotse, P. Jouvelot
    MINES ParisTech, PSL Research University, Paris, France
 
  Funding: This project has received funding from the European Union’s Horizon 2020 Research and Innovation program under Grant Agreement no. 654168.
The Proton Irradiation Facility (IRRAD) is a reference facility at CERN for characterizing detectors and other accelerator components against radiation. To ensure reliable facility operation and smooth experimental data handling, a new IRRAD Data Manager (IDM) web application has been developed and first used during the last facility run before the CERN Long Shutdown 2. Following best practices in user experience design, IDM provides a user-friendly interface that allows both users to handle their samples’ data and facility operators to manage and coordinate the experiments more efficiently. Based on web technologies such as Django, jQuery and Semantic UI, IDM is characterized by its minimalistic design and functional robustness. In this paper, we present the key features of IDM, our design choices and its overall software architecture. Moreover, we discuss scalability and portability opportunities for IDM in order to cope with the requirements of other irradiation facilities.
 
Poster: MOPHA048 [2.416 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA048  
About • paper received ※ 30 September 2019       paper accepted ※ 19 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MOPHA063 Towards a Common Reliability & Availability Information System for Particle Accelerator Facilities operation, medical-accelerators, radiation, experiment 356
 
  • K. Höppner, Th. Haberer, K. Pasic, A. Peters
    HIT, Heidelberg, Germany
  • J. Gutleber, A. Niemi
    CERN, Meyrin, Switzerland
  • H. Humer
    AIT, Vienna, Austria
 
  Funding: This project has received funding from the European Union’s Horizon 2020 Research and Innovation program under grant agreement No 730871.
Failure event and maintenance record based data collection systems have a long tradition in industry. Today, the particle accelerator community does not possess a common platform for storing and sharing reliability and availability information in an efficient way. In large accelerator facilities used for fundamental physics research, each machine is unique, and the scientific culture, work organization and management structures are often incompatible with a streamlined industrial approach. Other accelerator facilities, such as medical accelerators, are entering the area of industrial process improvement due to legal requirements and constraints. The Heidelberg Ion Beam Therapy Center is building up a system for reliability and availability analysis, exploring the technical and organizational requirements for a community-wide information system on accelerator system and component reliability and availability. This initiative is part of the EU H2020 project ARIES, started in May 2017. We present the technical scope of the system, which is intended to access and provide information specific to reliability statistics in ways that do not compromise the information suppliers and system producers.
 
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA063  
About • paper received ※ 04 October 2019       paper accepted ※ 08 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
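The core statistics such a reliability and availability information system derives from failure records can be shown with a worked example: mean time between failures (MTBF), mean time to repair (MTTR), and availability = MTBF / (MTBF + MTTR). The record format below is hypothetical.

```python
# Worked example: derive MTBF, MTTR and availability from a list of fault
# durations recorded over an observation period. Illustrative only.

def availability(fault_durations_h, period_h):
    """fault_durations_h: downtime of each fault (hours) within period_h."""
    downtime = sum(fault_durations_h)
    failures = len(fault_durations_h)
    mtbf = (period_h - downtime) / failures   # mean time between failures
    mttr = downtime / failures                # mean time to repair
    return {"mtbf_h": mtbf, "mttr_h": mttr,
            "availability": mtbf / (mtbf + mttr)}
```

For example, two 2-hour faults in a 100-hour period give MTBF = 48 h, MTTR = 2 h and 96% availability.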
 
MOPHA085 CERN Controls Open Source Monitoring System monitoring, controls, software, status 404
 
  • F. Locci, F. Ehm, L. Gallerani, J. Lauener, J.P. Palluel, R. Voirin
    CERN, Meyrin, Switzerland
 
  The CERN accelerator controls infrastructure spans several thousand machines and devices used for accelerator control and data acquisition. In 2009, a fully in-house CERN solution (DIAMON) was developed to monitor and diagnose the complete controls infrastructure. The adoption of the solution by an enlarged community of users and its rapid expansion led to a product that became difficult to operate and maintain, in particular because of the multiplicity and redundancy of services, the centralized management of the data acquisition and visualization software, the complex configuration, and intrinsic scalability limits. At the end of 2017, a completely new monitoring system for the beam controls infrastructure was launched. The new "COSMOS" system was developed with two main objectives in mind: first, to detect instabilities and prevent breakdowns of the control system infrastructure; second, to provide users with a more coherent and efficient solution for developing their specific data monitoring agents and related dashboards. This paper describes the overall architecture of COSMOS, focusing on the conceptual and technological choices of the system.
Poster: MOPHA085 [1.475 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA085  
About • paper received ※ 29 September 2019       paper accepted ※ 19 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MOPHA086 The Design of Experimental Performance Analysis and Visualization System experiment, data-management, data-analysis, laser 409
 
  • J. Luo, L. Li, Z. Ni, X. Zhou
    CAEP, Sichuan, People’s Republic of China
  • Y. Gao
    Stony Brook University, Stony Brook, New York, USA
 
  The analysis of experimental performance is an essential task in any experiment. With the increasing demand for experimental data mining and utilization, methods of experimental data analysis abound, including visualization, multi-dimensional performance evaluation, experimental process modeling and performance prediction, to name but a few. We design and develop an experimental performance analysis and visualization system consisting of a data source configuration component, an algorithm management component and a data visualization component. It provides capabilities such as experimental data extraction and transformation, flexible algorithm configuration and validation, and multi-view presentation of experimental performance. It will bring great convenience and improvement to the analysis and verification of experimental performance.
Poster: MOPHA086 [0.232 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA086  
About • paper received ※ 30 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MOPHA111 Easing the Control System Application Development for CMS Detector Control System with Automatic Production Environment Reproduction controls, experiment, software, detector 476
 
  • I. Papakrivopoulos, G. Bakas, G. Tsipolitis
    National Technical University of Athens, Athens, Greece
  • U. Behrens
    DESY, Hamburg, Germany
  • J. Branson, S. Cittolin, M. Pieri
    UCSD, La Jolla, California, USA
  • P. Brummer, D. Da Silva Gomes, C. Deldicque, M. Dobson, N. Doualot, J.R. Fulcher, D. Gigi, M.S. Gladki, F. Glege, J. Hegeman, A. Mecionis, F. Meijers, E. Meschi, K. Mor, S. Morovic, L. Orsini, D. Rabady, A. Racz, K.V. Raychinov, A. Rodriguez Garcia, H. Sakulin, C. Schwick, D. Simelevicius, P. Soursos, M. Stankevicius, U. Suthakar, C. Vazquez Velez, A.B. Zahid, P. Zejdl
    CERN, Meyrin, Switzerland
  • G.L. Darlea, G. Gomez-Ceballos, C. Paus
    MIT, Cambridge, Massachusetts, USA
  • W. Li, A. Petrucci, A. Stahl
    Rice University, Houston, Texas, USA
  • R.K. Mommsen, S. Morovic, V. O’Dell, P. Zejdl
    Fermilab, Batavia, Illinois, USA
 
  The Detector Control System (DCS) is one of the main pieces involved in the operation of the Compact Muon Solenoid (CMS) experiment at the LHC. The system is built using WinCC Open Architecture (WinCC OA) and the Joint Controls Project (JCOP) framework, which was developed on top of WinCC OA at CERN. Following the JCOP paradigm, CMS has developed its own framework, structured as a collection of more than 200 individually installable components, each providing a different feature. Every one of the systems that the CMS DCS consists of is created by installing a different set of these components. By automating this process, we are able to quickly and efficiently create new systems in production or recreate problematic ones, and also to create development environments that are identical to the production ones. The latter results in smoother development and integration processes, as new or reworked components are developed and tested in production-like environments. Moreover, it allows the central DCS support team to easily reproduce systems that users and developers report as problematic, reducing the response time for bug fixing and improving the support quality.
Poster: MOPHA111 [0.975 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA111  
About • paper received ※ 30 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MOPHA117 Big Data Archiving From Oracle to Hadoop network, monitoring, operation, SCADA 497
 
  • I. Prieto Barreiro, M. Sobieszek
    CERN, Meyrin, Switzerland
 
  The CERN Accelerator Logging Service (CALS) is used to persist data from around 2 million predefined signals coming from heterogeneous sources such as the electricity infrastructure, industrial controls like cryogenics and vacuum, and beam-related systems. This Oracle-based logging system will be phased out at the end of the LHC’s Long Shutdown 2 (LS2) and replaced by the Next CERN Accelerator Logging Service (NXCALS), which is based on Hadoop. As a consequence, the different data sources must be adapted to persist their data in the new logging system. This paper describes the solution implemented to archive into NXCALS the data produced by the QPS (Quench Protection System) and SCADAR (Supervisory Control And Data Acquisition Relational database) systems, which generate a total of around 175,000 values per second. To cope with such a volume of data, the new service has to be extremely robust, scalable and fail-safe, with guaranteed data delivery and no data loss. The paper also explains how to recover from different failure scenarios, such as a network disruption, and how to manage and monitor this highly distributed service.
Poster: MOPHA117 [1.227 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA117  
About • paper received ※ 29 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
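The "guaranteed data delivery and no data loss" requirement is typically met by buffering values and discarding them only after the archive acknowledges a batch, so a transient failure delays data but never loses it. A minimal sketch with a hypothetical `ReliableArchiver` (not the actual NXCALS client):

```python
# Illustrative sketch: values stay buffered until the archive acknowledges
# the batch; a send failure leaves the buffer intact for a later retry.

class ReliableArchiver:
    def __init__(self, send_batch, batch_size=1000):
        self._send_batch = send_batch  # returns on ack, raises on failure
        self._batch_size = batch_size
        self._buffer = []

    def append(self, value):
        self._buffer.append(value)

    def flush(self):
        """Ship full batches; keep unacknowledged data buffered."""
        while len(self._buffer) >= self._batch_size:
            batch = self._buffer[:self._batch_size]
            try:
                self._send_batch(batch)
            except ConnectionError:
                return False                   # retry on the next flush
            del self._buffer[:self._batch_size]  # acked: safe to drop
        return True
```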
 
MOPHA118 Improving Alarm Handling for the TI Operators by Integrating Different Sources in One Alarm Management and Information System framework, monitoring, interface, controls 502
 
  • M. Bräger, M. Bouzas Reguera, U. Epting, E. Mandilara, E. Matli, I. Prieto Barreiro, M.P. Rafalski
    CERN, Geneva, Switzerland
 
  CERN uses a central alarm system to monitor its complex technical infrastructure. The Technical Infrastructure (TI) operators must handle a large number of alarms coming from several thousand devices spread around CERN. In order to focus on the most important events and reduce the time required to solve a problem, it is necessary to provide extensive helpful information, such as the alarm states of linked systems, a geographical overview on a detailed map, and clear instructions to the operators. In addition, it is useful to temporarily inhibit alarms coming from equipment during planned maintenance or interventions. The tool presents all necessary information in one place and adds simple and intuitive functionality to ease operation through an enhanced interface.
Poster: MOPHA118 [0.907 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA118  
About • paper received ※ 30 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
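The temporary alarm inhibition described above amounts to filtering out alarms whose source equipment is inside a declared maintenance window. A minimal sketch with invented data structures:

```python
# Illustrative sketch: suppress alarms from equipment under a planned
# intervention window, show everything else.

def active_alarms(alarms, inhibits, now):
    """alarms: (equipment, message, ts); inhibits: equipment -> (start, end)."""
    shown = []
    for equipment, message, ts in alarms:
        window = inhibits.get(equipment)
        if window and window[0] <= now <= window[1]:
            continue  # planned maintenance: hide this alarm
        shown.append((equipment, message, ts))
    return shown
```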
 
MOPHA123 Vacuum Controls Configurator: A Web Based Configuration Tool for Large Scale Vacuum Control Systems vacuum, controls, PLC, SCADA 511
 
  • A.P. Rocha, I.A. Amador, S. Blanchard, J. Fraga, P. Gomes, C.V. Lima, G. Pigny, P. Poulopoulou
    CERN, Geneva, Switzerland
 
  The Vacuum Controls Configurator (vacCC) is an application developed at CERN for the management of large-scale vacuum control systems. It was developed to facilitate the management of the configuration of the vacuum control system at CERN, the largest vacuum system in operation in the world, with over 15,000 vacuum devices spread over 128 km of vacuum chambers. It allows non-experts in software to easily integrate or modify vacuum devices within the control system via a web browser. It automatically generates configuration data that enables communication between vacuum devices and the supervision system, the generation of SCADA synoptics, long- and short-term archiving, and the publishing of vacuum data to external systems. VacCC is a web application built for the cloud, dockerized, and based on a microservice architecture. In this paper, we present the application’s main aspects concerning its architecture, data flow, data validation, and generation of configuration for SCADA/PLC.
poster icon Poster MOPHA123 [1.317 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA123  
About • paper received ※ 01 October 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MOPHA149 Accelerator Schedule Management at CERN controls, operation, software, status 579
 
  • B. Urbaniec, C. Roderick
    CERN, Geneva, Switzerland
 
  Maximizing the efficiency of operating CERN’s accelerator complex requires careful forward planning and synchronized scheduling of cross-accelerator events. These schedules are of interest to many people, helping them plan and organize their work. Therefore, this data should be easily accessible, both interactively and programmatically. Development of the Accelerator Schedule Management (ASM) system started in 2017 to address these topics and enable the definition, management and publication of schedule data in a generic way. The ASM system currently includes three core modules to manage: yearly accelerator schedules for the CERN Injector complex and LHC; submission and scheduling of Machine Development (MD) requests with supporting statistics; and submission, approval, scheduling and follow-up of control system changes and their impact. This paper describes the ASM Web application (built with Angular, TypeScript and Java) in terms of: core scheduling functionality; integration of external data sources; and provision of programmatic access to schedule data via a language-agnostic REST API (allowing other systems to leverage schedule data).  
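As an illustration of programmatic access through a language-agnostic REST API, the sketch below parses a schedule payload and answers a simple query. The JSON shape, field names and event data are assumptions made for the example, not the actual ASM schema.

```python
import json

# Hedged sketch: a client consuming a REST schedule payload. The structure
# below (accelerator / year / events with ISO dates) is invented for
# illustration; the real ASM API may differ.
sample_response = json.loads("""
{
  "accelerator": "LHC",
  "year": 2019,
  "events": [
    {"name": "Technical stop", "start": "2019-06-03", "end": "2019-06-07"},
    {"name": "MD block 1", "start": "2019-06-10", "end": "2019-06-12"}
  ]
}
""")

def events_overlapping(schedule, day):
    """Return names of scheduled events covering the given ISO date.

    ISO-8601 date strings compare correctly as plain strings."""
    return [e["name"] for e in schedule["events"] if e["start"] <= day <= e["end"]]

print(events_overlapping(sample_response, "2019-06-05"))
```

Because the API is plain JSON over HTTP, the same query could be written in any language, which is the sense in which the API is language-agnostic.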
poster icon Poster MOPHA149 [2.477 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA149  
About • paper received ※ 29 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MOPHA157 Global Information Management System for HEPS software, operation, interface, experiment 606
 
  • C.H. Wang, C.P. Chu
    IHEP, Beijing, People’s Republic of China
  • H.H. Lv
    SINAP, Shanghai, People’s Republic of China
 
  HEPS is a large, complex science facility consisting of an accelerator, beamlines and general facilities. The accelerator is made up of many subsystems and a large number of components, such as magnets, power supplies, radio-frequency and vacuum equipment. These varied components and their cabling are installed in a distributed fashion, at considerable distances from each other, and during design, construction and commissioning they produce tens of thousands of data items. Collecting, storing and managing this volume of data is particularly important for a large scientific device. This paper describes the design and application of the HEPS database, covering the large amounts of data generated from construction and installation through to operation, in order to improve the availability and stability of the accelerator and experiment stations and to further improve overall performance.  
poster icon Poster MOPHA157 [0.756 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA157  
About • paper received ※ 29 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
MOPHA158 Compact Electronic Logbook System interface, electron, HOM, framework 611
 
  • L. Wang, M.T. Kang, X. Wu
    IHEP CSNS, Guangdong Province, People’s Republic of China
  • C.P. Chu, F.Q. Guo, Y.C. He, D.P. Jin, J. Liu, Y.L. Zhang, Z. Zhao, P. Zhu
    IHEP, Beijing, People’s Republic of China
 
  The Compact Electronic Logbook System (Clog) is designed to record events in an organized way during operation and maintenance of an accelerator facility. Clog supports functionalities such as log submission, attachment upload, easy retrieval of logged messages, and a RESTful API. It aims to be compact enough for anyone to deploy conveniently, and anyone familiar with Java EE (Enterprise Edition) technology can easily customize its functionality. Once development is complete, Clog can be used in accelerator facilities such as BEPC-II (Beijing Electron/Positron Collider Upgrade) and HEPS (High Energy Photon Source). This paper presents the design, implementation and development status of Clog.  
poster icon Poster MOPHA158 [1.035 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA158  
About • paper received ※ 29 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUBPL06 Energy Consumption Monitoring With Graph Databases and Service Oriented Architecture network, monitoring, operation, interface 719
 
  • A. Kiourkos, S. Infante, K.S. Seintaridis
    CERN, Meyrin, Switzerland
 
  CERN is a major electricity consumer: in 2018 it consumed 1.25 TWh, about one third of the consumption of Geneva. Monitoring this consumption is crucial for operational reasons, but also for raising users’ awareness of energy utilization. Monitoring is done via an internally developed system that is quite popular within the CERN community; to accommodate increasing requirements, a migration is underway that utilizes the latest technologies for data modeling and processing. We present the architecture of the new energy monitoring system with an emphasis on data modeling, versioning, and the use of graphs to store and process the model of the electrical network for the energy calculations. The algorithms used are presented, and a comparison with the existing system is performed to demonstrate the performance improvements and flexibility of the new approach. The system embraces Service Oriented Architecture principles, and we illustrate how these have been applied in its design. The different modules and future possibilities are also presented, with an analysis of their strengths, weaknesses, and integration within the CERN infrastructure.  
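The idea of storing the electrical network model as a graph and deriving energy figures from it can be sketched in a few lines: internal nodes aggregate the readings of the meters below them. The node names and readings are invented; a production system would traverse a persisted graph database rather than an in-memory dict.

```python
# Toy model of an electrical distribution tree: each node either has children
# (feeders below it) or a meter reading of its own. Names and values invented.
network = {
    "CERN": ["SE1", "SE2"],
    "SE1": ["LHC-P1", "LHC-P2"],
    "SE2": [],
}
meters_kwh = {"LHC-P1": 120.0, "LHC-P2": 80.0, "SE2": 40.0}

def consumption(node):
    """Sum consumption recursively over the network graph."""
    children = network.get(node, [])
    if not children:                      # leaf: read its meter directly
        return meters_kwh.get(node, 0.0)
    return sum(consumption(c) for c in children)

print(consumption("CERN"))
```

Keeping the network topology as data rather than hard-coded formulas is what lets the calculation follow versioned changes to the network model.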
slides icon Slides TUBPL06 [3.018 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-TUBPL06  
About • paper received ※ 29 September 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
TUDPP01 A Monitoring System for the New ALICE O2 Farm monitoring, detector, network, controls 835
 
  • G. Vino, D. Elia
    INFN-Bari, Bari, Italy
  • V. Chibante Barroso, A. Wegrzynek
    CERN, Meyrin, Switzerland
 
  The ALICE Experiment has been designed to study the physics of strongly interacting matter with heavy-ion collisions at the CERN LHC. A major upgrade of the detector and computing model (O2, Offline-Online) is currently ongoing. The ALICE O2 farm will consist of almost 1000 nodes able to read out and process on the fly about 27 Tb/s of raw data. To increase the efficiency of computing farm operations, a general-purpose near-real-time monitoring system has been developed, built on high performance, high availability, modularity, and open-source components. The core component (Apache Kafka) ensures high throughput, data pipelines, and fault-tolerant services. Additional monitoring functionality is based on Telegraf as metric collector, Apache Spark for complex aggregation, InfluxDB as time-series database, and Grafana as visualization tool. A logging service based on the Elasticsearch stack is also included. The system handles metrics coming from the operating system, the network, custom hardware, and in-house software. A prototype version is currently running at CERN and has also been successfully deployed at the ReCaS Datacenter at INFN Bari for both monitoring and logging.  
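The "complex aggregation" role played by Apache Spark in this chain can be illustrated with a toy windowed average: raw samples are grouped into fixed time windows per metric before being written to the time-series database. Metric names and values are invented for the example.

```python
from collections import defaultdict

# Sketch of time-window aggregation of monitoring samples, the kind of
# reduction a streaming engine performs before storage. Sample data invented.
samples = [
    ("cpu_load", 0.2, 1.5),    # (metric, timestamp_s, value)
    ("cpu_load", 2.0, 2.5),
    ("cpu_load", 10.4, 4.0),
]

def window_average(points, width_s=10.0):
    """Average each metric over fixed-width time windows."""
    buckets = defaultdict(list)
    for metric, ts, value in points:
        buckets[(metric, int(ts // width_s))].append(value)
    return {key: sum(vals) / len(vals) for key, vals in buckets.items()}

print(window_average(samples))
```

Downsampling like this keeps the write rate to the time-series database bounded even when thousands of nodes emit metrics at high frequency.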
slides icon Slides TUDPP01 [1.128 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-TUDPP01  
About • paper received ※ 30 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WEAPP03 Converting From NIS to Redhat Identity Management network, controls, Linux, interface 871
 
  • T.S. McGuckin, R.J. Slominski
    JLab, Newport News, Virginia, USA
 
  Funding: This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under contract DE-AC05-06OR23177.
The Jefferson Lab (JLab) accelerator controls network has transitioned to a new authentication and directory service infrastructure. The new system uses the Red Hat Identity Manager (IdM) as a single integrated front-end to the Lightweight Directory Access Protocol (LDAP) and as a replacement for NIS and a stand-alone Kerberos authentication service. This system allows for integration of authentication across Unix and Windows environments and across different JLab computing environments, including across firewalled networks. The decision-making process, conversion steps, issues and solutions will be discussed.
 
slides icon Slides WEAPP03 [3.898 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEAPP03  
About • paper received ※ 01 October 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WEAPP04 ICS Infrastructure Deployment Overview at ESS network, controls, interface, framework 875
 
  • B. Bertrand, S. Armanet, J. Christensson, A. Curri, A. Harrisson, R. Mudingay
    ESS, Lund, Sweden
 
  The ICS Control Infrastructure group at the European Spallation Source (ESS) is responsible for deploying many different services. We treat infrastructure as code in order to deploy everything in a repeatable, reproducible and reliable way. We use three main tools to achieve that: Ansible (an IT automation tool), AWX (a GUI for Ansible) and CSEntry (a custom in-house developed web application used as a Configuration Management Database). CSEntry (Control System Entry) is used to register any device with an IP address (network switches, physical machines, virtual machines), which allows us to use it as a dynamic inventory for Ansible. DHCP and DNS are automatically updated as soon as a new host is registered in CSEntry; this is done by triggering a task that calls an Ansible playbook via the AWX API. Virtual machines can be created directly from CSEntry with one click, again by calling another Ansible playbook via the AWX API. This playbook uses the API of Proxmox (our virtualization platform) for the VM creation. By using Ansible groups, different Proxmox clusters can be managed from the same CSEntry web application. These tools give us an easy and flexible solution to deploy software in a reproducible way.  
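The role CSEntry plays as a dynamic inventory can be sketched as a function that renders registered hosts into the JSON structure Ansible expects from a dynamic inventory source. The host records and group names below are invented; only the output layout (top-level groups plus a `_meta.hostvars` section) follows Ansible's documented convention.

```python
import json

# Invented host registry entries standing in for CSEntry records.
registry = [
    {"fqdn": "ioc-01.example.org", "group": "iocs", "ip": "172.16.1.10"},
    {"fqdn": "vm-web-01.example.org", "group": "webservers", "ip": "172.16.2.20"},
]

def dynamic_inventory(hosts):
    """Render host records as an Ansible dynamic inventory JSON document."""
    inv = {"_meta": {"hostvars": {}}}
    for h in hosts:
        inv.setdefault(h["group"], {"hosts": []})["hosts"].append(h["fqdn"])
        inv["_meta"]["hostvars"][h["fqdn"]] = {"ansible_host": h["ip"]}
    return inv

print(json.dumps(dynamic_inventory(registry), indent=2))
```

Serving this document from the CMDB means playbooks always run against the current set of registered machines, with no hand-maintained inventory files.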
slides icon Slides WEAPP04 [13.604 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEAPP04  
About • paper received ※ 30 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WEBPP02 Centralized System Management of IPMI Enabled Platforms Using EPICS EPICS, interface, monitoring, controls 887
 
  • K. Vodopivec
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: This work was supported by the U.S. Department of Energy under contract DE-AC0500OR22725.
Intelligent Platform Management Interface (IPMI) is a specification for computer hardware platform management and monitoring. The interface includes features for monitoring hardware sensors like fan speed and device temperature, inventory discovery, event propagation and logging. All IPMI functionality is accessible without the host operating system running. With its wide support across hardware vendors and the backing of a standardization committee, it is a compelling candidate for integration into the control system of a large experimental physics project. Integrating IPMI into EPICS provides the benefit of centralized monitoring, archiving and alarming integrated with the facility control system. A new project has been started to enable this capability by creating a native EPICS device driver built on the open-source FreeIPMI library for the remote host connection interface. The driver supports automatic discovery of system components for creating EPICS database templates, detailed device information from the Field Replaceable Unit interface, sensor monitoring with remote threshold management, geographical PV addressing in PICMG-based platforms, and readout of PICMG front panel lights.
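As a rough illustration of the driver's task, the sketch below turns tabular sensor output (of the kind tools built on FreeIPMI can emit) into records from which EPICS database templates could be generated. The column layout, sensor values and PV naming scheme are assumptions for the example, not the actual driver output.

```python
# Assumed comma-separated sensor dump: id, name, type, reading, units, state.
# This format and the "Rack1" PV prefix are invented for illustration.
raw = """\
4,Fan1,Fan,3120.00,RPM,'OK'
8,CPU Temp,Temperature,41.00,C,'OK'
"""

def parse_sensors(text, prefix="Rack1"):
    """Map one sensor line to one PV-template record."""
    records = []
    for line in text.strip().splitlines():
        sid, name, stype, reading, units, _state = line.split(",")
        pv = f"{prefix}:{name.replace(' ', '')}"   # geographical-style PV name
        records.append({"pv": pv, "type": stype,
                        "value": float(reading), "egu": units})
    return records

print(parse_sensors(raw)[0]["pv"])  # Rack1:Fan1
```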
 
slides icon Slides WEBPP02 [7.978 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEBPP02  
About • paper received ※ 02 October 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WECPL01 Status of the Control System for Fully Integrated SACLA/SPring-8 Accelerator Complex and New 3 GeV Light Source Being Constructed at Tohoku, Japan controls, framework, storage-ring, operation 904
 
  • T. Sugimoto, N. Hosoda, K. Okada, M. Yamaga
    JASRI, Hyogo, Japan
  • T. Fukui
    RIKEN SPring-8 Center, Innovative Light Sources Division, Hyogo, Japan
  • M. Ishii
    JASRI/SPring-8, Hyogo-ken, Japan
 
  In the SPring-8 upgrade project, we plan to use the linear accelerator of SACLA as a full-energy injector to the storage ring. For the purpose of simultaneous operation of XFEL lasing and on-demand injection, we developed a new control framework that inherits the concepts of MADOCA. We plan to use the same control framework for a 3 GeV light source under construction at Tohoku, Japan. Messaging of the new control system is based on the MQTT protocol, which enables slow control and data acquisition with sub-second response time. The data acquisition framework, named MDAQ, covers both periodic polling and event-synchronized data. To ensure scalability, we applied a key-value storage scheme, Apache Cassandra, to the logging database of the MDAQ. We also developed a new parameter database scheme that handles operational parameter sets for XFEL lasing and on-demand top-up injection. These parameter sets are combined into 60 Hz operation patterns. For the top-up injection, we can select the operational pattern every second on an on-demand basis. In this paper, we report an overview of the new control system and the preliminary results of the integrated operation of SACLA and SPring-8.  
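MQTT-based messaging of this kind can be sketched by showing how a polled reading might be mapped to a topic and a JSON payload. The topic hierarchy and payload fields below are invented for illustration; a real client would publish them with an MQTT library such as paho-mqtt.

```python
import json

# Hypothetical topic/payload construction for a MADOCA-style DAQ message.
def poll_message(facility, equipment, signal, value, ts):
    """Build an MQTT topic string and JSON payload for one polled reading."""
    topic = f"{facility}/daq/{equipment}/{signal}"
    payload = json.dumps({"value": value, "timestamp": ts, "mode": "poll"})
    return topic, payload

topic, payload = poll_message("spring8", "mag_ps_01", "current",
                              101.3, 1569800000.123)
print(topic)  # spring8/daq/mag_ps_01/current
```

A hierarchical topic scheme like this lets subscribers use MQTT wildcards to receive, say, every signal of one equipment or one signal across all equipment.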
slides icon Slides WECPL01 [10.969 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WECPL01  
About • paper received ※ 03 October 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WECPR01 EPICS 7 Core Status Report EPICS, site, software, network 923
 
  • A.N. Johnson, G. Shen, S. Veseli
    ANL, Lemont, Illinois, USA
  • M.A. Davidsaver
    Osprey DCS LLC, Ocean City, USA
  • S.M. Hartman, K.-U. Kasemir
    ORNL, Oak Ridge, Tennessee, USA
  • H. Junkes
    FHI, Berlin, Germany
  • K.H. Kim
    SLAC, Menlo Park, California, USA
  • M.G. Konrad
    FRIB, East Lansing, Michigan, USA
  • T. Korhonen
    ESS, Lund, Sweden
  • M.R. Kraimer
    Private Address, Osseo, USA
  • R. Lange
    ITER Organization, St. Paul lez Durance, France
  • K. Shroff
    BNL, Upton, New York, USA
 
  Funding: U.S. Department of Energy Office of Science, under Contract No. DE-AC02-06CH11357
The integration of structured data and the PV Access network protocol into the EPICS toolkit has opened up many possibilities for added functionality and features, which more and more facilities are looking to leverage. At the same time, however, the core developers also have to cope with technical debt incurred in the race to deliver working software. This paper will describe the current status of EPICS 7 and some of the work done in the last two years following the reorganization of the code-base. It will cover some of the development group’s technical and process changes, and echo questions being asked about support for recent language standards that may affect support for older target platforms, as well as the adoption of other internal standards for coding and documentation.
 
slides icon Slides WECPR01 [0.585 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WECPR01  
About • paper received ※ 30 September 2019       paper accepted ※ 02 October 2020       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WEMPL004 Inception of a Learning Organization to Improve SOLEIL’s Operation operation, software, interface, controls 1001
 
  • A. Buteau, G. Abeillé, X. Delétoille, J.-F. Lamarre, T. Marion, L.S. Nadolski
    SOLEIL, Gif-sur-Yvette, France
 
  High quality of service has been a key mission at SOLEIL since 2007. Historically, operation processes and information systems have been defined mostly on the fly by the different teams all along the synchrotron’s journey. Major outcomes of this are limited cross-team collaboration and a slow-learning organization. Consequently, we are currently implementing a holistic approach with common operational processes on top of a shared information system. Our first process is "incident management"; an incident is an unplanned disruption or degradation of service. We tackled incident management for IT* in 2015, then for the accelerators starting in January 2018, and have been extending it to the beamlines since the beginning of 2019. As a follow-up, we will address the "problem management" process (a problem is the cause of one or more incidents) and the creation of a knowledge base for the operation. By implementing these processes, a culture of continuous improvement is slowly spreading, in particular by driving blameless incident and problem analysis. This paper will present the journey we have been through, including our results, improvements and the difficulties of implementing this new way of thinking.
*ICALEPCS 2015: MOPGF150
 
poster icon Poster WEMPL004 [3.293 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEMPL004  
About • paper received ※ 30 September 2019       paper accepted ※ 20 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WEMPL009 Tracking APS-U Production Components With the Component Database and eTraveler Applications data-management, controls, photon, software 1026
 
  • D.P. Jarosz, N.D. Arnold, J. Carwardine, G. Decker, N. Schwarz, G. Shen, S. Veseli
    ANL, Lemont, Illinois, USA
  • D. Liu
    Osprey DCS LLC, Ocean City, USA
 
  Funding: Argonne National Laboratory’s work was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under contract DE-AC02-06CH11357
The installation of the APS-U has a short schedule of one year, making it imperative to be well prepared before the installation process begins. The Component Database (CDB) has been designed to help document and track all the components for the APS-U. Two new major domains, the Machine Design domain and the Measurement and Analysis Archive (MAARC) domain, have been added to the CDB to further its ability to document components exhaustively. The Machine Design domain will help define the purpose of all the components in the APS-U design, and the MAARC domain allows association of components with collected data. The CDB and a traveler application from FRIB have been integrated to help document the various processes performed, such as inspections and maintenance. Working groups have been formed to define appropriate workflow processes for receiving components, using the tools to document receiving inspection and QA requirements. The applications are under constant development to perform as expected by the working groups. Over time, especially after production procurement began, the CDB has seen more and more usage in preparation for the APS-U installation.
 
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEMPL009  
About • paper received ※ 30 September 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WEMPR001 Data Analysis Infrastructure for Diamond Light Source Macromolecular & Chemical Crystallography and Beyond experiment, detector, monitoring, data-acquisition 1031
 
  • M. Gerstel, A. Ashton, R.J. Gildea, K. Levik, G. Winter
    DLS, Oxfordshire, United Kingdom
 
  The Diamond Light Source data analysis infrastructure, Zocalo, is built on a messaging framework. Analysis tasks are processed by a scalable pool of workers running on cluster nodes. Results can be written to a common file system, sent to another worker for further downstream processing and/or streamed to a LIMS. Zocalo allows increased parallelization of computationally expensive tasks and makes the use of computational resources more efficient. The infrastructure is low-latency, fault-tolerant, and allows for highly dynamic data processing. Moving away from static workflows expressed in shell scripts, we can easily re-trigger processing tasks in the event that an issue is found. It allows users to re-run tasks with additional input and ensures that automatically and manually triggered processing results are treated equally. Zocalo was originally conceived to cope with the additional demand on infrastructure from the introduction of Eiger detectors with up to 18 Mpixels running at up to 560 Hz frame rate on single-crystal diffraction beamlines. We are now adapting Zocalo to manage processing tasks for ptychography, tomography, cryo-EM, and serial crystallography workloads.  
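The worker-pool pattern at the heart of such a messaging framework can be sketched with a queue and a few threads: tasks arrive on one queue, results leave on another, and workers are shut down with a sentinel. `queue.Queue` stands in here for the message broker, and the "processing" step is a placeholder, not actual data analysis.

```python
import queue
import threading

tasks, results = queue.Queue(), queue.Queue()

def worker():
    """Consume tasks until a None sentinel arrives; emit results downstream."""
    while True:
        job = tasks.get()
        if job is None:                       # poison pill: shut down cleanly
            break
        results.put((job["id"], job["frames"] * 2))   # stand-in "processing"

pool = [threading.Thread(target=worker) for _ in range(4)]
for t in pool:
    t.start()
for i in range(8):                            # enqueue invented analysis tasks
    tasks.put({"id": i, "frames": i})
for _ in pool:                                # one sentinel per worker
    tasks.put(None)
for t in pool:
    t.join()
print(results.qsize())  # 8
```

Because workers only couple through the queues, the pool can be scaled up or drained without touching the producers, which is what makes re-triggering individual tasks cheap.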
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEMPR001  
About • paper received ※ 30 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WEMPR004 Why Should You Invest in Asset Management? A Fire and Gas Use Case detector, software, SCADA, MMI 1041
 
  • H. Nissen, S. Grau
    CERN, Geneva, Switzerland
 
  At present, the CERN Fire and Gas detection systems involve about 22,500 sensors, and their number is increasing rapidly as the number of equipped installations grows. These assets cover a wide spectrum of technologies, manufacturers, models, parameters, and ages, reflecting the 60 years of CERN history. The use of strict rules and data structures in the declaration of the assets can make a big impact on overall system maintainability and therefore on the global reliability of the installation. Organized asset data facilitates the creation of powerful reports that help asset owners and management address material obsolescence and end-of-life concerns from a global perspective. Historically, preventive maintenance has been used to assure the correct function of the installations. With modern supervision systems, a lot of data is collected and can be used to move from preventive maintenance towards data-driven (predictive) maintenance. Moreover, this move optimizes maintenance costs and increases system availability while maintaining reliability. A prerequisite of this move is coherence between the assets defined in the asset management system and those in the supervision system.  
poster icon Poster WEMPR004 [0.675 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEMPR004  
About • paper received ※ 27 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WEPHA017 Integration of Wireless Mobile Equipment in Supervisory Application controls, vacuum, PLC, MMI 1102
 
  • S. Blanchard, R. Ferreira, P. Gomes, G. Pigny, A.P. Rocha
    CERN, Geneva, Switzerland
 
  Pumping group stations and bake-out control cabinets are temporarily installed close to vacuum systems in the CERN accelerator tunnels during their commissioning. The quality of the beam vacuum during operation depends greatly on the quality of the commissioning; therefore, the integration of mobile equipment in the vacuum supervisory application is essential. When connected to the control system, the mobile stations appear automatically integrated in the synoptic. They are granted the same level of remote control, diagnostics and data logging as fixed equipment. The wireless connection and the communication protocol with the supervisory application offer a flexible and reliable solution with a high level of integrity.  
poster icon Poster WEPHA017 [1.808 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA017  
About • paper received ※ 30 September 2019       paper accepted ※ 19 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WEPHA019 MONARC: Supervising the Archiving Infrastructure of CERN Control Systems controls, SCADA, data-acquisition, monitoring 1111
 
  • J-C. Tournier, E. Blanco Viñuela
    CERN, Geneva, Switzerland
 
  The CERN industrial control systems, using WinCC OA as SCADA (Supervisory Control and Data Acquisition), share a common history data archiving system relying on an Oracle infrastructure. It consists of two clusters of two nodes hosting a total of more than 250 schemas. Due to the large number of schemas and the shared nature of the infrastructure, three basic needs arose: (1) monitor, i.e. get the inventory of all DB nodes and schemas along with their configurations, such as the type of partitioning and their retention period; (2) control, i.e. parameterise each schema individually; and (3) supervise, i.e. have an overview of the health of the infrastructure and be notified of misbehaving schemas or database nodes. In this publication, we present a way to monitor, control and supervise the data archiving system based on a classical SCADA system. The paper is organized in three parts: the first part presents the main functionalities of the application, while the second part digs into its architecture and implementation. The third part presents a set of use cases demonstrating the benefit of using the application.  
poster icon Poster WEPHA019 [2.556 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA019  
About • paper received ※ 30 September 2019       paper accepted ※ 19 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WEPHA020 Pushing the Limits of Tango Archiving System using PostgreSQL and Time Series Databases TANGO, controls, SRF, distributed 1116
 
  • R. Bourtembourg, S. James, J.L. Pons, P.V. Verdier
    ESRF, Grenoble, France
  • G. Cuní, S. Rubio-Manrique
    ALBA-CELLS Synchrotron, Cerdanyola del Vallès, Spain
  • M. Di Carlo
    INAF - OAAB, Teramo, Italy
  • G.A. Fatkin, A.I. Senchenko, V. Sitnov
    NSU, Novosibirsk, Russia
  • G.A. Fatkin, A.I. Senchenko, V. Sitnov
    BINP SB RAS, Novosibirsk, Russia
  • L. Pivetta, C. Scafuri, G. Scalamera, G. Strangolino, L. Zambon
    Elettra-Sincrotrone Trieste S.C.p.A., Basovizza, Italy
 
  The Tango HDB++ project is a high-performance, event-driven archiving system which stores data with microsecond-resolution timestamps, using archivers written in C++. HDB++ supports MySQL/MariaDB and Apache Cassandra backends and has recently been extended to support PostgreSQL and TimescaleDB*, a time-series PostgreSQL extension. The PostgreSQL backend has enabled efficient multi-dimensional data storage in a relational database. Time-series databases are ideal for archiving and can take advantage of the fact that inserted data do not change. TimescaleDB has pushed the performance of HDB++ to new limits. The paper will present the benchmarking tools that have been developed to compare the performance of different backends and the extension of HDB++ to support TimescaleDB for insertion and extraction. A comparison of the different supported back-ends will be presented.
https://timescale.com
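To make the TimescaleDB approach concrete, the sketch below builds the kind of SQL involved: a plain table converted into a hypertable partitioned on the event timestamp, plus a batched parameterised INSERT. The table and column names are illustrative, not the actual HDB++ schema; only `create_hypertable` is the documented TimescaleDB call.

```python
# Illustrative DDL: a scalar-attribute table turned into a hypertable
# partitioned on the event timestamp (names are assumptions, not HDB++'s).
DDL = """
CREATE TABLE att_scalar_double (
    att_conf_id integer NOT NULL,
    data_time   timestamptz NOT NULL,
    value_r     double precision
);
SELECT create_hypertable('att_scalar_double', 'data_time');
"""

def insert_stmt(table, n_rows):
    """Build a multi-row parameterised INSERT for batched archiving.

    Batching many events per statement is what keeps insertion fast when
    data, once written, never changes."""
    row = "(%s, %s, %s)"
    return (f"INSERT INTO {table} (att_conf_id, data_time, value_r) VALUES "
            + ", ".join([row] * n_rows))

print(insert_stmt("att_scalar_double", 3))
```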
 
poster icon Poster WEPHA020 [1.609 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA020  
About • paper received ※ 30 September 2019       paper accepted ※ 02 November 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WEPHA047 Cable Database at ESS interface, controls, operation, status 1199
 
  • R.N. Fernandes, S.R. Gysin, J.A. Persson, S. Regnell
    ESS, Lund, Sweden
  • L.J.G. Johansson
    OTIF, Malmö, Sweden
  • S. Sah
    Cosylab, Ljubljana, Slovenia
  • M. Salmič
    COSYLAB, Control System Laboratory, Ljubljana, Slovenia
 
  When completed, the European Spallation Source (ESS) will have around half a million installed cables to power and control both the machine and the end-station instruments. To keep track of all these cables throughout the different phases of ESS, an application called the Cable Database was developed at the Integrated Control System (ICS) Division. It provides a web-based graphical interface where authorized users may perform CRUD operations on cables, as well as batch imports (through well-defined Excel files) that substantially shorten the time needed to deal with massive numbers of cables at once. Besides cables, the Cable Database manages cable types, connectors, manufacturers and routing points, thus fully handling the information that surrounds cables. Additionally, it provides a programmatic interface through RESTful services that other ICS applications (e.g. CCDB) may consume to successfully perform their domain-specific business. The present paper introduces the Cable Database and describes its features, architecture and technology stack, data concepts and interfaces. Finally, it enumerates development directions that could be pursued to further improve this application.  
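The batch-import path can be sketched as a validation step: spreadsheet rows become cable records only when their mandatory fields are present, and offending rows are reported with their spreadsheet line number. The field names are assumptions for illustration, not the actual Cable Database schema.

```python
# Assumed mandatory columns of a cable-import spreadsheet (illustrative).
REQUIRED = ("cable_number", "cable_type", "from_point", "to_point")

def validate_rows(rows):
    """Split imported rows into accepted and rejected, noting missing fields."""
    accepted, rejected = [], []
    for i, row in enumerate(rows, start=2):   # row 1 is the spreadsheet header
        missing = [f for f in REQUIRED if not row.get(f)]
        record = {"row": i, "data": row, "missing": missing}
        (rejected if missing else accepted).append(record)
    return accepted, rejected

rows = [
    {"cable_number": "C-0001", "cable_type": "NEK-4",
     "from_point": "A", "to_point": "B"},
    {"cable_number": "C-0002", "cable_type": "",
     "from_point": "A", "to_point": "C"},
]
ok, bad = validate_rows(rows)
print(len(ok), len(bad))  # 1 1
```

Reporting the spreadsheet row number alongside the missing fields is what makes a half-million-cable import tractable for the person fixing the file.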
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA047  
About • paper received ※ 30 September 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WEPHA048 Management of IOCs at ESS EPICS, factory, controls, interface 1204
 
  • R.N. Fernandes, S.R. Gysin, T. Korhonen, J.A. Persson, S. Regnell
    ESS, Lund, Sweden
  • M. Pavleski, S. Sah
    Cosylab, Ljubljana, Slovenia
 
  The European Spallation Source (ESS) is a neutron research facility based in Sweden that will be in operation in 2023. It is expected to have around 1500 IOCs controlling both the machine and the end-station instruments. To manage the IOCs, an application called the IOC Factory was developed at ESS. It provides a consistent and centralized approach to how IOCs are configured, generated, browsed and audited. The configuration allows users to select the EPICS module versions of interest and to set EPICS environment variables and macros for IOCs. The generation automatically creates IOCs according to configurations. Browsing retrieves information on when, how, why and by whom IOCs were generated. Finally, auditing tracks changes of generated IOCs deployed locally. To achieve these functionalities, the IOC Factory relies on two other applications: the Controls Configuration Database (CCDB) and the ESS EPICS Environment (E3). The first stores information about IOCs, the devices controlled by these, and the required EPICS modules and snippets, while the second stores the snippets needed to generate IOCs (st.cmd files). Combined, these applications enable ESS to successfully manage IOCs with minimum effort.  
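The generation step can be pictured as rendering an st.cmd from a configuration that picks module versions, environment variables and snippets. The template below is a deliberately simplified sketch; the real E3 startup-file syntax and the IOC Factory output differ in detail, and all names here are invented.

```python
def render_st_cmd(config):
    """Render a toy st.cmd from a configuration dictionary."""
    lines = [f'epicsEnvSet("{k}", "{v}")' for k, v in config["env"].items()]
    lines += [f"require {mod} {ver}" for mod, ver in config["modules"].items()]
    lines += [f'iocshLoad("{snippet}")' for snippet in config["snippets"]]
    lines.append("iocInit")                      # start the IOC last
    return "\n".join(lines)

config = {
    "env": {"IOCNAME": "vac-ioc-01"},
    "modules": {"asyn": "4.33.0", "stream": "2.8.10"},
    "snippets": ["vacuum_gauge.cmd"],
}
st_cmd = render_st_cmd(config)
print(st_cmd)
```

Generating the file from data rather than editing it by hand is what makes the "when, how, why and by whom" audit trail possible: the configuration, not the text file, is the source of truth.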
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA048  
About • paper received ※ 30 September 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
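The generation step described above — rendering an IOC configuration (module versions, environment variables, snippets) into an st.cmd startup file — could be sketched as simple templating; the configuration layout here is invented for illustration and is not the actual IOC Factory format.

```python
# Hypothetical sketch: render an IOC configuration into st.cmd text.
# 'require', 'epicsEnvSet' and 'iocshLoad' are standard e3/EPICS shell
# commands; the dict structure feeding them is an assumption.
def generate_st_cmd(config):
    lines = [f"require {mod}, {ver}" for mod, ver in config["modules"].items()]
    lines += [f'epicsEnvSet("{k}", "{v}")' for k, v in config["env"].items()]
    lines += [f'iocshLoad("{snippet}", "{macros}")'
              for snippet, macros in config["snippets"]]
    lines.append("iocInit")
    return "\n".join(lines)
```

Keeping the configuration declarative is what lets the IOC Factory also answer the browsing and auditing questions — when, how and why an IOC was generated.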
 
WEPHA097 Development of a Tango Interface for the Siemens-Based Control System of the Elettra Infrastructure Plants controls, TANGO, device-server, interface 1321
 
  • P. Michelini, I. Ferigutti, F. Giacuzzo, M. Lonza, G. Scalamera, G. Strangolino, M. Trevi
    Elettra-Sincrotrone Trieste S.C.p.A., Basovizza, Italy
 
  The control system of the Elettra Sincrotrone Trieste infrastructure plants (cooling water, air conditioning, electricity, etc.) consists of several Siemens PLCs connected by an Ethernet network and a number of management stations running the Siemens Desigo software for high-level operation and monitoring, graphical display of the process variables, automatic alarm distribution and a wide range of different data analysis features. No external interface has been realized so far to connect Desigo to the Elettra and FERMI accelerator control systems based on Tango, making it difficult for the control room operators to monitor the conventional plant operation and parameters (temperature, humidity, water pressure, etc.), which are essential for the accelerator performance and reliability. This paper describes the development of a dedicated Desigo application to make selected process variables externally visible to a specific Tango device server, which then enables the use of all the tools provided by this software framework to implement graphical interfaces, alarms, archiving, etc. New proposals and developments to expand and improve the system are also discussed.  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA097  
About • paper received ※ 30 September 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
WEPHA112 Database Scheme for On-Demand Beam Route Switching Operations at SACLA/SPring-8 operation, controls, storage-ring, FEL 1352
 
  • K. Okada, N. Hosoda, T. Ohshima, T. Sugimoto, M. Yamaga
    JASRI, Hyogo, Japan
  • T. Fujiwara, T. Maruyama, T. Ohshima, T. Okada
    RIKEN SPring-8 Center, Hyogo, Japan
  • T. Fukui, N. Hosoda, H. Maesaka
    RIKEN SPring-8 Center, Innovative Light Sources Division, Hyogo, Japan
  • O. Morimoto, Y. Tajiri
    SES, Hyogo-pref., Japan
 
  At SACLA, the X-ray free-electron laser (XFEL) facility, we have been operating the electron linac in time-sharing (equal duty) mode between beamlines. The next step is to vary the duty factor on an on-demand basis and to bring the beam into the SP8 storage ring. This is part of a larger upgrade plan*. The low-emittance beam is ideal for the next-generation storage ring. In every 60 Hz repetition cycle, we have to handle each bunch of electrons properly. The challenge is that we must maintain the beam quality demanded by the XFEL while responding to occasional injection requests from the storage ring**. This paper describes the database system that supports both SACLA and SP8 operations. The system is a combination of RDB and NoSQL databases. In the on-demand beam-switching operation, the RDB part keeps the parameters that define sequences, which include a set of one-second route patterns, a bucket sequence for the injection, etc. As for data analysis, building an event for a certain route has to be a post-process, because not all equipment receives the route command in real time. We present the preparation status toward standard operation for beamline users.
*http://rsc.riken.jp/pdf/SPring-8-II.pdf
**IPAC2019 proceedings
 
poster icon Poster WEPHA112 [0.561 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA112  
About • paper received ※ 01 October 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
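The one-second route patterns mentioned in the abstract can be pictured as 60 slots (one per 60 Hz cycle), each naming a destination; in this hypothetical sketch an injection request claims specific slots for the storage ring while the rest keep serving an XFEL beamline. The slot layout and destination names are illustrative assumptions, not the actual SACLA/SP8 database scheme.

```python
# Hypothetical sketch of a one-second route pattern at 60 Hz: each of the
# 60 slots names a destination; injection requests from the storage ring
# override individual slots, the remainder stay with the XFEL beamline.
def build_pattern(default="BL3", injection_slots=()):
    pattern = [default] * 60
    for slot in injection_slots:
        pattern[slot] = "SP8"
    return pattern
```

Storing such patterns in the RDB, keyed by sequence, lets the post-process analysis reconstruct which route each bunch actually took.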
 
WEPHA148 Cumbia-Telegram-Bot: Use Cumbia and Telegram to Read, Monitor and Receive Alerts From the Control Systems controls, operation, TANGO, EPICS 1441
 
  • G. Strangolino
    Elettra-Sincrotrone Trieste S.C.p.A., Basovizza, Italy
 
  Telegram is a cloud-based mobile and desktop messaging app focused on security and speed. It is available for Android, iPhone/iPad, Windows, macOS, Linux and as a web application. The user signs in to the cumbia-telegram bot to chat with a Tango or EPICS control system from anywhere. One can read and monitor values, as well as receive alerts when something special happens. Simple source names, or their combination into formulas, can be sent to the bot, which replies and notifies results. It is simple, fast and intuitive. A phone number to register with Telegram and a client are the only necessary ingredients. On the server side, cumbia-telegram provides the administrator with full control over the allocation of resources, the network load and the clients authorized to chat with the bot. Additionally, access to the systems is read-only. On the client side, the bot has been meticulously crafted to make interaction easy and fast: history, bookmarks and alias plugins pare texting down to the bone. Preferred and most frequent operations are accessible by simple taps on special command links. The bot relies on modules and plugins that make the application extensible.  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA148  
About • paper received ※ 30 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
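The formula support mentioned above — combining source names into an expression the bot evaluates — could be sketched as placeholder substitution over a message string. The `{source}` syntax and the `read` callback are illustrative assumptions, not the actual cumbia-telegram grammar.

```python
# Hypothetical sketch of formula evaluation: each {source} placeholder in a
# message is replaced by the value read from the control system, then the
# resulting arithmetic expression is evaluated.
import re

def evaluate(message, read):
    """read(source_name) -> numeric value; returns the formula result."""
    expr = re.sub(r"\{([^}]+)\}", lambda m: repr(read(m.group(1))), message)
    return eval(expr, {"__builtins__": {}})  # toy evaluator, not hardened
```

A real implementation would use a proper expression parser rather than `eval`, but the substitution-then-evaluate shape is the same.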
 
WEPHA164 CAFlux: A New EPICS Channel Archiver System EPICS, MMI, status, interface 1470
 
  • K. Xu
    LANL, Los Alamos, New Mexico, USA
 
  We present a new EPICS channel archiver system that is being developed at LANSCE of Los Alamos National Laboratory. Unlike the legacy archiver system, this system is built on the InfluxDB database and Plotly visualization toolkits. InfluxDB is an open-source time-series database system that provides a SQL-like language for fast storage and retrieval of time-series data. By replacing the old archiving engine and index file with InfluxDB, we obtain a more robust, compact and stable archiving server. On the client side, we introduce a new implementation combining asynchronous and multithreaded programming. We also describe a web-based archiver configuration system associated with our current IRMIS system. To visualize the stored data, we use the JavaScript Plotly graphing library, another open-source toolkit for time-series data, to build front-end pages. In addition, we have developed a viewer application with more functionality, including basic data statistics and simple arithmetic on channel values. Finally, we propose some ideas to integrate more statistical analysis into this system.  
poster icon Poster WEPHA164 [0.697 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA164  
About • paper received ※ 27 September 2019       paper accepted ※ 20 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
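An archived channel value lands in InfluxDB as a point in its line protocol (measurement, tags, fields, timestamp). The measurement and tag names in this sketch are hypothetical choices, not necessarily those used by CAFlux.

```python
# Hypothetical sketch of encoding one archived PV sample in InfluxDB's
# line protocol: <measurement>,<tags> <fields> <timestamp-in-ns>.
def to_line_protocol(channel, value, ts_ns, ioc="unknown"):
    return f"pv,channel={channel},ioc={ioc} value={value} {ts_ns}"
```

Tagging by channel and IOC keeps the SQL-like queries fast, since InfluxDB indexes tags but not field values.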
 
WEPHA166 Development of Web-based Parameter Management System for SHINE interface, framework, controls, MMI 1478
 
  • H.H. Lv
    SINAP, Shanghai, People’s Republic of China
  • C.P. Chu
    IHEP, Beijing, People’s Republic of China
  • Y.B. Leng, Y.B. Yan
    SSRF, Shanghai, People’s Republic of China
 
  A web-based parameter management system for the Shanghai High repetition rate XFEL aNd Extreme light facility (SHINE) has been developed for accelerator physicists and researchers to communicate with each other and to track modification history. The system is based on the standard J2EE GlassFish platform, with a MySQL database as backend data storage. The user interface is designed with JavaServer Faces, which incorporates the MVC architecture. It offers great convenience to researchers during the facility design process.  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA166  
About • paper received ※ 12 September 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THAPP01 Automatic Generation of PLC Projects Using Standardized Components and Data Models PLC, framework, hardware, interface 1532
 
  • S.T. Huynh, H. Ali, B. Baranasic, N. Coppola, T. Freyermuth, P. Gessler, N. Jardón Bueno, M. Stupar, J. Tolkiehn, J. Zach
    EuXFEL, Schenefeld, Germany
 
  In an environment of rapidly expanding and changing control systems, a solution geared towards the automation of application-dependent Programmable Logic Controller (PLC) projects becomes an increasing need at the European X-Ray Free Electron Laser (EuXFEL). Through the standardization of components in the PLC Framework, it becomes feasible to develop tools to automate the generation of over 100 Beckhoff PLC projects. The focus will be on the PLC Management System (PLCMS) tool developed to achieve this. Provided with an electrical diagram markup (EPLAN XML export), the PLCMS queries the database model populated from the PLC Framework. It captures integration parameters and compatible EtherCAT fieldbus hardware. Additionally, inter-device communication and interlocking processes are integrated into the PLC from a defined user template by the PLCMS. The solution provides a flexible and scalable means for automatic and expedited deployment of the PLC control systems. The PLCMS can be further enhanced by interfacing into the Supervisory Control and Data Acquisition (SCADA) system for complete asset management of both PLC software and connected hardware across the facility.  
slides icon Slides THAPP01 [0.908 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-THAPP01  
About • paper received ※ 30 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
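The first PLCMS step described above — reading device information out of an EPLAN XML export — could be sketched with a standard XML parser. The element and attribute names below are invented for illustration; the real EPLAN schema differs.

```python
# Hypothetical sketch: pull device names and hardware types out of an
# EPLAN-style XML export, ready to be matched against the PLC Framework's
# database of compatible EtherCAT fieldbus hardware.
import xml.etree.ElementTree as ET

def extract_devices(xml_text):
    root = ET.fromstring(xml_text)
    return [(dev.get("name"), dev.get("type")) for dev in root.iter("device")]
```

Each extracted (name, type) pair would then drive a lookup in the framework database before the PLC project is generated.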
 
THCPL04 SCIBORG: Analyzing and Monitoring LMJ Facility Health and Performance Indicators software, controls, laser, monitoring 1597
 
  • J-P. Airiau, V. Denis, P. Fourtillan, C. Lacombe, S. Vermersch
    CEA, LE BARP cedex, France
 
  The Laser MegaJoule (LMJ) is a 176-beam laser facility, located at the CEA CESTA laboratory near Bordeaux (France). It is designed to deliver about 1.4 MJ of energy to targets, for high-energy-density physics experiments, including fusion experiments. Since June 2018, it has operated 5 of the 22 bundles expected in the final configuration. Monitoring the system health and performance of such a facility is essential to maintain high operational availability. SCIBORG is the first step towards a larger software suite that will collect all the facility parameters in one tool. Today, SCIBORG imports experiment setups and results, alignment and PAM* control-command parameters. It is designed to perform data analysis (temporal/crossed) and implements monitoring features (dashboard). This paper gives first user feedback and the milestones for the full-spectrum system.
*PreAmplifier Module
 
slides icon Slides THCPL04 [4.882 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-THCPL04  
About • paper received ※ 01 October 2019       paper accepted ※ 08 October 2019       issue date ※ 30 August 2020  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)