Keyword: monitoring
Paper | Title | Other Keywords | Page
MOMPL007 The Design of Intelligent Integrated Control Software Framework of Facilities for Scientific Experiments controls, software, framework, experiment 132
 
  • Z. Ni, L. Li, J. Liu, J. Luo, X. Zhou
    CAEP, Sichuan, People’s Republic of China
  • Y. Gao
    Stony Brook University, Stony Brook, New York, USA
 
  The control system of a scientific experimental facility requires heterogeneous control access, domain algorithms, sequence control, monitoring, logging, alarms and archiving. Common requirements such as monitoring, control and data acquisition must therefore be extracted. Based on the Tango framework, we build typical device components, algorithms, sequence engines, graphical models and data models for scientific experimental facility control systems to meet these common needs; the result is named the Intelligent integrated Control Software Framework of Facilities for Scientific Experiments (iCOFFEE). As a development platform for integrated control system software, iCOFFEE provides a highly flexible architecture, standardized templates, and basic functional components and services that increase the flexibility, robustness, scalability and maintainability of control systems. This article focuses on the design of the framework, especially the monitoring configuration and the control flow design.
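As an illustration of the kind of Tango device component such a framework standardizes, the sketch below shows a minimal PyTango device server exposing one monitored attribute and one command; the device class, attribute and values are hypothetical and are not taken from iCOFFEE itself.

```python
# Minimal PyTango device sketch (illustrative only; the class, attribute and
# command names are hypothetical, not the actual iCOFFEE component API).
from tango.server import Device, attribute, command, run


class VacuumGauge(Device):
    """A toy 'typical device component': one monitored value plus a command."""

    def init_device(self):
        super().init_device()
        self._pressure = 1.0e-6  # placeholder reading, mbar

    @attribute(dtype=float, unit="mbar")
    def pressure(self):
        # A real component would poll the hardware driver here.
        return self._pressure

    @command
    def reset(self):
        self._pressure = 1.0e-6


if __name__ == "__main__":
    run((VacuumGauge,))
```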
Slides MOMPL007 [2.143 MB]
Poster MOMPL007 [2.445 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOMPL007  
About • paper received ※ 30 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
 
MOMPL010 Data Streaming With Apache Kafka for CERN Supervision, Control and Data Acquisition System for Radiation and Environmental Protection controls, SCADA, real-time, radiation 147
 
  • A. Ledeul, A. Savulescu, G. Segura, B. Styczen
    CERN, Meyrin, Switzerland
 
  The CERN HSE (occupational Health & Safety and Environmental protection) Unit develops and operates REMUS (Radiation and Environmental Unified Supervision), a Radiation and Environmental Supervision, Control and Data Acquisition system covering CERN accelerators, experiments and their surrounding environment. REMUS is now making use of modern data streaming technologies in order to provide a secure, reliable, scalable and loosely coupled solution for streaming near real-time data in and out of the system. Integrating the open-source streaming platform Apache Kafka allows the system to stream near real-time data to data visualization tools and web interfaces. It also permits full-duplex communication with external control systems and IIoT (Industrial Internet of Things) devices, without compromising the security of the system and using a widely adopted technology. This paper describes the architecture of the system put in place, and the numerous applications it opens up for REMUS and control systems in general.
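The streaming pattern described above (publishing near real-time readings to an Apache Kafka topic) looks roughly like the following sketch using the kafka-python client; the broker address, topic name and payload fields are hypothetical and not the actual REMUS interfaces.

```python
# Illustrative Kafka producer for near real-time sensor readings
# (hypothetical broker, topic and payload; not the actual REMUS data model).
import json
import time

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="kafka-broker:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

reading = {
    "sensor": "RAD-MONITOR-001",   # hypothetical sensor name
    "value_uSv_h": 0.081,
    "timestamp": time.time(),
}
producer.send("radiation.readings", value=reading)
producer.flush()  # block until the broker has acknowledged the message
```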
Poster MOMPL010 [25.881 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOMPL010  
About • paper received ※ 30 September 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
 
MOPHA031 Software and Hardware Design for Controls Infrastructure at Sirius Light Source controls, interface, hardware, EPICS 263
 
  • J.G.R.S. Franco, C.F. Carneiro, E.P. Coelho, R.C. Ito, P.H. Nallin, R.W. Polli, A.R.D. Rodrigues, V. dos Santos Pereira
    LNLS, Campinas, Brazil
 
  Sirius is a 3 GeV synchrotron light source under construction in Brazil. Assembly of its accelerators began in March 2018, when the first parts of the linear accelerator were taken out of their boxes and installed. The booster synchrotron installation has already been completed and its subsystems are currently under commissioning, while assembly of storage ring components takes place in parallel. The control system of the Sirius accelerators, based on EPICS, plays an important role in machine commissioning, and installations and improvements have been achieved continuously. This work describes the IT infrastructure underlying the control system, hardware developments, software architecture, and support applications. Future plans are also presented.
Poster MOPHA031 [32.887 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA031  
About • paper received ※ 01 October 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
 
MOPHA032 Big Data Architectures for Logging and Monitoring Large Scale Telescope Arrays software, controls, operation, database 268
 
  • A. Costa, U. Becciani, P. Bruno, A.S. Calanducci, A. Grillo, S. Riggi, E. Sciacca, F. Vitello
    INAF-OACT, Catania, Italy
  • V. Conforti, F. Gianotti
    INAF, Bologna, Italy
  • J. Schwarz
    INAF-Osservatorio Astronomico di Brera, Merate, Italy
  • G. Tosti
    Università degli Studi di Perugia, Perugia, Italy
 
  Funding: This work was partially supported by the ASTRI "Flagship Project" financed by the Italian Ministry of Education, University, and Research and led by the Italian National Institute of Astrophysics.
Large volumes of technical and logging data result from the operation of large-scale astrophysical infrastructures. In the last few years several "Big Data" technologies have been developed to deal with huge amounts of data, e.g. in the Internet of Things (IoT) framework. We are comparing different stacks of Big Data/IoT architectures, including high-performance distributed messaging systems, time-series databases, streaming systems and interactive data visualization. The main aim is to classify these technologies based on a set of use cases typically related to the data produced in the astronomical environment, with the objective of having a system that can be updated, maintained and customized with minimal programming effort. We present the preliminary results obtained using different Big Data stack solutions to manage use cases related to the quasi real-time collection, processing and storage of the technical data, logging and technical alerts produced by the array of nine ASTRI telescopes under development by INAF as a pathfinder array for Cherenkov astronomy in the TeV energy range.
*ASTRI Project: http://www.brera.inaf.it/~astri/wordpress/
**CTA Project: https://www.cta-observatory.org/
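As a concrete example of one ingredient of such a stack, housekeeping values can be written to a time-series database roughly as in the sketch below (InfluxDB 1.x Python client; the database name, measurement and tags are hypothetical, not the ASTRI data model).

```python
# Illustrative write of telescope housekeeping data to a time-series database
# (InfluxDB 1.x client; database, measurement and tag names are hypothetical).
from influxdb import InfluxDBClient

client = InfluxDBClient(host="tsdb.example.org", port=8086,
                        database="astri_housekeeping")

points = [
    {
        "measurement": "camera_temperature",
        "tags": {"telescope": "ASTRI-1", "subsystem": "camera"},
        "fields": {"value_degC": 18.4},
    }
]
client.write_points(points)
```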
 
Poster MOPHA032 [1.327 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA032  
About • paper received ※ 02 October 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
 
MOPHA064 An Off-Momentum Beam Loss Feedback Controller and Graphical User Interface for the LHC feedback, controls, GUI, collimation 360
 
  • B. Salvachua, D. Alves, G. Azzopardi, S. Jackson, D. Mirarchi, M. Pojer
    CERN, Meyrin, Switzerland
  • G. Valentino
    University of Malta, Information and Communication Technology, Msida, Malta
 
  During LHC operation, a campaign to validate the configuration of the LHC collimation system is conducted every few months. This is performed by means of loss maps, where specific beam losses are deliberately generated with the resulting loss patterns compared to expectations. The LHC collimators have to protect the machine from both betatron and off-momentum losses. In order to validate the off-momentum protection, beam losses are generated by shifting the RF frequency using a low intensity beam. This is a delicate process that, in the past, often led to the beam being dumped due to excessive losses. To avoid this, a feedback system based on the 100 Hz data stream from the LHC Beam Loss system has been implemented. When given a target RF frequency, the feedback system approaches this frequency in steps while monitoring the losses until the selected loss pattern conditions are reached, so avoiding the excessive losses that lead to a beam dump. This paper will describe the LHC off-momentum beam loss feedback system and the results achieved.  
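The stepping logic described above can be summarized as a short control loop; the step size, loss thresholds and the read/trim helpers in the sketch below are hypothetical placeholders, not the actual LHC software interfaces.

```python
# Illustrative feedback loop: approach a target RF frequency shift in steps
# while watching beam losses, and stop before they become dangerous.
# All thresholds and the read/trim helpers are hypothetical placeholders.
import time


def read_peak_loss():
    """Placeholder for reading the peak BLM signal from the 100 Hz stream."""
    raise NotImplementedError


def trim_rf_frequency(delta_hz):
    """Placeholder for applying an incremental RF frequency trim."""
    raise NotImplementedError


def approach_target(target_shift_hz, step_hz=10.0,
                    loss_target=0.5, loss_abort=0.8, dwell_s=0.1):
    sign = 1.0 if target_shift_hz > 0 else -1.0
    applied = 0.0
    while abs(applied) < abs(target_shift_hz):
        trim_rf_frequency(sign * step_hz)
        applied += sign * step_hz
        time.sleep(dwell_s)
        loss = read_peak_loss()
        if loss >= loss_abort:      # too close to the dump threshold: back off and stop
            trim_rf_frequency(-sign * step_hz)
            return "aborted"
        if loss >= loss_target:     # requested loss pattern reached: stop stepping
            return "loss target reached"
    return "target frequency reached"
```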
Poster MOPHA064 [5.005 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA064  
About • paper received ※ 27 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
 
MOPHA071 Integrated Multi-Purpose Tool for Data Processing and Analysis via EPICS PV Access controls, LEBT, linac, EPICS 379
 
  • J.H. Kim, H.S. Kim, Y.M. Kim, H.-J. Kwon, Y.G. Song
    Korea Atomic Energy Research Institute (KAERI), Gyeongbuk, Republic of Korea
 
  Funding: This work has been supported through KOMAC (Korea Multi-purpose Accelerator Complex) operation fund of KAERI by MSIT (Ministry of Science and ICT)
At KOMAC, we have been operating a proton linac that consists of an ion source, a low energy beam transport, a radio frequency quadrupole and eleven drift tube linacs for 100 MeV. The beam that users require is transported to the five target rooms using a linac control system based on the EPICS framework. In order to offer stable beam conditions, it is important to understand the characteristics of the 100 MeV proton linac. Beam diagnostic systems such as the beam current, beam phase and beam position monitoring systems are therefore installed on the linac. All the data from the diagnostic systems are monitored using Control System Studio as the user interface and are archived through the Archiver Appliance. Operators analyze the data after experiments to characterize the linac or when events happen, so data scanning and processing tools are required to manage and analyze the linac more efficiently. In this paper, we describe the implementation of the integrated data processing and analysis tools based on EPICS PV Access.
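A minimal sketch of the kind of PV-based data collection such tools build on is shown below, using the pyepics client; the PV names are hypothetical, not the actual KOMAC records.

```python
# Illustrative EPICS data collection with pyepics (PV names are hypothetical).
import time

import epics

PVS = ["KOMAC:BCM:CURRENT", "KOMAC:BPM01:PHASE"]  # hypothetical PV names


def on_value_change(pvname=None, value=None, timestamp=None, **kw):
    # A real tool would append to a file or database for later analysis.
    print(f"{timestamp:.3f}  {pvname} = {value}")


for pv in PVS:
    epics.camonitor(pv, callback=on_value_change)

time.sleep(10)           # collect updates for a while
for pv in PVS:
    epics.camonitor_clear(pv)
```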
 
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA071  
About • paper received ※ 30 September 2019       paper accepted ※ 02 October 2019       issue date ※ 30 August 2020  
 
MOPHA085 CERN Controls Open Source Monitoring System controls, software, status, database 404
 
  • F. Locci, F. Ehm, L. Gallerani, J. Lauener, J.P. Palluel, R. Voirin
    CERN, Meyrin, Switzerland
 
  The CERN accelerator controls infrastructure spans several thousand machines and devices used for accelerator control and data acquisition. In 2009 a fully home-made CERN solution (DIAMON) was developed to monitor and diagnose the complete controls infrastructure. The adoption of the solution by an enlarged community of users and its rapid expansion led to a final product that became more difficult to operate and maintain, in particular because of the multiplicity and redundancy of the services, the centralized management of the data acquisition and visualization software, the complex configuration and also the intrinsic scalability limits. At the end of 2017, a completely new monitoring system for the beam controls infrastructure was launched. The new "COSMOS" system was developed with two main objectives in mind: first, to detect instabilities and prevent breakdowns of the control system infrastructure; and second, to provide users with a more coherent and efficient solution for the development of their specific data monitoring agents and related dashboards. This paper describes the overall architecture of COSMOS, focusing on the conceptual and technological choices of the system.
Poster MOPHA085 [1.475 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA085  
About • paper received ※ 29 September 2019       paper accepted ※ 19 October 2019       issue date ※ 30 August 2020  
 
MOPHA117 Big Data Archiving From Oracle to Hadoop database, network, operation, SCADA 497
 
  • I. Prieto Barreiro, M. Sobieszek
    CERN, Meyrin, Switzerland
 
  The CERN Accelerator Logging Service (CALS) is used to persist data of around 2 million predefined signals coming from heterogeneous sources such as the electricity infrastructure, industrial controls like cryogenics and vacuum, or beam-related data. This old Oracle-based logging system will be phased out at the end of the LHC’s Long Shutdown 2 (LS2) and will be replaced by the Next CERN Accelerator Logging Service (NXCALS), which is based on Hadoop. As a consequence, the different data sources must be adapted to persist the data in the new logging system. This paper describes the solution implemented to archive into NXCALS the data produced by the QPS (Quench Protection System) and SCADAR (Supervisory Control And Data Acquisition Relational database) systems, which generate a total of around 175,000 values per second. To cope with such a volume of data the new service has to be extremely robust, scalable and fail-safe, with guaranteed data delivery and no data loss. The paper also explains how to recover from different failure scenarios such as network disruption, and how to manage and monitor this highly distributed service.
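The "guaranteed delivery, no data loss" requirement maps onto standard producer-side settings in Kafka-style pipelines; the sketch below shows the general idea with the kafka-python client (broker, topic and payload are hypothetical, and the real NXCALS ingestion API is not shown).

```python
# Illustrative 'no data loss' producer settings with kafka-python
# (hypothetical broker and topic; not the actual NXCALS ingestion API).
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="broker:9092",
    acks="all",            # wait for all in-sync replicas to acknowledge
    retries=10,            # retry transient failures (e.g. network disruption)
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

future = producer.send("qps.signals", {"signal": "QPS_TEST_SIGNAL", "value": 1.23})
try:
    metadata = future.get(timeout=30)   # surface failures instead of dropping data
    print("stored at", metadata.topic, metadata.partition, metadata.offset)
except Exception as exc:
    # A real archiver would buffer and replay the record rather than lose it.
    print("delivery failed, must re-send:", exc)
```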
Poster MOPHA117 [1.227 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA117  
About • paper received ※ 29 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
 
MOPHA118 Improving Alarm Handling for the TI Operators by Integrating Different Sources in One Alarm Management and Information System framework, interface, database, controls 502
 
  • M. Bräger, M. Bouzas Reguera, U. Epting, E. Mandilara, E. Matli, I. Prieto Barreiro, M.P. Rafalski
    CERN, Geneva, Switzerland
 
  CERN uses a central alarm system to monitor its complex technical infrastructure. The Technical Infrastructure (TI) operators must handle a large number of alarms coming from several thousand pieces of equipment spread around CERN. In order to focus on the most important events and reduce the time required to solve a problem, it is necessary to provide extensive helpful information such as the alarm states of linked systems, a geographical overview on a detailed map and clear instructions to the operators. In addition, it is useful to temporarily inhibit alarms coming from equipment during planned maintenance or interventions. The tool presents all necessary information in one place and adds simple and intuitive functionality to ease operation with an enhanced interface.
Poster MOPHA118 [0.907 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA118  
About • paper received ※ 30 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
 
MOPHA124 Local Oscillator Rear Transition Module for 704.42 MHz LLRF Control System at ESS controls, LLRF, cavity, operation 516
 
  • I. Rutkowski, K. Czuba, M.G. Grzegrzółka
    Warsaw University of Technology, Institute of Electronic Systems, Warsaw, Poland
 
  Funding: Work supported by Polish Ministry of Science and Higher Education, decision number DIR/WK/2016/03.
This paper describes the specifications, architecture, and measurement results of the MTCA-compliant Local Oscillator (LO) Rear Transition Module (RTM) board providing low phase noise clock and heterodyne signals for the 704.42 MHz Low Level Radio Frequency (LLRF) control system at the European Spallation Source (ESS). The clock generation and LO synthesis circuits are based on the module presented at ICALEPCS 2017. The conditioning circuits for the input and output signals must simultaneously achieve the desired impedance matching, spectral purity, output power, and phase noise requirements. The reference conditioning circuit presents an additional challenge due to the input power range being significantly wider than the output range. The circuits monitoring the power levels of critical signals and the voltages of supply rails for remote diagnostics, as well as the programmable logic devices used to set the operating parameters via the Zone3 connector, are described.
 
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA124  
About • paper received ※ 04 October 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
 
MOPHA147 Integrating the First SKA MPI Dish Into the MeerKAT Array TANGO, controls, interface, software 575
 
  • S.N. Twum, A.F. Joubert, K. Madisa
    SARAO, Cape Town, South Africa
 
  Funding: National Research Foundation
The 64-antenna MeerKAT interferometric radio telescope is a precursor to the SKA which will host hundreds of receptor dishes with a collecting area of 1 sq km. During the pre-construction phase of the SKA1 MID, the SKA DSH Consortium plans to build, integrate and qualify an SKA1 MID DSH Qualification Model (SDQM) against MeerKAT. Before the system level qualification testing can start on the SDQM, the qualified Dish sub-elements have to be integrated onto the SDQM and set to work. The SKA MPI DISH, a prototype SKA dish funded by the Max Planck Institute, will be used for early verification of the hardware and the control system. This prototype dish uses the TANGO framework for monitoring and control while MeerKAT uses the Karoo Array Telescope Control Protocol (KATCP). To aid the integration of the SKA MPI DSH, the MeerKAT Control and Monitoring (CAM) subsystem has been upgraded by incorporating a translation layer and a specialized SKA antenna proxy that will enable CAM to monitor and command the SKA dish as if it were a MeerKAT antenna.
 
Poster MOPHA147 [0.915 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA147  
About • paper received ※ 30 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
 
MOSH1001 Current Status of KURAMA-II radiation, network, survey, detector 641
 
  • M. Tanigaki
    Kyoto University, Research Reactor Institute, Osaka, Japan
 
  KURAMA-II, a successor of the carborne gamma-ray survey system KURAMA (Kyoto University RAdiation MApping system), has become one of the major systems for activities related to the nuclear accident at the TEPCO Fukushima Daiichi Nuclear Power Plant in 2011. The development of KURAMA-II is still ongoing, to extend its application areas beyond specialists. One such activity is the development of cloud services that provide an easy management environment for data handling and for interactions with existing radiation monitoring schemes. Another is porting the system to a single-board computer, so that KURAMA-II can serve as a tool for the prompt establishment of radiation monitoring in a nuclear accident. In this paper, the current status of KURAMA-II developments and applications is introduced, along with some results from those applications.
Slides MOSH1001 [94.239 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOSH1001  
About • paper received ※ 01 October 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
 
TUBPL04 Public Cloud-based Remote Access Infrastructure for Neutron Scattering Experiments at MLF, J-PARC experiment, operation, neutron, software 707
 
  • K. Moriyama
    CROSS, Ibaraki, Japan
  • T. Nakatani
    JAEA/J-PARC, Tokai-mura, Japan
 
  An infrastructure for remote access supporting the research workflow is essential for neutron scattering user facilities such as J-PARC MLF. Because the experimental period spans day and night, a service for monitoring the measurement status from outside the facility is required. Additionally, a convenient way to bring a large amount of data back to the user’s home institution and to analyze it after experiments is required. To meet these requirements, we are developing a remote access infrastructure as a front-end for facility users based on public clouds. Recently, public clouds have developed rapidly, so that development and operation schemes of computer systems have changed considerably. The various architectures provided by public clouds enable advanced systems to be developed quickly and effectively. Our cloud-based infrastructure comprises services for experiment monitoring, data download and data analysis, using architectures such as object storage, event-driven serverless computing, and virtual desktop infrastructure (VDI). Facility users can access this infrastructure using a web browser and a VDI client. This contribution reports the current status of the remote access infrastructure.
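The data download service mentioned above can be illustrated with a short object-storage sketch: upload a measurement file and hand the user a time-limited link. It assumes an S3-compatible store accessed with boto3; the endpoint, bucket and keys are hypothetical and the actual MLF cloud layout is not shown.

```python
# Sketch: publish a measurement file to S3-compatible object storage and
# generate a time-limited download link (endpoint, bucket and key hypothetical).
import boto3

s3 = boto3.client("s3", endpoint_url="https://objectstore.example.org")

bucket = "mlf-experiment-data"
key = "2019B0001/run_001234.nxs"

s3.upload_file("run_001234.nxs", bucket, key)

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": bucket, "Key": key},
    ExpiresIn=24 * 3600,        # link valid for one day
)
print("download link:", url)
```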
Slides TUBPL04 [6.858 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-TUBPL04  
About • paper received ※ 30 September 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
 
TUBPL06 Energy Consumption Monitoring With Graph Databases and Service Oriented Architecture database, network, operation, interface 719
 
  • A. Kiourkos, S. Infante, K.S. Seintaridis
    CERN, Meyrin, Switzerland
 
  CERN is a major electricity consumer. In 2018 it consumed 1.25 TWh, one third of the consumption of Geneva. Monitoring of this consumption is crucial for operational reasons but also for raising awareness of users regarding energy utilization. This monitoring is done via a system developed internally, which is quite popular within the CERN community; therefore, to accommodate the increasing requirements, a migration is underway that utilizes the latest technologies for data modeling and processing. We present the architecture of the new energy monitoring system with an emphasis on the data modeling, versioning and the use of graphs to store and process the model of the electrical network for the energy calculations. The algorithms that are used are presented, and a comparison with the existing system is performed in order to demonstrate the performance improvements and flexibility of the new approach. The system embraces Service Oriented Architecture principles and we illustrate how these have been applied in its design. The different modules and future possibilities are also presented with an analysis of their strengths, weaknesses, and integration within the CERN infrastructure.
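The core idea, modelling the electrical distribution network as a graph and aggregating consumption along it, can be illustrated with a small in-memory sketch using networkx; the node names and values are invented, and the production system uses a graph database rather than networkx.

```python
# Illustrative graph model of an electrical network: meters hang off feeders,
# and a node's consumption is the sum of the metered energy of its descendants.
# Node names and values are invented; this is not the CERN data model.
import networkx as nx

grid = nx.DiGraph()
grid.add_edge("MAIN_SUBSTATION", "FEEDER_A")
grid.add_edge("MAIN_SUBSTATION", "FEEDER_B")
grid.add_edge("FEEDER_A", "BUILDING_1_METER")
grid.add_edge("FEEDER_A", "BUILDING_2_METER")
grid.add_edge("FEEDER_B", "BUILDING_3_METER")

metered_kwh = {
    "BUILDING_1_METER": 120.0,
    "BUILDING_2_METER": 75.5,
    "BUILDING_3_METER": 40.0,
}


def consumption(node):
    """Energy attributed to a node: its own meter plus everything downstream."""
    nodes = {node} | nx.descendants(grid, node)
    return sum(metered_kwh.get(n, 0.0) for n in nodes)


print(consumption("FEEDER_A"))         # 195.5
print(consumption("MAIN_SUBSTATION"))  # 235.5
```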
Slides TUBPL06 [3.018 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-TUBPL06  
About • paper received ※ 29 September 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
 
TUDPP01 A Monitoring System for the New ALICE O2 Farm detector, network, database, controls 835
 
  • G. Vino, D. Elia
    INFN-Bari, Bari, Italy
  • V. Chibante Barroso, A. Wegrzynek
    CERN, Meyrin, Switzerland
 
  The ALICE Experiment has been designed to study the physics of strongly interacting matter with heavy-ion collisions at the CERN LHC. A major upgrade of the detector and of the computing model (O2, Offline-Online) is currently ongoing. The ALICE O2 farm will consist of almost 1000 nodes able to read out and process on the fly about 27 Tb/s of raw data. To increase the efficiency of computing farm operations, a general-purpose near real-time monitoring system has been developed; it relies on high performance, high availability, modularity, and open-source components. The core component (Apache Kafka) ensures high throughput, data pipelines, and fault-tolerant services. Additional monitoring functionality is based on Telegraf as metric collector, Apache Spark for complex aggregation, InfluxDB as time-series database, and Grafana as visualization tool. A logging service based on the Elasticsearch stack is also included. The designed system handles metrics coming from the operating system, the network, custom hardware, and in-house software. A prototype version is currently running at CERN and has also been successfully deployed at the ReCaS Datacenter at INFN Bari for both monitoring and logging.
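At the collection end, a metric of the kind described above can be pushed as a single line of InfluxDB line protocol over UDP (which a Telegraf or InfluxDB listener can be configured to accept); the host, port, measurement and tags below are hypothetical.

```python
# Illustrative metric push in InfluxDB line protocol over UDP (hypothetical
# endpoint, measurement and tags; assumes a configured UDP/socket listener).
import socket
import time

line = (
    "readout_rate,host=flp-001,detector=TPC "   # measurement and tags
    "bytes_per_s=32000000000 "                  # field(s)
    f"{time.time_ns()}"                         # timestamp in nanoseconds
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(line.encode("utf-8"), ("monitoring.example.org", 8094))
sock.close()
```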
Slides TUDPP01 [1.128 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-TUDPP01  
About • paper received ※ 30 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
 
WEAPP01 Old and New Generation Control Systems at ESA controls, operation, ECR, interface 859
 
  • M. Pecchioli
    ESA/ESOC, Darmstadt, Germany
 
  Traditionally, Mission Control Systems for spacecraft operated at the European Space Operations Centre (ESOC) have been developed based on large re-use of a common implementation covering the majority of the required functions, which is referred to as the mission control system infrastructure. The generation currently in operations has been successfully used for all categories of missions, including many commercial ones operated outside ESOC. It is however anticipated that its implementation is going to face obsolescence in the coming years, thus an ambitious project is currently ongoing aiming at the development and deployment of a completely new generation. This project capitalizes as much as possible on the European initiative (referred to as EGS-CC), which is progressively developing and delivering a modern and advanced platform forming the basis for any type of monitoring and control application for space systems. This paper provides a technical overview of the two infrastructure generations, highlighting the main differences from technical and usability standpoints. Lessons learned from previous and current developments will also be analyzed.
Slides WEAPP01 [4.794 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEAPP01  
About • paper received ※ 26 September 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
 
WEBPP02 Centralized System Management of IPMI Enabled Platforms Using EPICS EPICS, interface, controls, database 887
 
  • K. Vodopivec
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: This work was supported by the U.S. Department of Energy under contract DE-AC0500OR22725.
Intelligent Platform Management Interface (IPMI) is a specification for computer hardware platform management and monitoring. The interface includes features for monitoring hardware sensors like fan speed and device temperature, inventory discovery, event propagation and logging. All IPMI functionality is accessible without the host operating system running. With its wide support across hardware vendors and the backing of a standardization committee, it is a compelling technology for integration into a control system for large experimental physics projects. Integrating IPMI into EPICS provides the benefit of centralized monitoring, archiving and alarming integrated with the facility control system. A new project has been started to enable this capability by creating a native EPICS device driver built on the open-source FreeIPMI library for the remote host connection interface. The driver supports automatic discovery of system components for creating EPICS database templates, detailed device information from the Field Replaceable Unit interface, sensor monitoring with remote threshold management, geographical PV addressing in PICMG-based platforms, and readout of PICMG front panel lights.
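As a rough illustration of the data involved, FreeIPMI's ipmi-sensors command-line tool can be queried and its output parsed before the readings are republished as PVs; the sketch below is only an approximation (the driver described here links against the FreeIPMI library directly), and the flags and column layout assumed should be checked against the installed FreeIPMI version.

```python
# Sketch: read hardware sensors via FreeIPMI's ipmi-sensors CLI and collect
# them into a dict. Illustration only; the EPICS driver in the paper uses the
# FreeIPMI library directly, and CLI flags/columns may vary between versions.
import subprocess

output = subprocess.run(
    ["ipmi-sensors", "--comma-separated-output", "--no-header-output"],
    capture_output=True, text=True, check=True,
).stdout

readings = {}
for line in output.splitlines():
    # Assumed columns: ID, Name, Type, Reading, Units, Event
    fields = [f.strip() for f in line.split(",")]
    if len(fields) >= 5 and fields[3] not in ("", "N/A"):
        readings[fields[1]] = (fields[3], fields[4])   # name -> (reading, units)

for name, (value, units) in readings.items():
    print(f"{name}: {value} {units}")
```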
 
Slides WEBPP02 [7.978 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEBPP02  
About • paper received ※ 02 October 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
 
WEMPR001 Data Analysis Infrastructure for Diamond Light Source Macromolecular & Chemical Crystallography and Beyond experiment, detector, database, data-acquisition 1031
 
  • M. Gerstel, A. Ashton, R.J. Gildea, K. Levik, G. Winter
    DLS, Oxfordshire, United Kingdom
 
  The Diamond Light Source data analysis infrastructure, Zocalo, is built on a messaging framework. Analysis tasks are processed by a scalable pool of workers running on cluster nodes. Results can be written to a common file system, sent to another worker for further downstream processing and/or streamed to a LIMS. Zocalo allows increased parallelization of computationally expensive tasks and makes the use of computational resources more efficient. The infrastructure is low-latency, fault-tolerant, and allows for highly dynamic data processing. Moving away from static workflows expressed in shell scripts we can easily re-trigger processing tasks in the event that an issue is found. It allows users to re-run tasks with additional input and ensures that automatically and manually triggered processing results are treated equally. Zocalo was originally conceived to cope with the additional demand on infrastructure by the introduction of Eiger detectors with up to 18 Mpixels and running at up to 560 Hz framerate on single crystal diffraction beamlines. We are now adapting Zocalo to manage processing tasks for ptychography, tomography, cryo-EM, and serial crystallography workloads.  
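The worker pattern described above, tasks arriving via a message broker and results forwarded downstream, looks roughly like the sketch below; it uses a generic AMQP client (pika) with invented queue names purely for illustration and is not Zocalo's actual transport or task API.

```python
# Generic message-queue worker sketch (pika/AMQP with invented queue names;
# Zocalo's real transport and task format are not shown here).
import json

import pika

connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="broker.example.org"))
channel = connection.channel()
channel.queue_declare(queue="processing.tasks", durable=True)


def handle_task(ch, method, properties, body):
    task = json.loads(body)
    print("processing", task.get("recipe"), "for", task.get("data_file"))
    # ... run the analysis step, write results, or publish a follow-up task ...
    ch.basic_ack(delivery_tag=method.delivery_tag)


channel.basic_qos(prefetch_count=1)     # hand each worker one task at a time
channel.basic_consume(queue="processing.tasks", on_message_callback=handle_task)
channel.start_consuming()
```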
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEMPR001  
About • paper received ※ 30 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
 
WEPHA019 MONARC: Supervising the Archiving Infrastructure of CERN Control Systems database, controls, SCADA, data-acquisition 1111
 
  • J-C. Tournier, E. Blanco Viñuela
    CERN, Geneva, Switzerland
 
  The CERN industrial control systems, using WinCC OA as SCADA (Supervisory Control and Data Acquisition), share a common history data archiving system relying on an Oracle infrastructure. It consists of two clusters of two nodes for a total of more than 250 schemas. Due to the large number of schemas and the shared nature of the infrastructure, three basic needs arose: (1) monitor, i.e. get the inventory of all DB nodes and schemas along with their configurations, such as the type of partitioning and their retention period; (2) control, i.e. parameterise each schema individually; and (3) supervise, i.e. have an overview of the health of the infrastructure and be notified of misbehaving schemas or database nodes. In this publication, we present a way to monitor, control and supervise the data archiving system based on a classical SCADA system. The paper is organized in three parts: the first part presents the main functionalities of the application, while the second part digs into its architecture and implementation. The third part presents a set of use cases demonstrating the benefits of using the application.
Poster WEPHA019 [2.556 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA019  
About • paper received ※ 30 September 2019       paper accepted ※ 19 October 2019       issue date ※ 30 August 2020  
 
WEPHA104 Managing Cybersecurity for Control System and Safety System Development Environments controls, network, software, ISOL 1343
 
  • R. Mudingay, S. Armanet
    ESS, Lund, Sweden
 
  At ESS, we manage cyber security for our control system infrastructure by mixing together technologies that are relevant for each system. User access to the control system networks is controlled by an internal DMZ concept, whereby we use standard security tools (vulnerability scanners, central logging, firewall policies, system and network monitoring) and users have to go through dedicated control points (reverse proxy, jump hosts, privileged access management solutions, or EPICS Channel Access or PV Access gateways). The infrastructure is managed through a DevOps approach: describing each component using a configuration management solution; using version control to track changes, with continuous integration workflows in our development process; and constructing the deployment of the lab/staging area to mimic the production environment. We also believe in the flexibility of virtualization. This is particularly true for safety systems, where the development of safety-critical code requires a high level of isolation. To this end, we utilize dedicated virtualized infrastructure and isolated development environments to improve control (remote access, software updates, safety code management).
Poster WEPHA104 [0.840 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA104  
About • paper received ※ 27 September 2019       paper accepted ※ 03 November 2019       issue date ※ 30 August 2020  
 
WEPHA120 Management of MicroTCA Systems and its Components with a DOOCS-Based Control System controls, GUI, interface, operation 1372
 
  • V. Petrosyan, K. Rehlich, E. Sombrowski
    DESY, Hamburg, Germany
 
  An extensive management functionality is one of the key advantages of the MicroTCA.4 standard. Monitoring and control of more than 350 MicroTCA crates and thousands of AMC and RTM modules installed at XFEL, FLASH, SINBAD and ANGUS experiments has been integrated into the DOOCS-based control system. A DOOCS middle layer server together with Java-based GUIs - JDDD and JDTool - developed at DESY, enable remote management and provide information about MicroTCA shelves and components. The integrated management includes inventory information, monitoring current consumption, temperatures, voltages and various types of the built-in sensors. The system event logs and collected histories of the sensors are used to investigate failures and issues.  
Poster WEPHA120 [1.612 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA120  
About • paper received ※ 24 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
 
WEPHA125 Integrating IoT Devices Into the CERN Control and Monitoring Platform controls, simulation, data-acquisition, framework 1385
 
  • B. Copy, M. Bräger, A. Papageorgiou Koufidis, E. Piselli, I. Prieto Barreiro
    CERN, Geneva, Switzerland
 
  The CERN Control and Monitoring Platform (C2MON) offers interesting features required in the industrial controls domain to support Internet of Things (IoT) scenarios. This paper aims to highlight the main advantages of a cloud deployment solution in order to support large-scale embedded data acquisition and edge computing. Several IoT use cases will be explained, illustrated by real examples carried out in collaboration with the CERN Knowledge Transfer programme.
Poster WEPHA125 [1.854 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA125  
About • paper received ※ 27 September 2019       paper accepted ※ 20 October 2019       issue date ※ 30 August 2020  
 
WEPHA131 Evaluation of an SFP Based Test Loop for a Future Upgrade of the Optical Transmission for CERN’s Beam Interlock System operation, diagnostics, network, hardware 1399
 
  • R. Secondo, M.A. Galilée, J.C. Garnier, C. Martin, I. Romera, A.P. Siemko, J.A. Uythoven
    CERN, Meyrin, Switzerland
 
  The Beam Interlock System (BIS) is the backbone of CERN’s machine protection system. The BIS is responsible for relaying the so-called Beam Permit signal, initiating in case of need the controlled removal of the beam by the LHC Beam Dumping System. The Beam Permit is encoded as a specific frequency traveling over a more than 30 km long network of optical fibers all around the LHC ring. The progressive degradation of the optical fibers and the aging of the electronics affect the decoding of the Beam Permit, potentially resulting in an undesired beam dump event and thereby reducing machine availability. Commercial off-the-shelf SFP transceivers were studied with the aim of improving the performance of the optical transmission of the Beam Permit network. This paper describes the tests carried out in the LHC accelerator to evaluate the selected SFP transceivers and reports the results of the test loop reaction time measurements during operation. The use of SFPs to optically transmit safety-critical signals is being considered as an interesting option not only for the planned major upgrade of the BIS for the HL-LHC era but also for other protection systems.
Poster WEPHA131 [0.826 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA131  
About • paper received ※ 30 September 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
 
WEPHA134 Monitoring System for IT Infrastructure and EPICS Control System at SuperKEKB EPICS, network, controls, status 1413
 
  • S. Sasaki, T.T. Nakamura
    KEK, Ibaraki, Japan
  • M. Hirose
    KIS, Ibaraki, Japan
 
  A monitoring system has been deployed to efficiently monitor the IT infrastructure and the EPICS control system at SuperKEKB. The system monitors two types of data: metrics and logs. Metrics such as network traffic and CPU usage are monitored with Zabbix. In addition, we developed an EPICS Channel Access client application that sends PV values to the Zabbix server, and the status of each IOC is monitored with it. The archived data in Zabbix are visualized on Grafana, which allows us to easily create dashboards and analyze the data. Logs such as text data are monitored with the Elastic Stack, which lets us collect, search, analyze and visualize logs. We apply it to monitor broadcast packets in the control network and the frequency of Channel Access searches for each PV. Moreover, a Grafana plugin has been developed to visualize the data from pvAccess RPC servers, so that various data such as CSS alarm status data can be displayed on it.
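The CA-to-Zabbix bridge mentioned above can be approximated with a few lines of pyepics plus the standard zabbix_sender utility; the PV names, Zabbix host name and item keys below are hypothetical, not the actual SuperKEKB configuration.

```python
# Illustrative EPICS-to-Zabbix bridge: forward PV updates with zabbix_sender.
# PV names, Zabbix host names and item keys are hypothetical.
import subprocess
import time

import epics


def forward_to_zabbix(pvname=None, value=None, **kw):
    subprocess.run(
        ["zabbix_sender",
         "-z", "zabbix.example.org",     # Zabbix server
         "-s", "ioc-host-01",            # monitored host as registered in Zabbix
         "-k", f"epics.pv[{pvname}]",    # item key
         "-o", str(value)],
        check=False,
    )


epics.camonitor("ACC:BEAM:CURRENT", callback=forward_to_zabbix)  # hypothetical PV
time.sleep(60)   # keep processing Channel Access updates for a while
```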
Poster WEPHA134 [0.732 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA134  
About • paper received ※ 30 September 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
 
WEPHA139 Scaling Up the Deployment and Operation of an ELK Technology Stack SCADA, controls, operation, framework 1431
 
  • S. Boychenko, P. Martel, B. Schofield
    CERN, Geneva, Switzerland
 
  Since its integration into the CERN industrial controls environment, the SCADA Statistics project has become a valuable asset for controls engineers and hardware experts in their daily monitoring and maintenance tasks. The adoption of the tool outside the scope of the Industrial Controls and Safety Systems group is currently being evaluated by ALICE, since they have similar requirements for alarm and value-change monitoring in their experiment. The increasing interest in scaling up the SCADA Statistics project to new customers has motivated a review of the infrastructure deployment, configuration management and service maintenance policies. In this paper we present the modifications we have integrated in order to improve its configuration flexibility, maintainability and reliability. With this improved solution we believe we can offer the tool to a wider range of customers.
Poster WEPHA139 [0.342 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA139  
About • paper received ※ 27 September 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
 
WEPHA151 A Very Lightweight Process Variable Server controls, FPGA, GUI, software 1449
 
  • A. Sukhanov, J.P. Jamilkowski
    BNL, Upton, New York, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
Modern instruments are often supplied with rich proprietary software tools, which makes it difficult to integrate them into existing control systems. The liteServer is a very lightweight, low-latency, cross-platform network protocol for signal monitoring and control. It provides the very basic functionality of popular channel access protocols like CA or pvAccess of EPICS. It supports request-reply patterns (’info’, ’get’ and ’set’ requests) and a publish-subscribe pattern (the ’monitor’ request). The main goals of the liteServer are: 1) to provide control and monitoring for instruments supplied with proprietary software, 2) to provide the fastest possible Ethernet transactions, and 3) to make it possible to implement the protocol in an FPGA without a CPU core. The transport protocol is connection-less (UDP) and the data serialization format is Universal Binary JSON (UBJSON). UBJSON provides complete compatibility with the JSON specification and is very efficient and fast. A liteServer-based system can be connected to an existing control system using a simple bridge program (bridges for EPICS and RHIC ADO are provided).
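To make the transport concrete, the sketch below sends a UBJSON-encoded request over UDP and decodes the reply with the py-ubjson package; the request layout ('cmd', 'dev', 'par') and the port are invented for illustration and are not the actual liteServer wire format.

```python
# Illustrative UDP + UBJSON request/reply exchange (py-ubjson package).
# The request/reply field names and port are invented, not the liteServer format.
import socket

import ubjson  # pip install py-ubjson

request = {"cmd": "get", "dev": "adc1", "par": "voltage"}   # hypothetical layout

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)
sock.sendto(ubjson.dumpb(request), ("liteserver-host", 9700))

try:
    payload, _addr = sock.recvfrom(65536)
    reply = ubjson.loadb(payload)
    print("reply:", reply)
except socket.timeout:
    print("no reply (UDP is connection-less; the client must handle retries)")
finally:
    sock.close()
```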
 
Poster WEPHA151 [0.383 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA151  
About • paper received ※ 30 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
 
WEPHA174 ADUVC - an EPICS Areadetector Driver for USB Video Class Devices detector, EPICS, controls, experiment 1492
 
  • J. Wlodek
    Stony Brook University, Computer Science Department, Stony Brook, New York, USA
  • K.J. Gofron
    BNL, Upton, New York, USA
 
  Most devices supported by EPICS areaDetector fall under one of two categories: detectors and cameras. Many of the cameras in this group can be classified as industrial cameras, and allow for fine control of exposure time, gain, frame rate, and many other image acquisition parameters. This flexibility can come at a cost however, with most such industrial cameras’ prices starting near one thousand dollars, with the price rising for cameras with more features and better hardware. While these prices are justified for situations that require a large amount of control over the camera, for monitoring tasks and some basic data acquisition the use of consumer devices may be sufficient while being far less cost-prohibitive. The solution we developed was to write an areaDetector driver for USB Video Class (UVC) devices, which allows for a variety of cameras and webcams to be used through EPICS and areaDetector, with most costing under $100.  
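The low-level UVC access that such consumer cameras expose is easy to exercise outside EPICS as well; the OpenCV snippet below grabs a single frame from the first UVC device, purely to illustrate the class of hardware the driver targets (it does not use the ADUVC driver itself).

```python
# Grab one frame from a UVC camera/webcam with OpenCV, to show how accessible
# this class of consumer hardware is (this is not the ADUVC/EPICS driver).
import cv2

cap = cv2.VideoCapture(0)          # first UVC device on the system
if not cap.isOpened():
    raise RuntimeError("no UVC camera found")

ok, frame = cap.read()
cap.release()

if ok:
    print("captured frame with shape", frame.shape)   # e.g. (480, 640, 3)
    cv2.imwrite("snapshot.png", frame)
```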
Poster WEPHA174 [1.658 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA174  
About • paper received ※ 01 October 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
 
WESH2003 Toward Continuous Delivery Of A Nontrivial Distributed Software System software, controls, operation, distributed 1511
 
  • S. Wai
    SARAO, Cape Town, South Africa
 
  Funding: SKA South Africa National Research Foundation of South Africa Department of Science and Technology
The MeerKAT Control and Monitoring (CAM) solution is a mature software system that has undergone multiple phases of construction and expansion. It is a distributed system with a run-time environment of 15 logical nodes, featuring dozens of interdependent, short-lived processes that interact with a number of long-running services. This presents a challenge for the development team to balance operational goals with continued discovery and development of useful enhancements for its users (astronomers, telescope operators). Continuous Delivery is a set of practices designed to always keep software in a releasable state. It employs the discipline of release engineering to optimise the process of taking changes from source control to production. In this paper, we review the current path to production (build, test and release) of CAM, identify shortcomings and introduce approaches to support further incremental development of the system. By implementing patterns such as deployment pipelines and immutable release candidates we hope to simplify the release process and demonstrate increased throughput of changes, quality and stability in the future.
 
Slides WESH2003 [2.933 MB]
Poster WESH2003 [1.448 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WESH2003  
About • paper received ※ 30 September 2019       paper accepted ※ 09 October 2019       issue date ※ 30 August 2020  
 
WESH3003 Waltz - A Platform for Tango Controls Web Applications TANGO, controls, framework, SRF 1519
 
  • I. Khokhriakov, F. Wilde
    HZG, Geesthacht, Germany
  • O. Merkulova
    IK, Moscow, Russia
 
  Funding: Tango Controls Collaboration, contract 2018, PO 712608/WP1&WP2
The idea of creating a Tango web platform was born at the Tango Users Meeting in 2013; later a feature request was defined (v10 roadmap #6) – provide a generic web application for browsing and monitoring Tango devices. The work started in 2017* and the name Waltz was selected by voting at Tango Users Meeting #32. Waltz is the result of the joint efforts of the Tango Community, HZG and IK. This paper gives an overview of Waltz as a platform for Tango web applications and of the overall framework architecture, and presents real-life applications as an end result**. The work shows that with the Waltz platform a web developer can intuitively and quickly create a full web application for his or her needs. Different architectural layers provide maintainability. The platform has a number of abstractions and ready-to-use widgets that can be used by web developers to quickly produce web-based solutions. Among Waltz features are user context saving, device control and monitoring, and plot and drag-and-drop interface solutions. Communication with Tango happens via the Tango REST API using HTTP/2.0 and Server-Sent Events. Waltz can also be used as a system for device monitoring and control from any part of the world.
*Andrew Goetz, et al., TANGO Kernel Development Status, ICALEPCS2017
**Matteo Canzari, et al., A GUI prototype for SKA1 TM Services: compliance with user-centered design approach, Proc. SPIE 10707
 
Poster WESH3003 [3.056 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WESH3003  
About • paper received ※ 19 July 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  
 
THCPL04 SCIBORG: Analyzing and Monitoring LMJ Facility Health and Performance Indicators software, controls, database, laser 1597
 
  • J-P. Airiau, V. Denis, P. Fourtillan, C. Lacombe, S. Vermersch
    CEA, LE BARP cedex, France
 
  The Laser MegaJoule (LMJ) is a 176-beam laser facility located at the CEA CESTA laboratory near Bordeaux (France). It is designed to deliver about 1.4 MJ of energy to targets for high energy density physics experiments, including fusion experiments. Since June 2018 it has operated 5 of the 22 bundles expected in the final configuration. Monitoring the health and performance of such a facility is essential to maintain high operational availability. SCIBORG is the first step towards a larger software system that will collect all the facility parameters in one tool. Nowadays SCIBORG imports experiment setup and results, alignment and PAM* control command parameters. It is designed to perform data analysis (temporal/crossed) and implements monitoring features (dashboard). This paper gives first user feedback and the milestones for the full-spectrum system.
*PreAmplifier Module
 
Slides THCPL04 [4.882 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-THCPL04  
About • paper received ※ 01 October 2019       paper accepted ※ 08 October 2019       issue date ※ 30 August 2020  
 
THCPR06 The ITk Common Monitoring and Interlock System detector, controls, radiation, operation 1634
 
  • S. Kersten, P. Kind, M. Wensing
    Bergische Universität Wuppertal, Wuppertal, Germany
  • C.W. Chen, J.-P. Martin, N.A. Starinski
    GPP, Montreal, Canada
  • S.H. Connell
    University of Johannesburg, Johannesburg, South Africa
  • D. Florez, C. Sandoval
    UAN, Bogotá D.C., Colombia
  • I. Mandić
    JSI, Ljubljana, Slovenia
  • P.W. Phillips
    STFC/RAL, Chilton, Didcot, Oxon, United Kingdom
  • E. Stanecka
    IFJ-PAN, Kraków, Poland
 
  For the upgrade of the LHC to the High Luminosity LHC, the ATLAS detector will install a new all-silicon Inner Tracker (ITk). The innermost part is composed of pixel detectors, the outer part of strip detectors. Altogether ca. 28000 detector modules will be installed in the ITk volume. Although different technologies were chosen for the inner and outer parts, both detectors share a lot of commonalities concerning their requirements. These are operation in a harsh radiation environment, restricted space for services, and high power density, which requires a highly efficient cooling system. While the sub-detectors have chosen different strategies to reduce their powering services, they share the same CO2-based cooling system. The main risks for operation are heat-ups and condensation, therefore a common detector control system is under development. It provides detailed monitoring of the temperature, the radiation and the humidity in the tracker volume. Additionally an interlock system, a hardware-based safety system, is designed to protect the sensitive detector elements against upcoming risks. The components of the ITk common monitoring and interlock system are presented.
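One quantity such a monitoring system must track is how close module temperatures are to the dew point of the surrounding air; the sketch below uses the standard Magnus approximation, with invented thresholds, to show the kind of condensation interlock condition involved (it is not the actual ITk logic).

```python
# Dew-point margin check with the Magnus approximation (invented thresholds;
# not the actual ITk monitoring/interlock logic).
import math

A, B = 17.62, 243.12   # Magnus coefficients for water vapour over liquid water


def dew_point_c(air_temp_c, rel_humidity_pct):
    gamma = (A * air_temp_c) / (B + air_temp_c) + math.log(rel_humidity_pct / 100.0)
    return (B * gamma) / (A - gamma)


def condensation_interlock(module_temp_c, air_temp_c, rel_humidity_pct, margin_c=2.0):
    """Return True if the module is too close to the dew point and should trip."""
    return module_temp_c <= dew_point_c(air_temp_c, rel_humidity_pct) + margin_c


# Example: a cold module in humid air triggers the interlock condition.
print(round(dew_point_c(20.0, 45.0), 1))          # ~7.7 degC
print(condensation_interlock(-5.0, 20.0, 45.0))   # True
```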
Slides THCPR06 [3.847 MB]
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-THCPR06  
About • paper received ※ 30 September 2019       paper accepted ※ 10 October 2019       issue date ※ 30 August 2020  