Keyword: monitoring
Paper Title Other Keywords Page
MOBPL03 The SKA Telescope Control System Guidelines and Architecture ion, TANGO, controls, GUI 34
 
  • L. Pivetta
    SKA Organisation, Macclesfield, United Kingdom
  • A. DeMarco
    ISSA, Msida, Malta
  • S. Riggi
    INAF-OACT, Catania, Italy
  • L. Van den Heever
    SKA South Africa, National Research Foundation of South Africa, Cape Town, South Africa
  • S. Vrcic
    NRC-Herzberg, Penticton, BC, Canada
 
  The Square Kilometre Array (SKA) project is an international collaboration aimed at building the world's largest radio telescope, with eventually over a square kilometre of collecting area, co-hosted by South Africa for the mid-frequency arrays and by Australia for the low-frequency array. Since 2015 the SKA Consortia have joined in a global effort to identify, investigate and select a single control system framework suitable for providing the functionality required to monitor and control the SKA telescope. The TANGO Controls framework has been selected, and comprehensive work has started to provide telescope-wide detailed guidelines, design patterns and architectural views for building Element and Central monitoring and control systems that exploit the capabilities of the TANGO Controls framework.
Talk as video stream: https://youtu.be/S-C9zPdmld0
Slides MOBPL03 [6.980 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-MOBPL03
 
MODPL04 Framework Upgrade of the Detector Control System for JUNO ion, detector, controls, software 107
 
  • Ms. Ye, Z.G. Han
    IHEP, Beijing, People's Republic of China
 
  Funding: Jiangmen Underground Neutrino Observatory (JUNO) Experiment
The Jiangmen Underground Neutrino Observatory (JUNO) is the second phase of the Daya Bay reactor neutrino experiment. The detector was designed as a 20-kiloton liquid scintillator (LS) volume contained in an acrylic sphere with an inner diameter of 34.5 meters. Owing to the detector's enormous size, there are approximately 40k monitoring points, including 20k high-voltage channels for the PMT array, temperature and humidity sensors, electronics crates, and power monitoring points. Since most of the Daya Bay DCS was developed on a LabVIEW-based framework, which is constrained by operating-system upgrades and licensing, the framework must be migrated and upgraded for the JUNO DCS. The paper will introduce the new DCS framework based on EPICS (Experimental Physics and Industrial Control System). The implementation of the IOCs for the high-voltage crates and modules, the stream device drivers, and the embedded temperature firmware will be presented, together with the software and hardware realization and the remote control method. The upgraded framework can be widely reused for devices with the same hardware and software interfaces.
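The stream device drivers mentioned in the abstract are typically configured through StreamDevice protocol files. A sketch of such a file for a high-voltage module is shown below; the command strings are hypothetical illustrations, not the actual crate protocol.

```
# Hypothetical StreamDevice protocol file for a HV module;
# the command strings are illustrative, not the real hardware protocol.
Terminator = CR LF;

get_voltage {
    out ":MEAS:VOLT? CH\$1";
    in "%f";
}

set_voltage {
    out ":SET:VOLT CH\$1 %f";
}
```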
 
Talk as video stream: https://youtu.be/BHsxVf3Su0k
Slides MODPL04 [17.636 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-MODPL04
 
MODPL07 How Low-Cost Devices Can Help on the Way to ALICE Upgrade ion, experiment, controls, electron 114
 
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
  • A. Augustinus, P.M. Bond, P.Ch. Chochula, A.N. Kurepin, M. Lechman, J.L. Lång, O. Pinazza
    CERN, Geneva, Switzerland
  • A.N. Kurepin
    RAS/INR, Moscow, Russia
 
  The ambitious upgrade plan of the ALICE experiment foresees a complete redesign of its data flow after the LHC shutdown scheduled for 2019, for which new electronics modules are being developed in the collaborating institutes. Access to prototypes is at present very limited, and full-scale prototypes are expected only close to the installation date. To overcome the lack of realistic hardware, the ALICE DCS team built small-scale prototypes based on low-cost commercial components (Arduino, Raspberry Pi), equipped with environmental sensors, and installed them in the experiment areas around and inside the ALICE detector. Communication and control software was developed, based on the architecture proposed for the future detectors, including the CERN JCOP framework and ETM WinCC OA. Data provided by the prototypes has been recorded for several months, in the presence of beam and magnetic field. The challenge of the harsh environment revealed some insurmountable weaknesses, thus excluding this class of devices from use in a production setup. They did prove, however, to be robust enough for test purposes, and are still a realistic test-bed for developers while the production of the final electronics continues.
Talk as video stream: https://youtu.be/utSHzqk44hQ
Slides MODPL07 [9.016 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-MODPL07
 
TUBPL06 The Graphical User Interface of the Operator of the Cherenkov Telescope Array ion, GUI, interface, database 186
 
  • I. Sadeh, I. Oya
    DESY Zeuthen, Zeuthen, Germany
  • D. Dezman
    Cosylab, Ljubljana, Slovenia
  • E. Pietriga
    INRIA, Orsay Cedex, France
  • J. Schwarz
    INAF-Osservatorio Astronomico di Brera, Merate, Italy
 
  The Cherenkov Telescope Array (CTA) is the next generation gamma-ray observatory. CTA will incorporate about 100 imaging atmospheric Cherenkov telescopes (IACTs) at a southern site, and about 20 in the north. Previous IACT experiments have used up to five telescopes. Consequently, the design of a graphical user interface (GUI) for the operator of CTA poses an interesting challenge. In order to create an effective interface, the CTA team is collaborating with experts from the field of Human-Computer Interaction. We present here our GUI prototype. The back-end of the prototype is a Python Web server. It is integrated with the observation execution system of CTA, which is based on the ALMA Common Software (ACS). The back-end incorporates a Redis database, which facilitates synchronization of GUI panels. Redis is also used to buffer information collected from various software components and databases. The front-end of the prototype is based on Web technology. Communication between the Web server and clients is performed using Web Sockets, and graphics are generated with the d3.js JavaScript library.
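The panel-synchronization scheme described above, a Redis buffer feeding Web Socket clients, can be illustrated with a minimal in-memory sketch. The channel name, panel names and payload are invented for illustration; a real deployment would use Redis pub/sub and a Web Socket server rather than plain callbacks.

```python
# Minimal in-memory stand-in for the pub/sub fan-out that keeps GUI
# panels synchronized: every panel subscribes to a channel, and a status
# update published once reaches all of them.
class ChannelBroker:
    def __init__(self):
        self.subscribers = {}          # channel name -> list of callbacks

    def subscribe(self, channel, callback):
        self.subscribers.setdefault(channel, []).append(callback)

    def publish(self, channel, message):
        # In the real system this would be a Redis PUBLISH relayed to
        # browsers over a Web Socket; here we call the panels directly.
        for callback in self.subscribers.get(channel, []):
            callback(message)

broker = ChannelBroker()
panel_states = {"overview": None, "telescope-42": None}

broker.subscribe("array-status", lambda m: panel_states.__setitem__("overview", m))
broker.subscribe("array-status", lambda m: panel_states.__setitem__("telescope-42", m))

broker.publish("array-status", {"state": "OBSERVING"})
```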
Talk as video stream: https://youtu.be/8ZvUj-DHSgE
Slides TUBPL06 [54.366 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUBPL06
 
TUBPA02 Monitoring the New ALICE Online-Offline Computing System ion, network, detector, database 195
 
  • A. Wegrzynek, V. Chibante Barroso
    CERN, Geneva, Switzerland
  • G. Vino
    INFN-Bari, Bari, Italy
 
  The ALICE (A Large Ion Collider Experiment) particle detector has been successfully collecting physics data since 2010. Currently, it is preparing for a major upgrade of the computing system, called O2 (Online-Offline). The O2 system will consist of 268 FLPs (First Level Processors) equipped with readout cards and 1500 EPNs (Event Processing Nodes) performing data aggregation, calibration, reconstruction and event building. The system will read out 27 Tb/s of raw data and record tens of PB of reconstructed data per year. To allow efficient operation of the upgraded experiment, a new Monitoring subsystem will provide a complete overview of the O2 computing system status. The O2 Monitoring subsystem will collect up to 600 kHz of metrics. It will consist of a custom monitoring library and a toolset covering four main functional tasks: collection, processing, storage and visualization. This paper describes the Monitoring subsystem architecture and the feature set of the monitoring library. It also shows the results of multiple benchmarks, essential to ensure performance requirements. In addition, it presents the evaluation of pre-selected tools for each of the functional tasks.
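The processing step of such a pipeline, reducing a high-rate metric stream before storage, can be sketched as time-bucketed aggregation. The metric name and values below are illustrative assumptions, not O2 data.

```python
from collections import defaultdict

def aggregate(samples, interval):
    """Downsample (timestamp, metric, value) samples into per-interval means.

    A monitoring chain ingesting metrics at hundreds of kHz typically
    reduces them like this before storage; names and rates are illustrative.
    """
    buckets = defaultdict(list)
    for ts, metric, value in samples:
        buckets[(metric, ts // interval)].append(value)
    return {key: sum(vals) / len(vals) for key, vals in buckets.items()}

samples = [
    (0, "flp.readout_rate", 10.0),
    (3, "flp.readout_rate", 20.0),
    (12, "flp.readout_rate", 30.0),
]
means = aggregate(samples, interval=10)
# bucket 0 averages the first two samples; bucket 1 holds the third
```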
Slides TUBPA02 [11.846 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUBPA02
 
TUCPA02 Leveraging Splunk for Control System Monitoring and Management ion, controls, laser, alignment 253
 
  • M.A. Fedorov, P. Adams, G.K. Brunton, B.T. Fishler, M.S. Flegel, K.C. Wilhelmsen, E.F. Wilson
    LLNL, Livermore, California, USA
 
  Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344
The National Ignition Facility (NIF) is the world's largest and most energetic laser experimental facility, with 192 beams capable of delivering 1.8 megajoules and 500 terawatts of ultraviolet light to a target. To aid in NIF control system troubleshooting, the commercial product Splunk was introduced to collate and view system log files collected from 2,600 processes running on 1,800 servers, front-end processors, and embedded controllers. We have since extended Splunk's access into current and historical control system configuration data, as well as experiment setup and results. Leveraging Splunk's built-in data visualization and analytical features, we have built custom tools to gain insight into the operation of the control system and to increase its reliability and integrity. Use cases include predictive analytics for alerting on pending failures, analysis of the shot-operations critical path to improve operational efficiency, performance monitoring, project management, and system-availability analysis and monitoring. This talk will cover the various ways we've leveraged Splunk to improve and maintain NIF's integrated control system.
LLNL-ABS-728830
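A Splunk search of the kind used for such alerting might look like the sketch below; the index, sourcetype and field names are hypothetical, not NIF's actual configuration.

```
index=nif_controls sourcetype=fep_log log_level=ERROR earliest=-24h
| stats count AS errors BY host
| where errors > 100
```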
 
Slides TUCPA02 [1.762 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUCPA02
 
TUCPA04 Model Learning Algorithms for Anomaly Detection in CERN Control Systems ion, controls, cryogenics, operation 265
 
  • F.M. Tilaro, B. Bradu, M. Gonzalez-Berges, F. Varela
    CERN, Geneva, Switzerland
  • M. Roshchin
    Siemens AG, Corporate Technology, München, Germany
 
  At CERN there are over 600 different industrial control systems with millions of deployed sensors and actuators, and their monitoring represents a challenging and complex task. This paper describes three different mathematical approaches that have been designed and developed to detect anomalies in CERN control systems. Specifically, one of these algorithms is purely based on expert knowledge while the other two mine historical data to create a simple model of the system, which is then used to detect anomalies. The methods presented can be categorized as dynamic unsupervised anomaly detection; "dynamic" since the behaviour of the system is changing in time, "unsupervised" because they predict faults without reference to prior events. Consistent deviations from the historical evolution can be seen as warning signs of a possible future anomaly that system experts or operators need to check. The paper also presents some results, obtained from the analysis of the LHC cryogenic system. Finally, the paper briefly describes the deployment of Spark and Hadoop in the CERN environment to deal with huge datasets and to spread the computational load of the analysis across multiple nodes.
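The idea of unsupervised detection from a signal's own history can be sketched in a few lines. This is a deliberately simple stand-in for the data-mined models in the paper: the "model" is just a rolling mean and standard deviation, so no labelled fault data is required; the trace is invented.

```python
from statistics import mean, stdev

def detect_anomalies(history, window, k=3.0):
    """Flag points deviating more than k sigma from a rolling baseline.

    "Unsupervised" in the sense of the paper: the baseline is learned
    from the signal's own past, with no prior fault labels.
    """
    flagged = []
    for i in range(window, len(history)):
        baseline = history[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(history[i] - mu) > k * sigma:
            flagged.append(i)
    return flagged

# Hypothetical sensor trace: stable around 1.0, then a jump at index 7
signal = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 5.0]
anomalies = detect_anomalies(signal, window=7)
```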
Slides TUCPA04 [1.965 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUCPA04
 
TUPHA011 A New Distributed Control System for the Consolidation of the CERN Tertiary Infrastructures ion, controls, distributed, interface 390
 
  • L. Scibile, C. Martel, P. Villeton Pachot
    CERN, Geneva, Switzerland
 
  The operation of the CERN tertiary infrastructures is carried out via a series of control systems distributed over the CERN sites. The scope comprises 260 buildings, 2 large heating plants with a 27 km heating network and 200 radiator circuits, 500 air handling units, 52 chillers, 300 split systems, 3,000 electric boards and 100k light points. As part of the infrastructure consolidation, CERN is migrating and extending the old control systems, dating back to the 1970s, '80s and '90s, to a new simplified, yet innovative, distributed control system aimed at minimizing the programming and implementation effort, standardizing equipment and methods and reducing lifecycle costs. This new methodology allows rapid development and simplified integration of the newly controlled infrastructure processes. The basic principle is open-standard PLC technology, which makes it easy to interface to a large range of proprietary systems. Local and remote operation and monitoring are carried out seamlessly with Web HMIs that can be accessed via PC, touchpad or mobile device. This paper reports on the progress and future challenges of this new control system.
Poster TUPHA011 [1.662 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA011
 
TUPHA030 Using AI in the Fault Management Predictive Model of the SKA TM Services: A Preliminary Study ion, software, ISOL, operation 435
 
  • M. Canzari, M. Di Carlo, M. Dolci
    INAF - OA Teramo, Teramo, Italy
  • R. Smareglia
    INAF-OAT, Trieste, Italy
 
  SKA (Square Kilometre Array) is a project aimed at building a very large radio telescope, composed of thousands of antennas and related support systems. The overall orchestration is performed by the Telescope Manager (TM), a suite of software applications. In order to ensure the proper and uninterrupted operation of TM, a local monitoring and control system called TM Services is being developed. Fault Management (FM) is one of these services, and comprises the processes and infrastructure associated with detecting, diagnosing and fixing faults, and finally returning to normal operations. The aim of the study is to introduce artificial-intelligence algorithms in the detection phase in order to build a predictive model, based on the history and statistics of the system, that performs trend analysis and failure prediction. Based on monitoring data and health status detected by the software system monitor, and on log files gathered by the ELK (Elasticsearch, Logstash, and Kibana) server, the predictive model ensures that the system is operating within its normal operating parameters and takes corrective actions in case of failure.
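The trend-analysis part of such a predictive model can be sketched as a least-squares extrapolation toward a failure threshold. This is an illustrative sketch only, not the SKA model; the metric (disk usage) and numbers are invented.

```python
def predict_crossing(times, values, limit):
    """Fit a least-squares linear trend to a monitored metric and return
    the estimated time at which it reaches `limit`, or None if the trend
    never gets there. Illustrative sketch of the trend-analysis step."""
    n = len(times)
    t_mean = sum(times) / n
    v_mean = sum(values) / n
    slope = (sum((t - t_mean) * (v - v_mean) for t, v in zip(times, values))
             / sum((t - t_mean) ** 2 for t in times))
    if slope <= 0:
        return None          # metric is flat or improving
    intercept = v_mean - slope * t_mean
    return (limit - intercept) / slope

# Hypothetical disk-usage percentages sampled hourly
eta_hours = predict_crossing([0, 1, 2, 3], [50, 52, 54, 56], limit=90)
# trend is +2 %/h starting from 50 %, so the 90 % limit is reached at t = 20 h
```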
Poster TUPHA030 [2.851 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA030
 
TUPHA034 SCADA Statistics Monitoring Using the Elastic Stack (Elasticsearch, Logstash, Kibana) ion, controls, database, network 451
 
  • J.A.G. Hamilton, M. Gonzalez-Berges, B. Schofield, J-C. Tournier
    CERN, Geneva, Switzerland
 
  The Industrial Controls and Safety systems group at CERN, in collaboration with other groups, has developed and currently maintains around 200 controls applications that include domains such as LHC magnet protection, cryogenics and electrical network supervision systems. Millions of value changes and alarms from many devices are archived to a centralised Oracle database, but it is not easy to obtain high-level statistics from such an archive. A system based on the Elastic Stack has been implemented in order to provide easy access to these statistics. This system provides aggregated statistics based on the number of value changes and alarms, classified according to several criteria such as time, application domain, system and device. The system can be used, for example, to detect abnormal situations and alarm misconfiguration. In addition to these statistics each application generates text-based log files which are parsed, collected and displayed using the Elastic Stack to provide centralised access to all the application logs. Further work will explore the possibilities of combining the statistics and logs to better understand the behaviour of CERN's controls applications.
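The log-parsing step that turns text log files into fields suitable for aggregation (normally done with Logstash grok patterns) can be sketched with a regular expression. The log format and field names below are hypothetical, not the actual CERN log layout.

```python
import re

# Hypothetical log-line format for a controls application; real CERN
# logs and their Logstash grok patterns differ.
LOG_PATTERN = re.compile(
    r"(?P<timestamp>\S+ \S+) (?P<level>\w+) "
    r"(?P<application>[\w.]+) device=(?P<device>\S+) (?P<message>.*)"
)

def parse_log_line(line):
    """Parse one log line into the fields a shipper would send to
    Elasticsearch, enabling per-application / per-device statistics."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

record = parse_log_line(
    "2017-10-12 08:15:02 WARNING cryo.supervisor device=CV123 valve position mismatch"
)
```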
Poster TUPHA034 [5.094 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA034
 
TUPHA036 Applying Service-Oriented Architecture to Archiving Data in Control and Monitoring Systems ion, controls, solenoid, insertion 461
 
  • J.M. Nogiec, K. Trombly-Freytag
    Fermilab, Batavia, Illinois, USA
 
  Funding: Work supported by the U.S. Department of Energy under contract no. DE-AC02-07CH11359
Current trends in the architectures of software systems focus our attention on building systems using a set of loosely coupled components, each providing a specific functionality known as a service. It is not much different in control and monitoring systems, where a functionally distinct sub-system can be identified and independently designed, implemented, deployed and maintained. One functionality that lends itself perfectly to becoming a service is archiving the history of the system state. The design of such a service and our experience of using it are the topic of this article. The service is built with responsibility segregation in mind; therefore, it provides for reducing data processing on the data viewer side and separation of data access and modification operations. The service architecture and the details concerning its data store design are discussed. An implementation of a service client capable of archiving EPICS process variables and LabVIEW shared variables is presented. The use of a gateway service for saving data from GE iFIX is also outlined. Data access tools, including a browser-based data viewer (HTML 5) and a mobile viewer (Android app), are also presented.
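The responsibility segregation described above can be sketched as separate writer and reader objects over one store. This is a minimal in-memory illustration under invented names (channel, store); the actual service uses a real data store and network interfaces.

```python
class ArchiveWriter:
    """Command side: the only way to modify the archive (append-only)."""
    def __init__(self, store):
        self._store = store

    def record(self, channel, timestamp, value):
        self._store.setdefault(channel, []).append((timestamp, value))

class ArchiveReader:
    """Query side: exposes history but no mutating operations,
    mirroring the separation of data access and modification."""
    def __init__(self, store):
        self._store = store

    def history(self, channel, start, end):
        return [(t, v) for t, v in self._store.get(channel, [])
                if start <= t <= end]

store = {}                       # stands in for the service's data store
writer, reader = ArchiveWriter(store), ArchiveReader(store)
writer.record("tank1:pressure", 100, 2.5)
writer.record("tank1:pressure", 200, 2.7)
```

Viewers receive only an `ArchiveReader`, so they cannot alter the archived history.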
 
Poster TUPHA036 [0.952 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA036
 
TUPHA050 The SKA Dish Local Monitoring and Control System ion, TANGO, controls, software 508
 
  • S. Riggi, U. Becciani, A. Costa, A. Ingallinera, F. Schillirò, C. Trigilio
    INAF-OACT, Catania, Italy
  • S. Buttaccio, G. Nicotra
    INAF IRA, Bologna, Italy
  • R. Cirami, A. Marassi
    INAF-OAT, Trieste, Italy
 
  The Square Kilometre Array (SKA) will be the world's largest and most sensitive radio observatory ever built. SKA is currently completing the pre-construction phase before initiating the phase 1 mass construction, in which two arrays of radio antennas, SKA1-Mid and SKA1-Low, will be installed in South Africa's Karoo region and Western Australia's Murchison Shire, each covering a different range of radio frequencies. The SKA1-Mid array comprises 130 15-m diameter dish antennas observing in the 350 MHz-14 GHz range and will be remotely orchestrated by the SKA Telescope Manager (TM) system. To enable onsite and remote operations, each dish will be equipped with a Local Monitoring and Control (LMC) system responsible for directly managing and coordinating antenna instrumentation and subsystems, providing a rolled-up monitoring view and high-level control to TM. This paper gives a status update of the antenna instrumentation and control software design and provides details on the LMC software prototype being developed.
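The rolled-up monitoring view mentioned above can be sketched as a worst-state aggregation. The subsystem names and severity ordering are illustrative assumptions, loosely TANGO-flavoured, not the actual SKA state model.

```python
# Worst-state roll-up: the dish LMC reports one summary state to TM,
# derived from its subsystems. Names and ordering are illustrative.
SEVERITY = {"OK": 0, "DEGRADED": 1, "FAULT": 2}

def rolled_up_state(subsystem_states):
    """Return the worst subsystem state as the dish summary state."""
    return max(subsystem_states.values(), key=SEVERITY.__getitem__)

dish = {"servo": "OK", "feed_package": "DEGRADED", "spf_controller": "OK"}
summary = rolled_up_state(dish)
```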
Poster TUPHA050 [3.507 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA050
 
TUPHA091 A Reliable White Rabbit Network for the FAIR General Timing Machine ion, network, timing, Ethernet 627
 
  • C. Prados, J.N. Bai, A. Hahn
    GSI, Darmstadt, Germany
  • A. Suresh
    Hochschule Darmstadt, University of Applied Science, Darmstadt, Germany
 
  A new timing system based on White Rabbit (WR) is being developed for the upcoming FAIR facility at GSI, in collaboration with CERN and other partners. The General Timing Machine (GTM) is responsible for the synchronization of nodes and the distribution of timing events, which allows real-time control of the accelerator equipment. WR is a time-deterministic, low-latency Ethernet-based network for general data transfer and sub-ns time and frequency distribution. The FAIR WR network is considered operational only if it provides deterministic and resilient data delivery and reliable time distribution. In order to achieve this level of service, methods and techniques to increase the reliability of the GTM and the WR network have been studied and evaluated. In addition, GSI has developed a network monitoring and logging system to measure the performance and detect failures of the WR network. Finally, we describe the continuous integration system at GSI and how it has improved the overall reliability of the GTM.
Poster TUPHA091 [0.630 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA091
 
TUPHA181 Web Extensible Display Manager ion, EPICS, controls, network 852
 
  • R.J. Slominski, T. L. Larrieu
    JLab, Newport News, Virginia, USA
 
  Funding: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177
Jefferson Lab's Web Extensible Display Manager (WEDM) allows staff to access EDM control system screens from a web browser in remote offices and from mobile devices. Native browser technologies are leveraged to avoid installing and managing software on remote clients such as browser plugins, tunnel applications, or an EDM environment. Since standard network ports are used, firewall exceptions are minimized. To avoid security concerns from remote users modifying a control system, WEDM exposes read-only access, and basic web authentication can be used to further restrict access. Updates of monitored EPICS channels are delivered via a Web Socket using a web gateway. The software translates EDM description files (denoted with the edl suffix) to HTML with Scalable Vector Graphics (SVG), following EDM's edl vector drawing rules to create faithful screen renderings. The WEDM server parses edl files and creates the HTML equivalent in real time, allowing existing screens to work without modification. Alternatively, the familiar drag-and-drop EDM screen creation tool can be used to create optimized screens sized specifically for smart phones and then rendered by WEDM.
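The core of such a translation, mapping a parsed drawing primitive to an SVG element, can be sketched as follows. The `widget` dict is a hypothetical simplification of a parsed edl object, not the real edl grammar.

```python
def rect_to_svg(widget):
    """Render a simplified rectangle-widget description as an SVG element.

    Toy version of an edl-to-SVG translation step; the widget fields
    (x, y, w, h, fill) are an invented simplification.
    """
    return ('<rect x="{x}" y="{y}" width="{w}" height="{h}" '
            'fill="{fill}" />').format(**widget)

svg = rect_to_svg({"x": 10, "y": 20, "w": 100, "h": 40, "fill": "#00ff00"})
```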
 
Poster TUPHA181 [1.818 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA181
 
TUPHA207 TM Services: An Architecture for Monitoring and Controlling the Square Kilometre Array (SKA) Telescope Manager (TM) ion, controls, software, TANGO 943
 
  • M. Di Carlo, M. Canzari, M. Dolci
    INAF - OA Teramo, Teramo, Italy
  • D. Barbosa, J.P. Barraca, J.B. Morgado
    GRIT, Aveiro, Portugal
  • R. Smareglia
    INAF-OAT, Trieste, Italy
 
  The SKA project is an international effort (10 member and 10 associated countries, with the involvement of 100 companies and research institutions) to build the world's largest radio telescope. The SKA Telescope Manager (TM) is the core package of the SKA Telescope, aimed at scheduling observations, controlling their execution, monitoring the telescope and so on. To do that, TM directly interfaces with the Local Monitoring and Control systems (LMCs) of the other SKA Elements (e.g. Dishes), exchanging commands and data with them by using the TANGO controls framework. TM in turn needs to be monitored and controlled to ensure its continuous and proper operation. This responsibility, together with others such as collecting and displaying logging data for operators, managing the lifecycle of TM applications, handling TM faults directly where possible (which also includes direct handling of TM status and performance data) and interfacing with the virtualization platform, composes the TM Services (SER) package presented in this paper.
Poster TUPHA207 [6.137 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA207
 
TUSH303 Managing your Timing System as a Standard Ethernet Network ion, network, timing, HOM 1007
 
  • A. Wujek, G. Daniluk, M.M. Lipinski
    CERN, Geneva, Switzerland
  • A. Rubini
    GNUDD, Pavia, Italy
 
  White Rabbit (WR) is an extension of Ethernet which allows deterministic data delivery and remote synchronization of nodes with accuracies below 1 nanosecond and jitter better than 10 ps. Because WR is Ethernet, a WR-based timing system can benefit from all standard network protocols and tools available in the Ethernet ecosystem. This paper describes the configuration, monitoring and diagnostics of a WR network using standard tools. Using the Simple Network Management Protocol (SNMP), clients can easily monitor, e.g., the quality of the data link and of the synchronization with standard monitoring tools like Nagios, Icinga and Grafana. The former involves, e.g., the number of dropped frames; the latter concerns parameters such as the latency of frame distribution and fibre delay compensation. The Link Layer Discovery Protocol (LLDP) allows discovery of the actual topology of a network. Wireshark and PTP Track Hound can intercept and help with the analysis of the content of WR frames in live traffic. In order to benefit from time-proven, scalable, standard monitoring solutions, some development was needed in the WR switch and nodes.
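The check a Nagios- or Icinga-style plugin would apply to such counters can be sketched as a threshold classification. The counter names and thresholds are illustrative; a production check would read the values over SNMP from the switch's MIB.

```python
def link_quality(counters, max_drop_rate=1e-6, max_offset_ps=1000):
    """Classify a WR link from SNMP-style counters.

    Counter names and thresholds are illustrative assumptions: the drop
    rate covers data-delivery quality, the offset covers synchronization.
    """
    drop_rate = counters["dropped_frames"] / max(counters["tx_frames"], 1)
    if drop_rate > max_drop_rate or abs(counters["offset_ps"]) > max_offset_ps:
        return "CRITICAL"
    return "OK"

good = {"tx_frames": 10_000_000, "dropped_frames": 0, "offset_ps": 120}
bad = {"tx_frames": 10_000_000, "dropped_frames": 500, "offset_ps": 120}
```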
Poster TUSH303 [1.608 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUSH303
 
WEAPL02 Automatic PID Performance Monitoring Applied to LHC Cryogenics ion, controls, cryogenics, operation 1017
 
  • B. Bradu, E. Blanco Viñuela, R. Marti, F.M. Tilaro
    CERN, Geneva, Switzerland
 
  At CERN, the LHC (Large Hadron Collider) cryogenic system employs about 4900 PID (Proportional Integral Derivative) regulation loops distributed over the 27 km of the accelerator. Tuning all these regulation loops is a complex task, and their systematic monitoring must be automated so that the poorest-performing PID controllers can be identified and the overall plant performance improved. It is nearly impossible to check the performance of a regulation loop with a classical threshold technique, as the controlled variables can evolve over large operating ranges and the amount of data cannot be checked manually every day. This paper presents the adaptation and application of an existing regulation performance indicator algorithm to the LHC cryogenic system and the different results obtained over the past year of operation. The technique is generic for any PID feedback control loop: it does not use any process model and needs only a few tuning parameters. The publication also describes the data analytics architecture and the different tools deployed on the CERN control infrastructure to implement the performance indicator algorithm.
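A model-free indicator of the kind described, one that stays comparable across loops with very different operating ranges, can be sketched by normalizing the tracking error by the span of the controlled variable. This is a simplified stand-in for the paper's algorithm; loop values are invented.

```python
def loop_performance(setpoints, pvs):
    """Model-free performance score in [0, 1] for one PID loop.

    Normalizes the mean tracking error by the span of the process
    variable, so loops operating over very different ranges remain
    comparable. A simplified stand-in for the indicator in the paper.
    """
    span = max(pvs) - min(pvs) or 1.0
    mean_abs_err = sum(abs(sp - pv) for sp, pv in zip(setpoints, pvs)) / len(pvs)
    return max(0.0, 1.0 - mean_abs_err / span)

# Hypothetical traces: a well-tuned loop tracks 4.5 closely,
# a poorly tuned one oscillates around it.
well_tuned = loop_performance([4.5] * 5, [4.5, 4.52, 4.49, 4.5, 4.51])
poorly_tuned = loop_performance([4.5] * 5, [4.0, 5.0, 3.9, 5.1, 4.0])
```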
Talk as video stream: https://youtu.be/7dCglp2Pn_c
Slides WEAPL02 [1.651 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-WEAPL02
 
THBPL01 C2MON SCADA Deployment on CERN Cloud Infrastructure ion, software, database, network 1103
 
  • B. Copy, M. Bräger, F. Ehm, A. Lossent, E. Mandilara
    CERN, Geneva, Switzerland
 
  The CERN Control and Monitoring Platform (C2MON) is an open-source platform for industrial controls data acquisition, monitoring, control and data publishing. C2MON's high-availability, redundant capabilities make it particularly suited for a large, geographically scattered context such as CERN. The C2MON platform relies on the Java technology stack at all levels of its architecture. Since end of 2016, CERN offers a platform as a service (PaaS) solution based on RedHat Openshift. Initially envisioned at CERN for web application hosting, Openshift can be leveraged to host any software stack due to its adoption of the Docker container technology. In order to make C2MON more scalable and compatible with Cloud Computing, it was necessary to containerize C2MON components for the Docker container platform. Containerization is a logical process that forces one to rethink a distributed architecture in terms of decoupled micro-services suitable for a cloud environment. This paper explains the challenges met and the principles behind containerizing a server-centric Java application, demonstrating how simple it has now become to deploy C2MON in any cloud-centric environment.
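Containerizing a server-centric Java application of this kind typically reduces to a short Dockerfile. The image name, jar path and port below are illustrative assumptions, not the official C2MON recipe.

```
# Hypothetical sketch of containerizing a C2MON-style Java server;
# base image, paths and port are illustrative.
FROM eclipse-temurin:11-jre
COPY target/c2mon-server.jar /opt/c2mon/server.jar
EXPOSE 9001
ENTRYPOINT ["java", "-jar", "/opt/c2mon/server.jar"]
```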

 
Talk as video stream: https://youtu.be/4NbM1yDO_TM
Slides THBPL01 [3.176 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THBPL01
 
THBPL04 The Design of Tango Based Centralized Management Platform for Software Devices ion, controls, software, TANGO 1121
 
  • Z. Ni, J. Liu, J. Luo, X. Zhou
    CAEP, Sichuan, People's Republic of China
 
  Tango provides the Tango device server object model (TDSOM), whose basic idea is to treat each device as an object. The TDSOM can be divided into 4 basic elements: the device, the server, the database and the application programmer's interface. On the basis of the TDSOM, we design a centralized platform for software device management, named VisualDM, providing standard servers and client management software. The functionality of VisualDM is thus multi-fold: 1) dynamically defining or configuring the composition of a device container at run-time; 2) visualization of remote device management based on a system scheduling model; 3) remote deployment and update of software devices; 4) registering, deregistering, starting and stopping devices. In this paper, the platform composition, module functionalities and design concepts are discussed. The platform is applied in computer-integrated control systems of SG facilities.
Talk as video stream: https://youtu.be/5RveBXleczw
Slides THBPL04 [1.509 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THBPL04
 
THCPL02 Highlights of the European Ground System - Common Core Initiative ion, controls, operation, interface 1175
 
  • M. Pecchioli
    ESA/ESOC, Darmstadt, Germany
  • J.M. Carranza
    ESA-ESTEC, Noordwijk, The Netherlands
 
  Funding: European Space Agency
The European Ground System Common Core (EGS-CC) initiative is now materializing. The goal of this initiative is to define, build and share a software framework and implementation that will be used as the main basis for pre- and post-launch ground systems (Electrical Ground Support Equipment and Mission Control System) of future European space projects. The initiative has been in place since 2011 and is led by the European Space Agency as a formal collaboration of the main European stakeholders in the space systems control domain, including European national space agencies and European prime industry. The main expected output of the EGS-CC initiative is a core system which can be adapted and extended to support the execution of pre- and post-launch monitoring and control operations for all types of missions and throughout the complete life-cycle of space projects. This presentation will introduce the main highlights of the EGS-CC initiative, its governance principles, the fundamental concepts of the resulting products and the challenges that the team is facing.
 
video icon Talk as video stream: https://youtu.be/xguMZe2WuKE  
slides icon Slides THCPL02 [7.580 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THCPL02  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THCPA06 A Real-Time Beam Monitoring System for Highly Dynamic Irradiations in Scanned Proton Therapy ion, proton, radiation, real-time 1224
 
  • G. Klimpki, C. Bula, M. Eichin, A.L. Lomax, D. Meer, S. Psoroulas, U. Rechsteiner, D.C. Weber
    PSI, Villigen PSI, Switzerland
  • D.C. Weber
    University of Zurich, University Hospital, Zurich, Switzerland
 
  Funding: This work is supported by the Giuliana and Giorgio Stefanini Foundation.
Patient treatments in scanned proton therapy exhibit dead times, e.g. when adjusting beamline settings for a different energy or lateral position. On the one hand, such dead times prolong the overall treatment time, but on the other hand they grant possibilities to (retrospectively) validate that the correct amount of protons has been delivered to the correct position. Efforts in faster beam delivery aim to minimize such dead times, which calls for different means of monitoring irradiation parameters. To address this issue, we report on a real-time beam monitoring system that supervises the proton beam position and current during beam-on, i.e. while the patient is under irradiation. For this purpose, we sample 1-axis Hall probes placed in beam-scanning magnets and plane-parallel ionization chambers every 10 μs. FPGAs compare sampled signals against verification tables - time vs. position/current charts containing upper and lower tolerances for each signal - and issue interlocks whenever samples fall outside. Furthermore, we show that by implementing real-time beam monitoring in our facility, we are able to respect patient safety margins given by international norms and guidelines.
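The verification-table comparison described above can be sketched in software. The function name, table layout and tolerance values below are assumptions for illustration, not the PSI FPGA implementation.

```python
# Sketch of the FPGA tolerance check: each verification-table entry gives
# lower/upper bounds for a signal at a given 10-us sample tick; any sample
# outside its band would raise an interlock and abort the delivery.

def check_samples(samples, table):
    """samples: measured values, one per tick.
    table: (lower, upper) tolerance pairs, same length.
    Returns the index of the first out-of-tolerance sample, or None."""
    for i, (value, (lo, hi)) in enumerate(zip(samples, table)):
        if not (lo <= value <= hi):
            return i          # interlock condition detected here
    return None

# Example: nominal beam positions (mm) with a hypothetical +/-0.5 mm band
table = [(x - 0.5, x + 0.5) for x in (0.0, 1.0, 2.0, 3.0)]
ok    = check_samples([0.1, 0.9, 2.2, 3.0], table)   # all within band
fault = check_samples([0.1, 0.9, 2.8, 3.0], table)   # 3rd sample too high
```

In hardware this comparison runs concurrently for each monitored signal at the full sampling rate; the software loop above only conveys the per-sample bounds logic.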
 
slides icon Slides THCPA06 [1.841 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THCPA06  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THMPL04 Telescope Control System of the ASTRI SST-2M prototype for the Cherenkov Telescope Array ion, controls, software, site 1266
 
  • E. Antolini, G. Tosti
    Università degli Studi di Perugia, Perugia, Italy
  • L.A. Antonelli, S. Gallozzi, S. Lombardi, F. Lucarelli, M. Mastropietro, V. Testa
    INAF O.A. Roma, Roma, Italy
  • P. Bruno, G. Leto, S. Scuderi
    INAF-OACT, Catania, Italy
  • A. Busatta, C. Manfrin, G. Marchiori, E. Marcuzzi
    EIE Group s.r.l., Venezia, Italy
  • R. Canestrari, G. Pareschi, J. Schwarz, S. Scuderi, G. Sironi, G. Tosti
    INAF-Osservatorio Astronomico di Brera, Merate, Italy
  • E. Cascone
    INAF - Osservatorio Astronomico di Capodimonte, Napoli, Italy
  • V. Conforti, F. Gianotti, M. Trifoglio
    INAF, Bologna, Italy
  • D. Di Michele, C. Grigolon, P. Guarise
    Beckhoff Automation Srl, Limbiate, Italy
  • E. Giro
    INAF- Osservatorio Astronomico di Padova, Padova, Italy
  • N. La Palombara
    INAF - Istituto di Astrofisica Spaziale e Fisica Cosmica di Milano, Milano, Italy
  • F. Russo
    INAF O.A. Torino, Pino Torinese, Italy
 
  The ASTRI SST-2M telescope is a prototype proposed for the Small Size class of Telescopes of the Cherenkov Telescope Array (CTA). The ASTRI prototype adopts innovative solutions for the optical system, which pose stringent requirements on the design and development of the Telescope Control System (TCS), whose task is the coordination of the telescope devices. All the subsystems are managed independently by the related controllers, which are developed with PC-based technology using the TwinCAT3 software PLC environment. The TCS is built upon the ALMA Common Software framework and uses the OPC-UA protocol for the interface with the telescope components, providing simplified, full access to the capabilities offered by the telescope subsystems for normal operation, testing, maintenance and calibration activities. In this contribution we highlight how the ASTRI approach to the design, development and implementation of the TCS has made the prototype a stand-alone intelligent and active machine, also providing an easy path to integration in an array configuration such as the future ASTRI mini-array proposed to be installed at the southern site of the CTA.  
slides icon Slides THMPL04 [1.212 MB]  
poster icon Poster THMPL04 [1.773 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THMPL04  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THMPL09 VME Based Digitizers for Waveform Monitoring System of Linear Induction Accelerator (LIA-20) ion, timing, hardware, FPGA 1291
 
  • E.S. Kotov, A.M. Batrakov, G.A. Fatkin, A.V. Pavlenko, K.S. Shtro, M.Yu. Vasilyev
    BINP SB RAS, Novosibirsk, Russia
  • G.A. Fatkin, E.S. Kotov, A.V. Pavlenko, M.Yu. Vasilyev
    NSU, Novosibirsk, Russia
 
  The waveform monitoring system plays a special role in the control system of high-power pulsed installations, providing the most complete information about the installation's functioning and its parameters. The report describes the family of VME modules used in the waveform monitoring system of the LIA-20 linear induction accelerator. To organize inter-module synchronization, the waveform digitizers use the VME-64 bus extension implemented in the VME64-BINP crates.  
slides icon Slides THMPL09 [1.653 MB]  
poster icon Poster THMPL09 [1.777 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THMPL09  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPHA032 EPICS and Open Source Data Analytics Platforms ion, EPICS, database, controls 1420
 
  • C.R. Haskins
    CASS, Epping, Australia
 
  SKA-scale distributed control and monitoring systems present challenges in hardware sensor monitoring, archiving, hardware fault detection and fault prediction. The size and scale of the hardware involved and the telescope's high-availability requirements suggest that machine learning and other automated methods will be required for fault finding and fault prediction of hardware components. Modern tools are needed that leverage open-source time-series databases and data analytics platforms. We describe DiaMoniCA for the Australian SKA Pathfinder radio telescope, which integrates EPICS and our own monitoring archiver, MoniCA, with an open-source time-series database and a web-based data visualisation and analytics platform.  
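The archiving pattern described above can be reduced to its core: monitoring points are appended as timestamped samples per point name, and analytics or visualisation layers query time windows. The class and point names below are a hypothetical stand-in for an open-source time-series database, not the MoniCA/ASKAP code.

```python
# Sketch of time-series ingestion and windowed query, the two operations
# an archiver and a web analytics front end need from the database.
from bisect import bisect_left, bisect_right
from collections import defaultdict

class TimeSeriesStore:
    def __init__(self):
        self._series = defaultdict(list)   # point name -> [(t, value), ...]

    def ingest(self, name, t, value):
        # Assumes samples arrive in time order, as from a live archiver.
        self._series[name].append((t, value))

    def window(self, name, t_start, t_end):
        """Return all samples with t_start <= t <= t_end."""
        samples = self._series[name]
        times = [t for t, _ in samples]
        lo = bisect_left(times, t_start)
        hi = bisect_right(times, t_end)
        return samples[lo:hi]

store = TimeSeriesStore()
for t, v in [(0, 41.0), (10, 41.5), (20, 44.0), (30, 43.2)]:
    store.ingest("ant03.pedestal.temp", t, v)   # hypothetical point name
win = store.window("ant03.pedestal.temp", 5, 25)
```

Fault detection and prediction would then run as queries over such windows, e.g. flagging points whose recent samples drift outside historical bounds.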
poster icon Poster THPHA032 [7.517 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA032  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPHA066 MeerKAT Project Status Report ion, controls, software, interface 1531
 
  • L.R. Brederode, L. Van den Heever
    SKA South Africa, National Research Foundation of South Africa, Cape Town, South Africa
 
  The MeerKAT radio telescope is currently in full production in South Africa's Karoo region and will be the largest and most sensitive radio telescope array in the centimeter wavelength regime in the southern skies until the SKA1 MID telescope is operational. This paper identifies the key telescope specifications, discusses the high-level architecture and current progress to meet the specifications. The MeerKAT Control and Monitoring subsystem is an integral component of the MeerKAT telescope that orchestrates all other subsystems and facilitates telescope level integration and verification. This paper elaborates on the development plan, processes and roll-out status of this vital component.  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA066  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPHA088 A Time Stamping TDC for SPEC and ZEN Platforms Based on White Rabbit ion, timing, FPGA, experiment 1587
 
  • M. Brückner
    PSI, Villigen PSI, Switzerland
  • R. Wischnewski
    DESY Zeuthen, Zeuthen, Germany
 
  Sub-ns precision time synchronization is required for data-acquisition components distributed over areas of up to tens of km² in modern astroparticle experiments, like upcoming gamma-ray and cosmic-ray detector arrays, to ensure optimal triggering, pattern recognition and background rejection. The White Rabbit (WR) standard for precision time and frequency transfer is well suited for this purpose. We present two multi-channel general-purpose TDC units, which are firmware-implemented on two widely used WR nodes: the SPEC (Spartan 6) and ZEN (Zynq) boards. Their main features: 1 ns resolution (default), deadtime-free operation, and support for local buffering and centralized level-2 trigger architectures. The TDCs time-stamp pulses in absolute TAI. With off-the-shelf mezzanine boards (5ChDIO-FMC), up to 5 TDC channels are available per WR node. Higher-density, customized simple I/O boards turn these into 8- to 32-channel units, with an excellent price-to-performance ratio. The TDC units have shown excellent long-term performance in a harsh-environment application at TAIGA-HiSCORE in Siberia, for the front-end DAQ and the central GPSDO clock facility.  
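Absolute TAI timestamping on a WR node amounts to combining the TAI seconds distributed over the WR link with a local sub-second counter at the TDC resolution. The sketch below shows that composition; the function name and counter layout are assumptions, not the SPEC/ZEN firmware registers.

```python
# Sketch of composing an absolute TAI timestamp on a White Rabbit node:
# WR distributes TAI seconds, and a local counter at 1 ns resolution
# (the default quoted above) supplies the sub-second fraction.

NS_PER_S = 1_000_000_000

def tai_timestamp(tai_seconds, ns_counter):
    """Return the event time in nanoseconds since the TAI epoch."""
    if not 0 <= ns_counter < NS_PER_S:
        raise ValueError("nanosecond counter out of range")
    return tai_seconds * NS_PER_S + ns_counter

# Two pulses one second plus 250 ns apart, e.g. on different array nodes;
# because both stamps are absolute TAI, their difference is meaningful
# across the whole km²-scale array.
t1 = tai_timestamp(1_500_000_000, 100)
t2 = tai_timestamp(1_500_000_001, 350)
delta_ns = t2 - t1
```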
poster icon Poster THPHA088 [2.880 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA088  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPHA107 Safety Control of the Spiral2 Radioactive Gas Storage System ion, controls, PLC, vacuum 1629
 
  • Q. Tura, C. Berthe, O. Danna, M. Faye, A. Savalle, J. Suadeau
    GANIL, Caen, France
 
  Phase 1 of the SPIRAL2 facility, an extension project of the GANIL laboratory, is under construction and commissioning has started. During the run phases, radioactive gas, mainly composed of hydrogen, will be extracted from the vacuum chambers. The function of the radioactive gas storage system is to prevent any uncontrolled release of activated gas by storing it in gas tanks during the radioactive decay, while keeping the hydrogen rate in the tanks under a threshold. This confinement of radioactive materials is a safety function. The filling and the discharge of the tanks are processed with monostable valves, making the storage a passive safety system. Two separate redundant control subsystems, based on electrical hardware technologies, allow the opening of the redundant safety valves, according to redundant pressure sensors, redundant hydrogen rate analyzers and limit switches of the valves. The redundant design of the control system meets the single-failure criterion. The monitoring of the consistency of the two redundant safety subsystems, and the non-safety control functions of the storage process, are managed by a Programmable Logic Controller.  
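The redundancy logic described above can be sketched as two independent open-permission decisions combined with 2-out-of-2 voting plus a discrepancy flag for the supervisory PLC. The threshold values and function names below are illustrative assumptions, not the actual SPIRAL2 settings or hardware logic.

```python
# Sketch of the redundant-channel idea: each subsystem independently
# decides whether the safety valves may open; the valves open only if
# both agree (so no single channel failure can cause a release), and a
# disagreement between channels is flagged for the supervisory PLC.

H2_LIMIT = 0.02      # hypothetical hydrogen fraction threshold
P_LIMIT  = 1.5       # hypothetical tank pressure limit (bar)

def channel_permits(h2_fraction, pressure_bar):
    """One redundant subsystem's open-permission decision."""
    return h2_fraction < H2_LIMIT and pressure_bar < P_LIMIT

def valve_command(chan_a, chan_b):
    """Combine the two channels: (open_valves, discrepancy_alarm)."""
    open_valves = chan_a and chan_b          # 2-out-of-2 voting
    discrepancy = chan_a != chan_b           # consistency monitoring
    return open_valves, discrepancy

a = channel_permits(0.01, 1.2)   # channel A sees safe conditions
b = channel_permits(0.01, 1.2)   # channel B agrees
cmd, alarm = valve_command(a, b)
```

Note that in the real system this logic is implemented in hardwired electrical technology precisely so that the safety function does not depend on software; the sketch only conveys the voting structure.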
poster icon Poster THPHA107 [0.530 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA107  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPHA137 Distributing Near Real Time Monitoring and Scheduling Data for Integration With Other Systems at Scale ion, controls, GUI, interface 1703
 
  • F. Joubert, M.J. Slabber
    SKA South Africa, National Research Foundation of South Africa, Cape Town, South Africa
 
  Funding: National Research Foundation (South Africa)
The MeerKAT radio telescope control system generates monitoring and scheduling data that internal and external systems require to operate. Distributing this data in near real-time requires a scalable messaging strategy to ensure optimal performance regardless of the number of systems connected. Internal systems include the MeerKAT Graphical User Interfaces, the MeerKAT Science Data Processing subsystem and the MeerKAT Correlator Beamformer subsystem. External systems include Pulsar Timing User Supplied Equipment, MeerLICHT and the Search for Extraterrestrial Intelligence (SETI). Many more external systems are expected to join MeerKAT in the future. This paper describes the strategy adopted by the Control and Monitoring team to distribute near real-time monitoring and scheduling data at scale. This strategy is implemented using standard web technologies and the publish/subscribe design pattern.
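The publish/subscribe pattern named above decouples the producer of monitoring data from however many consumers are connected. The in-process broker below is a generic illustration of the pattern; in MeerKAT the transport is standard web technology, and the class and topic names here are assumptions.

```python
# In-process sketch of publish/subscribe: consumers register interest in
# topics, and each published update fans out only to that topic's
# subscribers, so the producer never needs to know who is listening.
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        # Cost per publish scales with this topic's subscribers,
        # not with the total number of connected systems.
        for callback in self._subs[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("sensors/m021/wind_speed", received.append)  # hypothetical topic
broker.subscribe("sched/observation", received.append)
broker.publish("sensors/m021/wind_speed", 9.3)
```

A new external system joining the telescope then only adds subscriptions; nothing changes on the publishing side, which is what makes the strategy scale.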
 
poster icon Poster THPHA137 [6.692 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA137  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPHA142 The SKA Dish SPF and LMC Interaction Design: Interfaces, Simulation, Testing and Integration ion, controls, interface, TANGO 1712
 
  • A. Marassi
    INAF-OAT, Trieste, Italy
  • J. Kotze, T.J. Steyn, C. van Niekerk
    EMSS Antennas, Stellenbosch, South Africa
  • S. Riggi, F. Schillirò
    INAF-OACT, Catania, Italy
  • G. Smit
    SKA South Africa, National Research Foundation of South Africa, Cape Town, South Africa
 
  The Square Kilometre Array (SKA) project is responsible for developing the SKA Observatory, the world's largest radio telescope: eventually two arrays of radio antennas - SKA1-Mid and SKA1-Low - will be installed in South Africa's Karoo region and Western Australia's Murchison Shire respectively, each covering a different range of radio frequencies. In particular, the SKA1-Mid array will comprise 133 15m diameter dish antennas observing in the 350 MHz-14 GHz range, each locally managed by a Local Monitoring and Control (LMC) system and remotely orchestrated by the SKA Telescope Manager (TM) system. All control system functionality runs on the Tango Controls platform. The Dish Single Pixel Feed (SPF) work element will design the combination of feed elements, orthomode transducers (OMTs), and low noise amplifiers (LNAs) that receive the astronomical radio signals. Some SPFs have cryogenically cooled chambers to meet the sensitivity requirements. This paper gives a status update of the SKA Dish SPF and LMC interaction design, focusing on the SPF and LMC simulators and engineering/operational user interfaces, the prototypes being developed and the technological choices.  
poster icon Poster THPHA142 [0.321 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA142  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPHA162 Monitoring of CERN's Data Interchange Protocol (DIP) System ion, controls, interface, real-time 1797
 
  • B. Copy, E. Mandilara, I. Prieto Barreiro, F. Varela
    CERN, Geneva, Switzerland
 
  CERN's Data Interchange Protocol (DIP)* is a publish-subscribe middleware infrastructure developed at CERN to allow lightweight communications between distinct industrial control systems (such as detector control systems or gas control systems). DIP is a rudimentary data exchange protocol with a very flat and short learning curve and a stable specification. It lacks, however, support for access control, smoothing and data archiving. This paper presents a mechanism which has been implemented to keep track of every single publisher or subscriber node active in the DIP infrastructure, along with the DIP name servers supporting it. Since DIP supports more than 55,000 publications, grouping hundreds of industrial control processes, keeping track of system activity requires advanced visualization mechanisms (e.g. connectivity maps, live historical charts), and a scalable web-based interface** to render this information is essential.
* W. Salter et al., "DIP Description" LDIWG (2004) https://edms.cern.ch/file/457113/2/DIPDescription.doc
** B. Copy et al., "MOPPC145" - ICALEPCS 2013, San Francisco, USA
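Keeping track of every active publisher and subscriber node reduces, at its core, to recording when each node was last seen and reporting nodes that have gone quiet. The sketch below illustrates that idea; the class name, node names and timeout value are hypothetical, not the CERN implementation.

```python
# Sketch of node-activity tracking for a publish-subscribe
# infrastructure: each node's last-seen time is recorded, and nodes not
# seen within a liveness timeout are reported as stale, feeding the
# connectivity maps and charts mentioned above.

STALE_AFTER_S = 30.0     # hypothetical liveness timeout

class NodeTracker:
    def __init__(self):
        self._last_seen = {}

    def heartbeat(self, node, now):
        self._last_seen[node] = now

    def stale_nodes(self, now):
        """Nodes whose last activity is older than the timeout."""
        return sorted(n for n, t in self._last_seen.items()
                      if now - t > STALE_AFTER_S)

tracker = NodeTracker()
tracker.heartbeat("dip/publisher/gas-ctrl", 100.0)    # hypothetical node
tracker.heartbeat("dip/subscriber/dcs-atlas", 120.0)  # hypothetical node
stale = tracker.stale_nodes(now=140.0)   # gas-ctrl last seen 40 s ago
```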
 
poster icon Poster THPHA162 [3.066 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA162  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)  
 
THPHA188 The SKA Dish Local Monitoring and Control System User Interface ion, controls, interface, GUI 1880
 
  • A. Marassi
    INAF-OAT, Trieste, Italy
  • M. Brambilla
    PoliMi, Milano, Italy
  • A. Ingallinera, S. Riggi, C. Trigilio
    INAF-OACT, Catania, Italy
  • G. Nicotra
    INAF IRA, Bologna, Italy
  • G. Smit
    SKA South Africa, National Research Foundation of South Africa, Cape Town, South Africa
 
  The Square Kilometre Array (SKA) project is responsible for developing the SKA Observatory, the world's largest radio telescope: eventually two arrays of radio antennas - SKA1-Mid and SKA1-Low - will be installed in South Africa's Karoo region and Western Australia's Murchison Shire, each covering a different range of radio frequencies. In particular, the SKA1-Mid array will comprise 133 15m diameter dish antennas observing in the 350 MHz-14 GHz range, each locally managed by a Local Monitoring and Control (LMC) system and remotely orchestrated by the SKA Telescope Manager (TM) system. Dish LMC will provide a Graphical User Interface (GUI) to be used for Dish monitoring and control in standalone mode for testing, TM simulation, integration, commissioning and maintenance. This paper gives a status update of the LMC GUI design, involving user and task analysis, system prototyping and interface evaluation, and provides details on the GUI prototypes being developed and the technological choices.  
poster icon Poster THPHA188 [0.712 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA188  
Export • reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml)