Paper | Title | Other Keywords | Page |
---|---|---|---|
MOAPL04 | SwissFEL Control System - Overview, Status, and Lessons Learned | ion, FEL, controls, electron | 19 |
The SwissFEL is a new free electron laser facility at the Paul Scherrer Institute (PSI) in Switzerland. Commissioning started in 2016 and resulted in first lasing in December 2016 (albeit not at the design energy). Commissioning continued in 2017 and will culminate in the first pilot experiments at the end of the year. The close interaction of experiment and accelerator components, as well as the pulsed electron beam, required a well-thought-out integration of the control system, including some new concepts and layouts. This paper presents the current status of the control system together with some lessons learned.
Talk as video stream: https://youtu.be/oaGDyYYzKJ4
Slides MOAPL04 [2.258 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-MOAPL04
MOCPL04 | LTE/3G Based Wireless Communications for Remote Control and Monitoring of PLC-Controlled Vacuum Mobile Devices | ion, PLC, controls, SCADA | 64 |
All particle accelerators and most experiments at CERN require high (HV) or ultra-high (UHV) vacuum levels. Contributing to vacuum production are two types of mobile devices: Turbo-Molecular Pumping Groups and Bakeout Racks. During accelerator stops, these PLC-controlled devices are temporarily installed in the tunnels and integrated in the Vacuum SCADA through wired Profibus-DP. This method, though functional, poses certain issues which a wireless solution would greatly mitigate. The CERN private LTE/3G network is available in the accelerators through a leaky-feeder antenna cable which spans the whole length of the tunnels. This paper describes the conception and implementation of an LTE/3G-based modular communication system for PLC-controlled vacuum mobile devices. It details the hardware and software architecture of the system and lays the foundation of a solution that can be easily adapted to systems other than vacuum.
Talk as video stream: https://youtu.be/1u6WmPACSs8
Slides MOCPL04 [4.354 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-MOCPL04
MOCPL07 | The Integrated Alarm System of the Alma Observatory | ion, software, controls, database | 81 |
ALMA is composed of many hardware and software systems, each of which must function properly to ensure maximum efficiency. Operators in the control room follow the operational state of the observatory by looking at a set of non-homogeneous panels. In case of problems, they have to find the cause by looking at the right panel, interpret the information, and implement the counter-action, which is time consuming. After an investigation, we therefore started the development of an integrated alarm system that takes monitor-point values and alarms from the monitored systems and presents alarms to operators in a coherent, efficient way. A monitored system has a hierarchical structure, modeled as an acyclic graph whose nodes represent the components of the system. Each node digests monitor-point values and alarms through a provided transfer function and sets its output to working or non-nominal, taking into account the operational phase. The model can be mapped to a set of panels to increase operators' situation awareness and improve the efficiency of the facility.
Talk as video stream: https://youtu.be/HC-eOY97EME
Slides MOCPL07 [2.428 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-MOCPL07
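The hierarchical model the abstract describes — an acyclic graph of nodes, each digesting monitor-point values through a transfer function into a working/non-nominal output — can be sketched as follows (a toy illustration of the idea, not ALMA code; the node names and the wind-speed check are invented):

```python
# Toy sketch of a hierarchical alarm model: each node digests its own
# monitor points via a transfer function and is non-nominal if its own
# check fails or any child node is non-nominal.
from dataclasses import dataclass, field
from typing import Callable, List

WORKING, NON_NOMINAL = "WORKING", "NON-NOMINAL"

@dataclass
class Node:
    name: str
    # Transfer function: maps this node's monitor-point values to a state.
    transfer: Callable[[dict], str] = lambda mp: WORKING
    children: List["Node"] = field(default_factory=list)

    def state(self, monitor_points: dict) -> str:
        if self.transfer(monitor_points.get(self.name, {})) == NON_NOMINAL:
            return NON_NOMINAL
        if any(c.state(monitor_points) == NON_NOMINAL for c in self.children):
            return NON_NOMINAL
        return WORKING

# Hypothetical leaf: a weather node trips above a wind-speed threshold.
wind = Node("wind", transfer=lambda mp: NON_NOMINAL if mp.get("speed_ms", 0) > 20 else WORKING)
observatory = Node("observatory", children=[wind])

print(observatory.state({"wind": {"speed_ms": 25}}))  # NON-NOMINAL
print(observatory.state({"wind": {"speed_ms": 5}}))   # WORKING
```

Mapping such a graph onto panels then amounts to rendering each node's `state` output.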
TUBPA02 | Monitoring the New ALICE Online-Offline Computing System | ion, monitoring, detector, database | 195 |
The ALICE (A Large Ion Collider Experiment) particle detector has been successfully collecting physics data since 2010. Currently, it is being prepared for a major upgrade of the computing system, called O2 (Online-Offline). The O2 system will consist of 268 FLPs (First Level Processors) equipped with readout cards and 1500 EPNs (Event Processing Nodes) performing data aggregation, calibration, reconstruction and event building. The system will read out 27 Tb/s of raw data and record tens of PB of reconstructed data per year. To allow efficient operation of the upgraded experiment, a new Monitoring subsystem will provide a complete overview of the O2 computing system status. The O2 Monitoring subsystem will collect up to 600 kHz of metrics. It will consist of a custom monitoring library and a toolset covering four main functional tasks: collection, processing, storage and visualization. This paper describes the Monitoring subsystem architecture and the feature set of the monitoring library. It also shows the results of multiple benchmarks, essential to ensure the performance requirements are met. In addition, it presents the evaluation of pre-selected tools for each of the functional tasks.
Slides TUBPA02 [11.846 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUBPA02
TUCPL01 | Refurbishment of the ESRF Accelerator Synchronization System Using White Rabbit | ion, SRF, timing, booster | 224 |
The ESRF timing system, dating from the early 90s and still in operation, is built around a centralized RF-driven sequencer distributing synchronization signals along copper cables; the RF clock is broadcast over a separate copper network. White Rabbit offers many attractive features for the refurbishment of a synchrotron timing system, the key one being the possibility to carry RF over the White Rabbit optical fiber network. With CERN having improved this feature to provide network-wide phase as well as frequency control over the distributed RF, the technology is now mature enough to propose a White Rabbit based solution for the replacement of the ESRF system, providing flexibility and accurate time stamping of events. We describe here the main features and first performance results of the WHIST module, an ESRF development based on the White Rabbit standalone SPEC board embedding the White Rabbit protocol and a custom mezzanine (DDSIO) extending the FMC-DDS hardware to provide up to 12 programmable output signals. All WHIST modules in the network run in-phase duplicates of a common RF-driven sequencer; a master module broadcasts the RF and the injection trigger.
Talk as video stream: https://youtu.be/Ege_6IGHNPU
Slides TUCPL01 [1.595 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUCPL01
TUCPL06 | Verification of the FAIR Control System Using Deterministic Network Calculus | ion, controls, operation, timing | 238 |
Funding: Carl Zeiss Foundation
The FAIR control system (CS) is an alarm-based design and employs White Rabbit time synchronization over a GbE network to issue commands executed with 1 ns accuracy. In such a network-based CS, graphs of possible machine command sequences are specified in advance by physics frameworks. The actual traffic pattern, however, is determined at runtime, depending on interlocks and beam requests from experiments and accelerators. In 'unlucky' combinations, large packet bursts can delay commands beyond their deadline, potentially causing emergency shutdowns. Thus, verifying in advance whether every possible combination of given command sequences can be delivered on time is vital to guarantee deterministic behavior of the CS. Deterministic network calculus (DNC) can derive upper bounds on message delivery latencies. This paper presents an approach for calculating worst-case descriptors of runtime traffic patterns. These so-called arrival curves are deduced from specified partial traffic sequences and are used to calculate end-to-end traffic properties. With the arrival curves and a DNC model of the FAIR CS network, a worst-case latency for specific packet flows or for the whole CS can be obtained.
Talk as video stream: https://youtu.be/t1AXzTi8kJA
Slides TUCPL06 [0.203 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUCPL06
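The delay bounds mentioned here come from standard deterministic network calculus. A minimal sketch of the textbook bound (not the authors' FAIR model; the numbers are illustrative):

```python
# Textbook DNC result: for a token-bucket arrival curve
# alpha(t) = b + r*t and a rate-latency service curve
# beta(t) = R * max(0, t - T), the worst-case delay is T + b/R,
# provided the long-term arrival rate is sustainable (r <= R).

def worst_case_delay(b: float, r: float, R: float, T: float) -> float:
    """Maximum horizontal deviation between arrival and service curves."""
    if r > R:
        raise ValueError("unstable: arrival rate exceeds service rate")
    return T + b / R

# Hypothetical numbers: a 10 kb burst on a 1 Gb/s link with 2 us latency.
delay = worst_case_delay(b=10_000, r=100_000_000, R=1_000_000_000, T=2e-6)
print(delay)  # 1.2e-05, i.e. 12 microseconds
```

The paper's contribution is deriving the arrival curves (`b`, `r` above) from the specified command sequences; the bound computation itself is this standard arithmetic.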
TUCPA03 | Experience with Machine Learning in Accelerator Controls | ion, controls, extraction, framework | 258 |
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
The data repository for the Relativistic Heavy Ion Collider and the associated pre-injector accelerators consists of well over half a petabyte of uncompressed data. By today's standards, this is not a large amount of data. However, a large fraction of it has never been analyzed and likely contains useful information. We describe in this paper our efforts to use machine learning techniques to pull new information out of existing data. Our focus has been on simple problems, such as associating basic statistics with certain data sets and doing predictive analysis on single-array data. The tools we have tested include unsupervised learning using TensorFlow, multimode neural networks, hierarchical temporal memory techniques using NuPIC, as well as deep learning techniques using Theano and Keras.
Slides TUCPA03 [6.658 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUCPA03
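As a flavor of "predictive analysis on single-array data", here is a deliberately minimal baseline of our own (not one of the toolkits the paper tested): a least-squares linear trend extrapolated one step ahead.

```python
# Fit y = intercept + slope*x by ordinary least squares over the sample
# indices, then forecast the next sample of the array.

def predict_next(y):
    n = len(y)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(y) / n
    # slope = cov(x, y) / var(x)
    slope = sum((x - mean_x) * (v - mean_y) for x, v in zip(xs, y)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept + slope * n  # forecast for index n

print(predict_next([1.0, 2.0, 3.0, 4.0]))  # 5.0
```

Any of the neural approaches the paper lists would replace this one-liner model; the input/output shape (one array in, one prediction out) stays the same.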
TUMPL06 | Conceptual Design of Developing a Mobile App for Distributed Information Services for Control Systems (DISCS) | ion, database, controls, EPICS | 315 |
To achieve the best performance in processes such as maintenance, troubleshooting, design, construction and upgrades of physical systems, we need to store data that describe the system's state and the characteristics of its components. We therefore need a framework for developing an application that can store, integrate and manage these data and also execute permitted operations. DISCS (Distributed Information Services for Control Systems) is a framework with these capabilities that can help us achieve our goals. In this paper, we first assess DISCS and its basic architecture, and then implement this framework for the maintenance domain of a system. With the implementation of the maintenance module, we are able to store preventive-maintenance data and information that help us trace problems and analyze the situations that caused failures and damage.
Slides TUMPL06 [2.386 MB]
Poster TUMPL06 [2.184 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUMPL06
TUPHA020 | MATLAB Control Applications Embedded Into Epics Process Controllers (IOC) and their Impact on Facility Operations at Paul Scherrer Institute | ion, controls, EPICS, embedded | 416 |
An automated tool for converting MATLAB-based controls algorithms into C code, executable directly on EPICS process control computers (IOCs), was developed at the Paul Scherrer Institute (PSI). Based on this tool, several high-level control applications were embedded into the IOCs, which are directly connected to the control-system sensors and actuators. Such embedded applications have significantly reduced the network traffic, and thus the data-handling latency, which has increased the reliability of the control system. The paper concentrates on the most important components of the automated tool and on the performance of MATLAB algorithms converted by this tool.
Poster TUPHA020 [0.784 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA020
TUPHA024 | ModBus/TCP Applications for CEBAF Accelerator Control System | ion, EPICS, controls, interface | 424 |
Modbus-TCP is the Modbus RTU protocol with a TCP interface, running on Ethernet. In our applications, an XPort device using Modbus-TCP controls remote devices and communicates with the accelerator control system (EPICS). The Modbus software provides a layer between the standard EPICS asyn support and the EPICS asyn TCP/IP or serial-port driver. An EPICS application is developed for each specific Modbus device and can be deployed on a soft IOC. The configuration of XPort and Modbus-TCP is easy to set up and suitable for applications that do not require high-speed communication. Additionally, the use of Ethernet makes it quicker to develop instrumentation for remote deployment. An eight-channel 24-bit data acquisition (DAQ) system is used to test the hardware and software capabilities.
Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177.
Poster TUPHA024 [0.785 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA024
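For readers unfamiliar with the protocol, a Modbus-TCP request is simply the Modbus PDU prefixed with an MBAP header. A sketch of the wire format (standard protocol layout, not the JLab code; the register address and count are arbitrary illustration values):

```python
# Build a Modbus-TCP "read holding registers" (function 0x03) request.
import struct

def read_holding_registers_request(transaction_id, unit_id, start_addr, count):
    # PDU: function code + starting address + quantity of registers
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP header: transaction id, protocol id (0 = Modbus), length of
    # what follows (unit id + PDU), unit id
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, 1 + len(pdu), unit_id)
    return mbap + pdu

frame = read_holding_registers_request(transaction_id=1, unit_id=1,
                                       start_addr=0x0000, count=8)
print(frame.hex())  # 000100000006010300000008
```

In practice a library such as the EPICS Modbus support (or pymodbus) assembles these frames; spelling the bytes out shows why the protocol is easy to bridge over an XPort serial-to-Ethernet device.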
TUPHA034 | SCADA Statistics Monitoring Using the Elastic Stack (Elasticsearch, Logstash, Kibana) | ion, controls, database, monitoring | 451 |
The Industrial Controls and Safety systems group at CERN, in collaboration with other groups, has developed and currently maintains around 200 controls applications that include domains such as LHC magnet protection, cryogenics and electrical network supervision systems. Millions of value changes and alarms from many devices are archived to a centralised Oracle database, but it is not easy to obtain high-level statistics from such an archive. A system based on the Elastic Stack has been implemented in order to provide easy access to these statistics. This system provides aggregated statistics based on the number of value changes and alarms, classified according to several criteria such as time, application domain, system and device. The system can be used, for example, to detect abnormal situations and alarm misconfiguration. In addition to these statistics, each application generates text-based log files which are parsed, collected and displayed using the Elastic Stack to provide centralised access to all the application logs. Further work will explore the possibilities of combining the statistics and logs to better understand the behaviour of CERN's controls applications.
Poster TUPHA034 [5.094 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA034
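The per-time, per-domain statistics described above map naturally onto Elasticsearch aggregations. A sketch of such a query body (our own illustration; the field names are hypothetical, not CERN's schema):

```python
# Aggregation-only query: bucket value changes per day, then break each
# day down by application domain. "size": 0 suppresses raw documents.
import json

stats_query = {
    "size": 0,
    "aggs": {
        "per_day": {
            "date_histogram": {"field": "timestamp", "calendar_interval": "1d"},
            "aggs": {
                "by_domain": {"terms": {"field": "application_domain"}}
            },
        }
    },
}

print(json.dumps(stats_query, indent=2))
```

Kibana builds equivalent aggregations interactively; the dict form shows what is actually sent to Elasticsearch.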
TUPHA040 | Development of Real-Time Data Publish and Subscribe System Based on Fast RTPS for Image Data Transmission | ion, real-time, diagnostics, experiment | 473 |
Funding: This work was supported by the Korean Ministry of Science, ICT & Future Planning under the KSTAR project.
In fusion experiments, a real-time network is essential for plasma control: it transfers diagnostic data from the diagnostic devices and command data from the PCS (Plasma Control System). Among these data, image data from diagnostic systems are more difficult to transmit in real time than other data types, because images are much larger. Transmitting them requires high throughput and best-effort delivery, and real-time operation requires low latency. RTPS (Real Time Publish Subscribe) is reliable and has Quality-of-Service properties that also enable a best-effort mode. In this paper, eProsima Fast RTPS was used to implement an RTPS-based real-time network. Fast RTPS offers low latency, high throughput and both best-effort and reliable publish-subscribe communication for real-time applications over a standard Ethernet network. This paper evaluates the suitability of Fast RTPS for a real-time image-data transmission system: a publisher system publishes image data and multiple subscriber systems subscribe to it.
* giilkwon@nfri.re.kr, Control Team, National Fusion Research Institute, Daejeon, South Korea
Poster TUPHA040 [8.164 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA040
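Since the central difficulty above is frame size, any datagram-based transport must fragment image frames and reassemble them on the subscriber side. A generic sketch of that step (our illustration, not the KSTAR implementation; the chunk size is an assumption):

```python
# Split a large frame into MTU-sized, sequence-numbered chunks and
# rebuild it, tolerating out-of-order arrival.

CHUNK = 1400  # payload bytes per datagram, below a typical Ethernet MTU

def split(frame: bytes):
    return [(seq, frame[off:off + CHUNK])
            for seq, off in enumerate(range(0, len(frame), CHUNK))]

def reassemble(chunks):
    # Chunks may arrive out of order; order by sequence number first.
    return b"".join(payload for _, payload in sorted(chunks))

image = bytes(5000)  # stand-in for one camera frame
chunks = split(image)
print(len(chunks))                            # 4 datagrams
print(reassemble(reversed(chunks)) == image)  # True
```

A DDS/RTPS stack such as Fast RTPS performs this fragmentation internally; the sketch only shows why large samples stress the network differently from scalar diagnostics.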
TUPHA048 | VDI (Virtual Desktop Infrastructure) Implementation for Control System - Overview and Analysis | ion, controls, hardware, software | 501 |
At Solaris (National Synchrotron Radiation Center, Kraków) we have deployed test VDI software to virtualize the physical desktops in the control room, to ensure stability, more efficient support, system updates and restores. The test was aimed at accelerating the installation of new workplaces for single users. The Horizon software gives us the opportunity to create roles and access permissions. VDI software has contributed to efficient management, and the maintenance costs of virtual machines are lower than those of physical hosts. We are still testing VMware Horizon 7 at Solaris.
Poster TUPHA048 [2.441 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA048
TUPHA058 | The Control Systems of SXFEL and DCLS | ion, controls, FEL, interface | 525 |
High-gain free electron lasers (FELs) have given scientists hopes of new scientific discoveries in many frontier research areas. The Shanghai X-Ray Free-Electron Laser (SXFEL) test facility is being commissioned at the Shanghai Synchrotron Radiation Facility (SSRF) campus. The Dalian Coherent Light Source (DCLS), the brightest vacuum-ultraviolet (VUV) free electron laser facility, has been successfully commissioned in the northeast of China. The control systems of the two facilities are based on EPICS. Industrial computers, programmable logic controllers (PLCs) and field-programmable gate arrays (FPGAs) are adopted for device control. The archiver is based on the PostgreSQL database, and the high-level applications are developed in Python. The details of the control system design, construction and commissioning are reported in this paper.
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA058
TUPHA059 | Status of the GBAR control project at CERN | ion, experiment, controls, EPICS | 531 |
One of the yet unanswered questions in physics today concerns the action of gravity upon antimatter. The GBAR experiment proposes to measure the free-fall acceleration of neutral antihydrogen atoms. Installation of the project at CERN (ELENA) began in late 2016. This research project faces new challenges and needs flexibility in hardware and software. EPICS modularity and distributed architecture have been tested for the control system, to provide flexibility for future installation improvements. This paper describes the development of the software and the set of software tools that are being used on the project.
Poster TUPHA059 [1.078 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA059
TUPHA066 | A Real-Time, Distributed Power Measuring and Transient Recording System for Accelerators' Electrical Networks | ion, controls, software, FPGA | 553 |
Particle accelerators are complex machines with fast and high power absorption peaks. Power quality is a critical aspect for correct operation. External and internal disturbances can have significant repercussions causing beam losses or severe perturbations. Mastering the load and understanding how network disturbances propagate across the network is a crucial step for developing the grid model and realizing the limits of the existing installations. Despite the fact that several off-the-shelf solutions for real time data acquisition are available, an in-house FPGA based solution was developed to create a distributed measurement system. The system can measure power and power quality on demand as well as acquire raw current and voltage data on a defined trigger, similar to a distributed oscilloscope. In addition, the system allows recording many digital signals from the high voltage switchgear enabling electrical perturbations to be easily correlated with the state of the network. The result is a scalable system with fully customizable software, written specifically for this purpose. The system prototype has been in service for two years and full-scale deployment is currently ongoing.
Poster TUPHA066 [1.292 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA066
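The power computation at the heart of such a system is plain arithmetic on the sampled waveforms. A minimal sketch (ours, not the CERN firmware) of active power over an integer number of cycles:

```python
# Active power as the mean of instantaneous v*i over a whole cycle.
import math

def active_power(v_samples, i_samples):
    return sum(v * i for v, i in zip(v_samples, i_samples)) / len(v_samples)

# Example: one 50 Hz cycle, 1000 samples, unity power factor.
n = 1000
v = [math.sqrt(2) * 230 * math.sin(2 * math.pi * k / n) for k in range(n)]
i = [math.sqrt(2) * 10 * math.sin(2 * math.pi * k / n) for k in range(n)]
print(round(active_power(v, i)))  # 2300, i.e. 230 V * 10 A in phase
```

An FPGA implementation pipelines exactly this multiply-accumulate over the synchronized voltage and current sample streams; power-quality metrics (harmonics, flicker) are further reductions of the same raw data.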
TUPHA090 | TiCkS: A Flexible White-Rabbit Based Time-Stamping Board | ion, hardware, interface, controls | 622 |
We have developed the TiCkS board based on the White Rabbit (WR) SPEC node, to provide ns-precision time-stamps (TSs) of input signals (e.g., triggers from a connected device) and transmission of these TSs to a central collection point. TiCkS was developed within the specifications of the Cherenkov Telescope Array (CTA) as one of the candidate TS nodes, with a small form factor allowing its use in any CTA camera. The essential part of this development concerns the firmware in its Spartan-6 FPGA, with the addition of: 1) a 1 ns-precision TDC for the TSs; 2) a UDP stack to transmit TSs and auxiliary information over the WR fibre, and to receive configuration and slow-control commands over the same fibre. It also provides a 1-PPS and other clock signals to the connected device, from which it can receive auxiliary event-type information over an SPI link. A version of TiCkS with an FMC connector will be made available in the WR OpenHardware repository, allowing the use of a mezzanine card with varied formats of input/output connectors, providing a cheap, flexible, and reliable solution for ns-precision time-stamping of trigger signals up to 200 kHz, for use in other experiments.
Poster TUPHA090 [4.610 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA090
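A WR time-stamper of this kind typically composes a coarse counter clocked at 125 MHz (8 ns ticks) with a fine TDC interpolation of the clock cycle. A sketch of that composition (illustrative only; the field widths and function are our assumptions, not TiCkS internals):

```python
# Compose a timestamp from TAI seconds, coarse 125 MHz cycles and fine
# TDC bins that subdivide the 8 ns cycle into 1 ns steps.
COARSE_TICK_NS = 8   # one 125 MHz clock cycle
TDC_BIN_NS = 1       # fine interpolation step

def timestamp_ns(seconds: int, coarse_cycles: int, fine_bins: int) -> int:
    return seconds * 1_000_000_000 + coarse_cycles * COARSE_TICK_NS \
        + fine_bins * TDC_BIN_NS

print(timestamp_ns(seconds=1, coarse_cycles=2, fine_bins=5))  # 1000000021
```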
TUPHA091 | A Reliable White Rabbit Network for the FAIR General Timing Machine | ion, timing, Ethernet, monitoring | 627 |
A new timing system based on White Rabbit (WR) is being developed for the upcoming FAIR facility at GSI in collaboration with CERN and other partners. The General Timing Machine (GTM) is responsible for the synchronization of nodes and distribution of timing events, which allows the real-time control of the accelerator equipment. WR is a time-deterministic, low-latency Ethernet-based network for general data transfer and sub-ns time and frequency distribution. The FAIR WR network is considered operational only if it provides deterministic and resilient data delivery and reliable time distribution. In order to achieve this level of service, methods and techniques to increase the reliability of the GTM and the WR network have been studied and evaluated. In addition, GSI has developed a network monitoring and logging system to measure the performance of the WR network and detect its failures. Finally, we describe the continuous integration system at GSI and how it has improved the overall reliability of the GTM.
Poster TUPHA091 [0.630 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA091
TUPHA092 | Two Years of FAIR General Machine Timing - Experiences and Improvements | ion, timing, controls, real-time | 633 |
The FAIR General Machine Timing system has been in operation at GSI since 2015 and significant progress has been made in the last two years. The CRYRING accelerator was the first machine on campus operated with the new timing system and serves as a proving ground for new control system technology to this day. A White Rabbit (WR) network was set up, connecting parts of the existing facility. The Data Master was put under control of the LSA physics core. It was enhanced with a powerful schedule language, and extensive research on delay-bound analysis with network calculus was undertaken. Several form factors of Timing Receivers were improved, their hardware and software now being in their second release and subject to a continuous series of automated long- and short-term tests in varying network scenarios. The final goal is time synchronization of 2000-3000 nodes using the WR Precision Time Protocol distribution of TAI time stamps and synchronized command and control of FAIR equipment. Promising test results for scalability and accuracy were obtained when moving from temporary small lab setups to CRYRING's control system with more than 30 nodes connected over 3 layers of WR switches.
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA092
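The time distribution underneath the GTM is White Rabbit's extension of PTP. The basic two-way exchange that recovers clock offset and link delay is standard protocol arithmetic, sketched below (WR additionally calibrates link asymmetry and phase, which this sketch omits):

```python
# From the four timestamps of a PTP sync/delay-request round trip,
# recover one-way link delay and slave clock offset, assuming a
# symmetric link.

def ptp_offset_and_delay(t1, t2, t3, t4):
    # t1: master sends sync, t2: slave receives it,
    # t3: slave sends delay-req, t4: master receives it.
    delay = ((t2 - t1) + (t4 - t3)) / 2
    offset = ((t2 - t1) - (t4 - t3)) / 2
    return offset, delay

# Slave clock 100 ns ahead, true one-way delay 50 ns:
offset, delay = ptp_offset_and_delay(t1=0, t2=150, t3=200, t4=150)
print(offset, delay)  # 100.0 50.0
```

Sub-ns performance then comes from WR's syntonization over the physical layer and phase measurement, on top of this coarse exchange.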
TUPHA103 | LIA-20 Experiment Protection System | ion, controls, experiment, power-supply | 660 |
A linear induction accelerator with a beam energy of 20 MeV (LIA-20) for radiography is being developed at the Budker Institute of Nuclear Physics. A distinctive feature of this accelerator, as far as protection is concerned, is the existence of an experiment protection system alongside the machine and personnel protection systems. The main goal of this additional system is to inhibit the experiment in time in the event of certain accelerator faults. The system is based on uniform protection controllers in VME form factor, connected to each other by optical fiber. Over dedicated lines, a protection controller quickly receives information about various faults from accelerator parts such as power supplies, magnets and vacuum pumps. Moreover, each pulsed power supply (modulator) quickly sends its current state through a dedicated 8-channel interlock-processing board, which is the base of the modulator controller. The system must process over 4000 signals to reach a decision, within several microseconds, on whether to inhibit or permit the experiment.
Poster TUPHA103 [17.042 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA103
TUPHA128 | Using LabVIEW to Build Distributed Control System of a Particle Accelerator | ion, controls, LabView, interface | 714 |
The new isochronous cyclotron DC-280 is being created at the FLNR, JINR. The total number of process variables is about 4000, with some 20 different types of field devices. This paper describes the architecture and basic principles of the distributed control system, which uses the LabVIEW DSC module.
Poster TUPHA128 [2.255 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA128
TUPHA130 | Design and Development of the Control System for a Compact Carbon-14 AMS Facility | ion, controls, interface, site | 722 |
Funding: Beijing Science and Technology Committee
A compact AMS facility, specially designed for the further analysis of atmospheric pollution (especially in north China) via carbon-14 measurement, was developed at CIAE (China Institute of Atomic Energy). The machine is a single-acceleration-stage AMS running with a maximum accelerating voltage of 200 kV. The control system is a distributed Ethernet-based control system using the standard TCP/IP protocol as the main communication protocol. Device-level data-link layers were also developed so that devices can connect freely to the main control network. A LabVIEW client, deployed in a virtual-machine environment, provides a friendly graphical user interface for device management and measurement-data processing.
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA130
TUPHA181 | Web Extensible Display Manager | ion, EPICS, controls, monitoring | 852 |
Funding: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177
Jefferson Lab's Web Extensible Display Manager (WEDM) allows staff to access EDM control system screens from a web browser in remote offices and from mobile devices. Native browser technologies are leveraged to avoid installing and managing software on remote clients such as browser plugins, tunnel applications, or an EDM environment. Since standard network ports are used, firewall exceptions are minimized. To avoid security concerns from remote users modifying a control system, WEDM exposes read-only access, and basic web authentication can be used to further restrict access. Updates of monitored EPICS channels are delivered via a Web Socket using a web gateway. The software translates EDM description files (denoted with the edl suffix) to HTML with Scalable Vector Graphics (SVG), following EDM's edl-file vector drawing rules to create faithful screen renderings. The WEDM server parses edl files and creates the HTML equivalent in real time, allowing existing screens to work without modification. Alternatively, the familiar drag-and-drop EDM screen creation tool can be used to create optimized screens sized specifically for smart phones and then rendered by WEDM.
Poster TUPHA181 [1.818 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA181
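The core translation step — a parsed edl widget rendered as SVG — can be sketched as follows (a toy widget format of our own; the real edl grammar and WEDM's renderer are considerably more involved):

```python
# Render one parsed widget dict as an SVG element. Only a rectangle is
# handled here; a full translator would cover every EDM widget type.

def widget_to_svg(widget: dict) -> str:
    if widget["type"] == "rectangle":
        return ('<rect x="{x}" y="{y}" width="{w}" height="{h}" '
                'fill="{color}"/>').format(**widget)
    raise NotImplementedError(widget["type"])

svg = widget_to_svg({"type": "rectangle", "x": 10, "y": 20,
                     "w": 100, "h": 40, "color": "#00ff00"})
print(svg)  # <rect x="10" y="20" width="100" height="40" fill="#00ff00"/>
```

On top of such static rendering, the Web Socket channel then updates the generated elements as EPICS values change.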
TUSH303 | Managing your Timing System as a Standard Ethernet Network | ion, monitoring, timing, HOM | 1007 |
White Rabbit (WR) is an extension of Ethernet which allows deterministic data delivery and remote synchronization of nodes with accuracies below 1 nanosecond and jitter better than 10 ps. Because WR is Ethernet, a WR-based timing system can benefit from all the standard network protocols and tools available in the Ethernet ecosystem. This paper describes the configuration, monitoring and diagnostics of a WR network using standard tools. Using the Simple Network Management Protocol (SNMP), clients can easily monitor, with standard monitoring tools like Nagios, Icinga and Grafana, e.g. the quality of the data link and of the synchronization: the former involves e.g. the number of dropped frames, while the latter concerns parameters such as the latency of frame distribution and fibre-delay compensation. The Link Layer Discovery Protocol (LLDP) allows discovery of the actual topology of a network. Wireshark and PTP Track Hound can intercept and help with the analysis of the content of WR frames in live traffic. In order to benefit from time-proven, scalable, standard monitoring solutions, some development was needed in the WR switch and nodes.
Poster TUSH303 [1.608 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-TUSH303 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
THBPL01 | C2MON SCADA Deployment on CERN Cloud Infrastructure | ion, monitoring, software, database | 1103 |
The CERN Control and Monitoring Platform (C2MON) is an open-source platform for industrial controls data acquisition, monitoring, control and data publishing. C2MON's high-availability, redundant capabilities make it particularly suited for a large, geographically scattered context such as CERN. The C2MON platform relies on the Java technology stack at all levels of its architecture. Since the end of 2016, CERN has offered a platform-as-a-service (PaaS) solution based on Red Hat OpenShift. Initially envisioned at CERN for web application hosting, OpenShift can be leveraged to host any software stack thanks to its adoption of Docker container technology. In order to make C2MON more scalable and compatible with cloud computing, it was necessary to containerize C2MON components for the Docker container platform. Containerization is a logical process that forces one to rethink a distributed architecture in terms of decoupled micro-services suitable for a cloud environment. This paper explains the challenges met and the principles behind containerizing a server-centric Java application, demonstrating how simple it has now become to deploy C2MON in any cloud-centric environment.
Talk as video stream: https://youtu.be/4NbM1yDO_TM | ||
Slides THBPL01 [3.176 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THBPL01 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
THBPL03 | A New ACS Bulk Data Transfer Service for CTA | ion, experiment, controls, software | 1116 |
Funding: Centro Científico Tecnológico de Valparaíso (CONICYT FB-0821) The ALMA Common Software (ACS) framework provides Bulk Data Transfer (BDT) service implementations that need to be updated for new projects that will use ACS, such as the Cherenkov Telescope Array (CTA), in most cases with quite different requirements from ALMA's. We propose a new open-source BDT service for ACS based on ZeroMQ that meets the CTA data transfer specifications while maintaining backward compatibility with the closed-source solution used in ALMA. The service uses the push-pull pattern for data transfer, the publisher-subscriber pattern for data control, and Protocol Buffers for data serialization, with the option to easily integrate other serialization formats. Besides complying with the ACS interface definition so it can be used by ACS components and clients, the service provides an independent API for use outside the ACS framework. Our experiments show a good compromise between throughput and computational effort, suggesting that the service could scale up in terms of the number of producers, the number of consumers and network bandwidth.
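The push-pull pattern mentioned above distributes messages among connected workers; in ZeroMQ, PUSH sockets hand messages to PULL peers round-robin. The sketch below emulates only that distribution logic in plain Python (it does not use the pyzmq API), to show why the pattern balances load across consumers.

```python
from collections import defaultdict
from itertools import cycle

# Emulation of PUSH/PULL fan-out: messages are dealt round-robin to the
# connected workers, which is how ZeroMQ push sockets balance load.
def push_to_workers(messages, n_workers):
    dealt = defaultdict(list)
    worker_ids = cycle(range(n_workers))
    for msg in messages:
        dealt[next(worker_ids)].append(msg)
    return dict(dealt)

distribution = push_to_workers(list(range(6)), n_workers=3)
```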
Talk as video stream: https://youtu.be/F0jOkHOz0uw | ||
Slides THBPL03 [7.087 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THBPL03 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
THBPL06 | High Performance RDMA-Based Daq Platform Over PCIe Routable Network | ion, detector, FPGA, hardware | 1131 |
Funding: Wassim Mansour acknowledges support from the EUCALL project, which has received funding from the European Union's H2020 research and innovation programme under grant agreement No 654220. A few years ago, the ESRF initiated the development of a novel platform for optimised transfer of 2D detector data based on zero-copy Remote Direct Memory Access (RDMA) techniques. The purpose of this new scheme, named RASHPA, is to efficiently dispatch, with no CPU intervention, multiple parallel multi-GByte/s data streams produced by modular detectors directly from the detector head to computer clusters for data storage, visualisation and distributed data treatment. The RASHPA platform is designed to be implementable using any data link and transfer protocol that supports RDMA write operations and can trigger asynchronous events. This paper presents the ongoing work on the first implementation of RASHPA in a real system, using the hardware platform of the Medipix3-based SMARTPIX hybrid pixel detector developed at the ESRF and relying on a switched PCIe-over-cable network for data transfer. It details the implementation of the RASHPA controller at the detector side and provides input on the software for the management of the overall data acquisition system at the receiver side. The implementation and use of a PCIe switch built with off-the-shelf components is also discussed.
Talk as video stream: https://youtu.be/dJDtekXejfg | ||
Slides THBPL06 [3.835 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THBPL06 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
THBPA01 | Cyber Threats, the World Is No Longer What We Knew… | ion, controls, operation, PLC | 1137 |
Security policies are becoming hard to apply as instruments get smarter than ever: every oscilloscope comes with its own Windows-tagged stick, everybody would like to control their huge installation over the air, and IoT is on everyone's lips. Stuxnet and the recent Edward Snowden revelations have shown that cyber threats against SCADAs are no longer confined to James Bond movies. This paper aims to give simple advice in order to protect our installations and make them more and more secure. How do we write security files? What are the main precautions we have to take? Where are the vulnerabilities of my installation? Cyber security is everyone's matter, not only the cyber staff's!
Slides THBPA01 [9.135 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THBPA01 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
THBPA02 | Securing Light Source SCADA Systems | ion, controls, device-server, SCADA | 1142 |
Funding: European X-Ray Free-Electron Laser Facility GmbH Cyber security aspects are often not thoroughly addressed in the design of light source SCADA systems. In general, the focus remains on building a reliable and fully functional ecosystem. The underlying assumption is that a SCADA infrastructure is a closed ecosystem of sufficiently complex technologies to provide some security through trust and obscurity. However, considering the number of internal users, engineers, visiting scientists and students going in and out of light source facilities, cyber security threats can no longer be downplayed. At the European XFEL, we envision a comprehensive security layer for the entire SCADA infrastructure. There, Karabo [1], the control, data acquisition and analysis software, shall implement these security paradigms, which are well known in IT but not applicable off-the-shelf in the FEL context. The challenges are considerable: (i) securing access to photon science hardware that has not been designed with security in mind; (ii) granting limited, fine-grained permissions to external users; (iii) truly securing control and data acquisition APIs while preserving performance. Only tailored solution strategies, as presented in this paper, can fulfill these requirements. [1] Heisen et al (2013) "Karabo: An Integrated Software Framework Combining Control, Data Management, and Scientific Computing Tasks". Proc. of 14th ICALEPCS 2013, Melbourne, Australia (p. FRCOAAB02)
Slides THBPA02 [1.679 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THBPA02 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
THBPA03 | The Back-End Computer System for the Medipix Based PI-MEGA X-Ray Camera | ion, Linux, Ethernet, MMI | 1149 |
The Brazilian Synchrotron, in partnership with BrPhotonics, is designing and developing pi-mega, a new X-ray camera using Medipix chips, with the goal of building very large and fast cameras to meet Sirius' new demands. This work describes the design and testing of the back-end computer system that will receive, process and store images. The back-end system will use RDMA over Ethernet technology and must be able to process data at a rate ranging from 50 Gbps to 100 Gbps per pi-mega element. Multiple pi-mega elements may be combined to produce a large camera. Initial applications include tomographic reconstruction and coherent diffraction imaging techniques.
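For scale, the quoted per-element link rates translate directly into the byte throughput the back-end must sustain; a simple arithmetic check, not a figure from the paper:

```python
# Convert the quoted link rates to byte throughput per pi-mega element.
def gbps_to_gbytes_per_second(gbps: float) -> float:
    return gbps / 8.0  # 8 bits per byte

low = gbps_to_gbytes_per_second(50)    # 6.25 GB/s
high = gbps_to_gbytes_per_second(100)  # 12.5 GB/s
```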
Slides THBPA03 [1.918 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THBPA03 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
THBPA04 | Orchestrating MeerKAT's Distributed Science Data Processing Pipelines | ion, controls, framework, GPU | 1152 |
The 64-antenna MeerKAT radio telescope is a precursor to the Square Kilometre Array. The telescope's correlator beamformer streams data at 600 Gb/s to the science data processing pipeline, which must consume it in real time. This requires significant compute resources, which are provided by a cluster of heterogeneous hardware nodes. Effective utilisation of the available resources is a critical design goal, made more challenging by the need for multiple, highly configurable pipelines. We initially used a static allocation of processes to hardware nodes, but this approach is insufficient as the project scales up. We describe recent improvements to our distributed container deployment, using Apache Mesos for orchestration. We also discuss how issues like non-uniform memory access (NUMA), network partitions, and fractional allocation of graphics processing units (GPUs) are addressed using a custom scheduler for Mesos.
Slides THBPA04 [8.485 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THBPA04 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
THBPA05 | IT Infrastructure Tips and Tricks for Control System and PLC | ion, controls, PLC, device-server | 1158 |
The network infrastructure at Solaris (National Synchrotron Radiation Centre, Kraków) carries traffic between around 900 physical devices and the dedicated virtual machines running the Tango control system. The Machine Protection System, based on PLCs, is also interconnected by the network infrastructure. We have performed extensive measurements of traffic flows and an analysis of traffic patterns, which revealed congestion of aggregated traffic from high-speed acquisition devices. We have also applied flow-based anomaly detection systems that give an interesting low-level view of Tango control system traffic flows. All issues were successfully addressed thanks to a proper analysis of the nature of the traffic. This paper presents the essential techniques and tools for network traffic pattern analysis, tips and tricks for improvements, and real-time data examples.
Slides THBPA05 [3.026 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THBPA05 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
THMPL01 | A Simple Temporal Network for Coordination of Emergent Knowledge Processes in a Collaborative System-of-Systems | ion, experiment, operation, diagnostics | 1252 |
Funding: U.S. Department of Energy's National Nuclear Security Administration, DE-NA0003525 The Z Machine is the world's largest pulsed power machine, routinely delivering over 20 MA of electrical current to targets in support of US nuclear stockpile stewardship and in pursuit of inertial confinement fusion. The large-scale, multi-disciplinary nature of experiments ('shots') on the Z Machine requires resources and expertise from disparate organizations with independent functions and management, forming a Collaborative System-of-Systems. This structure, combined with the Emergent Knowledge Processes central to preparation and execution, creates significant challenges in planning and coordinating the required activities leading up to a given experiment. The present work demonstrates an approach to scheduling planned activities on shot day to aid in coordinating workers among these different groups, using minimal information about activities' temporal relationships to form a Simple Temporal Network (STN). Historical data is mined, allowing a standard STN to be created for common activities, with the lower bounds between those activities defined. Activities are then scheduled at their earliest possible times, giving interested participants a time at which to check in. maschaf@sandia.gov
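The earliest-possible-time scheduling described above can be sketched as a longest-path relaxation over the lower bounds of a simple temporal network. The activity names and bounds below are invented for illustration; they are not from the Z Machine dataset.

```python
# Earliest start times from pairwise lower bounds (minutes) in an acyclic
# simple temporal network: each activity starts no earlier than every
# predecessor's start plus the mined lower bound.
def earliest_times(edges, origin="start"):
    times = {origin: 0}
    changed = True
    while changed:                      # Bellman-Ford-style relaxation
        changed = False
        for a, b, lower_bound in edges:
            if a in times and times.get(b, -1) < times[a] + lower_bound:
                times[b] = times[a] + lower_bound
                changed = True
    return times

schedule = earliest_times([
    ("start", "charge_marx", 30),   # hypothetical shot-day activities
    ("start", "align_target", 10),
    ("charge_marx", "fire", 60),
    ("align_target", "fire", 45),
])
```

Here "fire" cannot start before the later of its two predecessor chains, so its earliest time is max(30+60, 10+45) minutes after the origin.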
Slides THMPL01 [1.367 MB] | ||
Poster THMPL01 [2.878 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THMPL01 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
THMPL08 | The SLAC Common-Platform Firmware for High-Performance Systems | ion, interface, FPGA, Ethernet | 1286 |
Funding: Work supported by the US Department of Energy, Office of Science under contract DE-AC02-76SF00515 LCLS-II's high beam rate of almost 1 MHz, and the requirement that several "high-performance" systems (such as MPS, BPM, LLRF and timing) resolve individual bunches, preclude the use of a traditional software-based control system and require many core services to be implemented in FPGA logic. SLAC has created a comprehensive open-source firmware framework which implements many commonly used blocks (e.g., timing, globally synchronized fast data buffers, MPS, diagnostic data capture), libraries (Ethernet protocol stack, AXI interconnect, FIFOs, memory, etc.) and interfaces (e.g., for timing and diagnostic data), thus providing a versatile platform on top of which powerful high-performance systems can be built and rapidly integrated.
Slides THMPL08 [0.579 MB] | ||
Poster THMPL08 [0.630 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THMPL08 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
THMPA06 | Building Controls Applications Using HTTP Services | ion, software, interface, controls | 1320 |
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy. This paper describes the development and use of an HTTP services architecture for building controls applications within the BNL Collider-Accelerator Department. Instead of binding application services (access to live, database, and archived data, etc.) into monolithic applications using libraries written in C++ or Java, this new method moves those services onto networked processes that communicate with the core applications using the HTTP protocol and a RESTful interface. This allows applications to be built for a variety of different environments, including web browsers and mobile devices, without the need to rewrite existing library code that has been built and tested over many years. Making these HTTP services available via a reverse proxy server (NGINX) adds additional flexibility and security. This paper presents implementation details, the pros and cons of this approach, and expected future directions.
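As a minimal illustration of this service style (not BNL's actual API, whose endpoints and payloads are not given here), the sketch below serves a read-only JSON view of a stand-in data source over plain HTTP using only the Python standard library.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for a live-data source; the key name is invented for illustration.
LIVE_DATA = {"beam:current": 98.7}

class ReadOnlyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        key = self.path.lstrip("/")
        if key in LIVE_DATA:
            body = json.dumps({key: LIVE_DATA[key]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

# Serve on an ephemeral port and query it once, as a client would.
server = HTTPServer(("127.0.0.1", 0), ReadOnlyHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/beam:current"
reply = json.loads(urllib.request.urlopen(url).read())
server.shutdown()
```

In a deployment like the one described, such a service would sit behind an NGINX reverse proxy rather than being exposed directly.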
Slides THMPA06 [0.966 MB] | ||
Poster THMPA06 [0.386 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THMPA06 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
THPHA013 | Control System Projects at the Electron Storage Ring DELTA | ion, controls, EPICS, feedback | 1361 |
Data logging and archiving are important tasks for identifying and investigating malfunctions during storage ring operation. In order to enable high-performance fault analysis, large amounts of data must be processed effectively. For this purpose, a fundamental redesign of the present SQL database was necessary. The VME/VxWorks-driven CAN bus has been used for many years as the main field bus of the DELTA control system. Unfortunately, the corresponding CAN bus I/O modules were discontinued by the manufacturer. Thus, the CAN field bus is currently being replaced by more up-to-date Modbus/TCP communication (WAGO), which largely supersedes the VME/VxWorks layer. After hardware and software integration into the EPICS environment, several projects have been realized using this powerful field bus communication. The server migration to a 64-bit architecture was already carried out in the past; by now, all client programs and software tools have also been converted to 64-bit versions. In addition, the fast orbit feedback system project, using in-house-developed FPGA-based hardware, has been resumed. This report provides an overview of the developments and results of each project.
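A Modbus/TCP transaction like the ones such couplers serve is a small binary frame: a 7-byte MBAP header followed by the PDU. The sketch below builds a read-holding-registers request (function code 0x03); the transaction id, register address and count are arbitrary examples.

```python
import struct

# Build a Modbus/TCP "read holding registers" request (function 0x03).
# MBAP header: transaction id, protocol id (always 0), remaining byte
# count, unit id; then the PDU: function code, start address, register count.
def read_holding_registers(transaction_id, unit_id, start_addr, count):
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = read_holding_registers(1, unit_id=1, start_addr=0, count=2)
```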
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA013 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
THPHA019 | Control System Evolution on the ISIS Spallation Neutron Source | ion, controls, hardware, interface | 1377 |
The ISIS spallation neutron source has been a production facility for over 30 years, with a second target station commissioned in 2008. Over that time, the control system has had to incorporate several generations of computer and embedded systems, and to interface with an increasingly diverse range of equipment. We discuss some of the challenges involved in maintaining and developing such a long-lifetime facility.
Poster THPHA019 [0.827 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA019 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
THPHA047 | Network System Operation for J-PARC Accelerators | ion, controls, operation, radiation | 1470 |
The network systems for the J-PARC accelerators have been in operation for over ten years. This report gives: a) an overview of the control network system, b) a discussion of the relationship between the control network and the office network, and c) recent security issues (antivirus policy) for terminals and servers. Operational experiences, including troubles, are also presented.
Poster THPHA047 [1.056 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA047 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
THPHA048 | New IT-Infrastructure of Accelerators at BINP | ion, controls, hardware, operation | 1474 |
In 2017 the Injection Complex at the Budker Institute, Novosibirsk, Russia, began to operate for its consumers, the colliders VEPP-4 and VEPP-2000. For the successful functioning of these installations it is very important to ensure the stable operation of their control systems and IT infrastructure. This article describes the new IT infrastructures of three accelerators: the Injection Complex, VEPP-2000 and VEPP-4. The IT infrastructure for the accelerators consists of servers, network equipment and system software with a 10-20 year life cycle and timely support. The reasons to build each IT infrastructure on the same principles are cost minimization and simplification of support. The design rests on high availability, flexibility and low cost. The first is achieved through redundancy of hardware: doubling of servers, disks and network interconnections. Flexibility comes from extensive use of virtualization, which allows easy migration from one piece of hardware to another in case of a fault and gives users the ability to use a custom system environment. Low cost follows from equipment unification and the minimization of proprietary solutions.
Poster THPHA048 [2.132 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA048 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
THPHA076 | A Novel General Purpose Data Acquisition Board with a DIM Interface | ion, controls, interface, software | 1565 |
A new general-purpose data acquisition and control board (Board51) is presented in this paper. Board51 has primarily been developed for use in the ALICE experiment at CERN, but its open design allows for wide use in any application requiring a flexible and affordable data acquisition system. It provides analog I/O functionality and is equipped with a software bundle allowing for easy integration into a SCADA system. Based on the Silicon Labs C8051F350 MCU, the board features a fully differential 24-bit ADC that enables very precise data acquisition at sampling rates up to 1 kHz. For analog outputs, two 8-bit current-mode DACs can be used. Board51 is equipped with a UART-to-USB interface that allows communication with any computer platform. As a result, the board can be controlled through the DIM system: a program running on a computer publishes services that include the measured analog value of each ADC channel, and accepts commands for setting the ADC readout rate and the DAC voltages. Digital inputs/outputs are also accessible through the DIM communication system. These services enable any computer on a common network to read measured values and control the board.
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA076 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
THPHA134 | Ground Vibration Monitoring at CERN as Part of the International Seismic Network | ion, ground-motion, real-time, database | 1695 |
The civil engineering activities in the framework of the High Luminosity LHC project, the Geneva GEothermie 2020 project and the continuous monitoring of the LHC civil infrastructure triggered the need for the installation of a seismic network at CERN. A 24-bit data acquisition system has been deployed at three places at CERN: ATLAS, CMS and the Prévessin site. The system sends all the raw data to the Swiss Seismological Service and performs FFTs on the fly to be stored in the LHC database. The system has shown a good sensitivity of 10⁻¹⁶ (m/s)²/Hz at 1 Hz.
Poster THPHA134 [2.775 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA134 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
THPHA138 | YCPSWASYN: EPICS Driver for FPGA Register Access and Asynchronous Messaging | ion, hardware, FPGA, interface | 1707 |
The Linac Coherent Light Source II (LCLS-II) is a major upgrade of the LCLS facility at SLAC, scheduled to start operations in 2020. The High Performance Systems (HPS) are the set of LCLS-II controls subsystems directly impacted by its 1 MHz operation. It is built around a few key concepts: ATCA-based packaging, digital and analog application boards, and 10G Ethernet-based interconnections for controls. The Common Platform provides the common parts of the HPS in terms of hardware, firmware, and software. The Common Platform Software (CPSW) provides a standardized interface to the common platform's FPGAs for all high-level software; YAML is used to define the hardware topology and all necessary parameters. YCPSWASYN is an asynPortDriver-based EPICS module for FPGA register access and asynchronous messaging using CPSW. YCPSWASYN has two operation modes: an automatic mode, where PVs are automatically created for all registers and the records' fields are populated with information found in the YAML files; and a manual mode, where the engineer can choose which registers to expose as PVs and freely choose the records' field information.
Poster THPHA138 [1.189 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA138 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
THPHA152 | Renovation and Extension of Supervision Software Leveraging Reactive Streams | ion, software, MMI, GUI | 1753 |
Inspired by the recent developments of reactive programming and the ubiquity of the concept of streams in the modern software industry, we assess the relevance of a reactive-streams solution in the context of accelerator controls. The promise of reactive streams, to govern the exchange of data across asynchronous boundaries at a rate sustainable for both the sender and the receiver, is alluring to most data-centric processes of CERN's accelerators. Taking advantage of the renovation of one key piece of our supervision layer, the Beam Interlock System GUI, we look at the architecture, design and implementation of a solution based on reactive streams. Additionally, we see how this model allows us to re-use components and contributes naturally to the extension of our tool set. Lastly, we detail what hindered our progress and how our solution can be taken further.
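The core promise named above, that data flows only at the rate the consumer requests it, can be illustrated with plain Python generators. This shows only the backpressure idea, not the Reactive Streams API used in the actual GUI.

```python
# A generator produces an item only when the consumer pulls one, so a fast
# producer can never flood a slow consumer -- the essence of backpressure.
def producer():
    n = 0
    while True:
        yield n          # produced on demand, not eagerly
        n += 1

def consume(stream, demand):
    # 'demand' plays the role of a subscriber's request(n)
    return [next(stream) for _ in range(demand)]

received = consume(producer(), demand=5)
```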
Poster THPHA152 [0.879 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA152 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
THPHA174 | Preventing Run-Time Bugs at Compile-Time Using Advanced C++ | ion, embedded, controls, status | 1834 |
When writing software, we develop algorithms that tell the computer what to do at run-time. Our solutions are easier to understand and debug when they are properly modeled using class hierarchies, enumerations, and a well-factored API. Unfortunately, even with these design tools, we end up having to debug our programs at run-time. Worse still, debugging an embedded system changes its dynamics, making it tough to find and fix concurrency issues. This paper describes techniques using C++ to detect run-time bugs *at compile time*. A concurrency library, developed at Fermilab, is used for the examples illustrating these techniques.
Poster THPHA174 [0.239 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA174 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
THPHA181 | Web Based Visualization Tools for Epics Embedded Systems: An Application to Belle2 | ion, EPICS, controls, database | 1857 |
Common EPICS visualization tools include standalone graphical user interfaces [*] or archiving applications [**] that are not suitable for creating custom web dashboards from IOC-published PVs. The solution proposed in this work is a data publishing architecture based on three open-source components: Collectd, a very popular data collection daemon, with a specialized plugin developed to fetch EPICS PVs; InfluxDB, a Time Series DataBase (TSDB) that provides a high-performance datastore written specifically for time series data; and Grafana, a web application for time series analytics and visualization, able to query data from different datasources. A live demo will be provided showing the flexibility and user-friendliness of the developed solution. As a case study, we show the environment developed and deployed in the Belle2 experiment at KEK (Tsukuba, Japan) to monitor data from the endcap calorimeter during the installation phase.
* K. Kasemir, Control System Studio Applications, Proc. of ICALEPCS 2007, Knoxville, Tennessee, USA ** M. Shankar et al., The EPICS Archiver Appliance, Proc. of ICALEPCS 2015, Melbourne, Australia
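Whatever the collection path, what ends up written to InfluxDB is its line protocol: a measurement name, comma-separated tags, fields and a timestamp. The measurement, tag and PV names below are invented for illustration.

```python
# Format one sample as an InfluxDB line-protocol point:
#   measurement,tag=... field=... timestamp
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

line = to_line_protocol("epics_pv", {"pv": "ECL:TEMP:01"},
                        {"value": 23.5}, 1500000000000000000)
```

A Grafana dashboard then simply queries such points back out of the database by measurement and tag.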
Poster THPHA181 [4.457 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA181 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
THPHA204 | CLARA Virtual Accelerator | ion, controls, simulation, EPICS | 1926 |
STFC Daresbury Laboratory is developing CLARA (Compact Linear Accelerator for Research and Applications), a novel FEL (Free Electron Laser) test facility focussed on the generation of ultra-short photon pulses of coherent light with high levels of stability and synchronisation. The main motivation for CLARA is to test new FEL schemes that can later be implemented on existing and future short-wavelength FELs. Particular focus will be on ultra-short pulse generation, pulse stability, and synchronisation with external sources. Knowledge gained from the development and operation of CLARA will inform the aims and design of a future UK XFEL. To aid in the development of high-level physics software, EPICS, a distributed controls framework, and ASTRA, a particle tracking code, have been combined to simulate the facility as a virtual accelerator.
Poster THPHA204 [1.241 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA204 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
FRAPL05 | Hardware Architecture of the ELI Beamlines Control and DAQ System | ion, controls, laser, hardware | 2000 |
The ELI Beamlines facility is a petawatt laser facility in the final construction and commissioning phase in Prague, Czech Republic. At the end of 2017, a first experiment will be performed. Eventually, four lasers will be used to control beamlines in six experimental halls. The central control system connects and controls more than 40 complex subsystems (lasers, beam transport, beamlines, experiments, facility systems, safety systems), with high demands on networking, synchronisation, data acquisition, and data processing. It relies on a network based on more than 15,000 fibres, which is used for standard technology control (POWERLINK over fibre and standard Ethernet), timing (White Rabbit) and dedicated high-throughput data acquisition. Technology control is implemented on standard industrial platforms (B&R), in combination with uTCA for more demanding applications. The data acquisition system is interconnected via InfiniBand, with an option to integrate OmniPath. Most control hardware installations are completed, and many subsystems are already successfully in operation. An overview and status will be given.
Talk as video stream: https://youtu.be/W2TF37cRWTo | ||
Slides FRAPL05 [5.051 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2017-FRAPL05 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||