Paper | Title | Other Keywords | Page |
---|---|---|---|
MOPHA014 | Building and Packaging EPICS Modules With Conda | EPICS, factory, software, Windows | 223 |
Conda is an open-source package, dependency and environment management system. It runs on Windows, macOS and Linux and can package and distribute software for any language (Python, R, Ruby, C/C++…). It allows one to build software in a clean and repeatable way. EPICS is made of many different modules that need to be compiled together. Conda makes it easy to define and track dependencies between EPICS Base and the different modules (and their versions). Anaconda’s new compilers allow conda to build binaries that can run on any modern Linux distribution (x86_64). Not relying on any specific OS packages removes issues that can arise when upgrading the OS. At ESS, conda packages are built using GitLab CI and pushed to a local channel on our Artifactory server. Using conda makes it easy for users to install the EPICS modules they want, where they want (locally on a machine, in a Docker container for testing…). All dependencies and requirements are handled by conda. Conda environments make it possible to work with different versions on the same machine without any conflict.
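As an illustration of the approach the abstract describes (not taken from the paper), a conda-build recipe for an EPICS module might look like the sketch below. The package name, version, and source details are hypothetical; only the overall `meta.yaml` structure follows conda-build conventions.

```yaml
# Hypothetical conda-build recipe (meta.yaml) for an EPICS module.
package:
  name: epics-asyn          # invented package name
  version: "4.41.0"         # invented version

source:
  git_url: https://github.com/epics-modules/asyn.git  # assumed source location
  git_rev: R4-41

requirements:
  build:
    - "{{ compiler('c') }}"    # Anaconda compilers -> binaries portable across Linux distros
    - "{{ compiler('cxx') }}"
  host:
    - epics-base               # dependency on EPICS Base tracked by conda
  run:
    - epics-base

about:
  license: EPICS Open License
```

Because the recipe names `epics-base` in both `host` and `run`, conda resolves and pins the matching EPICS Base build automatically when a user installs the module into any environment.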
Poster MOPHA014 [0.847 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA014 | ||
About • | paper received ※ 27 September 2019 paper accepted ※ 08 October 2019 issue date ※ 30 August 2020 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
MOPHA034 | Software Architecture for Next Generation Beam Position Monitors at Fermilab | software, data-acquisition, hardware, interface | 275 |
Funding: This work was supported by the DOE contract No. DE-AC02-07CH11359 to the Fermi Research Alliance LLC. The Fermilab Accelerator Division / Instrumentation Department develops Beam Position Monitor (BPM) systems in-house to support its sprawling accelerator complex. Two new BPM systems have been deployed, and another upgraded, over the last two years. These systems are based on a combination of VME and Gigabit-Ethernet-connected hardware and a common Linux-based embedded software platform with modular components. The architecture of this software platform and the considerations for adapting it to future machines or upgrade projects will be described.
Poster MOPHA034 [1.424 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA034 | ||
About • | paper received ※ 30 September 2019 paper accepted ※ 08 October 2019 issue date ※ 30 August 2020 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
MOPHA044 | Development of Ethernet Based Real-Time Applications in Linux Using DPDK | network, feedback, Ethernet, real-time | 297 |
In the last decade, Ethernet has become the most popular way to interface hardware devices and instruments to the control system. Lower cost per connection, reuse of existing network infrastructure, very high data rates, good noise rejection over long cables and, finally, easier long-term maintainability of the software are the main reasons for its success. In addition, the high-frequency trading community’s need for low-latency systems has boosted the development of new strategies, such as CPU isolation, to run real-time applications in plain Linux with determinism of the order of microseconds. DPDK (Data Plane Development Kit), an open-source software solution mainly sponsored by Intel, addresses the demand for high determinism over Ethernet by bypassing the Linux network stack and providing a friendlier framework for developing tasks that can even saturate a 100 Gbit connection. Benchmarks of the real-time performance and preliminary results of employing DPDK in the acquisition of beam position monitors for the fast orbit feedback of the Elettra storage ring will be presented.
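The CPU-isolation strategy the abstract mentions can be sketched independently of DPDK. The toy below pins the current process to a single core using Python’s standard-library binding to the Linux scheduler; DPDK itself is a C framework, and nothing here is DPDK API.

```python
# Illustration only: the CPU-pinning step behind low-latency Linux tasks,
# shown with Python's stdlib binding to sched_setaffinity (Linux-specific).
# A production setup would also isolate the core from the kernel scheduler
# (e.g. via the isolcpus boot parameter) before pinning work onto it.
import os

def pin_to_one_cpu() -> int:
    """Restrict the current process to a single CPU from its allowed set."""
    allowed = os.sched_getaffinity(0)   # CPUs this process may run on
    target = min(allowed)               # pick one (e.g. an isolated core)
    os.sched_setaffinity(0, {target})   # schedule only on that CPU from now on
    return target

if __name__ == "__main__":
    cpu = pin_to_one_cpu()
    print(f"pinned to CPU {cpu}")
```

Once pinned (and with the core shielded from other work), the task avoids cache and scheduler jitter, which is what makes microsecond-level determinism achievable in plain Linux.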
Poster MOPHA044 [2.626 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA044 | ||
About • | paper received ※ 29 September 2019 paper accepted ※ 08 October 2019 issue date ※ 30 August 2020 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
MOPHA106 | FGC3.2: A New Generation of Embedded Controls Computer for Power Converters at CERN | controls, software, embedded, hardware | 468 |
Modern power converters (power supplies) at CERN are controlled by devices known as Function Generator/Controllers (FGCs), which are embedded computer systems providing function generation, current and field regulation, and state control. FGCs were originally conceived for the LHC in the early 2000s, though later generations are now increasingly being deployed in the accelerators of the LHC Injector Chain (Linac4, Booster, Proton Synchrotron and SPS) to replace obsolete equipment. A new generation of FGC, known as the FGC3.2, is currently in development; it will provide for the evolving needs of the CERN accelerator complex and additionally be supplied to other HEP laboratories through CERN’s Knowledge and Technology Transfer program. This paper describes the evolution of FGCs, summarizes tests performed to evaluate candidate components for the FGC3.2, and details the final hardware and software architectures that were chosen. The new controller will make use of a multi-core ARM-based system-on-chip (SoC) running an embedded Linux operating system, in contrast to earlier generations, which combined a microcontroller and DSP with software running on ‘bare metal’.
Poster MOPHA106 [2.986 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA106 | ||
About • | paper received ※ 27 September 2019 paper accepted ※ 10 October 2019 issue date ※ 30 August 2020 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
MOPHA115 | Code Generation Tools and Editor for Memory Maps | hardware, software, interface, GUI | 493 |
Cheburashka, a toolset created in the Radio Frequency Group at CERN, has become an essential part of our hardware and software developments. Due to changing requirements, this toolset has recently been rewritten in C++ and Python. A hardware developer, using the graphical editor, defines a memory map, which is subsequently used to ensure consistency between software and hardware. The memory-map file is an input for a variety of tools used by the hardware engineers, such as VHDL code generators. In addition to aiding firmware development, our tools generate C++ wrapper libraries. The wrapper provides a simple interface on top of a Linux device driver to read and write registers, exposing memory-map nodes in a hierarchical way and performing all low-level bit manipulations and checks internally. To interact with the hardware, software running on a front-end computer is needed; Cheburashka allows us to generate FESA (Front-End Software Architecture) classes with parts of the operational interface already present. This paper describes the evolution of the graphical editor and of the Python tools used for C++ code generation, along with their main features.
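The memory-map-driven generation flow can be sketched in a few lines of Python. Everything below is invented for illustration (register names, offsets, and the output shape); Cheburashka’s actual file format and generated interfaces are not described in the abstract.

```python
# Hypothetical sketch of memory-map-driven C++ code generation: a register
# map is described as data, and a C++ accessor struct is emitted from it,
# keeping hardware and software views of the map consistent by construction.

REGISTERS = [  # (name, byte offset, writable?) -- all values invented
    ("status",  0x00, False),
    ("control", 0x04, True),
]

def emit_cpp(class_name: str, registers) -> str:
    """Emit a C++ struct with one read accessor per register, plus a
    setter for writable registers (offsets converted to 32-bit words)."""
    lines = [f"struct {class_name} {{", "  volatile uint32_t* base;"]
    for name, offset, writable in registers:
        lines.append(
            f"  uint32_t {name}() const {{ return base[{offset // 4}]; }}")
        if writable:
            lines.append(
                f"  void set_{name}(uint32_t v) {{ base[{offset // 4}] = v; }}")
    lines.append("};")
    return "\n".join(lines)

print(emit_cpp("MyBoard", REGISTERS))
```

Because the single map definition drives both the VHDL and the C++ sides, a register renamed or moved in the map changes both generated artifacts at once, which is the consistency guarantee the abstract describes.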
Poster MOPHA115 [0.708 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA115 | ||
About • | paper received ※ 26 September 2019 paper accepted ※ 10 October 2019 issue date ※ 30 August 2020 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
MOPHA134 | PyDM - Status Update | controls, Windows, framework, EPICS | 536 |
PyDM (Python Display Manager) is a Python- and Qt-based framework for building user interfaces for control systems. It provides a no-code, drag-and-drop system for making simple screens, as well as a straightforward Python framework for building complex applications. In this brief presentation we will discuss the state of PyDM and the new functionality added in the last year of development, including full support for EPICS pvAccess and other structured data sources, as well as the features targeted for release in 2020.
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA134 | ||
About • | paper received ※ 30 September 2019 paper accepted ※ 10 October 2019 issue date ※ 30 August 2020 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
MOPHA156 | The Linux Device Driver Framework for High-Throughput Lossless Data Streaming Applications | software, interface, neutron, FPGA | 602 |
Funding: This work was supported by the U.S. Department of Energy under contract DE-AC05-00OR22725. Many applications in experimental physics facilities require custom hardware solutions to control process parameters or to acquire data at high rates with high integrity. These hardware solutions typically require custom software implementations. The neutron scattering detectors at the Spallation Neutron Source at ORNL* implement custom protocols over optical fiber connected to a PCI-Express-based read-out board. A dedicated kernel device driver provides an interface to the software application and must be able to sustain data bursts from a pulsed source while acquiring data over long periods of time. The same optical channel is also used as a low-latency communication link to the detector electronics for configuration and real-time fault detection. This paper presents the design of a Linux device driver, implementation challenges in a low-latency, high-throughput setup, benchmarks from real use cases, and the importance of a clean application programming interface for seamless integration into control systems. This software was developed as a generic framework and has been extended beyond neutron data acquisition; it is suited to diverse applications, where it allows for rapid FPGA development. *Oak Ridge National Laboratory
Poster MOPHA156 [4.163 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA156 | ||
About • | paper received ※ 02 October 2019 paper accepted ※ 10 October 2019 issue date ※ 30 August 2020 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
MOPHA167 | Cloud Computing Platform for High-level Physics Applications Development | controls, software, LEBT, EPICS | 629 |
Funding: Work supported by the U.S. Department of Energy Office of Science under Cooperative Agreement DE-SC0000661. To facilitate software development for high-level applications on particle accelerators, we proposed and prototyped a computing platform called ’phyapps-cloud’. Built on a technology stack composed of Python, JavaScript, Docker, and web services, such a system greatly decouples deployment from development: the users (application developers) need only focus on feature development, working on the infrastructure served by ’phyapps-cloud’, while the cloud service provider (which develops and deploys ’phyapps-cloud’) can focus on developing the infrastructure itself. In this contribution, the development details will be addressed, together with a demonstration of developing a simple Python script on this platform.
Poster MOPHA167 [1.442 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-MOPHA167 | ||
About • | paper received ※ 30 September 2019 paper accepted ※ 10 October 2019 issue date ※ 30 August 2020 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
TUCPL02 | Processing System Design for Implementing a Linear Quadratic Gaussian (LQG) Controller to Optimize the Real-Time Correction of High Wind-Blown Turbulence | controls, software, real-time, optics | 761 |
Funding: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, with document release number LLNL-PROC-792238. LLNL has developed a low-latency, real-time, closed-loop, woofer-tweeter Adaptive Optics Control (AOC) system with a feedback-control update rate of greater than 16 kHz. The Low-Latency Adaptive Mirror System (LLAMAS) is based on controller software previously developed for the successful Gemini Planet Imager (GPI) instrument, which had an update rate of 1 kHz. By tuning the COTS operating system, tuning and upgrading the processing hardware, and adapting existing software, we have the computing power to implement a Linear-Quadratic-Gaussian (LQG) controller in real time. The implementation of the LQG leverages hardware optimizations developed for low-latency computing and the video game industry, such as fused multiply-add accelerators and optimized fast Fourier transforms. We used the Intel Math Kernel Library (MKL) to implement the high-order LQG controller with a batch-mode execution of 576 6×6 matrix multiplies. We will share our progress, lessons learned, and our plans to further optimize performance by tuning high-order LQG parameters.
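The batch-mode step the abstract describes, 576 independent 6×6 matrix multiplies per control update, can be sketched with NumPy’s batched matrix product. The paper uses Intel MKL’s batched GEMM from C; this is not that API, and the interpretation of the operands as gain blocks and state vectors is an assumption for illustration.

```python
# Sketch of batch-mode execution of 576 independent 6x6 matrix multiplies,
# the shape quoted in the abstract, using NumPy's batched matmul operator.
# (The paper's real-time code uses MKL's batched GEMM from C, not NumPy.)
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((576, 6, 6))   # e.g. per-mode LQG gain blocks (invented)
x = rng.standard_normal((576, 6, 1))   # e.g. per-mode state vectors (invented)

# One call dispatches all 576 multiplies; batching over small fixed-size
# matrices is what lets a vectorized BLAS keep the FMA units saturated.
y = A @ x
print(y.shape)   # (576, 6, 1)
```

Grouping many tiny multiplies into one batched call amortizes dispatch overhead, which matters at a >16 kHz update rate where each control cycle has well under 62.5 µs of budget.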
Slides TUCPL02 [2.521 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-TUCPL02 | ||
About • | paper received ※ 03 October 2019 paper accepted ※ 02 October 2020 issue date ※ 30 August 2020 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
WEAPP03 | Converting From NIS to Redhat Identity Management | network, controls, database, interface | 871 |
Funding: This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under contract DE-AC05-06OR23177. The Jefferson Lab (JLab) accelerator controls network has transitioned to a new authentication and directory-service infrastructure. The new system uses the Red Hat Identity Manager (IdM) as a single integrated front end to the Lightweight Directory Access Protocol (LDAP) and as a replacement for NIS and a stand-alone Kerberos authentication service. This system allows for integration of authentication across Unix and Windows environments and across different JLab computing environments, including across firewalled networks. The decision-making process, conversion steps, issues and solutions will be discussed.
Slides WEAPP03 [3.898 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEAPP03 | ||
About • | paper received ※ 01 October 2019 paper accepted ※ 09 October 2019 issue date ※ 30 August 2020 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
WECPL05 | Migrating to Tiny Core Linux in a Control System | controls, Windows, hardware, embedded | 920 |
The ISIS Accelerator Controls (IAC) group currently uses a version of Microsoft Windows Embedded as its chosen operating system (OS) for control of front-line hardware. Upgrading to the current version of the Windows Embedded OS is not possible without also upgrading hardware or changing the way software is delivered to the hardware platform: the memory requirements are simply too large for this to be a viable option. A new alternative was sought, and that process led to Tiny Core Linux being selected due to its frugal memory requirements and its ability to run from a RAM disk. This paper describes the process of migrating from Windows Embedded Standard 2009 to Tiny Core Linux as the OS platform for IAC embedded hardware.
Slides WECPL05 [1.455 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WECPL05 | ||
About • | paper received ※ 27 September 2019 paper accepted ※ 09 October 2019 issue date ※ 30 August 2020 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||
WEPHA078 | A Virtualized Beamline Control and DAQ Environment at PAL | framework, software, controls, hardware | 1273 |
At least three different computers are used in each beamline at PAL: the first for the EPICS IOC, the second for device control and data acquisition (DAQ), and the third for analyzing data for users. Stable beamline control has been possible by maintaining the policy of separating the applications listed above from the hardware layer. As data volumes grow and the resulting data throughput increases, demand for replacement with highly efficient computers has increased. Advances in virtualization technology and robust computer performance have enabled a policy shift from hardware-level isolation to software-level isolation without replacing all the computers. DAQ and analysis software using the Bluesky Data Collection Framework have been implemented on this virtualized OS. In this presentation, we introduce the DAQ system implemented by this virtualization method.
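The Bluesky framework mentioned above is built around message-driven acquisition plans. The toy below illustrates that pattern, generators yield commands and a small engine executes them, in plain Python; it is not the Bluesky API, and the command names and detector model are invented.

```python
# Illustration of the plan/run-engine pattern that Bluesky popularized:
# an acquisition "plan" is a generator of messages, and an "engine"
# consumes them and drives the hardware. (Toy code, not the Bluesky API.)

def scan_plan(motor_positions):
    """A plan: step a motor through positions, reading after each move."""
    for pos in motor_positions:
        yield ("move", pos)
        yield ("read", None)

def run(plan, detector):
    """A minimal run engine: execute messages, collect detector readings."""
    position, data = None, []
    for command, arg in plan:
        if command == "move":
            position = arg                    # stand-in for moving a motor
        elif command == "read":
            data.append(detector(position))   # stand-in for a DAQ read
    return data

# Simulated detector whose signal depends on motor position.
readings = run(scan_plan([0, 1, 2]), detector=lambda p: p * p)
print(readings)   # [0, 1, 4]
```

Separating the plan (what to do) from the engine (how to talk to hardware) is what lets the same acquisition logic run against real devices, simulations, or, as in this paper, virtualized hosts.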
Poster WEPHA078 [1.152 MB] | ||
DOI • | reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2019-WEPHA078 | ||
About • | paper received ※ 29 September 2019 paper accepted ※ 20 October 2019 issue date ※ 30 August 2020 | ||
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | ||