| Paper | Title | Other Keywords | Page |
|---|---|---|---|
| WCO101 | Drivers and Software for MicroTCA.4 | controls, interface, Linux, LLRF | 1 |

The MicroTCA.4 crate standard provides a powerful electronic platform for digital and analogue signal processing. Besides excellent hardware modularity, it is the software reliability and flexibility, as well as the easy integration into existing software infrastructures, that will drive the widespread adoption of the new standard. The DESY MicroTCA.4 User Tool Kit (MTCA4U) comprises three main components: a Linux device driver, a C++ API for accessing the MicroTCA.4 devices, and a control system interface layer. The main focus of the tool kit is flexibility to enable fast development. The universal, expandable PCI Express driver and a register mapping library allow out-of-the-box operation of all MicroTCA.4 devices which carry firmware developed with the DESY FPGA board support package. The control system adapter provides callback functions to decouple the application code from the middleware layer. In this way, the same business logic can be used at different facilities without further modification.

Slides WCO101 [0.760 MB]
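The callback-based decoupling mentioned in the WCO101 abstract can be illustrated with a minimal sketch. The class and property names below are invented for illustration (and written in Python for brevity); they do not reflect the actual MTCA4U C++ API.

```python
# Hypothetical sketch: business logic registers plain callables with an
# adapter, and a middleware-specific binding (EPICS, DOOCS, ...) invokes
# them without the application code ever touching middleware interfaces.

class ControlSystemAdapter:
    """Minimal stand-in for a control system interface layer."""

    def __init__(self):
        self._read_callbacks = {}
        self._write_callbacks = {}

    def register_read(self, name, callback):
        self._read_callbacks[name] = callback

    def register_write(self, name, callback):
        self._write_callbacks[name] = callback

    # A middleware binding calls these two methods; the business logic
    # never sees which middleware is on the other side.
    def process_read(self, name):
        return self._read_callbacks[name]()

    def process_write(self, name, value):
        self._write_callbacks[name](value)


class LlrfController:
    """Facility-independent business logic holding one gain setting."""

    def __init__(self, adapter):
        self.gain = 1.0
        adapter.register_read("GAIN", lambda: self.gain)
        adapter.register_write("GAIN", self._set_gain)

    def _set_gain(self, value):
        self.gain = float(value)


adapter = ControlSystemAdapter()
controller = LlrfController(adapter)
adapter.process_write("GAIN", 2.5)   # what a middleware binding would do
print(adapter.process_read("GAIN"))  # -> 2.5
```

Porting the controller to another facility then means swapping the adapter binding, not touching `LlrfController`.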
| WCO201 | Computing Infrastructure for Online Monitoring and Control of High-throughput DAQ Electronics | detector, controls, GPU, software | 10 |

New imaging stations with high-resolution pixel detectors and other synchrotron instrumentation have ever-increasing sampling rates and put strong demands on the complete signal processing chain. Key to successful systems is a high-throughput computing platform consisting of DAQ electronics, PC hardware components, a communication layer, and system and data processing software components. Based on our experience building a high-throughput platform for real-time control of X-ray imaging experiments, we have designed a generalized architecture enabling rapid deployment of data acquisition systems. We have evaluated various technologies and come up with a solution which can easily be scaled up to several gigabytes per second of aggregated bandwidth while utilizing reasonably priced mass-market products. The core components of our system are an FPGA platform for ultra-fast data acquisition, InfiniBand interconnects, and GPU computing units. The presentation will give an overview of the hardware, interconnects, and the system-level software serving as the foundation for this high-throughput DAQ platform. This infrastructure is already successfully used at KIT's synchrotron ANKA.

Slides WCO201 [2.948 MB]
| WPO008 | An Extensible Equipment Control Library for Hardware Interfacing in the FAIR Control System | controls, power-supply, software, framework | 49 |

In the FAIR control system, the SCU (Scalable Control Unit, an industrial PC with a bus system for interfacing electronics) is the standard front-end controller for power supplies. The FESA framework is used to implement front-end software in a standardized way, to give the user a unified view of the installed equipment. As we were dealing with different power converters, and thus with different SCU slave card configurations, we had two main goals in mind: first, we wanted to be able to use common FESA classes for different types of power supplies, regardless of how they are operated or which interfacing hardware they use. Second, code dealing with the equipment specifics should not be buried in the FESA classes but instead be reusable for the implementation of other programs. To achieve this we built a set of libraries which interface the whole SCU functionality as well as the different types of power supplies in the field. It is now possible to easily integrate new power converters, and the SCU slave cards controlling them, into the existing equipment software and to build up test programs quickly.
| WPO019 | STARS: Current Development Status | controls, GUI, interface, status | 75 |

STARS (Simple Transmission and Retrieval System) [1] is an extremely simple and useful software package for small-scale control systems, and it runs on various operating systems. STARS consists of client programs (STARS clients) and a server program (STARS server). Each client is connected to the server via a TCP/IP socket, and the clients and the server communicate with text-based messages. STARS is used for various systems at the KEK Photon Factory (the beamline control system, the experimental hall access control system, the key handling system, etc.), and development of STARS (development of many kinds of STARS clients, interconnection of Web2c [2] and STARS, etc.) is still ongoing. We will describe the current development status of STARS.

[1] http://stars.kek.jp/
[2] http://adweb.desy.de/mcs/web2cToolkit/web2chome.htm

Slides WPO019 [2.604 MB]
| WPO031 | Diagnostics Test Stand Setup at PSI and its Controls in Light of the Future SwissFEL | controls, software, interface, diagnostics | 108 |

In order to provide high-quality electron beams, the future SwissFEL machine needs very precise and reliable beam diagnostics tools. At the Paul Scherrer Institute (PSI), the development of such tools is performed using the SwissFEL Injector Test Facility and a dedicated automated diagnostics test stand. The test stand is equipped not only with the major SwissFEL beam diagnostics elements (cameras, beam loss monitors, beam current monitors, etc.) but also with their controls and data processing hardware and software. The paper describes the diagnostics test stand's controls software components, which were designed in view of the future SwissFEL operational requirements.

Poster WPO031 [0.637 MB]
| TCO201 | Managing the FAIR Control System Development | controls, ion, project-management, storage-ring | 135 |

After years of careful preparation and planning, construction and implementation work for the new international accelerator complex FAIR (Facility for Antiproton and Ion Research) at GSI has started in earnest. The FAIR accelerators will extend the present GSI accelerator chain, which will then be used as injector, and will provide antiproton, ion, and rare isotope beams with unprecedented intensity and quality for a variety of research programs. The accelerator control system for the FAIR complex is presently being designed and developed by the GSI Controls group with a team of about 50 software and hardware developers, complemented by an international in-kind contribution from the FAIR member state Slovenia. This paper presents the requirements and constraints arising from being a large, international project, and focuses on the organizational and project management strategies and tools for the control system subproject. This includes project communication, design methodology, release cycle planning, testing strategies, and ensuring the technical integrity and coherence of the whole system during the full project phase.

Slides TCO201 [2.781 MB]
| TCO207 | Common Device Interface 2.0 | database, device-server, controls, interface | 147 |

The Common Device Interface (CDI) [1] is a popular device layer in TINE control systems [2]. Indeed, a de facto device server (more specifically, a 'property server') can be instantiated merely by supplying a hardware address database, somewhat reminiscent of an EPICS IOC. It has in fact become quite popular among users to do precisely this, although the original design intent anticipated embedding CDI as a hardware layer within a dedicated device server. When control system client applications and central services communicate directly with a CDI server, this places the burden of providing usable, viewable data in an efficient manner squarely on CDI and its address database. In its initial release, any modifications to this hardware database needed to be made on the file system used by the CDI device server. In this report we describe some of the many new features of CDI release 2.0, which have drawn on user and developer experience over the past eight years.

[1] 'Using the Common Device Interface in TINE', Duval and Wu, PCaPAC 2006
[2] http://tine.desy.de

Slides TCO207 [1.616 MB]
| TCO301 | Inexpensive Scheduling in FPGAs | FPGA, controls, distributed, interface | 150 |

In the new scheme for machine control used within the FAIR project, actions are distributed to front-end controllers (FECs) with absolute execution timestamps. The execution time must be both precise to the nanosecond and scheduled faster than a microsecond, requiring a hardware solution. Although the actions are scheduled at the FEC out of order, they must be executed in sorted order. The typical hardware approaches to implementing a priority queue (CAMs, shift registers, etc.) work well in ASIC designs, but must be implemented in expensive FPGA core logic. Conversely, the typical software approaches (heaps, calendar queues, etc.) are either too slow or too memory intensive. We present an approach which exploits the time-ordered nature of our problem to sort in constant time using only a few memory blocks.

Slides TCO301 [1.370 MB]
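The constant-time idea named in the TCO301 abstract can be modelled in software as a ring of time buckets (a simplified calendar queue): an action scheduled for time t is dropped into the bucket selected by t's upper timestamp bits, so insertion is a single indexed write, and draining walks the buckets in time order. This Python model is illustrative only, not the FPGA implementation; the slot width and ring size are assumed values.

```python
# Simplified calendar-queue model of time-ordered scheduling.
SLOT_NS = 256       # time resolution of one bucket (assumed value)
NUM_SLOTS = 1024    # buckets in the ring; horizon = SLOT_NS * NUM_SLOTS

buckets = [[] for _ in range(NUM_SLOTS)]

def schedule(timestamp_ns, action):
    """O(1) insertion: one index computation, one append."""
    buckets[(timestamp_ns // SLOT_NS) % NUM_SLOTS].append((timestamp_ns, action))

def drain(now_ns, until_ns):
    """Yield actions in time order as the execution window advances."""
    for slot in range(now_ns // SLOT_NS, until_ns // SLOT_NS + 1):
        bucket = buckets[slot % NUM_SLOTS]
        # In hardware the slot width is chosen fine enough that a bucket
        # rarely holds more than one entry; sorting here covers ties.
        for ts, action in sorted(bucket):
            yield ts, action
        bucket.clear()

# Actions arrive out of order but come back sorted by timestamp.
schedule(900, "kicker")
schedule(100, "ramp")
schedule(500, "trigger")
print([a for _, a in drain(0, 1000)])  # -> ['ramp', 'trigger', 'kicker']
```

The trade-off matches the abstract's claim: a few memory blocks hold the ring, while both insertion and per-slot lookup are constant-time operations.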
| TCO304 | Launching the FAIR Timing System with CRYRING | timing, controls, software, network | 155 |

During the past two years, significant progress has been made on the development of the General Machine Timing system for the upcoming FAIR facility at GSI. The prime features are time synchronization of 2000-3000 nodes using the White Rabbit Precision Time Protocol (WR-PTP), distribution of International Atomic Time (TAI) timestamps, and synchronized command and control of FAIR control system equipment. A White Rabbit network has been set up connecting parts of the existing facility, and a next version of the Timing Master has been developed. Timing Receiver nodes in the form factors Scalable Control Unit (the standard front-end controller for FAIR), VME, PCIe, and standalone have been developed. CRYRING is the first machine on the GSI/FAIR campus to be operated with this new timing system and serves as a test ground for the complete control system. Installation of equipment starts in late spring, followed by commissioning of equipment in summer 2014.

Slides TCO304 [7.818 MB]
| TCO305 | TCP/IP Control System Interface Development Using Microchip* Brand Microcontrollers | controls, interface, Ethernet, electronics | 158 |

Funding: This work was supported by the U.S. Department of Energy, Office of Nuclear Physics, under Contract No. DE-AC02-06CH11357.

Even as the diversity and capabilities of single-board computers (SBCs) like the Raspberry Pi and BeagleBoard continue to increase, low-level microprocessor solutions also offer the possibility of robust distributed control system interfaces. Since they can be smaller and cheaper than even the least expensive SBC, they are easily integrated directly onto printed circuit boards, either via direct mount or pre-installed headers. Ever-increasing flash memory capacity and processor clock speeds have enabled these types of microprocessors to handle even relatively complex tasks such as management of a full TCP/IP software and hardware stack. The purpose of this work is to demonstrate several different implementation scenarios wherein a computer control system can communicate directly with an off-the-shelf Microchip brand microcontroller and its associated peripherals. The microprocessor can act as a hardware-to-Ethernet communication bridge and provide services such as distributed reading and writing of analog and digital values, webpage serving, simple network monitoring, and others to any custom electronics solution.

* Microchip Technology Inc., www.microchip.com

Slides TCO305 [3.904 MB]
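A hardware-to-Ethernet bridge of the kind described in TCO305 typically exposes a small line-oriented ASCII protocol over TCP. The command grammar sketched below (`AI0?` to read analog input 0, `DO3=1` to set digital output 3) is invented for illustration and is not Microchip's actual firmware interface; the handler is shown host-side in Python, whereas the real parser would run on the microcontroller.

```python
# Stand-ins for the peripherals the bridge would expose: ADC readings
# and digital output latches (values here are illustrative).
analog_in = {0: 3.30, 1: 1.25}
digital_out = {}

def handle(command: str) -> str:
    """Parse one request line and return the reply the bridge would send."""
    command = command.strip()
    if command.startswith("AI") and command.endswith("?"):
        # Analog read: "AI<channel>?" -> value with two decimals.
        channel = int(command[2:-1])
        return f"{analog_in[channel]:.2f}"
    if command.startswith("DO") and "=" in command:
        # Digital write: "DO<channel>=<0|1>" -> acknowledgement.
        channel_str, value = command[2:].split("=")
        digital_out[int(channel_str)] = int(value)
        return "OK"
    return "ERR"  # anything else is rejected

print(handle("AI0?"))   # -> 3.30
print(handle("DO3=1"))  # -> OK
```

On the device, `handle` would sit inside the TCP stack's receive loop, one reply per request line, which keeps the control-system side as simple as a socket client.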
| FCO106 | The Role of the CEBAF Element Database in Commissioning the 12 GeV Accelerator Upgrade | database, controls, interface, software | 161 |

Funding: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177. The U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to this manuscript.

The CEBAF Element Database (CED) was first developed in 2010 as a resource to support model-driven configuration of the Jefferson Lab Continuous Electron Beam Accelerator Facility (CEBAF). Since that time, its uniquely flexible schema design, robust programming interface, and support for multiple concurrent versions have permitted it to evolve into a more broadly useful operational and control system tool. The CED played a critical role before and during the 2013 startup and commissioning of CEBAF following its 18-month-long shutdown and upgrade. Information in the CED about hardware components and their relations to one another facilitated a thorough Hot Checkout process involving more than 18,000 system checks. New software relies on the CED to generate EDM screens for operators on demand, thereby ensuring that the information on those screens is correct and up to date. The CED also continues to fulfill its original mission of supporting model-driven accelerator setup. Using the new ced2elegant and eDT (elegant Download Tool), accelerator physicists have proven able to compute and apply energy-dependent set points with greater efficiency than ever before.

Slides FCO106 [2.698 MB]
| FPO013 | Beam Data Logging System Based on NoSQL Database at SSRF | database, storage-ring, injection, synchrotron | 188 |

Funding: Supported by the Knowledge Innovation Program of the Chinese Academy of Sciences

To improve the accelerator reliability and stability, a beam data logging system was built at SSRF, based on the NoSQL database Couchbase. Couchbase is open-source software and can be used either as a document database or as a pure key-value database. The logging system stores beam parameters under predefined conditions. It is mainly used for fault diagnosis, beam parameter tracking, and automatic report generation. The details of the data logging system will be reported in this paper.
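The logging scheme described in FPO013 (store a snapshot of beam parameters when a predefined condition fires) maps naturally onto a document store: one JSON document per event, keyed by timestamp. In this sketch a plain dict stands in for the Couchbase bucket, and the parameter names and trigger threshold are illustrative assumptions, not SSRF configuration.

```python
import json

bucket = {}  # stand-in for a Couchbase bucket (key -> JSON document)

def log_if_triggered(timestamp, params, current_threshold_ma=150.0):
    """Store a snapshot only when the beam current falls below threshold."""
    if params["beam_current_mA"] < current_threshold_ma:
        key = f"beamlog::{timestamp}"  # timestamp-based document key
        bucket[key] = json.dumps({"ts": timestamp, **params})
        return key
    return None  # condition not met: nothing logged

log_if_triggered(1400000000, {"beam_current_mA": 120.0, "lifetime_h": 18.2})
log_if_triggered(1400000060, {"beam_current_mA": 240.0, "lifetime_h": 20.1})
print(len(bucket))  # -> 1 (only the below-threshold snapshot was stored)
```

Because each snapshot is a self-describing document, later fault diagnosis or report generation can query by key range without a fixed relational schema.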
| FPO014 | New Data Archive System for SPES Project Based on EPICS RDB Archiver with PostgreSQL Backend | EPICS, controls, database, network | 191 |

The SPES project [1] is an ISOL facility under construction at INFN, Laboratori Nazionali di Legnaro, which requires the integration of the accelerator systems currently in use with the new line composed of the primary beam and the ISOL target. As a consequence, a migration from the current control system to a new one based on EPICS [2] is mandatory to realize a distributed control network for the new facility. One of the first implementations realized for this purpose is the Archiver System, an important service required for experiments. Drawing on information and experience provided by other laboratories, an EPICS Archive System [3] based on PostgreSQL has been implemented to provide this service. Preliminary tests were done with dedicated hardware, following the project requirements. After these tests, used to determine a good configuration for the database and the EPICS application, the system will be moved into production, where it will be integrated with the first subsystem upgraded to EPICS. Dedicated customizations have been made to the application to provide a simple user experience in managing and interacting with the archiver system.

[1] https://web.infn.it/spes
[2] http://www.aps.anl.gov/epics
[3] http://sourceforge.net/apps/trac/cs-studio/wiki/RDBArchive
| FPO018 | Setup and Diagnostics of Motion Control at ANKA Beamlines | controls, software, TANGO, interface | 201 |

Precise, high-resolution motion control is one of the necessary conditions for high-quality measurements in beamline experiments. At a typical ANKA beamline, up to one hundred actuator axes work together to align and shape the beam, to select the beam energy, and to position probes. Some experiments need additional motion axes supported by transportable controllers plugged temporarily into the local beamline control system. In terms of process control, all the analog and digital signals from different sources have to be verified, leveled, and interfaced to the motion controllers. They have to be matched and calibrated in the control system's configuration files to real physical quantities, which provide the input for further data processing. A set of hardware and software tools and methods developed at ANKA over the years is presented in this paper.

Poster FPO018 [1.608 MB]
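The calibration step mentioned in FPO018 usually reduces to a per-axis linear map from controller units (encoder counts) to a physical quantity. The axis names and the slope/offset values below are illustrative assumptions, not ANKA configuration data.

```python
# Hypothetical per-axis calibration table, as it might appear after being
# loaded from a control system configuration file.
calibration = {
    # axis name: (counts per millimetre, offset in millimetres)
    "sample_x": (2000.0, -5.0),
    "slit_gap": (800.0, 0.0),
}

def counts_to_position(axis: str, counts: int) -> float:
    """Convert raw encoder counts to a calibrated position in mm."""
    counts_per_mm, offset_mm = calibration[axis]
    return counts / counts_per_mm + offset_mm

def position_to_counts(axis: str, position_mm: float) -> int:
    """Inverse map, used when commanding a move to a physical position."""
    counts_per_mm, offset_mm = calibration[axis]
    return round((position_mm - offset_mm) * counts_per_mm)

print(counts_to_position("sample_x", 30000))  # -> 10.0
print(position_to_counts("sample_x", 10.0))   # -> 30000
```

Keeping both directions of the map in one place is what lets downstream data processing work purely in physical quantities, as the abstract describes.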
| FCO202 | OpenGL-Based Data Analysis in Virtualized Self-Service Environments | software, network, GPU, synchrotron | 237 |

Funding: Federal Ministry of Education and Research, Germany

Modern data analysis applications for 2D/3D data samples provide complex visual output features which are often based on OpenGL, a multi-platform API for rendering vector graphics. They demand special computing workstations with corresponding CPU/GPU power, enough main memory, and fast network interconnects for performant remote data access. For this reason, users depend heavily on the availability of free workstations, both in time and in location. The provision of virtual machines (VMs) accessible via a remote connection could avoid this inflexibility. However, the automatic deployment, operation, and remote access of OpenGL-capable VMs with professional visualization applications is a non-trivial task. In this paper, we discuss a concept for a flexible analysis infrastructure that will be part of the ASTOR project ("Arthropod Structure revealed by ultra-fast Tomography and Online Reconstruction"). We present an Analysis-as-a-Service (AaaS) approach based on the on-demand allocation of VMs with dedicated GPU cores and a corresponding analysis environment, providing a cloud-like analysis service for scientific users.

Slides FCO202 [1.126 MB]