Keyword: software
Paper Title Other Keywords Page
WCO102 Controls Middleware for FAIR controls, framework, CORBA, operation 4
 
  • V. Rapp
    GSI, Darmstadt, Germany
  • W. Sliwinski
    CERN, Geneva, Switzerland
 
  With the FAIR complex, the control systems at GSI will face new scalability challenges due to the significant amount of new hardware coming with the new facility. Although the old systems have proven to be sustainable and reliable, they are based on technologies that became obsolete years ago. During the FAIR construction period and the associated shutdown, GSI will replace multiple components of the control system. The successful integration of CERN's FESA and LSA frameworks moved GSI to extend the cooperation to the controls middleware, especially the Remote Device Access (RDA) and Java API for Parameter Control (JAPC) frameworks. However, the current version of RDA is based on CORBA technology, which itself can be considered obsolete. Consequently, it will be replaced by a newer version (RDA3), which will be based on ZeroMQ and will offer a new, improved API based on the experience from previous usage. The collaboration between GSI and CERN shows that the new RDA is capable of complying with the requirements of both environments. In this paper we present the general architecture of the new RDA and describe its integration into the GSI control system.
Slides WCO102 [0.323 MB]
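The abstract above describes RDA's device/property access model (get, set, subscribe) without showing it. The following is a minimal single-process sketch of such an API; all names (`DeviceServer`, `Magnet/Current`) are hypothetical, and the real RDA3 distributes these calls over ZeroMQ between clients and front-ends rather than dispatching them in-process.

```python
# Hypothetical sketch of an RDA-style device/property access model.
# The real middleware runs these operations over a network transport.

class DeviceServer:
    """Holds named properties and notifies subscribers on every set()."""

    def __init__(self):
        self._properties = {}
        self._subscribers = {}   # property name -> list of callbacks

    def set(self, prop, value):
        self._properties[prop] = value
        for callback in self._subscribers.get(prop, []):
            callback(prop, value)

    def get(self, prop):
        return self._properties[prop]

    def subscribe(self, prop, callback):
        self._subscribers.setdefault(prop, []).append(callback)


# Usage: a client subscribes to a property and sees updates on set().
server = DeviceServer()
updates = []
server.subscribe("Magnet/Current", lambda p, v: updates.append((p, v)))
server.set("Magnet/Current", 120.5)
print(server.get("Magnet/Current"))   # 120.5
print(updates)                        # [('Magnet/Current', 120.5)]
```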
 
WCO201 Computing Infrastructure for Online Monitoring and Control of High-throughput DAQ Electronics detector, controls, GPU, hardware 10
 
  • S.A. Chilingaryan, C.M. Caselle, T. Dritschler, T. Faragó, A. Kopmann, U. Stevanovic, M. Vogelgesang
    KIT, Eggenstein-Leopoldshafen, Germany
 
  New imaging stations with high-resolution pixel detectors and other synchrotron instrumentation have ever-increasing sampling rates and put strong demands on the complete signal processing chain. Key to successful systems is a high-throughput computing platform consisting of DAQ electronics, PC hardware components, a communication layer, and system and data processing software components. Based on our experience building a high-throughput platform for real-time control of X-ray imaging experiments, we have designed a generalized architecture enabling rapid deployment of data acquisition systems. We have evaluated various technologies and arrived at a solution which can easily be scaled up to several gigabytes per second of aggregated bandwidth while utilizing reasonably priced mass-market products. The core components of our system are an FPGA platform for ultra-fast data acquisition, InfiniBand interconnects and GPU computing units. The presentation will give an overview of the hardware, interconnects, and the system-level software serving as the foundation for this high-throughput DAQ platform. This infrastructure is already successfully used at KIT's synchrotron ANKA.
Slides WCO201 [2.948 MB]
 
WCO207 A New Data Acquisition Software and Analysis for Accurate Magnetic Field Integral Measurement at BNL Insertion Devices Laboratory controls, data-acquisition, insertion-device, EPICS 28
 
  • M. Musardo, D.A. Harder, P. He, C.A. Kitegi, T. Tanabe
    BNL, Upton, New York, USA
 
  New data acquisition software has been developed in LabVIEW to measure the first and second magnetic field integral distributions of Insertion Devices (IDs). The main characteristics of the control system and the control interface program are presented. The new system has the advantage of making automatic and synchronized measurements as a function of the gap and/or phase of an ID. The automatic gap and phase control relies on real-time communication based on the EPICS system, and the eight servomotors of the measurement system are controlled using a Delta Tau GeoBrick PMAC-2. The methods and the measurement techniques are described, and the performance of the system together with the recent results will be discussed.
Slides WCO207 [8.786 MB]
 
WPO003 Setup of a History Storage Engine Based on a Non-Relational Database at ELSA database, controls, operation, interface 34
 
  • D. Proft, F. Frommberger, W. Hillert
    ELSA, Bonn, Germany
 
  The electron stretcher facility ELSA provides a beam of unpolarized and polarized electrons of up to 3.2 GeV energy to external hadron physics experiments. Its in-house developed distributed computer control system is able to provide real-time beam diagnostics as well as steering tasks in one homogeneous environment. Recently it was ported from HP-UX running on three HP workstations to a single Linux personal computer. This upgrade to powerful PC hardware opened up the way for the development of a new archive engine with a NoSQL database backend based on Hypertable. The system is capable of recording every parameter change at any given time. Besides the visualization in a newly developed graphical history data browser, the data can be exported to several programs, for example a diff-like tool to compare and recall settings of the accelerator. This contribution will give details on recent improvements of the control system and the setup of the history storage engine.
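The core idea of such a history engine, recording every parameter change and answering "what was the value at time t" plus diff-style comparisons, can be sketched with an append-only, per-parameter time series. This is an illustrative in-memory model only (the real backend is Hypertable); all names are invented.

```python
import bisect

class HistoryStore:
    """Append-only history: every parameter change is recorded with its
    timestamp; the value at any time is found by binary search."""

    def __init__(self):
        self._times = {}    # parameter -> sorted list of timestamps
        self._values = {}   # parameter -> values parallel to _times

    def record(self, param, timestamp, value):
        self._times.setdefault(param, []).append(timestamp)
        self._values.setdefault(param, []).append(value)

    def value_at(self, param, timestamp):
        # Last change at or before the requested time, or None.
        i = bisect.bisect_right(self._times[param], timestamp) - 1
        return self._values[param][i] if i >= 0 else None

    def diff(self, t1, t2):
        """Diff-like comparison of all recorded parameters between two times."""
        return {p: (self.value_at(p, t1), self.value_at(p, t2))
                for p in self._times
                if self.value_at(p, t1) != self.value_at(p, t2)}

store = HistoryStore()
store.record("dipole.current", 100, 1.20)
store.record("dipole.current", 200, 1.35)
store.record("rf.phase", 150, 30.0)
print(store.value_at("dipole.current", 180))  # 1.2
print(store.diff(120, 250))  # {'dipole.current': (1.2, 1.35), 'rf.phase': (None, 30.0)}
```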
 
WPO004 News from the FAIR Control System under Development controls, timing, framework, ion 37
 
  • R. Bär, D.H. Beck, C. Betz, J. Fitzek, S. Jülicher, U. Krause, M. Thieme, R. Vincelli
    GSI, Darmstadt, Germany
 
  The control system for the FAIR (Facility for Antiproton and Ion Research) accelerator facility is presently under development and implementation. The FAIR accelerators will extend the present GSI accelerator chain, which will then be used as an injector, and provide antiproton, ion, and rare isotope beams with unprecedented intensity and quality for a variety of research programs. This paper briefly summarizes the general status of the FAIR project and focuses on the progress of the control system design and its implementation. The poster presents the general system architecture and updates on the status of major building blocks of the control system. We highlight the control system implementation efforts for CRYRING, a new accelerator presently under recommissioning at GSI, which will serve as a test ground for the complete control system stack and for evaluation of the new controls concepts.
Slides WPO004 [1.039 MB]
 
WPO006 FESA3 Integration in GSI for FAIR site, framework, controls, timing 43
 
  • S. Matthies, H. Bräuning, A. Schwinn
    GSI, Darmstadt, Germany
  • S. Deghaye
    CERN, Geneva, Switzerland
 
  GSI decided to use FESA (Front-End Software Architecture) as the front-end software toolkit for the FAIR accelerator complex. FESA was originally developed at CERN. Since 2010, FESA3, a revised version of FESA, has been developed in the frame of an international collaboration between CERN and GSI. During the development of FESA3, emphasis was placed on the possibility of flexible customization for different environments and on providing site-specific extensions to allow adaptation by the contributors. GSI is the first institute other than CERN to integrate FESA3 into its control system environment. Some of the necessary preparations have already been performed to establish FESA3 at GSI: examples are RPM packaging for multiple installations, support for site-specific properties and data types, and a first integration of the White Rabbit based timing system. Further developments, such as the integration of a site-specific database or the full integration of GSI's beam process concept for FAIR, will follow.
 
WPO008 An Extensible Equipment Control Library for Hardware Interfacing in the FAIR Control System controls, power-supply, hardware, framework 49
 
  • M. Wiebel
    GSI, Darmstadt, Germany
 
  In the FAIR control system, the SCU (Scalable Control Unit, an industrial PC with a bus system for interfacing electronics) is the standard front-end controller for power supplies. The FESA framework is used to implement front-end software in a standardized way, to give the user a unified view of the installed equipment. As we were dealing with different power converters, and thus with different SCU slave card configurations, we had two main goals in mind: first, we wanted to be able to use common FESA classes for different types of power supplies, regardless of how they are operated or which interfacing hardware they use; second, code dealing with the equipment specifics should not be buried in the FESA classes but instead be reusable for the implementation of other programs. To achieve this, we built a set of libraries which interface the whole SCU functionality as well as the different types of power supplies in the field. It is now possible to easily integrate new power converters, and the SCU slave cards controlling them, into the existing equipment software and to build test programs quickly.
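The design described above, equipment-specific code behind a common interface so that FESA classes and test programs stay hardware-agnostic, is a classic abstraction-layer pattern. A minimal sketch follows; the class and register names are invented for illustration (the real library is C++ against SCU hardware).

```python
from abc import ABC, abstractmethod

class PowerSupply(ABC):
    """Common interface: equipment-agnostic code programs against this,
    not against a specific converter or SCU slave card."""

    @abstractmethod
    def set_current(self, amps): ...

    @abstractmethod
    def read_current(self): ...

class SlaveCardTypeA(PowerSupply):
    """Hypothetical driver for one SCU slave-card configuration."""
    def __init__(self):
        self._setpoint = 0.0
    def set_current(self, amps):
        self._setpoint = amps        # would write a slave-card register
    def read_current(self):
        return self._setpoint        # would read back over the SCU bus

class SlaveCardTypeB(PowerSupply):
    """A different converter type behind the same interface."""
    def __init__(self):
        self._raw = 0
    def set_current(self, amps):
        self._raw = int(amps * 1000)   # e.g. a milliamp register
    def read_current(self):
        return self._raw / 1000

def apply_setpoint(supply: PowerSupply, target):
    """Reusable, equipment-agnostic code (FESA class or test program)."""
    supply.set_current(target)
    return supply.read_current()

print(apply_setpoint(SlaveCardTypeA(), 12.5))  # 12.5
print(apply_setpoint(SlaveCardTypeB(), 12.5))  # 12.5
```

Adding a new converter then means adding one driver class; the FESA-side code is unchanged.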
 
WPO009 An Optics-Suite and -Server for the European XFEL optics, controls, interface, emittance 52
 
  • S.M. Meykopff
    DESY, Hamburg, Germany
 
  A software library for optics calculations was developed for the European XFEL project. The calculations will be done with ELEGANT as the backend. The new software is available as a shared library as well as a standalone server in the control system. It creates and analyses all input and output files and allows different optics to be used at the same time. The lattice is derived from an Excel file which is also used for machine installation purposes. Access from the control system uses a TINE interface; a MATLAB object offers an easy programming interface.
Poster WPO009 [0.417 MB]
 
WPO012 The EMBL Beamline Control Framework BICFROCK controls, PLC, LabView, operation 60
 
  • U. Ristau, S. Fiedler, A. Kolozhvari
    EMBL, Hamburg, Germany
 
  EMBL hosts three beamlines at the PETRA synchrotron at DESY. The control of the beamlines is based on a LabVIEW TINE framework. Working examples of the layered structure of the control software and of the signal transport with the fieldbus-based control electronics using EtherCAT will be presented, as well as the layout of the synchronization implementation for all beamline elements.
Slides WPO012 [0.877 MB]
 
WPO017 IFMIF EVEDA RFQ Local Control System to Power Tests controls, EPICS, rfq, network 69
 
  • M.G. Giacchini, L. Antoniazzi, M. Montis
    INFN/LNL, Legnaro (PD), Italy
 
  In the IFMIF EVEDA project, a normal-conducting Radio Frequency Quadrupole (RFQ) is used to bunch and accelerate a 130 mA steady beam to 5 MeV. The RFQ cavity is divided into three structures, named super-modules. Each super-module is divided into 6 modules, for a total of 18 modules in the overall structure. The final three modules have to be tested at high power to validate the most critical RF components of the RFQ cavity and the control system itself. The last three modules were chosen because they will operate in the most demanding conditions in terms of power density (100 kW/m) and surface electric field (1.8*Ekp). The Experimental Physics and Industrial Control System (EPICS) environment [1] provides the framework to control any equipment connected to it. This paper reports on the use of this framework for the RFQ power tests at the Legnaro National Laboratories [2].
[1] http://www.aps.anl.gov/epics
[2] http://www.lnl.infn.it/~epics
 
 
WPO018 Upgrade of Beam Diagnostics System of ALPI-PIAVE Accelerator's Complex at LNL diagnostics, EPICS, controls, interface 72
 
  • B.J. Liu
    CIAE, Beijing, People's Republic of China
  • G. Bassato, M.G. Giacchini, M. Montis, M. Poggi
    INFN/LNL, Legnaro (PD), Italy
 
  The beam diagnostics system of the ALPI-PIAVE accelerators has been recently upgraded by migrating the control software to EPICS. The system is based on 40 modules, each one including a Faraday cup and a beam profiler made of a pair of wire grids. Device insertion is controlled by stepper motors in ALPI and by pneumatic valves in PIAVE. To reduce the upgrade costs, the existing VME hardware used for data acquisition has been left unchanged, while only the motor controllers have been replaced by new units developed in house. The control software has been rebuilt from scratch using EPICS tools. The operator interface is based on CSS; a Channel Archiver based on .. has been installed to support the analysis of the transport setup during tests of new beams. The ALPI-PIAVE control system is also a test bench for the new beam diagnostics under development for the SPES facility, whose installation is foreseen in mid 2015.
 
WPO020 Development and Application of the STARS-based Beamline Control System and Softwares at the KEK Photon Factory controls, status, undulator, detector 78
 
  • Y. Nagatani, T. Kosuge
    KEK, Ibaraki, Japan
 
  STARS is message-transferring software for small-scale control systems originally developed at the Photon Factory. It has a server-client architecture using TCP/IP sockets and can work on various types of operating systems. Since the Photon Factory adopted STARS as its common beamline control software, we have developed a beamline control system which controls optical devices (mirrors, monochromators, etc.). We have also developed various systems and software for the Photon Factory beamlines, such as an information delivery system for the Photon Factory ring status based on STARS and TINE, and measurement software based on STARS. Many kinds of useful STARS applications (device clients, simple data acquisition, user interfaces, etc.) are now available. We will describe the development and installation status of the STARS-based beamline system and software.
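The server-client message transfer described above can be sketched as a central hub that routes short text messages between named nodes. This is loosely modeled on the STARS idea, but the message syntax and class names here are invented and the real system exchanges messages over TCP/IP sockets, not in-process calls.

```python
class MessageHub:
    """In-process sketch of a STARS-like hub: named clients exchange
    'destination> command' text messages through a central server."""

    def __init__(self):
        self._inbox = {}   # node name -> list of received messages

    def connect(self, name):
        self._inbox[name] = []

    def send(self, sender, message):
        # Split "destination> command" and route to the destination node,
        # tagging the delivered message with its sender.
        dest, _, command = message.partition(">")
        if dest.strip() in self._inbox:
            self._inbox[dest.strip()].append(f"{sender}>{command.strip()}")

    def receive(self, name):
        return self._inbox[name]

hub = MessageHub()
hub.connect("term1")
hub.connect("mono")
hub.send("term1", "mono> SetWavelength 1.54")
print(hub.receive("mono"))   # ['term1>SetWavelength 1.54']
```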
 
WPO029 Implementation of the Distributed Alarm System for the Particle Accelerator FAIR Using an Actor Concurrent Programming Model and the Concept of an Agent distributed, framework, monitoring, simulation 102
 
  • D. Kumar, G.G. Gašperšic, M. Pleško
    Cosylab, Ljubljana, Slovenia
  • R. Huhmann, S. Krepp
    GSI, Darmstadt, Germany
 
  The Alarm System is a software system that enables operators to identify and locate conditions which indicate that hardware and software components are malfunctioning or close to malfunctioning. The FAIR Alarm System is being constructed as a Slovenian in-kind contribution to the FAIR project. The purpose of this paper is to show how to simplify the development of a highly available distributed alarm system for the particle accelerator FAIR using a concurrent programming model based on actors and on the concept of an agent. The agents separate the distribution of the alarm status signals to the clients from the processing of the alarm signals. The logical communication between an alarm client and an agent takes place between an actor in the alarm client and an actor in the agent. These two remote actors exchange messages through Java MOM. The following will be addressed: the tree-like hierarchy of actors that are used for fault-tolerant communication between an agent and an alarm client; a custom message protocol used by the actors; the message system and corresponding technical implications; and details of the software components that were developed using the Akka programming library.
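The actor/agent split described above, with agents distributing already-processed alarm states to clients via message passing, can be illustrated with a toy cooperative actor model. This is only a sketch of the pattern (the real system uses Akka actors over Java MOM); all class names are invented.

```python
from collections import deque

class Actor:
    """Toy actor: a mailbox plus a receive() method, run cooperatively."""
    def __init__(self):
        self.mailbox = deque()
    def tell(self, msg):
        self.mailbox.append(msg)
    def receive(self, msg):
        raise NotImplementedError

class AlarmClient(Actor):
    """Client-side actor: keeps the current alarm state for display."""
    def __init__(self):
        super().__init__()
        self.active = {}
    def receive(self, msg):
        source, raised = msg
        self.active[source] = raised

class Agent(Actor):
    """Distributes processed alarm states to subscribed clients,
    decoupled from the components that evaluate alarm conditions."""
    def __init__(self):
        super().__init__()
        self.clients = []
    def receive(self, msg):
        for client in self.clients:
            client.tell(msg)

def run(actors):
    """Single-threaded scheduler draining every mailbox."""
    while any(a.mailbox for a in actors):
        for a in actors:
            while a.mailbox:
                a.receive(a.mailbox.popleft())

agent, client = Agent(), AlarmClient()
agent.clients.append(client)
agent.tell(("magnet_psu_3", True))    # a processed alarm signal arrives
run([agent, client])
print(client.active)                  # {'magnet_psu_3': True}
```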
 
WPO030 Vacuum Pumping Group Controls Based on PLC vacuum, controls, PLC, status 105
 
  • S. Blanchard, F. Antoniotti, F. Bellorini, J-P. Boivin, J. Gama, P. Gomes, H.F. Pereira, G. Pigny, B. Rio, H. Vestergard
    CERN, Geneva, Switzerland
  • L. Kopylov, S. Merker, M.S. Mikheev
    IHEP, Moscow Region, Russia
 
  In CERN accelerators, high vacuum is needed in the beam pipes and for the thermal isolation of cryogenic equipment. The first element in the chain of vacuum production is the pumping group. It is composed of a primary pump, a turbo-molecular pump and a few isolation and intermediate valves; as optional devices we can also find vacuum gauges, venting valves and leak detection valves. At CERN accelerators, the pumping group controllers may be found in several hardware configurations, depending on the environment and on the vacuum system used; all of them are based on PLCs and communicate over a field bus; they are controlled by the same flexible and portable software. They are remotely accessed through a SCADA application and can be locally controlled by the same mobile touch panel. More than 250 pumping groups are permanently installed in the Large Hadron Collider, the Linacs and the North Area experiments.
Poster WPO030 [1.849 MB]
 
WPO031 Diagnostics Test Stand Setup at PSI and its Controls in Light of the Future SwissFEL controls, hardware, interface, diagnostics 108
 
  • P. Chevtsov, R. Ischebeck
    PSI, Villigen PSI, Switzerland
 
  In order to provide high-quality electron beams, the future SwissFEL machine needs very precise and reliable beam diagnostics tools. At the Paul Scherrer Institute (PSI), the development of such tools is based on the SwissFEL Injector Test Facility and a dedicated automated diagnostics test stand. The test stand is equipped not only with the major SwissFEL beam diagnostics elements (cameras, beam loss monitors, beam current monitors, etc.) but also with their controls and data processing hardware and software. The paper describes the diagnostics test stand control software components, which were designed in view of the future SwissFEL operational requirements.
Poster WPO031 [0.637 MB]
 
WPO032 Magnet Measurement System Upgrade at PSI controls, EPICS, network, operation 111
 
  • P. Chevtsov, V. Vranković
    PSI, Villigen PSI, Switzerland
 
  The magnet measurement system at the Paul Scherrer Institute (PSI) has been significantly upgraded over the last few years. At the moment, it consists of automated Hall probe, rotating-wire, and vibrating-wire setups, which form a very efficient magnet measurement facility. The paper concentrates on the automation hardware and software implementation, which has made it possible not only to significantly increase the performance of the magnet measurement facility at PSI, but also to simplify magnet measurement data handling and processing.
Poster WPO032 [1.313 MB]
 
TCO101 Benefits, Drawbacks and Challenges During a Collaborative Development of a Settings Management System for CERN and GSI controls, operation, framework, feedback 126
 
  • R. Müller, J. Fitzek, H.C. Hüther
    GSI, Darmstadt, Germany
  • G. Kruk
    CERN, Geneva, Switzerland
 
  The settings management system LSA (LHC Software Architecture) was originally developed for the LHC (Large Hadron Collider). For FAIR (Facility for Antiproton and Ion Research), a renovation of the GSI control system was necessary. When it was decided in 2008 to use the LSA system for settings management for FAIR, the middle management of the two institutes agreed on a collaborative development. This paper highlights the insights gained during the collaboration from three different perspectives: organizational aspects of the collaboration, such as the roles that have been established, the planned procedures, the preparation of a formal contract, and the social aspects of keeping people working as a team across institutes. It also shows the technical benefits and drawbacks that arise from the collaboration for both institutes, as well as challenges that were encountered during development. Furthermore, it provides an insight into which aspects of the collaboration were easy to establish and which still take time.
Slides TCO101 [0.728 MB]
 
TCO102 Eplanner Software for Machine Activities Management operation, database, network, synchrotron 129
 
  • B.S.K. Srivastava, R.K. Agrawal
    RRCAT, Indore (M.P.), India
  • P. Fatnani
    Raja Ramanna Centre For Advanced Technology, Indore, India
 
  For Indus-2, a 2.5 GeV synchrotron radiation source operational at Indore, India, the need was felt for software to easily manage various related activities, avoiding communication gaps among the crew members and clearly bringing out the important communications for machine operation. Typical requirements were to enter and display daily, weekly and longer operational calendars, to convey system-specific and machine operation related standing instructions, and to log and track the faults occurring during operations and the follow-up actions on the faults logged. Overall, the need was for a system to easily manage the many jobs related to planning the day-to-day operations of a national facility. The paper describes such a web-based system, which has been developed, is in regular use, and has been found extremely useful.
Slides TCO102 [5.439 MB]
 
TCO103 Recent Highlights from Cosylab controls, TANGO, project-management, EPICS 132
 
  • M. Pleško, F. Amand
    Cosylab, Ljubljana, Slovenia
 
  Cosylab was established 13 years ago by a group of regular visitors to PCaPAC. In the meantime, it has grown into a company of 90 employees that covers the majority of accelerator control projects. In this talk, I will present the most interesting developments that we have carried out in the past two years on a very diverse range of projects, and I will show how we had to get organized in order to be able to manage them all. The developments were made for labs such as KIT, ITER, PSI, EBG-MedAustron, European Spallation Source, Maxlab, SLAC, ORNL and GSI/FAIR, but also generally for community software like EPICS, TANGO, Control System Studio, White Rabbit, etc. They range from electronics development to high-level software: electric signal conditioning and interfacing, timing systems, machine protection systems, fibre-optic communication, Linux driver development, core EPICS development, packaging, high-performance networks, medical device integration, database development, all the way up to turnkey systems. Efficient organisation comprises a matrix structure of teams and groups versus projects and accounts, supported by rigorous reporting, measurements and drill-down analyses.
Slides TCO103 [13.372 MB]
 
TCO304 Launching the FAIR Timing System with CRYRING timing, controls, network, hardware 155
 
  • M. Kreider
    Glyndŵr University, Wrexham, United Kingdom
  • R. Bär, D.H. Beck, A. Hahn, M. Kreider, C. Prados, S. Rauch, W.W. Terpstra, M. Zweig
    GSI, Darmstadt, Germany
  • J.N. Bai
    IAP, Frankfurt am Main, Germany
 
  During the past two years, significant progress has been made on the development of the General Machine Timing system for the upcoming FAIR facility at GSI. The prime features are time synchronization of 2000-3000 nodes using the White Rabbit Precision Time Protocol (WR-PTP), distribution of International Atomic Time (TAI) time stamps, and synchronized command and control of FAIR control system equipment. A White Rabbit network has been set up connecting parts of the existing facility, and a next version of the Timing Master has been developed. Timing receiver nodes have been developed in several form factors: Scalable Control Unit (the standard front-end controller for FAIR), VME, PCIe and standalone. CRYRING is the first machine on the GSI/FAIR campus to be operated with this new timing system and serves as a test ground for the complete control system. Installation of equipment starts in late spring, followed by commissioning of equipment in summer 2014.
Slides TCO304 [7.818 MB]
 
FCO106 The Role of the CEBAF Element Database in Commissioning the 12 GeV Accelerator Upgrade database, hardware, controls, interface 161
 
  • T. L. Larrieu, M.E. Joyce, M. Keesee, C.J. Slominski, D.L. Turner
    JLab, Newport News, Virginia, USA
 
  Funding: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177. The U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to this manuscript.
The CEBAF Element Database (CED) was first developed in 2010 as a resource to support model-driven configuration of the Jefferson Lab Continuous Electron Beam Accelerator Facility (CEBAF). Since that time, its uniquely flexible schema design, robust programming interface, and support for multiple concurrent versions have permitted it to evolve into a more broadly useful operational and control system tool. The CED played a critical role before and during the 2013 startup and commissioning of CEBAF following its 18-month-long shutdown and upgrade. Information in the CED about hardware components and their relations to one another facilitated a thorough Hot Checkout process involving more than 18,000 system checks. New software relies on the CED to generate EDM screens for operators on demand, thereby ensuring that the information on those screens is correct and up to date. The CED also continues to fulfill its original mission of supporting model-driven accelerator setup. Using the new ced2elegant and eDT (elegant Download Tool), accelerator physicists have been able to compute and apply energy-dependent set points with greater efficiency than ever before.
 
Slides FCO106 [2.698 MB]
 
FPO001 InfiniBand interconnects for high-throughput data acquisition in a TANGO environment TANGO, controls, interface, network 164
 
  • T. Dritschler, S.A. Chilingaryan, T. Faragó, A. Kopmann, M. Vogelgesang
    KIT, Eggenstein-Leopoldshafen, Germany
 
  Advances in computational performance allow for fast image-based control. To realize efficient control loops in a distributed experiment setup, large amounts of data need to be transferred, requiring high-throughput networks with low latencies. In the European synchrotron community, TANGO has become one of the prevalent tools to remotely control hardware and processes. In order to improve the data bandwidth and latency in a TANGO network, we realized a secondary data channel based on native InfiniBand communication. This data channel is implemented as part of a TANGO device and is by itself independent of the main TANGO network communication. TANGO mechanisms are used for configuration, so the data channel can be used by any TANGO-based software that implements the corresponding interfaces. First results show that we can achieve a maximum bandwidth of 30 Gb/s, which is close to the theoretical maximum of 32 Gb/s possible with our 4xQDR InfiniBand test network, with average latencies as low as 6 μs. This means that we are able to surpass the limitations of standard TCP/IP networks while retaining the TANGO control schemes, enabling high data throughput in a TANGO environment.
Slides FPO001 [0.511 MB]
Poster FPO001 [3.767 MB]
 
FPO008 LabVIEW PCAS Interface for NI CompactRIO interface, LabView, EPICS, real-time 173
 
  • G. Liu, C. Li, J.G. Wang, K. Xuan
    USTC/NSRL, Hefei, Anhui, People's Republic of China
  • K. Yang, K. Zheng
    National Instruments China, Shanghai, People's Republic of China
 
  When the NI LabVIEW EPICS Server I/O Server is used to integrate NI CompactRIO devices running under VxWorks into EPICS, we noticed that it only supports the "VAL" field; neither alarms nor time stamps are supported. In order to overcome these drawbacks, a new LabVIEW Channel Access Portable Server (PCAS) interface was developed and applied to the Hefei Light Source (HLS) cooling water monitor system. The test results from the HLS cooling water monitor system indicate that this approach can greatly improve the performance of NI CompactRIO devices in an EPICS environment.
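The drawback described above is that only a bare value is served, while an EPICS record also carries alarm status and a time stamp. The following sketch models a served process variable that keeps the three together; the class, field names and alarm limits are invented for illustration (the real interface is LabVIEW code on top of the EPICS Portable Channel Access Server).

```python
import time

class ProcessVariable:
    """Sketch of a served PV that carries value, alarm severity and a
    time stamp together, instead of a bare VAL field (hypothetical API)."""

    def __init__(self, name, low=None, high=None):
        self.name, self.low, self.high = name, low, high
        self.value, self.severity, self.timestamp = None, "NO_ALARM", None

    def post(self, value, timestamp=None):
        """Update the value; derive severity and stamp the update time."""
        self.value = value
        self.timestamp = timestamp if timestamp is not None else time.time()
        if self.high is not None and value > self.high:
            self.severity = "MAJOR"      # above the high limit
        elif self.low is not None and value < self.low:
            self.severity = "MINOR"      # below the low limit (example policy)
        else:
            self.severity = "NO_ALARM"

pv = ProcessVariable("cooling:temp", low=10.0, high=40.0)
pv.post(45.2, timestamp=1700000000.0)
print(pv.value, pv.severity, pv.timestamp)  # 45.2 MAJOR 1700000000.0
```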
 
FPO009 HLS Power Supply Control System Based on Virtual Machine controls, power-supply, feedback, interface 176
 
  • J.G. Wang, C. Li, G. Liu, K. Xuan
    USTC/NSRL, Hefei, Anhui, People's Republic of China
 
  The Hefei Light Source (HLS) is a VUV synchrotron radiation light source. It was recently upgraded to improve its performance. The power supply control system is part of the HLS upgrade project. Five soft IOC applications running on virtual machines are used to control 190 power supplies via MOXA serial-to-Ethernet device servers. The power supply control system has been in operation since November 2013, and the operation results show that it is reliable and can satisfy the demands of slow orbit feedback at a frequency of 1 Hz.
 
FPO017 Managing Multiple Function Generators for FAIR Linux, controls, real-time, FPGA 199
 
  • S. Rauch, R. Bär, M. Thieme
    GSI, Darmstadt, Germany
 
  In the FAIR control system, equipment which needs to be controlled with ramped nominal values (e.g. power converters) is controlled by a standard front-end controller called the scalable control unit (SCU). An SCU combines a COM Express board with an Intel CPU and an FPGA baseboard, and acts as bus master on the SCU host bus. Up to 12 function generators can be implemented in slave-board FPGAs and controlled from one SCU. The real-time data supply for the generators demands a special software/hardware approach. Direct control of the generators with a FESA (front-end software architecture) class, running on an Intel Atom CPU with Linux, does not meet the timing requirements. Therefore an extra layer with an LM32 soft-core CPU is added to the FPGA. Communication between Linux and the LM32 is done via shared memory and a circular buffer data structure. The LM32 supplies the function generators with new parameter sets when it is triggered by interrupts. This two-step approach decouples the Linux CPU from the hard real-time requirements. For a synchronous start and coherent clocking of all function generators, special pins on the SCU backplane are used to avoid bus latencies.
Poster FPO017 [1.098 MB]
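The shared-memory circular buffer between the Linux CPU and the LM32 described above can be sketched as a single-producer/single-consumer ring buffer: each side advances only its own index, so no lock is needed. This Python model is illustrative only (the real structure lives in FPGA shared memory and is drained on interrupts); the names are invented.

```python
class RingBuffer:
    """Fixed-size circular buffer modeling the shared-memory structure
    between the Linux CPU (producer) and the LM32 (consumer).
    Single-producer/single-consumer: head and tail are each advanced
    by one side only."""

    def __init__(self, size):
        self.buf = [None] * size
        self.size = size
        self.head = 0   # write index, advanced by the producer (Linux)
        self.tail = 0   # read index, advanced by the consumer (LM32)

    def put(self, item):
        nxt = (self.head + 1) % self.size
        if nxt == self.tail:
            return False          # full: producer must retry later
        self.buf[self.head] = item
        self.head = nxt
        return True

    def get(self):
        if self.tail == self.head:
            return None           # empty: nothing for the generators
        item = self.buf[self.tail]
        self.tail = (self.tail + 1) % self.size
        return item

rb = RingBuffer(4)                # holds size-1 = 3 parameter sets
for ps in ("set0", "set1", "set2"):
    rb.put(ps)
print(rb.put("set3"))   # False, buffer full
print(rb.get())         # set0, consumed on the next interrupt
```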
 
FPO018 Setup and Diagnostics of Motion Control at ANKA Beamlines controls, TANGO, interface, hardware 201
 
  • K. Cerff, D. Haas, J. Jakel, M. Schmitt
    KIT, Eggenstein-Leopoldshafen, Germany
 
  Precise, high-resolution motion control is one of the necessary conditions for high-quality measurements at beamline experiments. At a typical ANKA beamline, up to one hundred actuator axes work together to align and shape the beam, to select the beam energy and to position probes. Some experiments need additional motion axes supported by transportable controllers plugged temporarily into a local beamline control system. In terms of process control, all the analog and digital signals from different sources have to be verified, leveled and interfaced to the motion controllers. They have to be matched and calibrated in the control system's configuration file to real physical quantities, which provide the input for further data processing. A set of hardware and software tools and methods developed at ANKA over the years is presented in this paper.
Poster FPO018 [1.608 MB]
 
FPO029 Redesign of Alarm Monitoring System Application "BeamlineAlarminfoClient" at DESY device-server, GUI, controls, monitoring 219
 
  • S. Aytac
    DESY, Hamburg, Germany
 
  The alarm monitoring system “BeamlineAlarminfoClient” is a very useful technical-service application at DESY, as it visually renders the locations of important alarms in certain sections (e.g. fire or other emergencies). The aim of redesigning this application is to improve the software architecture and to allow the easy integration of new observable areas, including a new user interface design. This redesign also requires changes on the server side, where alarms are handled and the necessary alarm information is prepared for display. Currently, the client manages alarm data from 17 different servers. This number will increase dramatically in 2014 when new beamlines come into play. Thus, creating templates to simplify the addition of new sections makes sense both for the server and the client. The client and server are based on the TINE control system and make use of the TINE Studio utilities, the Alarm Viewer and the Archive Viewer. This paper presents how the redesign was arranged in close collaboration with the customers.
Poster FPO029 [0.164 MB]
 
FCO202 OpenGL-Based Data Analysis in Virtualized Self-Service Environments network, GPU, hardware, synchrotron 237
 
  • V. Mauch, M. Bonn, S.A. Chilingaryan, A. Kopmann, W. Mexner, D. Ressmann
    KIT, Karlsruhe, Germany
 
  Funding: Federal Ministry of Education and Research, Germany
Modern data analysis applications for 2D/3D data samples provide complex visual output features which are often based on OpenGL, a multi-platform API for rendering vector graphics. They demand special computing workstations with corresponding CPU/GPU power, enough main memory and fast network interconnects for performant remote data access. For this reason, users depend heavily on the availability of free workstations, both in time and in location. The provision of virtual machines (VMs) accessible via a remote connection could avoid this inflexibility. However, the automatic deployment, operation and remote access of OpenGL-capable VMs with professional visualization applications is a non-trivial task. In this paper, we discuss a concept for a flexible analysis infrastructure that will be part of the project ASTOR, an abbreviation for “Arthropod Structure revealed by ultra-fast Tomography and Online Reconstruction”. We present an Analysis-as-a-Service (AaaS) approach based on the on-demand allocation of VMs with dedicated GPU cores and a corresponding analysis environment to provide a cloud-like analysis service for scientific users.
 
Slides FCO202 [1.126 MB]