Keyword: FPGA
Paper Title Other Keywords Page
TCO301 Inexpensive Scheduling in FPGAs hardware, controls, distributed, interface 150
 
  • W.W. Terpstra, D.H. Beck, M. Kreider
    GSI, Darmstadt, Germany
 
In the new scheme for machine control used within the FAIR project, actions are distributed to front-end controllers (FECs) with absolute execution timestamps. The execution time must be precise to the nanosecond, and scheduling must take less than a microsecond, requiring a hardware solution. Although the actions are scheduled at the FEC out of order, they must be executed in sorted order. The typical hardware approaches to implementing a priority queue (CAMs, shift registers, etc.) work well in ASIC designs but consume expensive FPGA core logic. Conversely, the typical software approaches (heaps, calendar queues, etc.) are either too slow or too memory-intensive. We present an approach which exploits the time-ordered nature of our problem to sort in constant time using only a few memory blocks.
Slides TCO301 [1.370 MB]
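The constant-time sort the abstract hints at belongs to the family of calendar queues: when every timestamp falls within a bounded look-ahead window, an action's bucket can be addressed directly by its timestamp, giving O(1) insertion and O(1) drain per time slot. A minimal C sketch of that generic idea (the window size, slot capacity, and all names are illustrative assumptions, not taken from the paper):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define SLOTS 1024            /* look-ahead window, in time slots (assumed) */
#define SLOT_CAP 4            /* max actions per slot (assumed)             */

typedef struct {
    uint32_t action[SLOTS][SLOT_CAP];
    uint8_t  count[SLOTS];
    uint64_t now;             /* current time slot */
} calq_t;

static void calq_init(calq_t *q) { memset(q, 0, sizeof *q); }

/* O(1) insert: the bucket is indexed directly by the timestamp. */
static int calq_insert(calq_t *q, uint64_t ts, uint32_t act)
{
    if (ts < q->now || ts >= q->now + SLOTS) return -1; /* outside window */
    uint32_t slot = ts % SLOTS;
    if (q->count[slot] == SLOT_CAP) return -1;          /* slot full */
    q->action[slot][q->count[slot]++] = act;
    return 0;
}

/* Advance one time slot and emit everything due, in timestamp order. */
static int calq_pop_slot(calq_t *q, uint32_t *out)
{
    uint32_t slot = q->now % SLOTS;
    int n = q->count[slot];
    memcpy(out, q->action[slot], n * sizeof *out);
    q->count[slot] = 0;
    q->now++;
    return n;
}
```

Because each bucket maps to a few block-RAM locations, this shape fits FPGA memory blocks far better than a CAM or shift-register priority queue would.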
 
FPO012 A Real-Time Data Logger for the MICE Superconducting Magnets LabView, real-time, EPICS, controls 185
 
  • J.T.G. Wilson
    STFC/DL, Daresbury, Warrington, Cheshire, United Kingdom
 
The Muon Ionisation Cooling Experiment (MICE) being constructed at STFC’s Rutherford Appleton Laboratory will allow scientists to gain working experience of the design, construction and operation of a muon cooling channel. Among the key components are a number of superconducting solenoid and focus coil magnets specially designed for the MICE project and built by industrial partners. During testing it became apparent that fast, real-time logging of magnet performance before, during and after a quench was required to diagnose unexpected magnet behaviour. To this end a National Instruments CompactRIO (cRIO) data logger system was created, making it possible to see how a quench propagates through the magnet. The software was written in Real-Time LabVIEW and makes full use of the cRIO’s built-in FPGA to obtain synchronised, multi-channel data logging at rates of up to 10 kHz. This paper explains the design and capabilities of the system, how it has helped to better understand the internal behaviour of the magnets during a quench, and further development to allow simultaneous logging of multiple magnets and integration into the existing EPICS control system.
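Logging magnet behaviour "before, during and after" a quench is typically done with a pre-trigger circular buffer: samples stream continuously into a ring, so the history leading up to the trigger is already in memory when the quench detector fires. A minimal sketch of that generic technique (this is not MICE's actual LabVIEW/FPGA code; the depth and names are illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define DEPTH 8               /* pre-trigger history, in samples (assumed) */

typedef struct {
    double   buf[DEPTH];
    uint32_t head;            /* next write position     */
    uint32_t filled;          /* number of valid samples */
} pretrig_t;

static void pt_init(pretrig_t *p) { memset(p, 0, sizeof *p); }

/* Called at the sample rate (e.g. 10 kHz per channel): overwrite oldest. */
static void pt_push(pretrig_t *p, double sample)
{
    p->buf[p->head] = sample;
    p->head = (p->head + 1) % DEPTH;
    if (p->filled < DEPTH) p->filled++;
}

/* On the quench trigger: copy out the history, oldest sample first. */
static uint32_t pt_snapshot(const pretrig_t *p, double *out)
{
    uint32_t start = (p->head + DEPTH - p->filled) % DEPTH;
    for (uint32_t i = 0; i < p->filled; i++)
        out[i] = p->buf[(start + i) % DEPTH];
    return p->filled;
}
```

After the trigger, logging simply continues, so the snapshot plus the post-trigger stream covers the whole quench event.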
 
FPO017 Managing Multiple Function Generators for FAIR Linux, controls, software, real-time 199
 
  • S. Rauch, R. Bär, M. Thieme
    GSI, Darmstadt, Germany
 
In the FAIR control system, equipment which needs to be controlled with ramped nominal values (e.g. power converters) is controlled by a standard front-end controller called the scalable control unit (SCU). An SCU combines a COM Express board with an Intel CPU and an FPGA baseboard, and acts as bus master on the SCU host bus. Up to 12 function generators can be implemented in slave-board FPGAs and controlled from one SCU. The real-time data supply for the generators demands a special software/hardware approach. Direct control of the generators by a FESA (front-end software architecture) class, running on an Intel Atom CPU with Linux, does not meet the timing requirements, so an extra layer with an LM32 soft-core CPU is added to the FPGA. Communication between Linux and the LM32 is done via shared memory and a circular buffer data structure. The LM32 supplies the function generators with new parameter sets when it is triggered by interrupts. This two-step approach decouples the Linux CPU from the hard real-time requirements. For synchronous start and coherent clocking of all function generators, dedicated pins on the SCU backplane are used to avoid bus latencies.
Poster FPO017 [1.098 MB]
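The Linux/LM32 handoff described above — shared memory plus a circular buffer — is a classic single-producer, single-consumer ring. A hedged C sketch of one plausible layout (the field names, sizes, and parameter-set payload are assumptions, not GSI's actual structures; production cross-CPU code would also need memory barriers around the index updates):

```c
#include <assert.h>
#include <stdint.h>

#define RING 16                       /* entries, power of two (assumed) */

typedef struct { uint32_t start_value, step, duration; } paramset_t;

typedef struct {
    volatile uint32_t head;           /* written only by the Linux producer */
    volatile uint32_t tail;           /* written only by the LM32 consumer  */
    paramset_t slot[RING];
} ring_t;

/* Linux side: enqueue a parameter set; fails when the ring is full. */
static int ring_put(ring_t *r, paramset_t p)
{
    uint32_t h = r->head;
    if (((h + 1) & (RING - 1)) == r->tail) return -1;   /* full */
    r->slot[h] = p;
    r->head = (h + 1) & (RING - 1);                     /* publish entry */
    return 0;
}

/* LM32 side: on interrupt, dequeue the next parameter set. */
static int ring_get(ring_t *r, paramset_t *out)
{
    uint32_t t = r->tail;
    if (t == r->head) return -1;                        /* empty */
    *out = r->slot[t];
    r->tail = (t + 1) & (RING - 1);                     /* release slot */
    return 0;
}
```

Because each index has exactly one writer, the ring needs no locks — which is what lets the LM32 side stay deterministic while Linux fills the buffer at its leisure.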
 
FPO019 FPGA Utilization in the Accelerator Interlock System (About the MPS Development in the LIPAc) interface, controls, status, neutron 204
 
  • K. Nishiyama
    Japan Atomic Energy Agency (JAEA), International Fusion Energy Research Center (IFERC), Rokkasho, Kamikita, Aomori, Japan
  • R. Gobin
    CEA/IRFU, Gif-sur-Yvette, France
  • J. Knaster, A. Marqueta Barbero, Y. Okumura
    IFMIF/EVEDA, Rokkasho, Japan
  • T. Kojima, T. Narita, H. Sakaki, H. Takahashi
    JAEA, Aomori, Japan
 
The development of IFMIF (International Fusion Material Irradiation Facility), which will generate a 14 MeV source of neutrons with the spectrum of DT fusion reactions, is indispensable for qualifying suitable materials for the first wall of the nuclear vessel in fusion power plants. As part of the IFMIF validation activities, the LIPAc (Linear IFMIF Prototype Accelerator) facility, currently under installation at Rokkasho (Japan), will accelerate a 125 mA CW, 9 MeV deuteron beam with a total beam power of 1.125 MW. The Machine Protection System (MPS) of LIPAc provides an essential interlock function: stopping the beam in case of anomalous beam loss or other hazardous situations. High-speed processing is necessary to properly achieve the MPS main goal. This high-speed processing of the signals, distributed along the accelerator facility, is based on FPGA technology. This paper describes the basis of FPGA use in the accelerator interlock system through the development of LIPAc’s MPS, with a comparison to FPGA use in other accelerator control systems.
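The core behaviour such an interlock implements in FPGA logic can be illustrated with a latched beam permit: any asserted fault input trips the permit immediately, and it can only be re-armed by an explicit reset once all faults are clear. This is a generic, hypothetical sketch, not LIPAc's actual MPS design:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    bool beam_permit;         /* true = beam allowed */
} mps_t;

static void mps_init(mps_t *m) { m->beam_permit = true; }

/* Evaluated every FPGA clock cycle on the vector of fault inputs:
 * trip on any fault, re-arm only on an operator reset with no faults. */
static void mps_eval(mps_t *m, uint32_t faults, bool reset)
{
    if (faults != 0)
        m->beam_permit = false;       /* trip and latch */
    else if (reset)
        m->beam_permit = true;        /* re-arm only when fault-free */
}
```

Evaluating this in parallel hardware every clock cycle is what gives the FPGA approach its deterministic, sub-microsecond trip latency.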
 
FPO022 New Developments on the FAIR Data Master controls, operation, network, timing 207
 
  • M. Kreider, J. Davies, V. Grout
    Glyndŵr University, Wrexham, United Kingdom
  • R. Bär, D.H. Beck, M. Kreider, W.W. Terpstra
    GSI, Darmstadt, Germany
 
During the last year, a small-scale timing system was built with a first version of the Data Master. In this paper, we describe field test progress as well as new design concepts and implementation details of the new prototype to be tested with the CRYRING accelerator timing system. The message management layer has been introduced as a hardware acceleration module for the timely dispatch of control messages. It consists of a priority queue for outgoing messages, combined with a scheduler and network load balancing. This noticeably relaxes the real-time constraints on the CPUs composing the control messages, making the control firmware deterministic and much easier to construct. It also opens a path away from the current virtual-machine-like implementation toward a specialized programming language for accelerator control. In addition, a streamlined and better-fitting model for beam production chains and cycles has been devised for use in the Data Master firmware. The worst-case execution time of the processing becomes fully calculable, enabling fixed time slices for safe multiplexing of cycles in all of the CPUs.
Slides FPO022 [0.890 MB]
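The "priority queue for outgoing messages" feeding the scheduler can be illustrated with a binary min-heap keyed by dispatch deadline, so the most urgent message is always sent first. This is a generic software sketch, not the Data Master's actual hardware module (capacity and field names are assumptions):

```c
#include <assert.h>
#include <stdint.h>

#define QCAP 64                       /* queue capacity (assumed) */

typedef struct { uint64_t deadline; uint32_t msg; } entry_t;
typedef struct { entry_t e[QCAP]; int n; } pq_t;

/* Insert a message keyed by its dispatch deadline: O(log n) sift-up. */
static int pq_push(pq_t *q, uint64_t deadline, uint32_t msg)
{
    if (q->n == QCAP) return -1;
    int i = q->n++;
    q->e[i] = (entry_t){ deadline, msg };
    while (i > 0 && q->e[(i - 1) / 2].deadline > q->e[i].deadline) {
        entry_t t = q->e[i];
        q->e[i] = q->e[(i - 1) / 2];
        q->e[(i - 1) / 2] = t;
        i = (i - 1) / 2;
    }
    return 0;
}

/* Pop the message with the earliest deadline: O(log n) sift-down. */
static int pq_pop(pq_t *q, entry_t *out)
{
    if (q->n == 0) return -1;
    *out = q->e[0];
    q->e[0] = q->e[--q->n];
    int i = 0;
    for (;;) {
        int l = 2 * i + 1, r = l + 1, m = i;
        if (l < q->n && q->e[l].deadline < q->e[m].deadline) m = l;
        if (r < q->n && q->e[r].deadline < q->e[m].deadline) m = r;
        if (m == i) break;
        entry_t t = q->e[i]; q->e[i] = q->e[m]; q->e[m] = t;
        i = m;
    }
    return 0;
}
```

Moving this queue into hardware is what lets the CPUs merely enqueue messages and leave the hard real-time dispatch decision to the acceleration module.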