Paper | Title | Page |
---|---|---|
TUPPC045 | Software Development for High Speed Data Recording and Processing | 665 |
Funding: The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement No. 283745. The European XFEL beam delivery defines a unique time structure that requires acquiring and processing data in short bursts of up to 2700 images every 100 ms. The 2D pixel detectors being developed produce up to 10 GB/s of 1-Mpixel image data. Efficient handling of this huge data volume requires large network bandwidth and computing capabilities. The architecture of the DAQ system is hierarchical and modular. The DAQ network uses 10 GbE switched links to provide large-bandwidth data transport between the front-end interfaces (FEI), data-handling PC layer servers, and storage and analysis clusters. Front-end interfaces are required to build images acquired during a burst into pulse-ordered image trains and forward them to the PC layer farm. The PC layer consists of dedicated high-performance computers for raw data monitoring, processing and filtering, and for aggregating data files that are then distributed to on-line storage and data analysis clusters. In this contribution we give an overview of the DAQ system architecture and communication protocols, as well as the software stack for data acquisition, pre-processing, monitoring, storage and analysis. |
Poster TUPPC045 [1.323 MB] | |
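The front-end train-building step described above can be sketched in a few lines: frames for one burst arrive in arbitrary order from the detector modules and must be regrouped into a pulse-ordered train before being forwarded to the PC layer. The sketch below is illustrative only; the `Image` record, field names, and the completeness policy are assumptions, not the actual XFEL FEI implementation.

```python
from dataclasses import dataclass

@dataclass
class Image:
    pulse_id: int   # position of the x-ray pulse within the burst
    module: int     # detector front-end module that produced the frame
    data: bytes     # raw 1-Mpixel payload (placeholder)

def build_train(train_id, frames, n_modules):
    """Assemble frames arriving in arbitrary order into a pulse-ordered train.

    Returns a list of (pulse_id, [frames sorted by module]) suitable for
    forwarding to the PC layer; pulses missing any module frame are dropped.
    """
    by_pulse = {}
    for f in frames:
        by_pulse.setdefault(f.pulse_id, []).append(f)
    train = []
    for pulse_id in sorted(by_pulse):                 # pulse-ordered output
        group = sorted(by_pulse[pulse_id], key=lambda f: f.module)
        if len(group) == n_modules:                   # keep only complete images
            train.append((pulse_id, group))
    return train
```

A real train builder would do this reassembly in FPGA hardware at burst rate (see TUPPC086 below); the point here is only the pulse-ordering and completeness logic.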
TUPPC046 | Control Using Beckhoff Distributed Rail Systems at the European XFEL | 669 |
The European XFEL project is a 4th generation light source producing spatially coherent 80 fs x-ray photon pulses with a peak brilliance of 10³²-10³⁴ photons/s/mm²/mrad²/0.1% BW in the energy range from 0.26 to 24 keV at an electron beam energy of 14 GeV. Six experiment stations will start data taking in fall 2015. In order to provide a simple, homogeneous solution, the DAQ and control systems group at the European XFEL is standardizing on COTS control hardware for use in experiment and photon beam line tunnels. A common factor within this standardization requirement is the integration of Beckhoff TwinCAT 2.11 or TwinCAT 3 PLCs and EtherCAT with the Karabo software framework. The latter provides the high degree of reliability required and the desirable characteristics of real-time capability, fast I/O channels, distributed flexible terminal topologies, and low cost per channel. In this contribution we describe how Beckhoff PLC and EtherCAT terminals will be used to control experiment and beam line systems. This allows a high degree of standardization for control and monitoring of systems.
Hardware Technology - POSTER |
Poster TUPPC046 [1.658 MB] | |
TUPPC086 | Electronics Developments for High Speed Data Throughput and Processing | 778 |
Funding: The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement No. 283745. The European XFEL DAQ system has to acquire and process data in short bursts every 100 ms. Bursts last for 600 µs and contain a maximum of 2700 x-ray pulses at a repetition rate of 4.5 MHz, which have to be captured and processed before the next burst starts. This time structure defines the boundary conditions for almost all diagnostic and detector-related DAQ electronics required and currently being developed for the start of operation in fall 2015. Standards used in the electronics developments are: MicroTCA.4 and AdvancedTCA crates, use of FPGAs for data processing, transfer to backend systems via 10 Gbps (SFP+) links, and feedback information transfer using 3.125 Gbps (SFP) links. Electronics being developed in-house or in collaboration with external institutes and companies include: a Train Builder ATCA blade for assembling and processing data of large-area image detectors, a VETO MTCA.4 development for evaluating pulse information and distributing a trigger decision to detector front-end ASICs and FPGAs with low latency, a MTCA.4 digitizer module, interface boards for timing and similar synchronization information, etc. |
Poster TUPPC086 [0.983 MB] | |
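The VETO development mentioned above evaluates per-pulse diagnostic information and distributes a keep/reject decision per pulse. The decision logic can be illustrated with a small sketch; the scoring model, threshold, and per-burst budget below are hypothetical placeholders for whatever criteria the real low-latency FPGA logic applies.

```python
def veto_decisions(pulse_scores, threshold, max_keep):
    """Per-pulse keep/reject decisions for one burst.

    pulse_scores: diagnostic quality score per pulse (higher = more interesting)
    threshold:    minimum score for a pulse to be considered at all
    max_keep:     storage budget: at most this many pulses kept per burst
    Returns one boolean per pulse, in pulse order.
    """
    candidates = [i for i, s in enumerate(pulse_scores) if s >= threshold]
    # If over budget, keep only the highest-scoring pulses.
    keep = set(sorted(candidates, key=lambda i: pulse_scores[i],
                      reverse=True)[:max_keep])
    return [i in keep for i in range(len(pulse_scores))]
```

For example, `veto_decisions([0.1, 0.9, 0.5, 0.8], threshold=0.4, max_keep=2)` keeps only the two best pulses (indices 1 and 3). In hardware the same decision must be made within the burst gap and fanned out to the detector front-end ASICs and FPGAs.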
TUPPC087 | High Level FPGA Programming Framework Based on Simulink | 782 |
Funding: The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement No. 283745. Modern diagnostic and detector-related data acquisition and processing hardware is increasingly being implemented with Field Programmable Gate Array (FPGA) technology. The level of flexibility allows for simpler hardware solutions together with the ability to implement functions during the firmware programming phase. The technology is also becoming more relevant in data processing, allowing reduction and filtering to be done at the hardware level together with implementation of low-latency feedback systems. However, this flexibility and these possibilities require a significant amount of design, programming, simulation and testing work usually done by FPGA experts. A high-level FPGA programming framework is currently under development at the European XFEL in collaboration with Oxford University within the EU CRISP project. This framework allows people unfamiliar with FPGA programming to develop and simulate complete algorithms and programs within the MathWorks Simulink graphical tool with real FPGA precision. Modules within the framework allow for simple code reuse by compiling them into libraries, which can be deployed to other boards or FPGAs. |
Poster TUPPC087 [0.813 MB] | |
FRCOAAB02 | Karabo: An Integrated Software Framework Combining Control, Data Management, and Scientific Computing Tasks | 1465 |
The expected very high data rates and volumes at the European XFEL demand an efficient, concurrent approach to performing experiments. Data analysis must already start while data is still being acquired, and initial analysis results must immediately be usable to re-adjust the current experiment setup. We have developed a software framework, called Karabo, which allows such a tight integration of these tasks. Karabo is in essence a pluggable, distributed application management system. All Karabo applications (called "Devices") have a standardized API for self-description/configuration, program-flow organization (state machine), logging and communication. Central services exist for user management, access control, data logging, configuration management, etc. The design provides a very scalable yet maintainable system that can at the same time act as a fully-fledged control system or as a highly parallel distributed scientific workflow system. It allows simple integration and adaptation to changing control requirements and the addition of new scientific analysis algorithms, making them automatically and immediately available to experimentalists. |
Slides FRCOAAB02 [2.523 MB] | |
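The "Device" concept described in the Karabo abstract, a pluggable application with a standardized API for self-description, configuration and a state machine, can be sketched as follows. This is a minimal illustration under assumed names; the class names, state set, and method signatures below are not the actual Karabo API.

```python
class Device:
    """Sketch of a pluggable 'Device' base class: every device exposes
    self-description, configuration, and a simple state machine."""

    # Allowed state transitions (illustrative state set, not Karabo's).
    TRANSITIONS = {
        "INIT": {"ON"},
        "ON": {"BUSY", "OFF"},
        "BUSY": {"ON", "ERROR"},
        "ERROR": {"ON"},
        "OFF": set(),
    }

    def __init__(self, device_id, **config):
        self.device_id = device_id
        self.config = config
        self.state = "INIT"

    @classmethod
    def describe(cls):
        # Self-description: a central service could use this to list
        # available device classes and their documentation.
        return {"class": cls.__name__, "doc": cls.__doc__}

    def transition(self, new_state):
        if new_state not in self.TRANSITIONS.get(self.state, set()):
            raise RuntimeError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state


class Motor(Device):
    """Example plugin controlling a (hypothetical) motor axis."""

    def move(self, position):
        self.transition("BUSY")      # state machine guards the operation
        self.position = position     # pretend the hardware moved
        self.transition("ON")
```

A control GUI or workflow engine only ever talks to the uniform `Device` API (`describe`, `transition`, configuration), which is what lets the same framework act both as a control system and as a distributed analysis pipeline.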