Author: Boukhelef, D.
TUPPC045 Software Development for High Speed Data Recording and Processing (p. 665)
 
  • D. Boukhelef, J. Szuba, K. Wrona, C. Youngman
    XFEL.EU, Hamburg, Germany
 
  Funding: The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 283745.
The European XFEL beam delivery defines a unique time structure that requires acquiring and processing data in short bursts of up to 2700 images every 100 ms. The 2D pixel detectors under development produce up to 10 GB/s of 1-Mpixel image data. Efficient handling of this huge data volume requires large network bandwidth and computing capacity. The architecture of the DAQ system is hierarchical and modular. The DAQ network uses switched 10 GbE links to provide high-bandwidth data transport between the front-end interfaces (FEIs), the data-handling PC-layer servers, and the storage and analysis clusters. The front-end interfaces build the images acquired during a burst into pulse-ordered image trains and forward them to the PC-layer farm. The PC layer consists of dedicated high-performance computers for raw-data monitoring, processing and filtering, and for aggregating data files that are then distributed to the on-line storage and data analysis clusters. In this contribution we give an overview of the DAQ system architecture and communication protocols, as well as the software stack for data acquisition, pre-processing, monitoring, storage and analysis.
 
Poster TUPPC045 [1.323 MB]
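
As a rough illustration of the train-building step described in the abstract above, the following sketch shows how per-pulse image fragments arriving out of order from a front-end interface could be collected into a pulse-ordered image train before being forwarded to a PC-layer node. It is a minimal sketch only; all names (TrainBuilder, train_id, pulse_id) are hypothetical and are not taken from the XFEL.EU software.

# Illustrative sketch (not XFEL.EU code): assemble per-pulse images into a
# pulse-ordered "train" before forwarding it to a PC-layer node.
from dataclasses import dataclass, field

import numpy as np


@dataclass
class TrainBuilder:
    """Collects the images of one burst and emits them pulse-ordered."""
    train_id: int
    images: dict = field(default_factory=dict)   # pulse_id -> image array

    def add(self, pulse_id: int, image: np.ndarray) -> None:
        # Fragments may arrive out of order; keep one image per pulse.
        self.images[pulse_id] = image

    def build(self) -> list:
        # Return (pulse_id, image) pairs sorted by pulse number, i.e. the
        # pulse-ordered image train described in the abstract.
        return sorted(self.images.items())


if __name__ == "__main__":
    builder = TrainBuilder(train_id=42)
    for pulse in (3, 1, 2, 0):                    # out-of-order arrival
        builder.add(pulse, np.zeros((1024, 1024), dtype=np.uint16))
    train = builder.build()
    print([pid for pid, _ in train])              # -> [0, 1, 2, 3]
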
 
FRCOAAB02 Karabo: An Integrated Software Framework Combining Control, Data Management, and Scientific Computing Tasks (p. 1465)
 
  • B.C. Heisen, D. Boukhelef, S.G. Esenov, S. Hauf, I. Kozlova, L.G. Maia, A. Parenti, J. Szuba, K. Weger, K. Wrona, C. Youngman
    XFEL.EU, Hamburg, Germany
 
  The expected very high data rates and volumes at the European XFEL demand an efficient, concurrent approach to performing experiments. Data analysis must start while data are still being acquired, and initial analysis results must immediately be usable to re-adjust the current experiment setup. We have developed a software framework, called Karabo, which allows such tight integration of these tasks. Karabo is in essence a pluggable, distributed application management system. All Karabo applications (called “Devices”) have a standardized API for self-description/configuration, program-flow organization (state machine), logging and communication. Central services exist for user management, access control, data logging, configuration management, etc. The design provides a very scalable yet maintainable system that can act both as a fully-fledged control system and as a highly parallel, distributed scientific workflow system. It allows simple integration of, and adaptation to, changing control requirements, as well as the addition of new scientific analysis algorithms, making them automatically and immediately available to experimentalists.
Slides FRCOAAB02 [2.523 MB]
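
The sketch below illustrates the device pattern described in the Karabo abstract: a self-description of expected parameters, a simple state machine, and standardized logging shared by every plugged-in application. It is a conceptual sketch only and does not reproduce the actual Karabo API; all class and method names (Device, expected_parameters, transition, CameraDevice) are invented for illustration.

# Conceptual sketch only (not the Karabo API): a pluggable, self-describing
# device with a standardized state machine and logging hook.
import enum
import logging


class State(enum.Enum):
    INIT = "INIT"
    ACQUIRING = "ACQUIRING"
    STOPPED = "STOPPED"


class Device:
    """Hypothetical base class for a self-describing device."""

    @classmethod
    def expected_parameters(cls) -> dict:
        # Self-description: schema of configurable properties with defaults.
        return {}

    def __init__(self, config: dict):
        self.config = {**self.expected_parameters(), **config}
        self.state = State.INIT
        self.log = logging.getLogger(type(self).__name__)

    def transition(self, new_state: State) -> None:
        # Standardized program-flow organization and logging.
        self.log.info("state %s -> %s", self.state.name, new_state.name)
        self.state = new_state


class CameraDevice(Device):
    """Example device plugged into the (hypothetical) framework."""

    @classmethod
    def expected_parameters(cls) -> dict:
        return {"exposureTime": 0.01, "triggerMode": "external"}

    def acquire(self) -> None:
        self.transition(State.ACQUIRING)
        # ... read out hardware, publish data to the distributed broker ...
        self.transition(State.STOPPED)


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    cam = CameraDevice({"exposureTime": 0.005})
    cam.acquire()
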