Paper | Title | Page |
---|---|---|
TUPPC026 | Concept and Prototype for a Distributed Analysis Framework for the LHC Machine Data | 604 |
The Large Hadron Collider (LHC) at CERN produces more than 50 TB of diagnostic data every year, accumulated during normal running as well as commissioning periods. The data is collected in different systems, such as the LHC Post Mortem System (PM), the LHC Logging Database and different file catalogues. To analyse and correlate data from these systems, it is necessary to extract the data to a local workspace and to use scripts to obtain and correlate the required information. Since the amount of data can be huge (depending on the task to be achieved), this approach can be very inefficient. To cope with this problem, a new project was launched to bring the analysis closer to the data itself. This paper describes the concepts and the implementation of the first prototype of an extensible framework, which will allow integrating all the existing data sources as well as future extensions, like Hadoop* clusters or other parallelization frameworks.
*http://hadoop.apache.org/
Poster TUPPC026 [1.378 MB]
TUPPC030 | System Relation Management and Status Tracking for CERN Accelerator Systems | 619 |
The Large Hadron Collider (LHC) at CERN requires many systems to work in close interplay to allow reliable operation and at the same time ensure the correct functioning of the protection systems required when operating with large energies stored in the magnet system and particle beams. Examples of such systems are magnets, power converters, quench protection systems, as well as higher-level systems like Java applications or server processes. All these systems have numerous links (dependencies) of different kinds between each other. The knowledge about these dependencies is available from different sources, like Layout databases, Java imports, proprietary files, etc. Retrieving consistent information is difficult due to the lack of a unified way of retrieving the relevant data. This paper describes a new approach: establishing a central server instance which collects this information and provides it to the different clients used during commissioning and operation of the accelerator. Furthermore, it explains future visions for such a system, which include additional layers for distributing system information like operational status, issues or faults.
Poster TUPPC030 [4.175 MB]
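The central server described in this abstract essentially maintains a directed graph of system dependencies collected from heterogeneous sources. A minimal sketch of such a registry might look like the following; all class and method names here are invented for illustration and do not come from the paper.

```java
import java.util.*;

// Hypothetical sketch of a central dependency registry: systems are nodes,
// "depends on" links are directed edges collected from various sources.
public class SystemRelations {
    // directed edges: system -> systems it directly depends on
    private final Map<String, Set<String>> dependsOn = new HashMap<>();

    public void addDependency(String system, String dependency) {
        dependsOn.computeIfAbsent(system, k -> new LinkedHashSet<>()).add(dependency);
    }

    // Direct dependencies of a system, e.g. as reported by a Layout database.
    public Set<String> directDependencies(String system) {
        return dependsOn.getOrDefault(system, Collections.emptySet());
    }

    // Transitive closure: everything a system depends on, directly or indirectly.
    public Set<String> allDependencies(String system) {
        Set<String> visited = new LinkedHashSet<>();
        Deque<String> toVisit = new ArrayDeque<>(directDependencies(system));
        while (!toVisit.isEmpty()) {
            String next = toVisit.pop();
            if (visited.add(next)) {
                toVisit.addAll(directDependencies(next));
            }
        }
        return visited;
    }
}
```

A client could then, for instance, query the transitive closure of a power converter's dependencies to see every system whose status matters before declaring it operational.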
THPPC078 | The AccTesting Framework: An Extensible Framework for Accelerator Commissioning and Systematic Testing | 1250 |
The Large Hadron Collider (LHC) at CERN requires many systems to work in close interplay to allow reliable operation and at the same time ensure the correct functioning of the protection systems required when operating with large energies stored in the magnet system and particle beams. The systems for magnet powering and beam operation are qualified during dedicated commissioning periods and retested after corrective or regular maintenance. Based on the experience acquired with the initial commissioning campaigns of the LHC magnet powering system, a framework was developed to orchestrate the thousands of tests for electrical circuits and other systems of the LHC. The framework was carefully designed to be extensible. Currently, work is on-going to prepare and extend the framework for the re-commissioning of the machine protection systems at the end of 2014, after the LHC Long Shutdown. This paper describes the concept, current functionality and vision of this framework to cope with the required dependability of test execution and analysis.
Poster THPPC078 [5.908 MB]
THPPC079 | Using a Java Embedded DSL for LHC Test Analysis | 1254 |
The Large Hadron Collider (LHC) at CERN requires many systems to work in close cooperation. All systems for magnet powering and beam operation are qualified during dedicated commissioning periods and retested after corrective or regular maintenance. Already for the first commissioning of the magnet powering system in 2006, the execution of such tests was automated to a high degree to facilitate the execution and tracking of the more than 10,000 required test steps. Most of the time during today's commissioning campaigns is spent analysing test results, which to a large extent is still done manually. A project was launched to automate the analysis of such tests as much as possible. A dedicated Java embedded Domain Specific Language (eDSL) was created, which allows system experts to describe the desired analysis steps in a simple way. The execution of these checks results in simple decisions on the success of the tests and provides plots for experts to quickly identify the source of problems exposed by the tests. This paper explains the concepts and vision of the first version of the eDSL.
Poster THPPC079 [1.480 MB]
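The abstract does not show the eDSL itself; a minimal fluent-interface sketch in the same spirit could look like the following. All names here (`Check`, `signal`, `mustBeAbove`, `mustBeBelow`) are invented for illustration and are not taken from the paper.

```java
import java.util.function.DoublePredicate;

// Hypothetical sketch of a fluent analysis check, inspired by the idea of a
// Java eDSL in which experts describe analysis steps; names are invented here.
public class Check {
    private final String signalName;
    private DoublePredicate condition = v -> true;

    private Check(String signalName) { this.signalName = signalName; }

    public static Check signal(String name) { return new Check(name); }

    // Each must* call narrows the accepted range of the signal.
    public Check mustBeAbove(double limit) {
        condition = condition.and(v -> v > limit);
        return this;
    }

    public Check mustBeBelow(double limit) {
        condition = condition.and(v -> v < limit);
        return this;
    }

    // Evaluate the check against a measured value; true means the step passed.
    public boolean evaluate(double measuredValue) {
        return condition.test(measuredValue);
    }
}
```

An expert would then express a test step as `Check.signal("I_MEAS").mustBeAbove(0.0).mustBeBelow(100.0)` and evaluate it against measured data, yielding a simple pass/fail decision of the kind the abstract describes.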
THPPC119 | Software Architecture for the LHC Beam-based Feedback System at CERN | 1337 |
This paper presents an overview of the beam-based feedback systems at the LHC at CERN. It covers the system architecture, which is split into two main parts: a controller (OFC) and a service unit (OFSU). The paper presents issues encountered during beam commissioning and lessons learned, including follow-up from a recent review which took place at CERN.
Poster THPPC119 [1.474 MB]