Author: Krol, K.H.
WEPGF046 Towards a Second Generation Data Analysis Framework for LHC Transient Data Recording (p. 802)
 
  • S. Boychenko, C. Aguilera-Padilla, M. Dragu, M.A. Galilée, J.C. Garnier, M. Koza, K.H. Krol, R. Orlandi, M.C. Poeschl, T.M. Ribeiro, K.S. Stamos, M. Zerlauth
    CERN, Geneva, Switzerland
  • M. Zenha-Rela
    University of Coimbra, Coimbra, Portugal
 
  During the last two years, CERN's Large Hadron Collider (LHC) and most of its equipment systems were upgraded to collide particles at twice the energy of the first operational period between 2010 and 2013. The system upgrades and the increased machine energy present new challenges for the analysis of transient data recordings, which has to be both dependable and fast. With the LHC having operated for many years already, statistical and trend analysis across the collected data sets is a growing requirement, highlighting several constraints and limitations imposed by the current software and data storage ecosystem. Based on several analysis use cases, this paper highlights the most important aspects and ideas towards an improved, second-generation data analysis framework to serve a large variety of equipment experts and operation crews in their daily work.
Poster WEPGF046 [0.501 MB]
 
WEPGF047 Smooth Migration of CERN Post Mortem Service to a Horizontally Scalable Service (p. 806)
 
  • J.C. Garnier, C. Aguilera-Padilla, S. Boychenko, M. Dragu, M.A. Galilée, M. Koza, K.H. Krol, T. Martins Ribeiro, R. Orlandi, M.C. Poeschl, M. Zerlauth
    CERN, Geneva, Switzerland
 
  The Post Mortem service for CERN's accelerator complex stores and analyses transient data recordings of various equipment systems following certain events, such as a beam dump or a magnet quench. The main purpose of this framework is to provide fast and reliable diagnostics to the equipment experts and operation crews so they can decide whether accelerator operation can continue safely or whether an intervention is required. While the Post Mortem system was initially designed to serve CERN's Large Hadron Collider (LHC), its scope has rapidly been extended to also include External Post Operational Checks and Injection Quality Checks in the LHC and its injector complex. These new use cases impose more stringent time constraints on the storage and analysis of data, calling for a migration of the system towards better scalability in terms of both storage capacity and I/O throughput. This paper presents an overview of the current service, the ongoing investigations and plans towards a scalable data storage solution and API, and the proposed strategy to ensure an entirely smooth transition for the current Post Mortem users.
Poster WEPGF047 [1.454 MB]