Paper | Title | Other Keywords | Page |
---|---|---|---|
WEP17 | Extending the Remote Control Capabilities in the CMS Detector Control System with Remote Procedure Call Services | controls, software, interface, operation | 73 |
Funding: The authors would like to thank the Swiss National Science Foundation for the financial support.

The CMS Detector Control System (DCS) is implemented as a large, distributed and redundant system, with applications interacting and sharing data in multiple ways. The CMS XML-RPC is a software toolkit implementing the standard Remote Procedure Call (RPC) protocol, using the Extensible Markup Language (XML), and a custom lightweight variant using the JavaScript Object Notation (JSON), to model, encode and expose resources through the Hypertext Transfer Protocol (HTTP). The toolkit complies with the standard XML-RPC specification, which allows system developers to build collaborative software architectures with self-contained, reusable logic and encapsulation of well-defined processes. The implementation of this protocol introduces not only a powerful communication method to operate and exchange data with web-based applications, but also a new programming paradigm for designing service-oriented software architectures within the CMS DCS domain. This paper presents details of the CMS XML-RPC implementation in WinCC Open Architecture (OA) Control Language using an object-oriented approach.
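As a hedged illustration of the request style such a toolkit supports (the endpoint URL, port, method name and datapoint path below are hypothetical, not taken from the paper), any standard-compliant XML-RPC client, such as Python's built-in `xmlrpc.client`, could invoke a procedure exposed by a WinCC OA server over HTTP:

```python
# Minimal sketch of a standard XML-RPC call over HTTP; the endpoint
# and remote method are hypothetical stand-ins for a CMS DCS service.
import xmlrpc.client

# Hypothetical HTTP endpoint of a WinCC OA XML-RPC server.
proxy = xmlrpc.client.ServerProxy("http://dcs-host.example:8080/RPC2")

# Hypothetical remote procedure reading the value of a DCS datapoint.
value = proxy.dcs.getDatapointValue("Tracker/HV/channel042/vMon")
print(value)
```

Because the toolkit follows the standard XML-RPC specification, any conforming client library should be able to exchange such requests with it.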
Poster WEP17 [3.379 MB]
DOI • https://doi.org/10.18429/JACoW-PCaPAC2018-WEP17
About • paper received: 09 October 2018 • paper accepted: 15 October 2018 • issue date: 21 January 2019
THCB3 | Improving Web2cHMI Gesture Recognition Using Machine Learning | controls, interface, extraction, TANGO | 148 |
Web2cHMI is a multi-modal human-machine interface which seamlessly incorporates actions based on various interface modalities in a single API, including finger, hand and head gestures as well as spoken commands. The set of native gestures provided by off-the-shelf 2D or 3D interface devices such as the Myo gesture control armband can be enriched or extended with additional custom gestures. This paper discusses a particular method, and its implementation, for recognizing different finger, hand and head movements using supervised machine learning algorithms: a non-linear regression for feature extraction of the movement, and a k-nearest-neighbor method for movement classification using memorized training data. The method is capable of distinguishing between fast and slow, short and long, up and down, or right and left linear movements, as well as clockwise and counterclockwise circular movements, which can then be associated with specific user interactions.
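As a hedged sketch of the classification stage described in the abstract (the feature vectors, gesture labels and value of k below are invented for illustration, not taken from the paper), a k-nearest-neighbor classifier compares the features extracted from a movement against memorized training data and returns the majority label among the k closest examples:

```python
# Minimal k-nearest-neighbor sketch for movement classification.
# The feature vectors (e.g. duration, amplitude, direction) are
# invented stand-ins for the output of the regression stage.
from collections import Counter
import math

def knn_classify(sample, training_data, k=3):
    """Return the majority label among the k training samples
    closest to `sample` in Euclidean distance."""
    by_distance = sorted(
        training_data,
        key=lambda item: math.dist(sample, item[0]),
    )
    labels = [label for _, label in by_distance[:k]]
    return Counter(labels).most_common(1)[0][0]

# Memorized training data: (feature vector, gesture label).
training = [
    ((0.2, 1.0, +1.0), "fast_right"),
    ((0.3, 0.8, +1.0), "fast_right"),
    ((0.9, 1.0, +1.0), "slow_right"),
    ((0.2, 1.0, -1.0), "fast_left"),
    ((0.9, 1.0, -1.0), "slow_left"),
]

print(knn_classify((0.25, 0.9, 1.0), training))  # -> "fast_right"
```

Because k-NN simply memorizes training examples, new custom gestures can be added by recording a few labeled samples, without retraining a model.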
Slides THCB3 [0.934 MB]
DOI • https://doi.org/10.18429/JACoW-PCaPAC2018-THCB3
About • paper received: 14 September 2018 • paper accepted: 16 October 2018 • issue date: 21 January 2019