Source selection languages: A usability evaluation

I. Galpin, E. Abel, N.W. Paton

Publication: Chapter in book/report/conference proceeding › Conference contribution in proceedings › Research › peer-reviewed


When seeking insights from data, and faced with numerous possible data sources, the data retrieved from the selected sources should exhibit certain quality criteria in order to be fit for purpose. An effective source selection algorithm can only provide good results in practice if the user's requirements have been suitably captured; an important consideration is therefore how users can express those requirements effectively.

In this paper, we carry out an experiment to compare user performance in two different languages for expressing user requirements in terms of data quality characteristics: pairwise comparison of criteria values, and single-objective constrained optimization. We employ crowdsourcing to evaluate, for a set of tasks, users' ability to choose effective formulations in each language. The results of this initial study show that users were able to determine more effective formulations for the tasks using pairwise comparisons. Furthermore, users tended to express a preference for one language over the other, although it was not necessarily the language in which they performed best.
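To illustrate the distinction between the two languages compared in the study, the sketch below shows how a single source selection requirement might be phrased in each style. This is a hypothetical illustration, not the paper's actual syntax: the source names, criteria values, and function names are all invented for the example.

```python
# Candidate sources with data quality criteria values (illustrative data).
sources = {
    "A": {"completeness": 0.9, "freshness": 0.5},
    "B": {"completeness": 0.7, "freshness": 0.8},
    "C": {"completeness": 0.6, "freshness": 0.9},
}

def pairwise_style(sources, prefer=("completeness", "freshness"), ratio=2.0):
    """Pairwise-comparison style: the user states that the first criterion
    matters `ratio` times as much as the second; weights are derived from
    that comparison and sources are ranked by weighted score."""
    w1 = ratio / (ratio + 1.0)   # weight of the preferred criterion
    w2 = 1.0 - w1                # weight of the other criterion
    score = lambda s: w1 * s[prefer[0]] + w2 * s[prefer[1]]
    return max(sources, key=lambda name: score(sources[name]))

def constrained_style(sources, maximize="completeness",
                      constraint=("freshness", 0.7)):
    """Single-objective constrained-optimization style: maximize one
    criterion subject to a threshold constraint on another."""
    crit, threshold = constraint
    feasible = {n: s for n, s in sources.items() if s[crit] >= threshold}
    if not feasible:
        return None  # no source satisfies the constraint
    return max(feasible, key=lambda n: feasible[n][maximize])

print(pairwise_style(sources))     # weighted ranking over all sources
print(constrained_style(sources))  # best completeness among fresh-enough sources
```

The two styles can select different sources for the same data: the weighted pairwise formulation trades criteria off continuously, while the constrained formulation rules sources out entirely once they miss the threshold.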
Title: Proceedings of the Workshop on Human-In-the-Loop Data Analytics, HILDA 2018
Number of pages: 6
Publisher: Association for Computing Machinery
Status: Published - 2018
Published externally: Yes
