Browsing by title
Full Text
Peer Reviewed
Using process data for assessment in Intelligent Tutoring Systems. A psychometrician's, cognitive psychologist's, and computer scientist's perspective
Greiff, Samuel UL; Gasevic, Dragan; von Davier, Alina A.

in Sottilare, Robert A.; Graesser, Arthur C.; Hu, Xiangen et al. (Eds.), Design recommendations for intelligent tutoring systems. Volume 5 (2017)

Full Text
Peer Reviewed
Using process data to explain group differences in complex problem solving
Eichmann, B; Goldhammer, F; Greiff, Samuel UL et al

in Journal of Educational Psychology (2020), 122

Peer Reviewed
Using process data to explain group differences in complex problem solving
Eichmann, B.; Pucite, L.; Naumann, J. et al

Scientific Conference (2018, April)

Full Text
Peer Reviewed
Using Regularization to Infer Cell Line Specificity in Logical Network Models of Signaling Pathways
De Landtsheer, Sébastien; Lucarelli, Philippe UL; Sauter, Thomas UL

in Frontiers in physiology (2018), 9

Understanding the functional properties of cells of different origins is a long-standing challenge of personalized medicine. Especially in cancer, the high heterogeneity observed in patients slows down the development of effective cures. The molecular differences between cell types or between healthy and diseased cellular states are usually determined by the wiring of regulatory networks. Understanding these molecular and cellular differences at the systems level would improve patient stratification and facilitate the design of rational intervention strategies. Models of cellular regulatory networks frequently make weak assumptions about the distribution of model parameters across cell types or patients. These assumptions are usually expressed in the form of regularization of the objective function of the optimization problem. We propose a new method of regularization for network models of signaling pathways based on the local density of the inferred parameter values within the parameter space. Our method reduces the complexity of models by creating groups of cell line-specific parameters which can then be optimized together. We demonstrate the use of our method by recovering the correct topology and inferring accurate values of the parameters of a small synthetic model. To show the value of our method in a realistic setting, we re-analyze a recently published phosphoproteomic dataset from a panel of 14 colon cancer cell lines. We conclude that our method efficiently reduces model complexity and helps recover context-specific regulatory information.

Full Text
Peer Reviewed
Using rule-based machine learning for candidate disease gene prioritization and sample classification of cancer gene expression data
Glaab, Enrico UL; Bacardit, Jaume; Garibaldi, Jonathan M. et al

in PLoS ONE (2012), 7(7), 39932-39932

Microarray data analysis has been shown to provide an effective tool for studying cancer and genetic diseases. Although classical machine learning techniques have successfully been applied to find informative genes and to predict class labels for new samples, common restrictions of microarray analysis such as small sample sizes, a large attribute space and high noise levels still limit its scientific and clinical applications. Increasing the interpretability of prediction models while retaining a high accuracy would help to exploit the information content in microarray data more effectively. For this purpose, we evaluate our rule-based evolutionary machine learning systems, BioHEL and GAssist, on three public microarray cancer datasets, obtaining simple rule-based models for sample classification. A comparison with other benchmark microarray sample classifiers based on three diverse feature selection algorithms suggests that these evolutionary learning techniques can compete with state-of-the-art methods like support vector machines. The obtained models reach accuracies above 90% in two-level external cross-validation, with the added value of facilitating interpretation by using only combinations of simple if-then-else rules. As a further benefit, a literature mining analysis reveals that prioritizations of informative genes extracted from BioHEL's classification rule sets can outperform gene rankings obtained from a conventional ensemble feature selection in terms of the pointwise mutual information between relevant disease terms and the standardized names of top-ranked genes.
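The kind of interpretable model the abstract describes can be sketched as an ordered list of if-then rules over expression values with a default class. This is a minimal illustration of the rule-list idea only; the gene names and thresholds below are invented for the example and are not taken from BioHEL's or GAssist's actual output.

```python
# Sketch of an ordered rule-list classifier over gene expression values.
# The first matching rule assigns the class; otherwise a default applies.
# Gene names and thresholds are hypothetical, for illustration only.

def classify(sample, rules, default="healthy"):
    """rules: list of (condition, label) pairs; the first match wins."""
    for condition, label in rules:
        if condition(sample):
            return label
    return default

rules = [
    (lambda s: s["GENE_A"] > 0.8 and s["GENE_B"] < 0.2, "tumour"),
    (lambda s: s["GENE_C"] > 0.5, "tumour"),
]

print(classify({"GENE_A": 0.9, "GENE_B": 0.1, "GENE_C": 0.3}, rules))
# -> "tumour": the first rule matches
```

Such a model trades some flexibility for interpretability: each prediction can be explained by pointing at the single rule that fired.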

Full Text
Peer Reviewed
Using Selene to Verify your Vote in JCJ
Iovino, Vincenzo UL; Rial, Alfredo UL; Roenne, Peter UL et al

in Workshop on Advances in Secure Electronic Voting (VOTING'17) (2017, April 07)

Full Text
Peer Reviewed
Using simple relays to improve physical-layer security
Zheng, Gan UL; Jiangyuan, Li; Wong, Kai-Kit et al

in Communications in China (ICCC), 2012 1st IEEE International Conference on (2012)

This paper studies different uses of two cooperating relays to improve the secrecy rate of a wiretap channel. These two relays are assumed to perform only simple functions: either amplify-and-forward (AF) or jamming. Complex functions such as decode-and-forward (DF) are not considered. We study three modes of cooperation: i) cooperative jamming (CJ), ii) AF-aided beamforming and iii) mixed AF-aided beamforming and CJ, all with individual relay power constraints. While i) is known in the literature, our efforts are spent on ii) and iii). In particular, for iii), we assume that the jamming signals in two communication stages are correlated, giving rise to improved performances. We also propose a heuristic approach for selecting the appropriate cooperating mode. Simulation results illustrate the performance gain of each scheme under different channel conditions and the effectiveness of the proposed mode selection method.

Full Text
Peer Reviewed
Using social media to disseminate primary care research
Lygidakis, Charilaos UL; Gomez Bravo, Raquel UL

in Goodyear-Smith, Felicity; Mash, Robert (Eds.) How To Do Primary Care Research (2018)

Social media are a powerful means of communication among health-care professionals, patients and the public. Their use has been increasing steadily across the globe, transforming the way that people exchange information, interact and collaborate. Physicians are increasingly using social networks to connect with broader audiences, communicate with their patients and colleagues, and build a network of trustworthy peers. Researchers are also leveraging social media, capitalising on the velocity with which messages can spread and on the ability to disseminate their messages to the general public in addition to research communities, thus attracting more attention and increasing the influence and impact of their work.

Using Space-based Data for Humanitarian Causes
Blount, Percy UL; Dodge, Michael

in ROOM: The Space Journal (2018)

Full Text
Peer Reviewed
Using storytelling to teach vocabulary in language lessons – does it work? (print)
Kirsch, Claudine UL

in Language Learning Journal (2012), 44(1), 33-51

It has long been claimed that stories are a powerful tool for language learning. Storytelling is often used as a discrete pedagogical approach in primary modern foreign language (MFL) lessons in England. There has, however, been little investigation into how storytelling might impact on vocabulary learning in the primary classroom. This article focuses on how a London primary teacher used stories in German lessons in a Year 6 class (ages 10–11), and analyses the words and sentences the case-study children remembered over a brief period of time. Data were collected over two terms through observations, interviews and post-tests. The findings illustrate the wide range of teaching strategies that allowed for explicit and incidental learning and encouraged meaningful language use. They also show that children recalled a considerable number of words and sentences.

Full Text
Peer Reviewed
Using the Cross-Entropy method for control optimization: A case study of see-and-avoid on unmanned aerial vehicles
Olivares Mendez, Miguel Angel UL; Fu, Changhong; Kannan, Somasundar UL et al

in Control and Automation (MED), 2014 22nd Mediterranean Conference of (2014, June)

This paper presents an adaptation of the Cross-Entropy (CE) method to optimize fuzzy logic controllers. The CE method is a recently developed optimization method based on a general Monte-Carlo approach to combinatorial and continuous multi-extremal optimization and importance sampling. This work shows the application of this optimization method to optimize the input gains, the location and size of the membership function sets of each variable, and the weight of each rule in the rule base of a fuzzy logic controller (FLC). The control system approach presented in this work was designed to command the orientation of an unmanned aerial vehicle (UAV) to modify its trajectory for avoiding collisions. An onboard forward-looking camera was used to sense the environment of the UAV. The information extracted by the image processing algorithm is the only input of the fuzzy control approach to avoid the collision with a predefined object. Real tests with a quadrotor have been done to corroborate the improved behavior of the optimized controllers at different stages of the optimization process.
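The CE loop the abstract relies on — sample candidate parameters from a distribution, keep the best ("elite") fraction, refit the distribution to the elites, and repeat — can be sketched generically. The function below is a minimal illustration of that principle, not the authors' fuzzy-controller implementation; the encoding of an FLC's gains and membership functions as a flat parameter vector, and the toy cost function, are assumptions for the example.

```python
import random
import statistics

# Generic sketch of the Cross-Entropy method for continuous parameter
# optimisation: sample candidates from a Gaussian, keep the elite
# fraction with the lowest cost, refit mean and spread, repeat.

def cross_entropy_minimise(cost, dim, n_samples=50, elite_frac=0.2,
                           n_iter=30, seed=0):
    rng = random.Random(seed)
    mean, std = [0.0] * dim, [2.0] * dim
    n_elite = max(2, int(elite_frac * n_samples))
    for _ in range(n_iter):
        samples = [[rng.gauss(mean[d], std[d]) for d in range(dim)]
                   for _ in range(n_samples)]
        samples.sort(key=cost)              # best candidates first
        elites = samples[:n_elite]
        # Refit the sampling distribution to the elite candidates
        mean = [statistics.mean(e[d] for e in elites) for d in range(dim)]
        std = [statistics.stdev([e[d] for e in elites]) + 1e-6
               for d in range(dim)]
    return mean

# Toy quadratic objective standing in for a controller-performance cost:
best = cross_entropy_minimise(lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2, 2)
```

For a real controller, `cost` would run a simulation (or flight test) with the candidate parameters and return a performance penalty; the loop itself is unchanged.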

Full Text
Peer Reviewed
Using the Empowerment Scale with unemployed people in lifelong learning: Is the tool sound and useful?
Meyers, Raymond UL; Pignault, Anne UL; Houssemand, Claude UL

in Psychology Research (2016), 6(11), 648-659

Empowerment is a widely used construct in research on social work, mental health and community interventions, but has only been exploited indirectly with the unemployed. Yet job finding is an important dimension of empowerment and could be used to test the accuracy of the concept and of its measures. The Making Decisions Empowerment Scale was used with 97 unemployed people who had been jobless for 6 months. Even though the psychometric qualities of the 5 subscales and the total scale were mixed, convergent and discriminant validity with several adaptive and non-adaptive dimensions could be established for the global scale and for the Esteem, Power, Control and, to a lesser degree, the Activism subscales. The results were only marginally better for the 28-item global scale compared with the 9-item Esteem scale. Empowerment could be adequately modelled by using three dimensions: change coping, depression, and chance control of unemployment. When those who had found a job 6 months later were compared with those still unemployed, the two groups differed significantly on 2 of the 5 subscales (Activism and Control), though not on the total empowerment scale, nor on the other psychometric scales. The results throw some doubt on the accuracy of an aggregate measure that sums up divergent dimensions. Instead, it is proposed that more specific and individualized constructs be used, at least in unemployment research.

Full Text
Peer Reviewed
Using the GPU for fast symmetry-based dense stereo matching in high resolution images
Mota, Vasco; Falcao, Gabriel; Goncalves Almeida Antunes, Michel UL et al

in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) (2014)

Peer Reviewed
Using the LLTM to determine an item-generating system for reading comprehension
Sonnleitner, Philipp UL

Poster (2008)

Due to inconclusive findings concerning the components responsible for the difficulty of reading comprehension items, this paper attempts to set up an item-generating system using hypothesis-driven modeling of item complexity, applying Fischer's (1973) linear logistic test model (LLTM) to a German reading comprehension test. This approach guarantees an evaluation of the postulated item-generating system; moreover, the construct validity of the administered test is investigated. Previous findings in this field are considered; additionally, some text features are introduced to this debate and their impact on item difficulty is discussed. Results once more show a strong influence of formal components (e.g., the number of response options presented in a multiple-choice format), but also indicate how this effect can be minimized.

Full Text
Peer Reviewed
Using the LLTM to evaluate an item-generating system for reading comprehension
Sonnleitner, Philipp UL

in Psychology Science Quarterly (2008), 50(3), 345-362

Due to inconclusive findings concerning the components responsible for the difficulty of reading comprehension items, this paper attempts to set up an item-generating system using hypothesis-driven modeling of item complexity, applying Fischer's (1973) linear logistic test model (LLTM) to a German reading comprehension test. This approach guarantees an evaluation of the postulated item-generating system; moreover, the construct validity of the administered test is investigated. Previous findings in this field are considered; additionally, some text features are introduced to this debate and their impact on item difficulty is discussed. Results once more show a strong influence of formal components (e.g., the number of response options presented in a multiple-choice format), but also indicate how this effect can be minimized.
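The core LLTM idea used here is that each item's Rasch difficulty is a weighted sum of basic parameters for the cognitive operations the item requires, with weights given by a design matrix Q. A minimal numeric sketch, with an invented design matrix and operation difficulties (none of these numbers come from the paper):

```python
# LLTM decomposition sketch: item difficulty beta_i = sum_k Q[i][k] * eta[k].
# Given item difficulties and the design matrix, the basic parameters eta
# can be recovered by least squares via the normal equations.

Q = [[1, 0],   # item 1 requires operation A only
     [0, 1],   # item 2 requires operation B only
     [1, 1]]   # item 3 requires both operations
eta_true = [0.5, 1.2]                      # hypothetical operation difficulties
beta = [sum(q * e for q, e in zip(row, eta_true)) for row in Q]

# Normal equations (Q^T Q) eta = Q^T beta, solved directly for the 2x2 case
A = [[sum(Q[i][r] * Q[i][c] for i in range(len(Q))) for c in range(2)]
     for r in range(2)]
b = [sum(Q[i][r] * beta[i] for i in range(len(Q))) for r in range(2)]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
eta_hat = [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
           (b[1] * A[0][0] - b[0] * A[1][0]) / det]
print(eta_hat)  # ≈ [0.5, 1.2]
```

In the actual LLTM the etas are estimated jointly with person parameters by conditional maximum likelihood rather than by least squares; the sketch only shows the linear decomposition being tested.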

Full Text
Peer Reviewed
Using the max-plus algorithm for multiagent decision making in coordination graphs
Kok, Jelle R.; Vlassis, Nikos UL

in Proc. RoboCup Int. Symposium, Osaka, Japan (2006)

Coordination graphs offer a tractable framework for cooperative multiagent decision making by decomposing the global payoff function into a sum of local terms. Each agent can in principle select an optimal individual action based on a variable elimination algorithm performed on this graph. This results in optimal behavior for the group, but its worst-case time complexity is exponential in the number of agents, and it can be slow in densely connected graphs. Moreover, variable elimination is not appropriate for real-time systems as it requires that the complete algorithm terminates before a solution can be reported. In this paper, we investigate the max-plus algorithm, an instance of the belief propagation algorithm in Bayesian networks, as an approximate alternative to variable elimination. In this method the agents exchange appropriate payoff messages over the coordination graph, and based on these messages compute their individual actions. We provide empirical evidence that this method converges to the optimal solution for tree-structured graphs (as shown by theory), and that it finds near optimal solutions in graphs with cycles, while being much faster than variable elimination.

Full Text
Peer Reviewed
Using the Moles and the Mini Moles software system to bridge the gap between indoor and outdoor learning experiences
Melzer, André UL; Hadley, L.; Glasemann, M. et al

in IADIS International Journal on WWW/Internet (2006), 4(2), 48-58

Full Text
Peer Reviewed
Using the singular value decomposition to extract 2D correlation functions from scattering patterns
Bender, Philipp Florian UL; Zákutná, Dominika; Disch, Sabrina et al

in Acta Crystallographica. Section A, Foundations and Advances (2019), A75

Full Text
Using the Vertical Land Movement estimates from the IGS TIGA combined solution to derive Global Mean Sea Level changes
Bogusz, Janusz; Hunegnaw, Addisu UL; Teferle, Felix Norman UL et al

Scientific Conference (2019, December 13)

Global mean sea level (GMSL) is now widely recognized to have risen by between 1 and 2 mm/yr, depending on location, since the 20th century. Prior to the satellite altimetry era, GMSL was primarily estimated from a set of secular tide gauge records relative to coastal benchmarks. GPS (Global Positioning System) measurements have recently been demonstrated to be a useful tool for directly estimating the Vertical Land Motion (VLM) induced by both long- and short-term geophysical and human-induced processes in a geocentric reference frame. This presentation will provide the results of a combination, performed using the CATREF software, of three independent GPS daily solutions provided by the British Isles continuous GNSS Facility – University of Luxembourg consortium (BLT), the German Research Centre for Geosciences (GFZ) and the University of La Rochelle (ULR) under the auspices of the Tide Gauge Benchmark Monitoring (TIGA) Working Group (WG), which results in a spatially comprehensive map of VLM near or close to tide gauge benchmarks. The combination was performed in accordance with the second re-processing campaign (repro2) of the IGS (International GNSS Service). Long coastal tide gauge records from the archives maintained at the Permanent Service for Mean Sea Level (PSMSL) were extracted for relative sea level estimates. To cross-compare the sea level rates over the years, we employed observations between 1900-2016. Then, the time series were cut and analyzed separately, ceteris paribus, for the period 1960-2016. This analysis was aimed at a cross-comparison of relative sea level trends and their changes over the years. The stochastic part of the tide gauge records was analyzed with Maximum Likelihood Estimation (MLE), assuming several different combinations of noise models, with the Bayesian Information Criterion (BIC) providing a means to identify the preferred one.
The relative sea level estimates were corrected for the inverted barometric effect using data from the 20th century Reanalysis project version V2C, for the effect of wind stress on the surface of the ocean in both zonal and meridional components, and for the Pacific Decadal Oscillation (PDO) and the North Pacific Gyre Oscillation (NPGO) influencing Pacific tide gauge records. The GPS-based velocities were corrected for the Glacial Isostatic Adjustment (GIA) effect using the ICE-6G(VM5a) model with the associated geoid rate, and for post-seismic decays using ITRF2014 estimates. Also, environmental loading models were employed to account for present-day elastic loading in VLM. The Mean Sea Level (MSL) trends from tide gauges and the VLM-corrected MSL trends using the GIA model (TG+GIA) and the TIGA combination (TG+TIGA) were determined. Our final reconstruction of GMSL, based on the MSL records from 1900 to 2016 where the VLM uncertainty is smaller than 0.7 mm/yr, indicates a long-term trend of 1.75 +/- 0.2 mm/yr and is in good agreement with several similar determinations.
