 
Description
Title : DIC Seminar Archive 2006-2011
Number : 3/12
On this page you will find the past seminars (2006-2011) scheduled by the DIC, corresponding to the courses DIC9270 and DIC9271.


Summary

Affect or cognition in education? This is a question we have, wrongly, been asking ourselves for many years. In fact, the cognitive sciences, due in part to technological progress, have demonstrated in recent years that both dimensions contribute to education and human reasoning.

The human mind, a (sometimes) skillful cocktail of the two, uses affect and emotion in daily decision-making and learning processes.
Moreover, the digitalization of practically all areas of society and the miniaturization of technologies have given researchers hope of measuring and quantifying these two dimensions, in order to enrich the machine so that it may better "understand" the learner interacting with it. But to what extent are we really able to measure these dimensions in a meaningful way? Are we able to model all of this in a machine?
In this presentation, I will describe the functioning of different types of increasingly unobtrusive sensors measuring physiological reactions (pulse, electrodermal skin response, eye tracking, brain waves) that are used to discretize the affective and cognitive dimensions of the human mind.
I will then explain the contribution of each of these sensors to tutoring systems with respect to affect and cognition.
I will end the presentation with some future perspectives concerning the usage of physiological sensors in tomorrow's tutoring systems.
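To fix ideas, the electrodermal channel mentioned above can be reduced to a crude affect signal. The sketch below is purely illustrative and not from the seminar; the function name, baseline, and threshold are all invented assumptions.

```python
# Hypothetical sketch: turning raw electrodermal activity (EDA) samples
# into a crude arousal index, one of many possible discretizations.

def arousal_index(eda_samples, baseline, threshold=0.05):
    """Fraction of samples whose deviation from baseline exceeds threshold."""
    if not eda_samples:
        return 0.0
    peaks = sum(1 for s in eda_samples if s - baseline > threshold)
    return peaks / len(eda_samples)

# Example: a mostly calm signal with a few skin-conductance rises.
samples = [0.40, 0.41, 0.40, 0.52, 0.55, 0.41, 0.40, 0.58]
print(arousal_index(samples, baseline=0.40))  # 3 of 8 samples exceed the threshold
```

Real systems would of course filter the signal and detect skin-conductance responses properly; this only shows the thresholding idea.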



 

Summary
The recent and rapid evolution of wireless technologies entails high demand for spectrum resources. To address this issue, we need good management and more efficient usage of the spectrum. It is within this framework that studies have been carried out in the area of cognitive radio.

Cognitive radio is a system that enables a radio terminal to interact with its environment: it can perceive its environment, model it, and adapt to it. It can then detect unused frequencies and use them, contributing to better spectrum efficiency.
Cognitive radio will be presented in this seminar, in its different aspects: Principles, Architecture and Applications.
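The "detect unused frequencies" step can be illustrated with energy detection, the simplest spectrum-sensing method. This sketch is my own illustration, not material from the seminar; the threshold and sample values are invented.

```python
# Illustrative sketch: energy detection for spectrum sensing. A cognitive
# radio declares a band idle when the average signal energy is low enough
# to suggest that no primary user is transmitting.

def band_is_idle(samples, threshold):
    """True if the mean energy of the sampled band falls below threshold."""
    energy = sum(s * s for s in samples) / len(samples)
    return energy < threshold

noise_only = [0.1, -0.2, 0.05, -0.1]      # low energy: likely no primary user
primary_user = [1.0, -0.9, 1.1, -1.2]     # high energy: band occupied
print(band_is_idle(noise_only, threshold=0.5))    # True
print(band_is_idle(primary_user, threshold=0.5))  # False
```

In practice the threshold is derived from the noise floor and a target false-alarm probability rather than fixed by hand.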


Summary
To create Intelligent Tutoring Systems able to help students in problem-solving activities, an author must provide them with knowledge of the domain. The traditional approach is to specify this knowledge manually, which is time-consuming and hard to apply in domains that are difficult to formalize. As an alternative, we propose using data mining algorithms to automatically extract task models from demonstrations. For this automatic extraction, data mining algorithms have been developed for discovering sequential patterns and rules. The approach was successfully applied in an Intelligent Tutoring System developed in collaboration with the Canadian Space Agency for teaching the manipulation of the Canadarm2 robotic arm.
In this presentation, we first present our work in this project; we then briefly present our recent work on new data mining algorithms for discovering sequential rules, as well as a second application of these algorithms in a cognitive agent.
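The demonstration-mining idea can be sketched minimally. This is a toy illustration of sequential pattern mining in general, not the authors' actual algorithms: find ordered pairs of actions that recur in enough demonstration sequences.

```python
# Toy sequential-pattern miner: keep every ordered pair (a before b)
# that appears in at least min_support demonstration sequences.

def frequent_sequential_pairs(sequences, min_support):
    counts = {}
    for seq in sequences:
        seen = set()                      # count each pair once per sequence
        for i, a in enumerate(seq):
            for b in seq[i + 1:]:
                seen.add((a, b))
        for pair in seen:
            counts[pair] = counts.get(pair, 0) + 1
    return {p for p, c in counts.items() if c >= min_support}

# Hypothetical robot-arm demonstrations.
demos = [["grasp", "move", "release"],
         ["grasp", "rotate", "move", "release"],
         ["grasp", "move", "rotate", "release"]]
print(frequent_sequential_pairs(demos, min_support=3))
```

A real task-model extractor would mine longer patterns and attach confidence values to the resulting rules; the support-counting principle is the same.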



Summary
Computer science has played a very important role in genome sequencing, through various algorithms. Now, with several sequenced genomes available (including the human genome), computer science also helps us understand the mechanisms of the multitude of genes that constitute a genome. It has contributed to the implementation of genome browsers built on several databases and Web interfaces. It also contributes to the prediction of new genes, functional regions, and regulatory regions, through genome-comparison and machine learning algorithms. In this presentation, I will outline the main problems, their computational formalization, and the techniques used to solve them. I will also present the mechanisms used to represent the predicted knowledge and make it accessible to the scientific community.
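As a toy illustration of algorithmic genome comparison (my own example, not taken from the talk), shared k-mer content gives a cheap proxy for the similarity of two sequences:

```python
# Compare two DNA sequences by the Jaccard similarity of their k-mer sets,
# a standard lightweight alternative to full alignment.

def kmers(seq, k=3):
    """Set of all length-k substrings of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def kmer_similarity(a, b, k=3):
    """Jaccard similarity of the k-mer sets of two sequences."""
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb)

print(kmer_similarity("ACGTACGT", "ACGTACGA"))  # 4 shared 3-mers out of 5 total
```

Genome-scale tools apply the same idea with much larger k and sketching techniques to keep the sets manageable.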


Summary

Turing machines have enabled the development of computationalism, but we have yet to present them with an algorithm that can simulate intelligence. Representationalists have proposed a psychological hypothesis according to which intelligence can be directly simulated by algorithmizing logical reasoning and symbol manipulation; but where do these symbols come from? Connectionists prefer the neuro-biological hypothesis, which advocates simulating the neuronal mechanisms from which psychological phenomena can emerge. This approach incorporates concepts of situated embodiment that provide some elements of an answer to the symbol-grounding question, because sensors can yield semiotic perceptions even before consciousness interprets them as symbols. Before being symbolic, these systems must first be semiotic: able to discover signs in physical signals from the environment.

In parallel, Maturana and Varela have proposed the hypothesis that cognition and life result from one and the same process, which they call autopoiesis. From their work, we can conclude that an autopoietic process is a process whose product is the process itself. In other words, life creates and maintains life, and cognition, participating in this process, self-generates and self-maintains as well. Any simulation based on a biological hypothesis must take this constraint into account. To be biologically plausible, an artificial neural network must contribute to its own genesis and development.
On the cognitive side, we refine the hypothesis of Newell and Simon and propose that only semiotic autopoietic systems are capable of general intelligent action.



Summary

In this talk, we present the research approach taken in the Lingot project, a multidisciplinary project whose objective is to manage the cognitive diversity of learners in order to facilitate the learning of elementary algebra. The tools developed are now accessible on the online platform of the Sésamaths association, the most widely used platform in France for teaching mathematics in secondary school. We present the methodology we implemented to develop a competency diagnosis tool and a tool to help set up learning paths adapted to the diagnosed profiles.

The results of our work concern (i) the analysis of student reasoning by identifying several levels of indicators of algebra competency according to the uses of the diagnosis, (ii) a rather general model of competency diagnosis, (iii) the implementation of software artifacts based on these models, and (iv) the distribution of the tools resulting from this research to teachers. Finally, we will address work in progress on the modeling of learning paths and on the system that helps set them up.




Summary
The presentation will outline the Astus cognitive architecture, which is integrated into a platform for generating Intelligent Tutoring Systems (ITSs) for well-defined domains and tasks. The Astus architecture enables us to model how a teacher mentally represents a learner who is attempting to solve a problem. In this architecture, domain knowledge is modeled in a form close to the one a teacher presents to a student. This form is also close to the way a learner initially encodes domain knowledge. We will then address how this architecture facilitates the interpretation of the learner's actions in a learning environment, as well as the integration of the knowledge we want to teach with knowledge the learner has already acquired but that is necessary to build a computational model of the domain.


Summary

The most common cognitive disorders are those that affect the cerebral search engine. Beyond causing us to make mistakes, these executive disorders often cause us to ignore important information, possibilities, and objectives in a given situation. They reduce the imagination underlying the resolution of everyday problems. The cerebral search engine is sensitive to expertise and to interference that affects content retrieval. It is also sensitive to motivation and to the dynamics of the cognitive themes that vie for access to consciousness. A better understanding of these factors enables us to better simulate its functioning, to better predict human error, and to develop tools to help people who have lost some of their cognitive capabilities.




Abstract
The convergence of three technologies will have a major impact on the next generation of technology-enhanced learning environments, namely pervasive computing, social media, and the semantic web. Each of these technologies brings its own opportunities and challenges, and their convergence has the potential to enable new educational practices in situated and networked learning. In pervasive computing, the physical environment is directly related to learning goals and activities, and the learning system is dynamically adapted to the learning context. Personal Learning Environments (PLE) have emerged from the combination of Web 2.0 and social media tools to support learning. From an educational perspective, these kinds of tools fit well with socio-constructivist learning approaches, as they provide spaces for collaborative knowledge building and reflective practices. The semantic web provides a common framework that allows data to be shared and reused across Web 2.0 tools and community boundaries.
The technologies and tools presented above can be used to conceptualize and design the next generation of learning landscapes relying on pervasive computing, social media, and semantic web standards. Inquiry-Based Science Teaching in the context of the History of Science and Technology is explored to identify the functionalities and adaptations required to ensure data gathering, notification, and sharing on a historical problem of technology: the swinging bridge of Brest over the Penfeld (1861-1944).

Serge Garlatti's web page
poster

Summary
Software systems need to evolve continuously in order to respond to user needs and to ever-changing environments. However, unlike design patterns, anti-patterns (poor solutions to recurring design and implementation problems) slow down the evolution of these systems by making adaptation and maintenance tasks more difficult. Detecting and correcting these anti-patterns facilitates the evolution of these systems. In this presentation, we are interested in the automatic detection and correction of anti-patterns in three different types of architectures: object-oriented architectures, model-driven architectures, and service-oriented architectures. We present methods and techniques adapted to each of these architectures, while highlighting their similarities.
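As a hedged illustration of metric-based anti-pattern detection (the rule, names, and thresholds below are invented for this sketch, not the speaker's tools), a "God Class" can be flagged when a class concentrates too much behavior, state, and coupling:

```python
# One simple metric rule of the kind used to flag the "God Class"
# anti-pattern in object-oriented code. Thresholds are illustrative.

def looks_like_god_class(n_methods, n_attributes, coupled_classes):
    """Flag classes that concentrate too much behavior, state and coupling."""
    return n_methods > 40 and n_attributes > 15 and coupled_classes > 10

print(looks_like_god_class(n_methods=60, n_attributes=20, coupled_classes=14))  # True
print(looks_like_god_class(n_methods=8, n_attributes=3, coupled_classes=2))     # False
```

Actual detectors combine many such metrics with structural and lexical evidence, and calibrate the thresholds per system.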




Summary
This talk presents work in progress on machine learning applied to architectural design. It involves modeling the abductive production of architectural solutions with a Support Vector Machine (SVM) algorithm. One of the central problems in architectural design is gathering design knowledge acquired beforehand and using it to create a new spatial configuration. First, we will examine how abduction works while highlighting its creative aspect, and we will link this aspect to architectural retrieval. Next, we will briefly review the principles underlying SVM algorithms and their link to kernel methods, to identify the criteria of their adequacy for abduction. We will then explore the notion of dimensional augmentation of the prior knowledge space as a foundation of the creativity of SVMs. Finally, we discuss future developments of this research.
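The "dimensional augmentation" the abstract refers to can be shown concretely with the kernel trick. This is a generic toy example of SVM kernel machinery, not the speaker's model: a polynomial kernel computes an inner product in a higher-dimensional feature space without ever constructing that space.

```python
# A degree-2 polynomial kernel equals the dot product of an explicit
# quadratic feature map, demonstrating implicit dimensional augmentation.

def phi(x):
    """Explicit degree-2 feature map for a 2-D point (x1, x2)."""
    x1, x2 = x
    return (x1 * x1, x2 * x2, (2 ** 0.5) * x1 * x2)

def poly_kernel(x, y):
    """Same value as dot(phi(x), phi(y)), computed without phi."""
    return (x[0] * y[0] + x[1] * y[1]) ** 2

a, b = (1.0, 2.0), (3.0, 1.0)
explicit = sum(p * q for p, q in zip(phi(a), phi(b)))
print(explicit, poly_kernel(a, b))  # both equal 25.0
```

An SVM trained with such a kernel can therefore separate data that is not linearly separable in the original space, which is the property the talk connects to creative recombination of prior knowledge.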




Abstract
Machine learning and the semantic web cover conceptually different sides of the same story: the semantic web's typical approach is top-down, modeling knowledge and proceeding down towards the data, while machine learning is an almost entirely data-driven, bottom-up approach that tries to discover structure in the data and express it in more abstract ways and in rich knowledge formalisms.
The talk will discuss possible interactions between, and uses of, machine learning and knowledge discovery for the semantic web, with emphasis on ontology construction. In the second half of the talk, we will look at some research using machine learning for the semantic web and at demos of the corresponding prototype systems.

Dunja Mladenic's web page

Summary
The problem of Automatic Speech Recognition (ASR) has been an active field of study since the early fifties. The model most used for this purpose is the Hidden Markov Model (HMM), due to its interpretability and the relative simplicity and effectiveness of its main algorithms. Nevertheless, HMMs have several recognized weaknesses, among them the constraining assumptions of uncorrelated data and of a priori knowledge of the relevant probabilities. Over the past years, an extension of HMM models gave rise to hybridization with artificial neural networks, particularly the Multi-Layer Perceptron (MLP). Such hybrid systems benefit from the discriminating capacity and robustness of MLPs and from the flexibility of HMMs in order to obtain better performance than traditional HMMs alone. Many works have shown the effectiveness of these HMM/MLP models for speech recognition (continuous and isolated), in a speaker-independent manner for small vocabularies.
The recognition accuracy of HMM/MLP models does not suffer from the strong dependency of regular HMMs on the size of the training data, but their training time does. To remedy this problem, we propose a new training method based on data fusion. It relies on past experiments that have shown that combining models improves the performance of hybrid systems. The underlying idea is that models trained on different parts of the acoustic data will capture different properties of the data, leading to improved recognition accuracy when the results are combined. The combination is accomplished according to various criteria, with the aim of selecting the most probable spoken word. In addition to the fusion method, we propose a data clustering preprocessing stage performed with an algorithm other than vector quantization (VQ), since VQ provides a hard (non-probabilized) decision in connection with the observations in the HMM states. In this respect, c-means and genetic algorithms are compared.
The integration of the described fusion method into the HMM/MLP hybrid model led to a substantial improvement of recognition accuracy when applied to training and recognizing Arabic speech.
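The HMM side of such hybrids rests on the forward recursion, which scores an observation sequence against a model; one such score per word model is what any combination criterion ultimately compares. A toy sketch in plain Python (illustrative only, not the thesis system; the probabilities are invented):

```python
# Forward algorithm for a discrete HMM: the likelihood of an observation
# sequence, computed by propagating state probabilities left to right.

def forward(pi, A, B, observations):
    """Likelihood of an observation sequence under a discrete HMM."""
    n = len(pi)
    alpha = [pi[s] * B[s][observations[0]] for s in range(n)]
    for obs in observations[1:]:
        alpha = [B[s][obs] * sum(alpha[t] * A[t][s] for t in range(n))
                 for s in range(n)]
    return sum(alpha)

pi = [0.6, 0.4]                  # initial state probabilities
A = [[0.7, 0.3], [0.4, 0.6]]     # state transition matrix
B = [[0.9, 0.1], [0.2, 0.8]]     # emission probabilities for symbols 0/1
print(forward(pi, A, B, [0, 1, 0]))
```

In an HMM/MLP hybrid, the MLP replaces the emission table B with discriminatively trained posteriors; the recursion itself is unchanged.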

Summary

Nowadays, the advent of new data-acquisition technologies is exponentially increasing the volume of data stored in databases. Knowledge-extraction techniques have been created to enable users to extract as much relevant information as possible from this data. A multitude of approaches developed over the last two decades have shown their effectiveness in this framework, even though they are often regarded as "black box" models. This presentation will survey these approaches, as well as solutions existing in the literature to associate semantics with the extracted knowledge.

This presentation will focus on multi-layer neural networks in supervised classification, as well as on pattern matching.
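To make the "black box" concrete, here is a minimal multi-layer network of the kind discussed, with weights fixed by hand. This is a generic illustration, not the speaker's system; all values are invented.

```python
# Forward pass of a tiny multi-layer perceptron: one hidden layer of
# sigmoid units feeding a single sigmoid output unit.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mlp_forward(x, W_hidden, W_out):
    """Input -> hidden layer -> single output, all with sigmoid activations."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W_hidden]
    return sigmoid(sum(w * h for w, h in zip(W_out, hidden)))

W_hidden = [[1.0, -1.0], [-1.0, 1.0]]   # weights of two hidden units
W_out = [2.0, 2.0]                      # weights of the output unit
print(mlp_forward([0.5, 0.25], W_hidden, W_out))
```

The difficulty the abstract alludes to is that the learned weights carry no directly readable semantics, which is what rule-extraction techniques try to recover.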


Engelbert Mephu Nguifo's web page

Abstract
This paper explores the emergence of language from the perspectives of usage-based approaches and of complex systems (CS). One of the mysteries of language development is that each of us as learners has had different language experiences, and yet somehow we have converged on broadly the same language system. From diverse, often noisy samples, we end up with similar linguistic competence. How can that be? There must be some constraints on our estimation of how language works. Some views hold that the constraints are in the learner, as expectations of linguistic universals pre-programmed in some form of innate language acquisition device. Others hold that the constraints are in the dynamics of language itself: that language form, language meaning, and language use come together to promote robust induction by means of statistical learning over limited samples. The research described here explores this question with regard to English verbs, their grammatical form, semantics, and patterns of usage. It exemplifies CS principles such as agent-based emergence and the importance of scale-free distributions, and CS methods such as distributional analysis, connectionist modeling, and network analysis.
Nick Ellis's web page
poster

Summary
Many problems in Artificial Intelligence and pattern recognition can be cast as optimization problems, which we solve by choosing the best alternative from a set of potential solutions. In practice, the process reduces to finding a point in a given search space, which contains all possible solutions once the problem to be solved has been properly encoded. When the space contains a small number of points including the correct solution, linear or hierarchical search techniques are enough to find it. If not, and this is often the case when a search space is defined by combinations of attributes, which quickly leads to a combinatorial explosion of the number of possible solutions, heuristic search offers an effective way forward. Nature-imitating techniques are often used to this end, and several algorithms exist that draw on biology, ethology, or natural evolution. However, the sources of "plagiarism" are many, and other approaches are possible. This presentation will introduce a new source of inspiration, music: an algorithm that combines "instrument sounds" (variable values) in order to find a melodious tune (the optimal solution), much as a composer does. This approach will be illustrated with an application that aims to detect design flaws in software.
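The musical analogy closely resembles the Harmony Search metaheuristic; assuming that reading, here is a minimal, hedged sketch in which all parameters and the objective function are invented for illustration.

```python
# Minimal Harmony Search: a memory of candidate "melodies" is improved by
# either reusing and slightly adjusting a remembered note, or improvising
# a new one, keeping whatever beats the worst melody in memory.

import random

def harmony_search(objective, bounds, memory_size=10, iterations=200,
                   hmcr=0.9, seed=42):
    rng = random.Random(seed)
    low, high = bounds
    memory = [rng.uniform(low, high) for _ in range(memory_size)]
    for _ in range(iterations):
        if rng.random() < hmcr:                   # reuse a remembered note
            note = rng.choice(memory) + rng.uniform(-0.1, 0.1)
        else:                                     # improvise a new note
            note = rng.uniform(low, high)
        note = min(max(note, low), high)
        worst = max(memory, key=objective)
        if objective(note) < objective(worst):    # keep the better melody
            memory[memory.index(worst)] = note
    return min(memory, key=objective)

best = harmony_search(lambda x: (x - 3.0) ** 2, bounds=(-10.0, 10.0))
print(best)  # should land near the minimum at 3.0
```

Applying this to software design-flaw detection would mean encoding candidate detection rules as the "notes" and a quality measure over a code corpus as the objective.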
Elisabeth Delozanne, lecturer in Computer Science at the UPMC-Sorbonne-Universités and member of the MOCAH team of LIP6
"Competency Diagnosis and Learning Paths Adapted to Diagnosed Cognitive Profiles: the Pépite and Lingot Projects"