Pedro Larrañaga

Pedro Larrañaga has been Full Professor of Computer Science and Artificial Intelligence at the Technical University of Madrid (UPM) since 2007, where he co-leads the Computational Intelligence Group. He received the MSc degree in mathematics (statistics) from the University of Valladolid and the PhD degree in computer science (excellence award) from the University of the Basque Country. Before moving to UPM, he held several faculty positions at the University of the Basque Country (UPV-EHU): Assistant Professor (1985-1998), Associate Professor (1998-2004) and Full Professor (2004-2007). He earned the habilitation qualification for Full Professor in 2003.

Professor Larrañaga served as Expert Manager of the Computer Technology area at the Deputy Directorate of Research Projects of the Spanish Ministry of Science and Innovation (2007-2010), and was a member of the Advisory Committee 6.2 (Communication, Computing and Electronics Engineering) of the CNEAI (Spanish Ministry of Education) in 2010-2011.

His research interests lie primarily in probabilistic graphical models, data science, metaheuristics, and real-world applications in areas such as biomedicine, bioinformatics, neuroscience, Industry 4.0 and sports. He has published more than 200 papers in impact-factor journals and has supervised 25 PhD theses. He has been a Fellow of the European Association for Artificial Intelligence since 2012 and a Fellow of the Academia Europaea since 2018. He was awarded the 2013 Spanish National Prize in Computer Science and the 2018 prize of the Spanish Association for Artificial Intelligence.

Bayesian Networks in Action

This talk will present recent uses of Bayesian networks in challenging real-world applications in three different areas: neuroscience, Industry 4.0 and sports analytics.

The neuroscience applications cover problems at different scales: from neuroanatomy questions, such as interneuron classification and spine clustering, to the diagnosis of neurodegenerative diseases such as Parkinson's and Alzheimer's. The Industry 4.0 applications concern the automatic inspection of a laser process and the discovery of fingerprints in real machinery performing servo-motor movements. Finally, the scouting problem and football as a science will be introduced as representatives of sports analytics.

From the machine learning point of view, several techniques, such as probabilistic clustering, multi-view clustering, anomaly detection, supervised classification, multi-label classification and multi-output regression, in both static and dynamic scenarios, will be used to provide solutions to these real-world applications.
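As a minimal illustration of the supervised-classification setting above (not material from the talk itself), the sketch below implements a naive Bayes classifier, the simplest Bayesian network classifier: a class node with one arc to each feature. The toy data and class names are our own assumptions.

```python
# Minimal sketch: naive Bayes, the simplest Bayesian network classifier.
# Toy data and class names are illustrative only, not from the talk.
import numpy as np

class DiscreteNaiveBayes:
    def fit(self, X, y, alpha=1.0):
        self.classes = np.unique(y)
        self.log_prior = np.log(np.array([(y == c).mean() for c in self.classes]))
        # One conditional probability table P(X_j | C) per feature, Laplace-smoothed.
        self.cpts = []
        for j in range(X.shape[1]):
            vals = np.unique(X[:, j])
            table = np.zeros((len(self.classes), len(vals)))
            for ci, c in enumerate(self.classes):
                for vi, v in enumerate(vals):
                    table[ci, vi] = (X[y == c, j] == v).sum() + alpha
                table[ci] /= table[ci].sum()
            self.cpts.append((vals, np.log(table)))
        return self

    def predict(self, X):
        # Posterior score: log prior plus per-feature log likelihoods.
        scores = np.tile(self.log_prior, (len(X), 1))
        for j, (vals, log_table) in enumerate(self.cpts):
            idx = np.searchsorted(vals, X[:, j])
            scores += log_table[:, idx].T
        return self.classes[scores.argmax(axis=1)]

X = np.array([[0, 1], [1, 1], [0, 0], [1, 0]])
y = np.array([0, 1, 0, 1])
print(DiscreteNaiveBayes().fit(X, y).predict(X))  # -> [0 1 0 1]
```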

Sergio Guadarrama

Dr. Sergio Guadarrama is a Senior Software Engineer at Google Research, where he works in Machine Perception and Deep Learning as a member of the VALE team with Dr. Kevin Murphy. His research focuses on new deep network architectures for multi-task dense prediction, such as object detection, instance segmentation, color prediction and visual question answering. He is currently a core developer of TensorFlow and co-creator of TensorFlow-Slim. Before joining Google he was a Research Scientist at University of California, Berkeley EECS with Prof. Trevor Darrell and Prof. Lotfi Zadeh. At UC Berkeley he was a core developer of Caffe: Convolutional Architecture for Fast Feature Embedding. He received his Bachelor's and PhD degrees from the Technical University of Madrid, and did a postdoc at the European Centre for Soft Computing with Prof. Enric Trillas.

Dr. Guadarrama has published over 60 papers in top-tier international conferences (CVPR, NIPS, AAAI, ICCV, RSS, ICRA, IROS, ACM, BMVC, ...) and journals on Artificial Intelligence and Computer Vision, which have garnered over 10,000 citations.

Dr. Guadarrama's original research and contributions to the field have earned him several awards: the 2017 Everingham Prize for providing the open-source deep learning framework Caffe (Convolutional Architecture for Fast Feature Embedding) to the community; membership of the winning team of the COCO 2016 Detection Challenge; the ACM Multimedia Open-Source Software Competition award in 2014; the Mobility Grant for Postdoctoral Research awarded by the Spanish Ministry; the "Juan de la Cierva" Award in Computer Science from the Spanish Department of Science and Innovation; and the Best Doctoral Dissertation Award 2006-2007 from the Technical University of Madrid (advisor: Prof. Enric Trillas).

Artificial Intelligence in Google: latest advances and trends

Over the last few years, Artificial Intelligence (AI) has seen huge growth, mainly due to the rise of Deep Learning and its impressive results in long-standing AI research fields such as speech understanding, natural language processing, computer vision and robotics.

Some specific problems successfully tackled include: Machine Translation, Speech Understanding and Generation, Object Detection, Semantic Segmentation and Pose Estimation, Deep Reinforcement Learning for Robotic Manipulation, and Self-Supervised Learning.

The growth of Deep Learning has been driven by improvements in hardware (GPUs, TPUs, etc.) and by improvements in software (Caffe, TensorFlow, PyTorch, etc.), but also by improvements in models (Convolutional Nets, Residual Nets, Recurrent Nets, Generative Adversarial Nets, etc.).
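As a minimal sketch of two of the model families named above, the code below builds a tiny convolutional network with one residual connection using TensorFlow's tf.keras API. The shapes and hyperparameters are our own placeholders, not taken from the talk.

```python
# A tiny convolutional net with a residual (skip) connection, via tf.keras.
# Input shape and layer sizes are illustrative placeholders only.
import tensorflow as tf

def tiny_resnet(num_classes=10):
    inputs = tf.keras.Input(shape=(32, 32, 3))
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    # Residual block: two convolutions plus an identity skip connection.
    shortcut = x
    y = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    y = tf.keras.layers.Conv2D(32, 3, padding="same")(y)
    x = tf.keras.layers.ReLU()(tf.keras.layers.Add()([shortcut, y]))
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = tiny_resnet()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```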

Neural networks have proven effective at solving difficult problems, but designing their architectures can be challenging, even for a single type of problem (such as image classification). Reinforcement Learning and Evolutionary Algorithms provide techniques to discover such architectures automatically, and recently there has been a lot of development in AutoML.
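The evolutionary flavour of AutoML can be sketched in a few lines, in the spirit of regularized (aging) evolution: tournament selection, mutation, and removal of the oldest individual. The toy search space and placeholder fitness function below are our own assumptions; in a real system each candidate architecture would be trained and evaluated on validation data.

```python
# Sketch of regularized-evolution-style architecture search over a toy space.
# fitness() is a stand-in: real AutoML trains and validates each candidate.
import random

SEARCH_SPACE = {"depth": [2, 4, 8], "width": [16, 32, 64], "kernel": [3, 5, 7]}

def random_arch():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(arch):
    child = dict(arch)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

def fitness(arch):
    # Placeholder score: pretend depth and width help, large kernels hurt.
    return arch["depth"] * 0.1 + arch["width"] * 0.01 - arch["kernel"] * 0.02

population = [random_arch() for _ in range(20)]
for _ in range(100):
    sample = random.sample(population, 5)   # tournament selection
    parent = max(sample, key=fitness)
    population.append(mutate(parent))       # add a mutated child
    population.pop(0)                       # age out the oldest individual

print(max(population, key=fitness))
```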

Finally, there is great potential in applying Machine Learning to help solve important problems in different fields of science. For example, in the last year researchers have used neural networks and deep learning to predict molecular properties in quantum chemistry, find new exoplanets in astronomical datasets, predict earthquake aftershocks, and guide automated proof systems.

Carlos A. Coello Coello

Carlos Artemio Coello Coello received a BSc in civil engineering from the Autonomous University of Chiapas, graduating summa cum laude in 1991. That same year, he received the Diario de México Medal for being one of the best students in Mexico. He subsequently obtained a scholarship from the Mexican Ministry of Education to pursue a Master's and a PhD in Computer Science at Tulane University (New Orleans, USA), which he completed in 1993 and 1996, respectively. Since 2001, he has been a Researcher at the Center for Research and Advanced Studies of the National Polytechnic Institute (CINVESTAV-IPN) in Mexico City.

Dr. Coello has pioneered an area now known as multi-objective evolutionary optimization, which focuses on solving optimization problems with two or more objective functions (usually in conflict with each other) using biologically inspired algorithms. His work has focused primarily on the design of algorithms, several of which have been used to solve real-world problems in the United States, Colombia, Chile, Japan, Iran, Cuba and Mexico.

Dr. Coello has more than 450 publications (including 1 monographic book in English, more than 140 articles in peer-reviewed journals and 55 book chapters), which currently have more than 40,000 citations in Google Scholar (h-index of 78). He is an Associate Editor of several international journals, including the two most important in his area (IEEE Transactions on Evolutionary Computation and Evolutionary Computation), and a member of the Advisory Board of Springer's Natural Computing Book Series.

Throughout his career he has received several awards, including the 2007 National Research Prize in exact sciences from the Mexican Academy of Sciences, the 2009 Medal of Scientific Merit from Mexico City's Congress, the 2012 Scopus Mexico Award in Engineering, the 2011 Heberto Castillo Martínez Capital City Award in Basic Sciences and the 2012 National Science and Arts Award in the area of Physical-Mathematical and Natural Sciences. The latter is the most important prize awarded by the Mexican government to a scientist. Since January 2011 he has been an IEEE Fellow for his "contributions to single- and multi-objective optimization using metaheuristics". He also received the 2013 IEEE Kiyo Tomiyasu Award for "pioneering contributions to single- and multi-objective optimization using bio-inspired meta-heuristics", and in November of this year he will receive the 2016 World Academy of Sciences (TWAS) Award in Engineering Sciences for "pioneering contributions to the development of new algorithms based on bio-inspired meta-heuristics to solve single-objective and multi-objective optimization problems".

Where is the research on evolutionary multi-objective optimization heading?

The first multi-objective evolutionary algorithm was published in 1985. However, it was not until the late 1990s that so-called evolutionary multi-objective optimization began to gain popularity as a research area. Throughout these 33 years, there have been several important advances in the area, including the development of different families of algorithms, test problems, performance indicators, hybrid methods and real-world applications, among many others. In the first part of this talk we will take a quick look at some of these developments, focusing mainly on the most important recent achievements. In the second part, we will critically analyze the research by analogy that has proliferated in recent years in specialized journals and conferences (perhaps as a side effect of the abundance of publications in this area); much of this research has a very low level of innovation and makes almost no scientific contribution, yet is backed by a large number of statistical tables and analyses. In the third and final part, we will briefly mention some of the future research challenges for an area that, after 33 years of existence, is just beginning to mature.
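For readers new to the area, the sketch below shows the core concept underlying all multi-objective evolutionary algorithms: Pareto dominance and the extraction of the non-dominated set. The bi-objective test function is the classic Schaffer problem; the sampling scheme and sizes are illustrative only.

```python
# Pareto dominance and non-dominated filtering on the Schaffer bi-objective
# problem: minimize f1(x) = x^2 and f2(x) = (x - 2)^2 simultaneously.
import random

def evaluate(x):
    return (x ** 2, (x - 2) ** 2)

def dominates(a, b):
    # a dominates b if it is no worse in every objective and better in at least one.
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

def non_dominated(objs):
    return [p for p in objs if not any(dominates(q, p) for q in objs if q != p)]

points = [evaluate(random.uniform(-1, 3)) for _ in range(50)]
front = non_dominated(points)
print(f"{len(front)} of {len(points)} sampled points are non-dominated")
```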

João Gama

João Gama is an Associate Professor at the University of Porto, Portugal. He is also a senior researcher and member of the board of directors of the Laboratory of Artificial Intelligence and Decision Support (LIAAD), a group belonging to INESC Porto.
João Gama serves as a member of the editorial boards of the Machine Learning journal, Data Mining and Knowledge Discovery, Intelligent Data Analysis and New Generation Computing. He served as co-chair of ECML 2005, DS09, ADMA09 and a series of workshops on KDDS and on Knowledge Discovery from Sensor Data with ACM SIGKDD. He was also chair of the Intelligent Data Analysis 2011 conference. His main research interest is knowledge discovery from data streams and evolving data. He is the author of more than 200 peer-reviewed papers and of a recent book on knowledge discovery from data streams, and has published extensively in the area of data stream learning.

Real-Time Data Mining

Nowadays, there are applications in which the data are best modelled not as persistent tables, but rather as transient data streams. In this keynote, we discuss the limitations of current machine learning and data mining algorithms and the fundamental issues in learning in dynamic environments: learning decision models that evolve over time, learning and forgetting, concept drift and change detection. Data streams are characterized by huge amounts of data that introduce new constraints in the design of learning algorithms: limited computational resources in terms of memory, processing time and CPU power. In this talk, we present some illustrative algorithms designed to take these constraints into account. We identify the main issues and current challenges that emerge in learning from data streams, and present open lines for further research.
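As a hedged illustration of one of the issues mentioned above, change detection, the sketch below implements a simplified drift detector in the spirit of the Drift Detection Method (DDM): it monitors a stream of 0/1 prediction errors and signals drift when the running error rate rises well above the best level seen so far. The thresholds, warm-up length and synthetic stream are our own choices.

```python
# Simplified DDM-style drift detector over a stream of 0/1 prediction errors.
# Warm-up length, threshold and the synthetic stream are illustrative choices.
import math, random

class SimpleDDM:
    def __init__(self, drift_threshold=3.0, warmup=30):
        self.n = 0; self.p = 1.0; self.s = 0.0
        self.p_min = float("inf"); self.s_min = float("inf")
        self.threshold = drift_threshold; self.warmup = warmup

    def update(self, error):                  # error is 1 (mistake) or 0 (correct)
        self.n += 1
        self.p += (error - self.p) / self.n   # running error rate
        self.s = math.sqrt(self.p * (1 - self.p) / self.n)
        if self.n < self.warmup:              # wait before tracking minima
            return False
        if self.p + self.s < self.p_min + self.s_min:
            self.p_min, self.s_min = self.p, self.s
        return self.p + self.s > self.p_min + self.threshold * self.s_min

random.seed(0)
ddm = SimpleDDM()
for t in range(2000):
    error_rate = 0.1 if t < 1000 else 0.4     # concept drift at t = 1000
    if ddm.update(int(random.random() < error_rate)):
        print(f"drift detected at t={t}")
        break
```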

Humberto Bustince

Humberto Bustince is full professor of Computer Science and Artificial Intelligence at the Public University of Navarra and Honorary Professor at the University of Nottingham. He is the main researcher of the Artificial Intelligence and Approximate Reasoning group of the former university, whose main research lines are both theoretical (aggregation and pre-aggregation functions, information and comparison measures, fuzzy sets and their extensions) and applied (image processing, classification, machine learning, data mining, big data and deep learning). He has led 11 publicly funded R&D projects at the national and regional levels, has been in charge of research projects with leading private companies in fields such as banking, renewable energy and security, and has taken part in two international research projects.

He has co-authored more than 240 works, according to the Web of Science, in conferences and international journals, most of them in first-quartile (JCR) journals. Six of these works are among the highly cited papers of the last ten years according to the Essential Science Indicators of the Web of Science. He is editor-in-chief of the online magazine Mathware & Soft Computing of the European Society for Fuzzy Logic and Technology (EUSFLAT) and of the Axioms journal. He is an associate editor of the IEEE Transactions on Fuzzy Systems journal and a member of the editorial boards of Fuzzy Sets and Systems, Information Fusion, the International Journal of Computational Intelligence Systems and the Journal of Intelligent & Fuzzy Systems.

He is a Senior Member of the IEEE and a Fellow of the International Fuzzy Systems Association (IFSA). In 2015 he received the Outstanding Paper Award for a paper published in IEEE Transactions on Fuzzy Systems in 2013, and in 2017 he received the Cross of Carlos III el Noble from the Government of Navarra.

Pre-aggregations from integrals and their application to classification, the computational brain and image processing

Recent research in Big Data and Deep Learning has generated huge interest in developing new fusion methods. Although these methods are not aggregation functions, since they are not modelled with functions that are monotone in the usual sense, they are of great value in any field where data fusion is a relevant step. One important step in this direction has been the introduction of the notion of a pre-aggregation function: a function with the same boundary conditions as a usual aggregation function, but for which monotonicity is required only along some fixed direction. We will see how examples of these functions can be obtained by generalizing the usual Choquet and Sugeno integrals, and how these generalizations can be used to build edge detectors and fuzzy rule-based classifiers whose results are as good as or better than the state of the art in image processing and classification. We will also discuss the application of these pre-aggregations to the study of the computational brain for the classification of signals.
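As a minimal sketch of the object the talk generalizes, the code below computes the discrete Choquet integral of an input vector with respect to a fuzzy measure. The toy power measure is our own illustration; in real applications the measure would be learned or elicited from data.

```python
# Discrete Choquet integral with respect to a set function (fuzzy measure).
# The power measure mu(A) = (|A|/n)^q is a toy choice for illustration.
def choquet(x, mu):
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])   # sort inputs ascending
    result, prev = 0.0, 0.0
    for k, i in enumerate(order):
        coalition = frozenset(order[k:])           # inputs with value >= x[i]
        result += (x[i] - prev) * mu(coalition)
        prev = x[i]
    return result

def power_measure(q, n):
    return lambda A: (len(A) / n) ** q

x = [0.2, 0.8, 0.5]
print(choquet(x, power_measure(1, 3)))  # q = 1 recovers the arithmetic mean: 0.5
print(choquet(x, power_measure(2, 3)))  # q > 1 weights low inputs more heavily
```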

We look forward to your contributions!

Download the CfP