The Quebec AI Meetup of 2018

More than 500 attendees gathered last month for the Artificial Intelligence Meetup in Quebec City. Presentations from researchers and industry practitioners focused on deep learning, natural language processing, computer vision, and industrial applications of artificial intelligence. This post summarizes each technical talk of the meetup.

Outline

  1. Artificial Intelligence and Deep Learning - Yoshua Bengio
  2. Artificial Intelligence: Between Potential and Possible - François Laviolette
  3. Overview of the AI Ecosystem in Quebec - Christian Gagné and Alexandra Masson
  4. Coveo Machine Learning: Extensible Machine Learning Platform for Personalized Predictions - Sébastien Paquet
  5. How AI Helps Gartner Know (Almost) Everything on the Job Market Worldwide - Andriy Burkov
  6. Health and Toxicity in Online Conversations - Richard Khoury
  7. SCALE.AI: AI-Powered Supply Chains - Louis Roy
  8. Artificial Intelligence and Augmented Reality - Jean-François Lalonde
  9. Towards Intelligent Nanoscopy: Artificial Intelligence Applications in the Study of Brain Molecular Mechanisms - Flavie Lavoie-Cardinal
  10. Real Time Analysis of Decision Patterns and Bio-Behavioral Data for Augmented Human-Machine Systems - Daniel Lafond
  11. Panel: Artificial Intelligence and Industry 4.0 in Quebec - Alexandre Vallières, Jonathan Gaudreault, Yves Proteau, Sébastien Bujold, and Kevin Spahr

1. Artificial Intelligence and Deep Learning

Yoshua Bengio, Scientific Director, Montreal Institute for Learning Algorithms (MILA)

Artificial intelligence applications are everywhere. They can converse in natural language, recognize objects in images, play complex games like Go, and even detect cancer cells in images. These advances were made possible with the evolution of AI techniques over the years. While early stages of AI were mostly focused on the formalization of knowledge, we now use machine learning methods to let computers gather knowledge by themselves from observations. Now, the source of intelligence is data.

Deep learning is a set of machine learning methods based on neural networks. Brought to the forefront by Yoshua Bengio, Geoffrey Hinton, and Yann LeCun around 2006, deep learning approaches are particularly good at perception tasks like interpreting sounds or images. Their ability to absorb large amounts of information and adapt to different contexts has made them methods of choice for many machine learning tasks.

Deep learning methods create an implicit representation of the knowledge. As a result, they can transform data into a form that is more semantically interesting. This phenomenon was observed on a convolutional neural network that was trained to recognize scenes (e.g. office, restaurant) in images (Zhou, 2014). While the network only had access to a training set of images with scene labels, some computational units of the network specialized in detecting people, animals, and lighting. Hence, the model created a new representation that helped it generalize and improve its performance.

Learning representations is also useful for natural language processing tasks. Around 2000, Bengio and his team trained a neural network to learn word representations (Bengio, 2003). This trained model could transform a word into a real-valued vector such that words with similar meanings were close to each other in that vector space. For instance, a 2-dimensional projection of that space could show that "was" and "were" or "come" and "go" were neighbors. This work was the foundation of word embeddings, which are a key component of most natural language processing tasks today.
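
To make the idea concrete, here is a minimal sketch of how closeness in an embedding space can be measured with cosine similarity. The 2-dimensional vectors below are made-up values for illustration only; real embeddings are learned by the model and typically have hundreds of dimensions.

```python
import numpy as np

# Toy word vectors (hypothetical values); real embeddings are learned
# by a neural language model and have many more dimensions.
embeddings = {
    "was":  np.array([0.82, 0.10]),
    "were": np.array([0.80, 0.14]),
    "come": np.array([-0.45, 0.71]),
    "go":   np.array([-0.48, 0.69]),
}

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Words with similar meanings end up close to each other in the vector space.
print(cosine_similarity(embeddings["was"], embeddings["were"]))  # close to 1
print(cosine_similarity(embeddings["was"], embeddings["come"]))  # much lower
```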

Deep learning algorithms can learn to generate images, music, and text. Generative adversarial networks (GANs) are a type of deep neural network that learns to approximate the distribution of the data (Goodfellow, 2014). GANs are composed of two networks, the generative and discriminative networks, which compete against each other as adversaries. The generative network generates data and tries to fool the discriminative network, while the discriminative network acts as a cop and tries to detect whether the data is real or generated. The two networks optimize different objectives and are trained in parallel.
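
The sketch below illustrates that adversarial training loop on a toy problem (the architectures, data, and hyperparameters are placeholders, not the setup from Goodfellow, 2014): the discriminator is trained to separate real from generated samples, while the generator is trained to fool it.

```python
import torch
import torch.nn as nn

# Minimal GAN sketch on toy 2-D data: the generator maps noise to fake
# samples, the discriminator tries to tell real samples from fakes.
latent_dim, data_dim = 16, 2

generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # Stand-in for real data: points drawn around (2, 2).
    return torch.randn(n, data_dim) * 0.5 + torch.tensor([2.0, 2.0])

for step in range(1000):
    real = real_batch()
    fake = generator(torch.randn(real.size(0), latent_dim))

    # Discriminator step: label real data 1, generated data 0.
    d_loss = bce(discriminator(real), torch.ones(real.size(0), 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(real.size(0), 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator output 1 on fakes.
    g_loss = bce(discriminator(fake), torch.ones(real.size(0), 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```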

The field of AI has made great progress over the years. However, we are still far from human intelligence. Most industrial applications of machine learning are currently based on supervised learning, where the machine learning model is trained on a dataset that contains the right answers for the task to learn. Conversely, humans tend to learn by interacting with their environment, in an unsupervised way.

2. Artificial Intelligence: Between Potential and Possible

François Laviolette, Director, Big Data Research Center (CRDM), Laval University

Artificial intelligence is no longer restricted to large corporations. Companies of various sizes and sectors have benefited from adding AI into their core operations. But despite the potential of AI, there are some limitations.

AI systems are no better than the data they are trained on. Microsoft's AI chatbot Tay, which learned to converse by interacting with internet users, started posting Nazi messages and was shut down 16 hours after its launch. While Tay is an extreme example, racism and sexism can be present in the data. Originally developed for domain adaptation problems, domain-adversarial training of neural networks (DANN) can help deal with fairness issues (Ajakan, 2014). DANN can learn a new representation of the data that performs well on the task to learn, but performs poorly on a discriminative task like discriminating between men and women.
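
Here is a hedged sketch of the idea in PyTorch: an adversary head tries to predict a sensitive attribute from the learned features, and a gradient reversal pushes the feature extractor to hide that information while still solving the main task. The layer sizes, the single combined loss, and all names are illustrative assumptions, not the original DANN implementation.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, reversed gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output  # push the feature extractor to confuse the adversary

feature_extractor = nn.Sequential(nn.Linear(10, 32), nn.ReLU())
task_head = nn.Linear(32, 1)       # the task we want to solve well
adversary_head = nn.Linear(32, 1)  # tries to recover the sensitive attribute

params = (list(feature_extractor.parameters())
          + list(task_head.parameters())
          + list(adversary_head.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def training_step(x, y_task, y_sensitive):
    # x: (batch, 10); y_task, y_sensitive: float tensors of shape (batch, 1).
    features = feature_extractor(x)
    task_loss = bce(task_head(features), y_task)
    # The adversary learns to predict the sensitive attribute, but the reversed
    # gradient removes that information from the learned representation.
    adv_loss = bce(adversary_head(GradReverse.apply(features)), y_sensitive)
    loss = task_loss + adv_loss
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```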

Problems can also arise when the training data is not representative of reality. Consider a machine learning model that learns to classify images of cats and dogs. Suppose that each cat image in the training data contains a bowl of food, while each dog image contains a ball. A machine learning model trained on this data could learn to use only bowl and ball features to classify cats and dogs perfectly. The system would perform well on this particular dataset, but poorly in a real-life scenario.

Robustness is another issue with AI systems. Small perturbations to input examples can make machine learning models output incorrect answers with high confidence. This phenomenon was observed on a neural network trained to classify images (Goodfellow, 2014). The model correctly labeled a panda image with 57% confidence. However, the same model labeled the panda image as a gibbon with 99% confidence after the input pixels were slightly altered.
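
A minimal sketch of such a perturbation, following the fast gradient sign method described in the cited paper; the model, loss, and a pixel range of [0, 1] are assumptions here.

```python
import torch

def fgsm_attack(model, criterion, x, y, epsilon=0.007):
    """Build an adversarial version of image x for a trained classifier."""
    x = x.clone().detach().requires_grad_(True)
    loss = criterion(model(x), y)
    loss.backward()
    # Move each pixel slightly in the direction that increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixels in the valid [0, 1] range
```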

AI systems have the potential to impact companies in many ways. But we need to understand their limitations. Above all, we need to be careful about how we choose to use the data we collect.

3. Overview of the AI Ecosystem in Quebec

Christian Gagné, Professor, Department of Electrical and Computer Engineering, Laval University
Alexandra Masson, Director - Innovation, Québec International

Quebec City is becoming a key player in the field of artificial intelligence.

Laval University has developed AI expertise in both fundamental research and applications. Research groups such as DAMAS, GRAAL, and REPARTI are advancing AI research in fields like robotics, computer vision, natural language processing, and bioinformatics. Other groups like the CRDM and FORAC are working closely with industry. The CRDM helps businesses with big data challenges and brings together researchers from five faculties.

Laval University also has international visibility in AI. François Laviolette, Christian Gagné, Jean-François Lalonde, Mario Marchand, Philippe Giguère, and Brahim Chaib-Draa are six Laval University professors who have contributed significantly to top AI conferences.

More than 75 companies in Quebec City use AI in their operations. These companies are operating in diverse sectors including health, security, insurance, management, marketing, and entertainment. The AI expertise in industries is also diversified and includes machine learning, sensors, and automation.

The current number of AI experts in Quebec City is not enough to meet industry needs. For that reason, a new Professional Master's Program in Artificial Intelligence will start this fall at Laval University. The program will combine fundamental AI courses with internships to help train 35 to 50 students in AI each year.

4. Coveo Machine Learning: Extensible Machine Learning Platform for Personalized Predictions

Sébastien Paquet, Team Lead - Data Analysis, Coveo

Coveo is a provider of search engines for businesses. Founded in 2005, the company currently has more than 300 employees. Their products are used by thousands of companies in 18 different languages.

Machine learning is central to Coveo's products. Their machine learning models can suggest search queries, optimize search results, make recommendations based on navigation history, and detect important terms in queries. Their infrastructure is fully automated for training machine learning models and selecting hyperparameters.

Coveo uses machine learning to improve search results. Consider a user who searches for a computer mouse with a serial number that does not exist in the system. If the search engine is based solely on keyword matching, no results would be returned for that search query. With machine learning, the system can learn from the queries of that user and the results they clicked. As more users search for the same serial number, the machine learning model can learn to match it with the corresponding mouse product, and thus improve the search over time.

Coveo's machine learning models can make personalized predictions. When a user is typing a query, their machine learning models can suggest a set of candidate queries to complete the prefix entered by the user. Often, the machine learning model only has a few input characters to make a prediction. To improve the suggestions, they use a clustering approach to group users into different clusters prior to the query suggestion. Hence, a Salesforce user and an anonymous user could get different query suggestions for the same prefix.
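
A toy sketch of what cluster-aware suggestion could look like (the query log, cluster names, and ranking by frequency are made-up assumptions, not Coveo's implementation): completions for a prefix are ranked within the user's cluster.

```python
from collections import Counter, defaultdict

# Hypothetical query log: (user cluster, past query).
query_log = [
    ("cluster_salesforce", "salesforce integration"),
    ("cluster_salesforce", "sales report"),
    ("cluster_anonymous", "sale on laptops"),
    ("cluster_anonymous", "salary calculator"),
]

queries_by_cluster = defaultdict(Counter)
for cluster, query in query_log:
    queries_by_cluster[cluster][query] += 1

def suggest(prefix, cluster, k=3):
    # Rank past queries from the user's cluster that start with the prefix.
    matches = {q: n for q, n in queries_by_cluster[cluster].items() if q.startswith(prefix)}
    return [q for q, _ in Counter(matches).most_common(k)]

print(suggest("sal", "cluster_salesforce"))  # e.g. ['salesforce integration', 'sales report']
print(suggest("sal", "cluster_anonymous"))   # e.g. ['sale on laptops', 'salary calculator']
```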

Coveo has different machine learning projects in progress, including chatbots and e-commerce recommendations.

5. How AI Helps Gartner Know (Almost) Everything on the Job Market Worldwide

Andriy Burkov, Global ML Team Leader, Gartner

Gartner is a research and advisory firm that provides insights, advice, and tools to business leaders worldwide. Wanted Technologies, a provider of talent recruitment tools, was acquired by Corporate Executive Board (CEB) in 2015, which was later acquired by Gartner in 2017. Following the work of Wanted Technologies, Gartner uses artificial intelligence to help human resources with talent recruitment.

Artificial intelligence has many applications in talent recruitment. Gartner uses artificial intelligence to predict the ideal profile for a job, find candidates that match a specific profile, and estimate job salaries. Their AI models can predict candidate-specific information, such as whether a candidate is ready to leave their current employment and whether they would stay long with a company. Their predictions can also help human resources estimate the duration of the recruitment process.

Gartner has an automated AI pipeline to harvest, normalize, and index job data. Their crawlers download millions of job postings from different websites. They use machine learning to extract job posting attributes like company names, locations, salaries, occupations, and dates. They can also detect duplicate posts or posts that refer to multiple jobs.

Gartner uses machine learning and natural language processing for salary extraction. They use a binary classification model to predict if a number is a salary or not. When a number is predicted as a salary, they divide the phrase into tokens for further analysis. For instance, in $2k monthly, the $ symbol is the currency, 2k is the amount and means 2000, and monthly is the period. Hence, 2000 should be multiplied by 12 to become a yearly salary.
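
A small sketch of that normalization step (the regular expression and the period multipliers are assumptions for illustration, not Gartner's actual pipeline):

```python
import re

# Approximate number of periods per year for each pay frequency (assumed values).
PERIOD_TO_YEARLY = {"hourly": 2080, "daily": 260, "weekly": 52, "monthly": 12, "yearly": 1}

def normalize_salary(phrase):
    """Turn a phrase already classified as a salary into a yearly figure."""
    match = re.match(r"(?P<currency>[$€£])(?P<amount>[\d.]+)(?P<suffix>k?)\s+(?P<period>\w+)",
                     phrase.lower())
    if not match:
        return None
    amount = float(match["amount"]) * (1000 if match["suffix"] == "k" else 1)
    return match["currency"], amount * PERIOD_TO_YEARLY.get(match["period"], 1)

print(normalize_salary("$2k monthly"))  # ('$', 24000.0)
```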

Gartner's machine learning team is planning to work on different projects in the future, such as crawlers that can automatically understand website structures to find specific information, and a writing assistant for job postings.

6. Health and Toxicity in Online Conversations

Richard Khoury, Professor, Department of Computer Science and Software Engineering, Laval University

Toxicity is present in online conversations. Some individuals use online media to manipulate public opinion, propagate racist and sexist ideologies, and encourage suicide. Groups like ISIS use social media to expand their influence and recruit people. And with the massive volume of online messages, it becomes impossible for human moderators to monitor everything.

Artificial intelligence can help with online toxicity. The goal is not to monitor everything, but rather to identify the individuals who should be monitored. One possible approach is to model the personality of online users, and then predict which personality types are more likely to become toxic. Human moderators can then focus their efforts on users with a potentially toxic personality.

Traditional approaches to measuring personality are challenging to apply online. Personality models like the Big Five and the Dark Triad can be used to describe personalities. However, they are typically measured by answering long questionnaires, and a user could simply refuse to answer the questionnaire or answer dishonestly.

Alternatively, artificial intelligence can be used to predict user personality from online messages. Richard Khoury and his team ran an experiment with 899 randomly selected Twitter accounts to predict personality traits from Twitter messages. Their results suggested, for instance, that personalities with high psychopathy and Machiavellianism were associated with glorifying and aggressive messages.

For their next experiments, Richard Khoury and his team plan to use more users, diversify their sources, and work with psychologists to define toxicity rating metrics.

7. SCALE.AI: AI-Powered Supply Chains

Louis Roy, President and Founder, Optel Group

Supply chains are the sequences of processes involved in moving a product from a supplier to a consumer. They include the sourcing, manufacturing, distribution, and delivery of products and services. SCALE.AI (Supply Chains and Logistics Excellence.AI) is an industry-led consortium that aims to use AI technologies to create an intelligent supply chain platform.

Supply chains are crucial to our society. They contribute to the transformation of natural resources into finished products and create a massive amount of jobs along the way. But supply chains are not perfect. We currently consume in six months what the earth can produce in a year. If we do not change our current supply chains and stop wasting natural resources, what are the impacts for our planet?

Technology can improve supply chain processes in many ways. Manufacturers can use technology to optimize their stocks and minimize their losses. Companies can improve their competitiveness by automating different processes. Blockchain can trace products back to their sources and improve transaction security. With the massive amount of data created by supply chains, artificial intelligence and other technologies can thus contribute to the improvement of supply chain activities.

8. Artificial Intelligence and Augmented Reality

Jean-François Lalonde, Professor, Department of Computer Science and Software Engineering, Laval University

Augmented reality is a combination of the real world and a virtual environment. The perception of the real-world environment is altered by the incorporation of illusions that are perceived as being part of the environment. For instance, augmented reality can simulate realistic situations to train surgeons, and help architects add virtual 3D models of objects to their designs. Currently, the two main challenges of augmented reality are adapting illusions to the movement of real-world objects and to the lighting conditions of the real-world environment.

Mathieu Garon, in collaboration with Creaform, worked on the real-time tracking of objects for augmented reality. Consider the problem of creating an illusion that makes a red toy dragon appear blue in a real-world environment. As the dragon moves, the blue illusion must move as well to avoid showing any red on the dragon. Assuming that the 3D model of the dragon is known, the goal is to train a neural network to predict the change in position and orientation of the dragon at each time step. At prediction time, the network receives as input the current and previous images, the previous position, and the previous orientation. However, by using only these inputs, small prediction errors accumulate at each time step and the predicted position of the object eventually diverges from the real one. For that reason, the neural network must also use its previously predicted change in position and orientation as input. That way, the network can learn to correct its predictions and stop the error propagation.
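
The loop below sketches this feedback mechanism (the 6-D pose representation, the additive update, and the `pose_network` callable are simplifying assumptions, not the actual system): the network's own previous prediction is fed back so it can learn to correct its errors.

```python
import numpy as np

def track_object(pose_network, frames, initial_pose):
    """Track an object's pose across video frames with a learned delta predictor.

    `pose_network` is a stand-in callable for the trained model; poses are
    simplified here to 6-D vectors (3-D position + 3-D orientation).
    """
    pose = np.asarray(initial_pose, dtype=float)
    predicted_delta = np.zeros_like(pose)
    previous_frame = frames[0]
    poses = []
    for frame in frames[1:]:
        # The network sees the current and previous images, the previous pose,
        # and its own previous prediction, so it can learn to correct the
        # errors it introduced instead of letting them accumulate.
        predicted_delta = pose_network(frame, previous_frame, pose, predicted_delta)
        pose = pose + predicted_delta
        previous_frame = frame
        poses.append(pose)
    return poses
```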

Virtual objects that are illuminated with the real lighting conditions integrate better with the real-world environment. When the lighting conditions in an image are known, they can be used to estimate the illumination that must be applied to the virtual object. Marc-André Gardner worked in collaboration with Adobe to estimate lighting conditions in images (Gardner, 2017). His team used a large dataset of indoor panoramas to train a convolutional neural network for this task. By using a panorama, they were able to obtain the ground truth of the lighting conditions. They then cut the panoramas into smaller images and trained the network with these images. At prediction time, they were able to predict lighting conditions for new images and insert virtual objects that appeared realistic.

9. Towards Intelligent Nanoscopy: Artificial Intelligence Applications in the Study of Brain Molecular Mechanisms

Flavie Lavoie-Cardinal, Researcher, CERVO, Laval University

Flavie Lavoie-Cardinal studies the molecular interactions in living cells. These interactions include the communication between neurons and the evolution of and changes in synapses. With a super-resolution optical microscope, living cells can be observed in real time. However, the quality of the images can vary greatly depending on the microscope parameters and the structure of interest. Since non-experts have difficulty judging the quality of super-resolution images, selecting the best parameters for this microscope is highly challenging.

Artificial intelligence can help non-experts optimize a super-resolution optical microscope to obtain high-quality images. Flavie Lavoie-Cardinal and her team used a convolutional neural network (CNN) to learn to predict the quality of super-resolution images (Robitaille, 2018). They modeled the problem as a regression problem, where the CNN is given an input image and outputs a real-valued score between zero and one that represents the quality of the image. The CNN was trained on a set of high-resolution images that were labeled by an expert, and now runs automatically during their experiments to assist non-experts in predicting the quality of images. Non-experts can thus use the feedback of the CNN to optimize the parameters of the microscope and obtain high-quality images.
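
A minimal sketch of such a regression model in PyTorch; the architecture, image size, and loss below are assumptions for illustration, not the network from Robitaille (2018).

```python
import torch
import torch.nn as nn

class QualityScorer(nn.Module):
    """Maps a single-channel microscopy image to a quality score in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, image):
        return self.head(self.features(image))  # real-valued score between 0 and 1

model = QualityScorer()
loss_fn = nn.MSELoss()  # regression against expert-assigned quality labels
score = model(torch.randn(1, 1, 224, 224))  # one 224x224 image as a placeholder
```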

10. Real Time Analysis of Decision Patterns and Bio-Behavioral Data for Augmented Human-Machine Systems

Daniel Lafond, Specialist in Cognitive Engineering and Human Factors, Thales

Thales provides services for the avionics, defense, security, aerospace, and transportation markets. Founded in 2000, Thales currently has 61,000 employees and operates in 56 countries. Thales Quebec is their fifth technology research center and focuses on humans, data, and the IoT.

Thales is working on the real-time detection of critical states in humans. Workers like pilots or truck drivers must be extremely vigilant during their work. Their performance can be affected by stress, mental workload, and fatigue. With biosensors and artificial intelligence, workers can be monitored during their activities and critical states can be inferred from data. However, critical state classifiers have some limitations. Thales observed a significant drop in their classifiers' performance when they were tested on individuals other than those they were trained on. Hence, the classifiers must currently be calibrated on each individual to perform well.

Thales also uses interpretable machine learning models to understand and support human decisions. Since human judgment can be affected by different factors, a statistical model of a doctor can be better than the doctor themselves at making certain decisions. Expertise can also be hard to communicate, so learning a statistical model from the data can be easier than developing an expert system. As an example, Joanny Grenier worked on understanding doctors' decisions for the detection of sepsis in patients (Grenier, 2016). Sepsis is a dangerous condition whose symptoms can vary greatly among patients. She modeled the decision process of individual doctors by learning a decision tree on past decisions and constraining the model to use only the information consulted by the doctor. The trained doctor model can then be compared to a better-performing collective model to understand in which cases the decision processes differ.
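
A small illustrative sketch of that modeling step (the features, the consulted subset, and the data are entirely made up): fit a shallow decision tree on one doctor's past decisions, restricted to the information that doctor consulted.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical patient features and the subset this doctor actually consulted.
all_features = ["temperature", "heart_rate", "blood_pressure", "lactate", "age"]
consulted = ["temperature", "heart_rate", "lactate"]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(all_features)))  # placeholder past patient records
y = rng.integers(0, 2, size=200)               # placeholder past decisions (sepsis yes/no)

# Keep only the columns the doctor consulted before deciding.
columns = [all_features.index(f) for f in consulted]
doctor_model = DecisionTreeClassifier(max_depth=3).fit(X[:, columns], y)

# The interpretable tree can then be read and compared against a
# better-performing collective model to see where the decision processes differ.
```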

11. Panel: Artificial Intelligence and Industry 4.0 in Quebec

Moderator: Alexandre Vallières, Cofounder, AIworx

Jonathan Gaudreault, Director, Research Consortium in Engineering and Industrial Systems 4.0, Laval University
Following steam power, electricity, and computer automation, technologies like AI and IoT are giving rise to the fourth industrial revolution. AI can help the manufacturing sector by supporting human decisions, detecting causes of manufacturing problems, and making predictions for different scenarios. For example, a manufacturer of Nespresso machines used a neural network to predict the optimal configuration of the lacquering process given the meteorological conditions. These predictions decreased the percentage of parts that had to be reprocessed from 60% to 5%. Hence, manufacturers can greatly benefit from adding AI technologies into their operations.

Yves Proteau, Codirector, APN
APN is a machine shop that transforms metal into high precision products. APN uses AI to predict product measurements, plan their production optimally, and predict when their machines should be maintained. They also have a robot that uses AI to avoid workers on the floor. AI has many applications in their industry, and collecting the right data is key to the success of their AI technologies.

Sébastien Bujold, Analyst - Production Systems, Aluminerie Alouette
Aluminerie Alouette is an aluminum manufacturing company that was founded in 1989. The company is based in Sept-Îles and produces approximately 600,000 metric tons of aluminum per year. Aluminerie Alouette is using artificial intelligence to improve its electrolysis process. They trained a convolutional neural network on time series data to detect anode effects and instabilities during the electrolysis process. With their trained network, they can ensure continuous monitoring and early detection of these problems to minimize production losses.
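
A sketch of what such a time-series classifier could look like (the number of signals, window length, and classes are assumptions, not Aluminerie Alouette's model):

```python
import torch
import torch.nn as nn

n_sensors, window = 4, 128  # e.g. 4 process signals over a 128-step window (assumed)

classifier = nn.Sequential(
    nn.Conv1d(n_sensors, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 3),  # e.g. normal / anode effect / instability
)

logits = classifier(torch.randn(8, n_sensors, window))  # batch of 8 time windows
```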

Kevin Spahr, Physicist and Data Analyst, LeddarTech
LeddarTech specializes in the development of Light Detection and Ranging (LIDAR) sensors. A LIDAR creates a point cloud (i.e. a set of points in 3D space) that measures distances to a target area. The point cloud is created by sending light pulses toward the target and capturing the reflected light. LIDARs have many applications in self-driving cars. They can, for instance, be used to detect pedestrians, obstacles, and objects in a given area.
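
As a small worked example, the distance behind each LIDAR point follows from the round-trip time of a light pulse:

```python
SPEED_OF_LIGHT = 299_792_458  # m/s

def distance_from_time_of_flight(round_trip_seconds):
    # The pulse travels to the target and back, so the one-way distance is c*t/2.
    return SPEED_OF_LIGHT * round_trip_seconds / 2

print(distance_from_time_of_flight(200e-9))  # a 200 ns round trip is roughly 30 m
```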

References

Ajakan, Hana, Pascal Germain, Hugo Larochelle, François Laviolette, and Mario Marchand. 2014. “Domain-Adversarial Neural Networks.” ArXiv Preprint ArXiv:1412.4446.

Bengio, Yoshua, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. “A Neural Probabilistic Language Model.” Journal of Machine Learning Research 3 (Feb): 1137–55.

Gardner, Marc-André, Kalyan Sunkavalli, Ersin Yumer, Xiaohui Shen, Emiliano Gambaretto, Christian Gagné, and Jean-François Lalonde. 2017. “Learning to Predict Indoor Illumination from a Single Image.” ACM Transactions on Graphics (SIGGRAPH Asia) 9 (4).

Goodfellow, Ian J, Jonathon Shlens, and Christian Szegedy. 2014. “Explaining and Harnessing Adversarial Examples.” ArXiv Preprint ArXiv:1412.6572.

Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. “Generative Adversarial Nets.” In Advances in Neural Information Processing Systems, 2672–80.

Grenier, Joanny. 2016. “Processus décisionnel En Contexte de détection Du Sepsis Pédiatrique.” PhD thesis, Université Laval.

Robitaille, Louis-Émile, Audrey Durand, Marc-André Gardner, Christian Gagné, Paul De Koninck, and Flavie Lavoie-Cardinal. 2018. “Learning to Become an Expert: Deep Networks Applied to Super-Resolution Microscopy.” ArXiv Preprint ArXiv:1803.10806.

Zhou, Bolei, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. 2014. “Object Detectors Emerge in Deep Scene CNNs.” ArXiv Preprint ArXiv:1412.6856.
