Publications
Socially Assistive Robots for patients with Alzheimer's Disease: A scoping review.
Substitution of dietary monounsaturated fatty acids from olive oil for saturated fatty acids from lard increases low-density lipoprotein apolipoprotein B-100 fractional catabolic rate in subjects with dyslipidemia associated with insulin resistance: a randomized controlled trial
Louis-Charles Desjardins
Francis Brière
André J Tremblay
Maryka Rancourt-Bouchard
Jean-Philippe Drouin-Chartier
J. Corbeil
Valéry Lemelin
Amélie Charest
Ernst J Schaefer
Benoit Lamarche
Patrick Couture
2024-02-29
American Journal of Clinical Nutrition (published)
The "jingle-jangle fallacy" of empathy: Delineating affective, cognitive and motor components of empathy from behavioral synchrony using a virtual agent
The paper focuses on the role of the World Health Organization (WHO) in promoting a healthy world population as a generative and robust idea within health policy. The WHO's health credo transcends national boundaries to promote health globally. It is embedded in norms, values, and standards promulgated by the organization and contributes to shaping the health responses of national governments. Ideational robustness refers to the ability of the WHO to adapt its health credo to changing contexts and circumstances, thus promoting the legitimacy of an international health order. Disturbances, including the Covid-19 pandemic, test the credo's robustness, forcing the WHO to work constantly at reframing ideas to adapt to political forces and competing logics that structure the field of international health. Empirically, the paper is based on a historical analysis of the evolution of the WHO's health credo since the organization's inception. Qualitative content analysis of secondary sources, such as policy documents, explores how ideational work performed by WHO leaders affects the organization's position and legitimacy. Ideational robustness appears to be largely shaped by leadership vision, preexisting organizational structure, and the political economy of international health. It emerges as a powerful yet insufficient ingredient of policy success.
Efficiently solving a vehicle routing problem (VRP) in a practical runtime is a critical challenge for delivery management companies. This paper explores both a theoretical and an experimental connection between the Capacitated Vehicle Routing Problem (CVRP) and Constrained Centroid-Based Clustering (CCBC). Reducing a CVRP to a CCBC amounts to a transition from exponential to polynomial complexity, using commonly known clustering algorithms such as K-means. We first conduct an exploratory analysis to highlight the existence of such a relationship between the two problems through illustrative small-size examples, and simultaneously derive some related mathematical formulations and properties. The paper then proposes a CCBC-based approach endowed with several enhancements. The proposed framework consists of three stages. In the first stage, a constrained centroid-based clustering algorithm generates feasible clusters of customers. This methodology incorporates three enhancement tools to achieve near-optimal clusters, namely: a multi-start procedure for initial centroids, a customer assignment metric, and a self-adjustment mechanism for choosing the number of clusters. In the second stage, a traveling salesman problem (TSP) solver is used to optimize the order of customers within each cluster. Finally, we introduce a route-cutting and relinking procedure, which solves a linear integer programming model to further improve the obtained routes. This step is inspired by the ruin-and-recreate algorithm. The approach extends the classical cluster-first, route-second method and provides near-optimal solutions on well-known benchmark instances in terms of solution quality and computational runtime, offering a milestone in solving VRP.
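The cluster-first, route-second pipeline this abstract builds on can be sketched in a few lines. The sketch below is illustrative only, not the paper's implementation: it uses a greedy capacity-aware K-means-style assignment for the clustering stage and a nearest-neighbour heuristic in place of a real TSP solver, and all function names and parameters are assumptions.

```python
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def constrained_clusters(customers, demands, capacity, k, iters=20, seed=0):
    """Cluster-first step: capacity-constrained K-means-style clustering.

    Customers are assigned to their nearest centroid whose cluster still
    has spare capacity; centroids are then recomputed, K-means fashion.
    """
    rng = random.Random(seed)
    centroids = rng.sample(customers, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        loads = [0.0] * k
        # assign customers closest to any centroid first (greedy order)
        order = sorted(range(len(customers)),
                       key=lambda i: min(dist(customers[i], c) for c in centroids))
        for i in order:
            by_distance = sorted(range(k), key=lambda j: dist(customers[i], centroids[j]))
            for j in by_distance:
                if loads[j] + demands[i] <= capacity:
                    clusters[j].append(i)
                    loads[j] += demands[i]
                    break
            else:
                raise ValueError(f"infeasible: no cluster can take customer {i}")
        # recompute each centroid as the mean of its assigned customers
        for j in range(k):
            if clusters[j]:
                xs = [customers[i][0] for i in clusters[j]]
                ys = [customers[i][1] for i in clusters[j]]
                centroids[j] = (sum(xs) / len(xs), sum(ys) / len(ys))
    return clusters

def route(depot, customers, cluster):
    """Route-second step: nearest-neighbour tour within one cluster
    (a stand-in for the TSP solver used in the paper)."""
    tour, pos, todo = [], depot, set(cluster)
    while todo:
        nxt = min(todo, key=lambda i: dist(pos, customers[i]))
        tour.append(nxt)
        pos = customers[nxt]
        todo.remove(nxt)
    return tour

depot = (0.0, 0.0)
customers = [(2, 1), (3, 2), (-2, 1), (-3, 2), (1, -3), (2, -4)]
demands = [1, 1, 1, 1, 1, 1]
clusters = constrained_clusters(customers, demands, capacity=3, k=3)
routes = [route(depot, customers, c) for c in clusters if c]
```

The key difference from plain K-means is the capacity check during assignment, which makes every cluster a feasible vehicle load; the paper's route-cutting and relinking refinement stage is omitted here.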
Assessing the quality of summarizers poses significant challenges. In response, we propose a novel task-oriented evaluation approach that assesses summarizers based on their capacity to produce summaries that are useful for downstream tasks, while preserving task outcomes. We theoretically establish a direct relationship between the resulting error probability of these tasks and the mutual information between source texts and generated summaries. We introduce
The BigCode project, an open-scientific collaboration focused on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder2. In partnership with Software Heritage (SWH), we build The Stack v2 on top of the digital commons of their source code archive. Alongside the SWH repositories spanning 619 programming languages, we carefully select other high-quality data sources, such as GitHub pull requests, Kaggle notebooks, and code documentation. This results in a training set that is 4x larger than the first StarCoder dataset. We train StarCoder2 models with 3B, 7B, and 15B parameters on 3.3 to 4.3 trillion tokens and thoroughly evaluate them on a comprehensive set of Code LLM benchmarks. We find that our small model, StarCoder2-3B, outperforms other Code LLMs of similar size on most benchmarks, and also outperforms StarCoderBase-15B. Our large model, StarCoder2-15B, significantly outperforms other models of comparable size. In addition, it matches or outperforms CodeLlama-34B, a model more than twice its size. Although DeepSeekCoder-33B is the best-performing model at code completion for high-resource languages, we find that StarCoder2-15B outperforms it on math and code reasoning benchmarks, as well as several low-resource languages. We make the model weights available under an OpenRAIL license and ensure full transparency regarding the training data by releasing the SoftWare Heritage persistent IDentifiers (SWHIDs) of the source code data.