Publications
Enabling Secure Trustworthiness Assessment and Privacy Protection in Integrating Data for Trading Person-Specific Information
Rashid Hussain Khokhar
Farkhund Iqbal
Benjamin C. M. Fung
Jamal Bentahar
With increasing adoption of cloud services in the e-market, collaboration between stakeholders is easier than ever. Consumer stakeholders demand data from various sources to analyze trends and improve customer services. Data-as-a-service enables data integration to serve the demands of data consumers. However, the data must be of good quality and trustworthy for accurate analysis and effective decision making. In addition, a data custodian or provider must conform to privacy policies to avoid potential penalties for privacy breaches. To address these challenges, we propose a twofold solution: 1) we present the first information entropy-based trust computation algorithm, IEB_Trust, which allows a semitrusted arbitrator to detect the covert behavior of a dishonest data provider and to choose the qualified providers for a data mashup, and 2) we incorporate the Vickrey–Clarke–Groves (VCG) auction mechanism for the valuation of data providers' attributes into the data mashup process. Experiments on real-life data demonstrate the robustness of our approach in restricting dishonest providers from participating in the data mashup and in improving efficiency compared to provenance-based approaches. Furthermore, we derive the monetary shares for the chosen providers from their information utility and trust scores over the differentially private release of the integrated dataset under their joint privacy requirements.
2021-01-31
IEEE Transactions on Engineering Management (published)
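The VCG mechanism referenced in the abstract can be illustrated in its simplest single-item form, the Vickrey (second-price) auction, where the winner pays the second-highest bid, i.e. the externality it imposes on the other bidders. The sketch below is a toy illustration only, not the paper's attribute-valuation scheme:

```python
def vickrey_auction(bids):
    """Single-item Vickrey (second-price) auction, the simplest VCG instance.

    bids: dict mapping provider name -> reported valuation.
    Returns (winner, payment): the highest bidder wins and pays the
    second-highest bid; a lone bidder pays nothing.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    payment = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, payment
```

Because the payment does not depend on the winner's own bid, truthful reporting is a dominant strategy, which is the property that makes VCG-style pricing attractive for valuing providers' attributes.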
Training neural networks to recognize speech increased their correspondence to the human auditory pathway but did not yield a shared hierarchy of acoustic features
- Trained CNNs were more similar to auditory fMRI activity than untrained CNNs
- No evidence of a shared representational hierarchy for acoustic features
- All ROIs were most similar to the first fully-connected layer
- CNN performance on the speech recognition task was positively associated with fMRI similarity
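Model-to-brain similarity of the kind reported in these highlights is commonly computed via representational similarity analysis (RSA); the following NumPy sketch is a generic RSA-style comparison, not the authors' exact analysis pipeline:

```python
import numpy as np

def rdm(activations):
    # Representational dissimilarity matrix: 1 - Pearson correlation between
    # activation patterns for each pair of stimuli (one stimulus per row).
    return 1.0 - np.corrcoef(activations)

def rsa_similarity(layer_acts, roi_acts):
    # Compare a model layer and a brain ROI by correlating the upper
    # triangles of their RDMs (a standard RSA-style score).
    a, b = rdm(layer_acts), rdm(roi_acts)
    iu = np.triu_indices_from(a, k=1)
    return np.corrcoef(a[iu], b[iu])[0, 1]
```

A layer compared against itself scores 1.0; scores between unrelated representations fall toward 0.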
This work explores the use of a latent space to capture a lower-dimensional representation of a complex dynamics model. The targeted application is a robotic manipulator executing a complex environment-interaction task, in particular, cutting a wooden object. We train two flavours of Variational Autoencoders---standard and Vector-Quantised---to learn the latent space, which is then used to infer certain properties of the cutting operation, such as whether the robot is cutting or not, as well as the material and geometry of the object being cut. The two VAE models are evaluated with reconstruction, prediction, and combined reconstruction/prediction decoders. The results demonstrate the expressiveness of the latent space for robotic interaction inference and competitive prediction performance against recurrent neural networks.
2021-01-23
2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (published)
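The latent-space machinery this paper builds on, the reparameterization trick and the KL regularizer of a standard VAE, can be sketched in a few lines of NumPy; this is a generic illustration, not the authors' model:

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    # z = mu + sigma * eps with eps ~ N(0, I), so gradients can flow
    # through mu and log_var while sampling stays stochastic.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    # KL(q(z|x) || N(0, I)) per sample, summed over latent dimensions;
    # this is the closed form for a diagonal Gaussian posterior.
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=-1)
```

The KL term is zero exactly when the posterior matches the standard-normal prior, which is what regularizes the latent space toward a smooth, usable representation.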
Correction to: The patient advisor, an organizational resource as a lever for an enhanced oncology patient experience (PAROLEonco): a longitudinal multiple case study protocol
M. P. Pomey
M. de Guise
M. Desforges
K. Bouchard
C. Vialaron
L. Normandin
M. Iliescu-Nelea
I. Fortin
I. Ganache
C. Régis
Z. Rosberger
D. Charpentier
L. Bélanger
M. Dorval
D. P. Ghadiri
M. Lavoie-Tremblay
A. Boivin
J. F. Pelletier
N. Fernandez
A. M. Danino
An amendment to this paper has been published and can be accessed via the original article.
The human brain differs from that of other primates, but the genetic basis of these differences remains unclear. We investigated the evolutionary pressures acting on almost all human protein-coding genes (N = 11,667; 1:1 orthologs in primates) based on their divergence from those of early hominins, such as Neanderthals, and non-human primates. We confirm that genes encoding brain-related proteins are among the most strongly conserved protein-coding genes in the human genome. Combining our evolutionary pressure metrics for the protein-coding genome with recent data sets, we found that this conservation applied to genes functionally associated with the synapse and expressed in brain structures such as the prefrontal cortex and the cerebellum. Conversely, several genes presenting signatures commonly associated with positive selection appear as causing brain diseases or conditions, such as micro/macrocephaly, Joubert syndrome, dyslexia, and autism. Among those, a number of DNA damage response genes associated with microcephaly in humans such as BRCA1, NHEJ1, TOP3A, and RNF168 show strong signs of positive selection and might have played a role in human brain size expansion during primate evolution. We also showed that cerebellum granule neurons express a set of genes also presenting signatures of positive selection and that may have contributed to the emergence of fine motor skills and social cognition in humans. This resource is available online and can be used to estimate evolutionary constraints acting on a set of genes and to explore their relative contributions to human traits.
Denoising Score Matching with Annealed Langevin Sampling (DSM-ALS) has recently found success in generative modeling. The approach works by first training a neural network to estimate the score of a distribution, and then using Langevin dynamics to sample from the data distribution assumed by the score network. Despite the convincing visual quality of samples, this method appears to perform worse than Generative Adversarial Networks (GANs) under the Fréchet Inception Distance, a standard metric for generative models. We show that this apparent gap vanishes when denoising the final Langevin samples using the score network.
In addition, we propose two improvements to DSM-ALS: 1) Consistent Annealed Sampling as a more stable alternative to Annealed Langevin Sampling, and 2) a hybrid training formulation, composed of both Denoising Score Matching and adversarial objectives. By combining these two techniques and exploring different network architectures, we elevate score matching methods and obtain results competitive with state-of-the-art image generation on CIFAR-10.
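The sampling procedure described in this abstract can be sketched on a toy target whose score is known in closed form; everything below (the noise schedule, the step-size rule, the toy Gaussian target) is a simplified illustration in the spirit of annealed Langevin sampling, not the authors' exact algorithm:

```python
import numpy as np

def score(x, sigma):
    # Exact score of the sigma-smoothed toy target N(0, 1):
    # grad log N(x; 0, 1 + sigma^2) = -x / (1 + sigma^2).
    return -x / (1.0 + sigma ** 2)

def annealed_langevin(score_fn, x0, sigmas, steps_per_level=100, eps=0.01, rng=None):
    # Langevin dynamics run at a decreasing sequence of noise levels;
    # the step size alpha shrinks with sigma, as in annealed schedules.
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    for sigma in sigmas:
        alpha = eps * (sigma / sigmas[-1]) ** 2
        for _ in range(steps_per_level):
            z = rng.standard_normal(x.shape)
            x = x + 0.5 * alpha * score_fn(x, sigma) + np.sqrt(alpha) * z
    return x

sigmas = [10.0, 5.0, 2.0, 1.0, 0.5, 0.1]
samples = annealed_langevin(score, np.full(2000, 5.0), sigmas)

# One final denoising step (Tweedie's formula), mirroring the paper's
# observation that denoising the last Langevin iterate closes the FID gap:
denoised = samples + sigmas[-1] ** 2 * score(samples, sigmas[-1])
```

With a trained score network in place of the analytic `score`, the same loop is the sampler the paper analyzes; the final denoising step removes the residual noise of the last level.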
Despite recent successes of reinforcement learning (RL), it remains a challenge for agents to transfer learned skills to related environments. To facilitate research addressing this problem, we propose CausalWorld, a benchmark for causal structure and transfer learning in a robotic manipulation environment. The environment is a simulation of an open-source robotic platform, hence offering the possibility of sim-to-real transfer. Tasks consist of constructing 3D shapes from a given set of blocks - inspired by how children learn to build complex structures. The key strength of CausalWorld is that it provides a combinatorial family of such tasks with common causal structure and underlying factors (including, e.g., robot and object masses, colors, sizes). The user (or the agent) may intervene on all causal variables, which allows for fine-grained control over how similar different tasks (or task distributions) are. One can thus easily define training and evaluation distributions of a desired difficulty level, targeting a specific form of generalization (e.g., only changes in appearance or object mass). Further, this common parametrization facilitates defining curricula by interpolating between an initial and a target task. While users may define their own task distributions, we present eight meaningful distributions as concrete benchmarks, ranging from simple to very challenging, all of which require long-horizon planning as well as precise low-level motor control. Finally, we provide baseline results for a subset of these tasks on distinct training curricula and corresponding evaluation protocols, verifying the feasibility of the tasks in this benchmark.
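The "intervene on causal variables" idea behind the benchmark can be illustrated abstractly: an intervention fixes a subset of a task's underlying factors while leaving the rest unchanged, and a task distribution varies only the chosen factors. The names below are hypothetical and do not correspond to the actual CausalWorld API:

```python
import random
from dataclasses import dataclass, replace

# Hypothetical task factors, illustrative only (not CausalWorld's real variables).
@dataclass(frozen=True)
class TaskFactors:
    block_mass: float = 0.1
    block_size: float = 0.05
    block_color: str = "red"

def intervene(task: TaskFactors, **changes) -> TaskFactors:
    # An intervention sets some causal variables and leaves the rest unchanged.
    return replace(task, **changes)

def sample_eval_distribution(base: TaskFactors, n: int, rng: random.Random) -> list:
    # Evaluation distribution that varies only appearance (color),
    # holding the dynamics-relevant factors (mass, size) fixed.
    colors = ["red", "green", "blue"]
    return [intervene(base, block_color=rng.choice(colors)) for _ in range(n)]
```

Varying one factor at a time in this way is what lets the benchmark target a specific form of generalization, such as robustness to appearance changes alone.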
While unsupervised domain translation (UDT) has seen a lot of success recently, we argue that mediating its translation via categorical semantic features could broaden its applicability. In particular, we demonstrate that categorical semantics improves the translation between perceptually different domains sharing multiple object categories. We propose a method to learn, in an unsupervised manner, categorical semantic features (such as object labels) that are invariant of the source and target domains. We show that conditioning the style encoder of unsupervised domain translation methods on the learned categorical semantics leads to a translation preserving the digits on MNIST