Publications

Conjugate Adder Net (CAddNet) - a Space-Efficient Approximate CNN
Lulan Shen
Maryam Ziaeefard
Brett Meyer
James J. Clark
The AdderNet was recently developed as a way to implement deep neural networks without needing multiplication operations to combine weights and inputs. Instead, absolute values of the difference between weights and inputs are used, greatly reducing the gate-level implementation complexity. Training of AdderNets is challenging, however, and the loss curves during training tend to fluctuate significantly. In this paper we propose the Conjugate Adder Network, or CAddNet, which uses the difference between the absolute values of conjugate pairs of inputs and the weights. We show that this can be implemented simply via a single minimum operation, resulting in a roughly 50% reduction in logic gate complexity as compared with AdderNets. The CAddNet method also stabilizes training as compared with AdderNets, yielding training curves similar to standard CNNs.
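The abstract does not spell out the exact formulation, but the reduction of a conjugate-pair absolute-value difference to one minimum operation is consistent with the elementary identity |x + w| − |x − w| = 2·sign(x·w)·min(|x|, |w|). A minimal sketch checking that identity (our illustration, not the paper's implementation):

```python
import numpy as np

# Hypothetical sketch (not the paper's code): the identity below shows how the
# difference of absolute values over a conjugate pair (x + w, x - w) collapses
# to a single signed minimum, which needs fewer logic gates than two
# absolute-difference units.
def conjugate_pair_difference(x, w):
    """|x + w| - |x - w|, computed directly."""
    return np.abs(x + w) - np.abs(x - w)

def single_min_form(x, w):
    """Equivalent form using one sign and one minimum operation."""
    return 2.0 * np.sign(x * w) * np.minimum(np.abs(x), np.abs(w))

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
w = rng.normal(size=1000)
assert np.allclose(conjugate_pair_difference(x, w), single_min_form(x, w))
```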
High-Throughput and Energy-Efficient VLSI Architecture for Ordered Reliability Bits GRAND
Syed Mohsin Abbas
Thibaud Tonnellier
Furkan Ercan
Marwan Jalaleddine
Ultrareliable low-latency communication (URLLC), a major 5G new-radio (NR) use case, is the key enabler for applications with strict reliability and latency requirements. These applications necessitate the use of short-length and high-rate channel codes. Guessing random additive noise decoding (GRAND) is a recently proposed maximum likelihood (ML) decoding technique for these short-length and high-rate codes. Rather than decoding the received vector, GRAND tries to infer the noise that corrupted the transmitted codeword during transmission through the communication channel. As a result, GRAND can decode any code, structured or unstructured. GRAND has hard-input as well as soft-input variants. Among these variants, ordered reliability bits GRAND (ORBGRAND) is a soft-input variant that outperforms hard-input GRAND and is suitable for parallel hardware implementation. This work reports the first hardware architecture for ORBGRAND, which achieves an average throughput of up to 42.5 Gb/s for a code length of 128 at a target frame error rate (FER) of 10⁻⁷. Furthermore, the proposed hardware can be used to decode any code as long as the length and rate constraints are met. In comparison to the GRAND with ABandonment (GRANDAB), a hard-input variant of GRAND, the proposed architecture enhances decoding performance by at least 2 dB. When compared to the state-of-the-art fast dynamic successive cancellation flip decoder (Fast-DSCF) using a 5G polar code (PC) (128, 105), the proposed ORBGRAND VLSI implementation has
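The noise-guessing principle described above can be sketched in a few lines for the hard-input case (a toy illustration, not the ORBGRAND hardware; the (7,4) Hamming parity-check matrix is our example choice): guess noise patterns in decreasing likelihood order, here increasing Hamming weight for a binary symmetric channel, until the corrected word satisfies the code's parity checks.

```python
from itertools import combinations
import numpy as np

# Parity-check matrix of a (7,4) Hamming code (our example code, not the
# paper's length-128 codes).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def grand_decode(y, max_weight=3):
    """Guess noise patterns lightest-first; return first parity-consistent word."""
    n = len(y)
    for w in range(max_weight + 1):              # most likely noise first
        for flips in combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(flips)] = 1
            candidate = (y + e) % 2
            if not np.any((H @ candidate) % 2):  # zero syndrome -> codeword
                return candidate                 # first hit = ML guess
    return None

c = np.array([1, 0, 1, 1, 0, 1, 0])              # a valid codeword (H @ c = 0 mod 2)
y = c.copy(); y[2] ^= 1                          # single-bit channel error
assert np.array_equal(grand_decode(y), c)
```

Because the search never uses the code's structure beyond the parity checks, the same loop decodes any linear code of compatible length, which is the code-agnostic property the abstract highlights.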
Optimal Control of Network-Coupled Subsystems: Spectral Decomposition and Low-Dimensional Solutions
Shuang Gao
In this article, we investigate the optimal control of network-coupled subsystems with coupled dynamics and costs. The dynamics coupling may be represented by the adjacency matrix, the Laplacian matrix, or any other symmetric matrix corresponding to an underlying weighted undirected graph. Cost couplings are represented by two coupling matrices which have the same eigenvectors as the coupling matrix in the dynamics. We use the spectral decomposition of these three coupling matrices to decompose the overall system into
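The decoupling enabled by a shared eigenbasis can be illustrated with a small sketch (our example; the paper's dynamics and cost matrices are not reproduced here): projecting onto the eigenvectors of the symmetric coupling matrix turns one coupled matrix-vector step into independent scalar updates, one per eigenvalue.

```python
import numpy as np

# Illustrative symmetric coupling matrix: the graph Laplacian of a triangle.
L = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  2.0, -1.0],
              [-1.0, -1.0,  2.0]])
lam, V = np.linalg.eigh(L)            # L = V diag(lam) V^T

x = np.array([0.3, -1.2, 0.7])        # stacked subsystem states
coupled = L @ x                       # one step of coupled dynamics
decoupled = V @ (lam * (V.T @ x))     # three independent scalar steps
assert np.allclose(coupled, decoupled)
```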
OptiMaP: swarm-powered Optimized 3D Mapping Pipeline for emergency response operations
Leandro R. Costa
Daniel Aloise
Luca G. Gianoli
Andrea Lodi
A smart application in sensing is mainly powered by a two-stage process comprising sensing (collect data) and computing (process data). While the sensing stage is typically performed locally through a dedicated Internet of Things infrastructure, the computing stage may require a powerful infrastructure in the cloud. However, when connectivity is poor and low latency becomes a requirement — as in emergency response and disaster relief operations — edge computing and ad hoc cloud paradigms come in support to keep the computing stage local. Because local network connectivity and data processing are limited, it is vital to properly optimize how the computing workload will be consumed by the local ad hoc cloud. For this purpose, we present and evaluate the swarm-powered Optimized 3D Mapping Pipeline (OptiMaP) for emergency response 3D mapping missions, which is implemented as a collaborative embedded Robot Operating System (ROS) application integrating an ad hoc telecommunication middleware. We simulate — with Software-In-The-Loop — realistic 3D mapping missions comprising up to 5 drones and 363 images covering 0.293 km². We show how the completion times of mapping missions carried out in a typical centralized manner can be dramatically reduced by two versions of the OptiMaP framework powered, respectively, by a variable neighborhood search heuristic and a greedy method.
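The abstract does not detail the greedy method, so as rough intuition only (all names and the scoring rule are hypothetical), a baseline greedy workload distribution assigns each processing task to the currently least-loaded node of the ad hoc cloud, longest tasks first:

```python
import heapq

# Hypothetical sketch, not the paper's algorithm: greedy makespan-style
# balancing of image-processing tasks across the nodes of an ad hoc cloud.
def greedy_assign(task_costs, n_nodes):
    """Return per-node task lists; longest tasks first, least-loaded node wins."""
    heap = [(0.0, node, []) for node in range(n_nodes)]  # (load, id, tasks)
    heapq.heapify(heap)
    for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        load, node, tasks = heapq.heappop(heap)          # least-loaded node
        tasks.append(task)
        heapq.heappush(heap, (load + cost, node, tasks))
    return {node: tasks for _, node, tasks in heap}

plan = greedy_assign({"img_a": 4.0, "img_b": 3.0, "img_c": 3.0, "img_d": 2.0}, 2)
```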
Optimization and Simplification of PCPA Decoder for Reed-Muller Codes
Jiajie Li
The collapsed projection-aggregation (CPA) decoder reduces the computational complexity of the recursive projection-aggregation (RPA) decoder by removing the recursive structure. From simulations, the CPA decoder has similar error-correction performance as the RPA decoder when decoding Reed-Muller (RM) (7, 3) and (8, 2) codes. The computational complexity can be further reduced by selecting only a subset of sub-spaces, which is achieved by pruning CPA decoders. In this work, optimization methods are proposed to find the pruned CPA (PCPA) decoder with small performance loss. Furthermore, the min-sum approximation is used to replace non-linear projection and aggregation functions, and a simplified list decoder based on the syndrome check is proposed. Under the same complexity, the optimized PCPA decoder has less performance loss than randomly constructed PCPA decoders in most cases. The min-sum approximation incurs less than 0.15 dB performance loss at a target frame error rate of 10⁻⁴, and the simplified list decoder does not have noticeable performance loss.
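The min-sum approximation mentioned above is the standard replacement for the exact log-likelihood-ratio combination (the boxplus operator); whether the paper applies it in exactly this form is our reading of the abstract, but the approximation itself is standard:

```python
import numpy as np

# Standard min-sum approximation of the boxplus LLR combination:
#   a [+] b = 2 * atanh(tanh(a/2) * tanh(b/2))
# is replaced by sign(a) * sign(b) * min(|a|, |b|), removing the
# non-linear tanh/atanh evaluations at a small accuracy cost.
def boxplus_exact(a, b):
    return 2.0 * np.arctanh(np.tanh(a / 2.0) * np.tanh(b / 2.0))

def boxplus_minsum(a, b):
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

a, b = 2.5, -1.0
approx, exact = boxplus_minsum(a, b), boxplus_exact(a, b)
# Min-sum over-estimates the magnitude but never flips the sign.
assert np.sign(approx) == np.sign(exact) and abs(approx) >= abs(exact)
```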
Towards an AAK Theory Approach to Approximate Minimization in the Multi-Letter Case
We study the approximate minimization problem of weighted finite automata (WFAs): given a WFA, we want to compute its optimal approximation when restricted to a given size. We reformulate the problem as a rank-minimization task in the spectral norm, and propose a framework to apply Adamyan-Arov-Krein (AAK) theory to the approximation problem. This approach has already been successfully applied to the case of WFAs and language modelling black boxes over one-letter alphabets \citep{AAK-WFA,AAK-RNN}. Extending the result to multi-letter alphabets requires solving the following two steps. First, we need to reformulate the approximation problem in terms of noncommutative Hankel operators and noncommutative functions, in order to apply results from multivariable operator theory. Secondly, to obtain the optimal approximation we need a version of noncommutative AAK theory that is constructive. In this paper, we successfully tackle the first step, while the second challenge remains open.
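In the one-letter case the Hankel operator is an ordinary Hankel matrix, and the rank-minimization objective in the spectral norm can be illustrated with a truncated SVD (Eckart–Young–Mirsky gives the unstructured optimum; AAK theory strengthens this to an optimal approximant that is itself Hankel). A sketch under these standard facts, not the paper's multi-letter construction:

```python
import numpy as np

# One-letter alphabet: a function f on word lengths induces the Hankel
# matrix H[i, j] = f(i + j).
def hankel_matrix(f, n):
    return np.array([[f(i + j) for j in range(n)] for i in range(n)])

def best_rank_k(H, k):
    """Best unstructured rank-k approximation in spectral norm (truncated SVD)."""
    U, s, Vt = np.linalg.svd(H)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :], s[k]  # approximant, error

# A sum of two geometric sequences yields a rank-2 Hankel matrix.
H = hankel_matrix(lambda t: 0.5 ** t + 0.2 ** t, 8)
approx, err = best_rank_k(H, 1)
# Spectral-norm error of the rank-1 truncation equals the second singular value.
assert np.isclose(np.linalg.norm(H - approx, 2), err)
```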
Bias-inducing geometries: an exactly solvable data model with fairness implications
Stefano Sarao Mannelli
Federica Gerace
Luca Saglietti
JARV1S: Phenotype Clone Search for Rapid Zero-Day Malware Triage and Functional Decomposition for Cyber Threat Intelligence
Christopher Molloy
Philippe Charland
Steven H. H. Ding
Cyber threat intelligence (CTI) has become a critical component of the defense of organizations against the steady surge of cyber attacks. Malware is one of the most challenging problems for CTI, due to its prevalence, the massive number of variants, and the constantly changing threat actor behaviors. Currently, Malpedia has indexed 2,390 unique malware families, while the AVTEST Institute has recorded more than 166 million new unique malware samples in 2021. There exists a vast number of variants per malware family. Consequently, the signature-based representation of patterns and knowledge of legacy systems can no longer be generalized to detect future malware attacks. Machine learning-based solutions can match more variants. However, as a black-box approach, they lack the explainability and maintainability required by incident response teams. There is thus an urgent need for a data-driven system that can abstract a future-proof, human-friendly, systematic, actionable, and dependable knowledge representation from software artifacts of the past for more effective and insightful malware triage. In this paper, we present the first phenotype-based malware decomposition system for quick malware triage that is effective against malware variants. We define phenotypes as directly observable characteristics such as code fragments, constants, functions, and strings. Malware development rarely starts from scratch, and there are many reused components and code fragments. The target under investigation is decomposed into known phenotypes that are mapped to known malware families, malware behaviors, and Advanced Persistent Threat (APT) groups. The implemented system provides visualizable phenotypes through an interactive tree map, helping cyber analysts navigate the decomposition results. We evaluated our system on 200,000 malware samples, 100,000 benign samples, and a malware family with over 27,284 variants.
The results indicate our system is scalable, efficient, and effective against zero-day malware and new variants of known families.
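As rough intuition for phenotype-based decomposition (the function names, family names, and scoring rule here are hypothetical, not from the paper), a sample's extracted artifacts can be scored against per-family phenotype sets:

```python
# Hypothetical illustration: rank known malware families by the overlap
# between a sample's extracted phenotypes (strings, constants, code-fragment
# hashes) and each family's known phenotype set, via Jaccard similarity.
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def triage(sample_phenotypes, family_db):
    """Rank known families by phenotype overlap with the sample."""
    scores = {fam: jaccard(sample_phenotypes, ph) for fam, ph in family_db.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

family_db = {
    "family_x": {"str:cmd.exe", "const:0xdeadbeef", "frag:ab12"},
    "family_y": {"str:/tmp/.lock", "frag:cd34"},
}
sample = {"str:cmd.exe", "frag:ab12", "str:unknown"}
ranking = triage(sample, family_db)
assert ranking[0][0] == "family_x"
```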
Works for Me! Cannot Reproduce – A Large Scale Empirical Study of Non-reproducible Bugs
Mohammad Masudur Rahman
Marco Castelluccio
Contextual bandit optimization of super-resolution microscopy
Anthony Bilodeau
Renaud Bernatchez
Albert Michaud-Gagnon
Flavie Lavoie-Cardinal
Evaluating Multimodal Interactive Agents
Josh Abramson
Arun Ahuja
Federico Carnevale
Petko Georgiev
Alex Goldin
Alden Hung
Jessica Landon
Timothy P. Lillicrap
Alistair M. Muldal
Adam Santoro
Tamara von Glehn
Greg Wayne
Nathaniel Wong
Chen Yan
Creating agents that can interact naturally with humans is a common goal in artificial intelligence (AI) research. However, evaluating these interactions is challenging: collecting online human-agent interactions is slow and expensive, yet faster proxy metrics often do not correlate well with interactive evaluation. In this paper, we assess the merits of these existing evaluation metrics and present a novel approach to evaluation called the Standardised Test Suite (STS). The STS uses behavioural scenarios mined from real human interaction data. Agents see replayed scenario context, receive an instruction, and are then given control to complete the interaction offline. These agent continuations are recorded and sent to human annotators to mark as success or failure, and agents are ranked according to the proportion of continuations in which they succeed. The resulting STS is fast, controlled, interpretable, and representative of naturalistic interactions. Altogether, the STS consolidates much of what is desirable across many of our standard evaluation metrics, allowing us to accelerate research progress towards producing agents that can interact naturally with humans. A video may be found at https://youtu.be/YR1TngGORGQ.
Assessing the Quality of Direct-to-Consumer Teleconsultation Services in Canada
Jean Noel Nikiema
Eleah Stringer
Marie-Pierre Moreault
Priscille Pana
Marco Laverdiere
Jean-Louis Denis
Béatrice Godard
Mylaine Breton
Guy Paré
Aviv Shachak
Claudia Lai
Elizabeth M. Borycki
Andre W. Kushniruk
Aude Motulsky