
Mark Coates

Associate Academic Member
Associate Professor, McGill University, Department of Electrical and Computer Engineering
Research Topics
Representation Learning
Learning on Graphs
Graph Neural Networks
Recommender Systems
Dynamical Systems

Biography

Mark Coates is a Professor in the Department of Electrical and Computer Engineering at McGill University, which he joined in 2002. He received a bachelor's degree in computer systems engineering from the University of Adelaide, Australia, in 1995 and a PhD in information engineering from the University of Cambridge, UK, in 1999. He was a research associate and lecturer at Rice University, Texas, from 1999 to 2001. In 2012-2013, he worked as a senior scientist at Winton Capital Management in Oxford, UK. He has held multiple editorial roles, including senior editor for IEEE Signal Processing Letters, associate editor for IEEE Transactions on Signal Processing, and associate editor for IEEE Transactions on Signal and Information Processing over Networks. Mark Coates's research focuses on machine learning and statistical signal processing, Bayesian and Monte Carlo inference, and learning on graphs and networks. His most influential and most cited contributions concern network tomography and distributed particle filtering.

Publications

Distributed Average Consensus With Dithered Quantization
Tuncer Can Aysal
In this paper, we develop algorithms for distributed computation of averages of the node data over networks with bandwidth/power constraints or large volumes of data. Distributed averaging algorithms fail to achieve consensus when deterministic uniform quantization is adopted. We propose a distributed algorithm in which the nodes utilize probabilistically quantized information, i.e., dithered quantization, to communicate with each other. The algorithm we develop is a dynamical system that generates sequences achieving a consensus at one of the quantization values almost surely. In addition, we show that the expected value of the consensus is equal to the average of the original sensor data. We derive an upper bound on the mean-square-error performance of the probabilistically quantized distributed averaging (PQDA). Moreover, we show that the convergence of the PQDA is monotonic by studying the evolution of the minimum-length interval containing the node values. We reveal that the length of this interval is a monotonically nonincreasing function with limit zero. We also demonstrate that all the node values, in the worst case, converge to the final two quantization bins at the same rate as standard unquantized consensus. Finally, we report the results of simulations conducted to evaluate the behavior and the effectiveness of the proposed algorithm in various scenarios.
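
As a rough illustration of the scheme described in this abstract, here is a minimal Python/NumPy sketch of probabilistically quantized distributed averaging under idealized assumptions (synchronous updates, a known doubly stochastic mixing matrix). The function names, the Metropolis weight construction, and all parameter values are illustrative choices, not code or settings from the paper.

import numpy as np

def prob_quantize(x, delta, rng):
    # Dithered (probabilistic) quantization to the lattice {k * delta}:
    # round up with probability equal to the fractional remainder, so
    # that E[q(x)] = x, i.e., the quantizer is unbiased.
    lower = np.floor(x / delta) * delta
    frac = (x - lower) / delta
    return lower + delta * (rng.random(x.shape) < frac)

def metropolis_weights(adjacency):
    # A standard doubly stochastic mixing matrix for an undirected
    # graph (Metropolis-Hastings rule); any doubly stochastic matrix
    # matched to the network topology would serve for this sketch.
    degrees = adjacency.sum(axis=1)
    n = len(degrees)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adjacency[i, j]:
                W[i, j] = 1.0 / (1.0 + max(degrees[i], degrees[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

def pqda(x0, W, delta, iters=200, seed=0):
    # Probabilistically quantized distributed averaging (sketch): each
    # node broadcasts a dithered-quantized value, then mixes the
    # quantized values it receives through W.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = W @ prob_quantize(x, delta, rng)
    return x

# Example: an 8-node ring; node values settle near a common quantization
# level whose expected value equals the initial average.
A = np.roll(np.eye(8), 1, axis=1) + np.roll(np.eye(8), -1, axis=1)
print(pqda(np.random.default_rng(1).normal(size=8), metropolis_weights(A), delta=0.1))

Because the quantizer is unbiased and W is doubly stochastic, each iteration preserves the average in expectation, which is the intuition behind the paper's result that the expected consensus value equals the true average.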
Greedy Gossip With Eavesdropping
Deniz Ustebay
Boris Oreshkin
This paper presents greedy gossip with eavesdropping (GGE), a novel randomized gossip algorithm for distributed computation of the average consensus problem. In gossip algorithms, nodes in the network randomly communicate with their neighbors and exchange information iteratively. The algorithms are simple and decentralized, making them attractive for wireless network applications. In general, gossip algorithms are robust to unreliable wireless conditions and time varying network topologies. In this paper, we introduce GGE and demonstrate that greedy updates lead to rapid convergence. We do not require nodes to have any location information. Instead, greedy updates are made possible by exploiting the broadcast nature of wireless communications. During the operation of GGE, when a node decides to gossip, instead of choosing one of its neighbors at random, it makes a greedy selection, choosing the node which has the value most different from its own. In order to make this selection, nodes need to know their neighbors' values. Therefore, we assume that all transmissions are wireless broadcasts and nodes keep track of their neighbors' values by eavesdropping on their communications. We show that the convergence of GGE is guaranteed for connected network topologies. We also study the rates of convergence and illustrate, through theoretical bounds and numerical simulations, that GGE consistently outperforms randomized gossip and performs comparably to geographic gossip on moderate-sized random geometric graph topologies.
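
Under similar caveats, the greedy selection rule at the heart of GGE can be sketched in a few lines of Python/NumPy. Eavesdropping is idealized here as direct read access to neighbors' current values, and the names and parameters below are illustrative assumptions rather than the authors' implementation.

import numpy as np

def gge(x0, adjacency, iters=500, seed=0):
    # Greedy gossip with eavesdropping (sketch): at each tick a
    # randomly chosen node wakes up and, rather than gossiping with a
    # uniformly random neighbor, picks the neighbor whose value
    # differs most from its own; the pair then averages. The graph is
    # assumed connected, as in the paper's convergence guarantee.
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    n = x.size
    neighbors = [np.flatnonzero(adjacency[i]) for i in range(n)]
    for _ in range(iters):
        i = rng.integers(n)
        nbrs = neighbors[i]
        j = nbrs[np.argmax(np.abs(x[nbrs] - x[i]))]  # greedy choice
        x[i] = x[j] = 0.5 * (x[i] + x[j])
    return x

The greedy neighbor choice is the only change relative to standard randomized gossip, which would draw j uniformly from neighbors[i]; per the abstract, this is what yields the faster convergence on random geometric graph topologies.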