Text-to-SQL enables users to interact with databases through natural language, simplifying access to structured data. Although highly capable large language models (LLMs) achieve strong accuracy for complex queries, they incur unnecessary latency and dollar cost for simpler ones. In this paper, we introduce the first LLM routing approach for Text-to-SQL, which dynamically selects the most cost-effective LLM capable of generating accurate SQL for each query. We present two routing strategies (score- and classification-based) that achieve accuracy comparable to the most capable LLM while reducing costs. We design the routers for ease of training and efficient inference. In our experiments, we highlight a practical and explainable accuracy-cost trade-off on the BIRD dataset.
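To make the classification-based strategy concrete, here is a minimal sketch of such a router under assumptions of our own: a hypothetical two-model setup ("cheap-llm" vs. "capable-llm"), TF-IDF features in place of whatever encoder the paper uses, and toy labels. It illustrates the idea of predicting whether the cheaper model suffices, not the authors' actual implementation.

```python
# Minimal sketch of a classification-based LLM router (illustrative only;
# model names, features, and labels are hypothetical placeholders).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: questions labeled 1 if the cheap model already
# produced correct SQL for them offline, else 0 (route to the capable one).
questions = [
    "list all customers",
    "count orders placed in 2023",
    "average revenue per region, excluding refunded orders",
    "top 5 products by year-over-year growth in each category",
]
cheap_model_suffices = [1, 1, 0, 0]

router = make_pipeline(TfidfVectorizer(), LogisticRegression())
router.fit(questions, cheap_model_suffices)

def route(question: str) -> str:
    """Return the cheapest model predicted to generate accurate SQL."""
    return "cheap-llm" if router.predict([question])[0] == 1 else "capable-llm"

print(route("count orders placed in 2023"))
print(route("top products by growth, split by category and quarter"))
```

A score-based variant would instead regress a per-model correctness score and pick the cheapest model whose score clears a threshold, which exposes the accuracy-cost trade-off as a single tunable knob.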
Large-scale training of latent diffusion models (LDMs) has enabled unprecedented quality in image generation. However, the key components of the best performing LDM training recipes are oftentimes not available to the research community, preventing apples-to-apples comparisons and hindering the validation of progress in the field. In this work, we perform an in-depth study of LDM training recipes focusing on the performance of models and their training efficiency. To ensure apples-to-apples comparisons, we re-implement five previously published models with their corresponding recipes. Through our study, we explore the effects of (i) the mechanisms used to condition the generative model on semantic information (e.g., text prompt) and control metadata (e.g., crop size, random flip flag, etc.) on the model performance, and (ii) the transfer of the representations learned on smaller and lower-resolution datasets to larger ones on the training efficiency and model performance. We then propose a novel conditioning mechanism that disentangles semantic and control metadata conditionings and sets a new state-of-the-art in class-conditional generation on the ImageNet-1k dataset, with FID improvements of 7% at 256 and 8% at 512 resolution, as well as text-to-image generation on the CC12M dataset, with FID improvements of 8% at 256 and 23% at 512 resolution.
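The abstract does not specify the proposed mechanism, so the following is only one plausible way to keep semantic and control-metadata conditioning disentangled: give each its own pathway and return them separately so the backbone can inject them at different points. All dimensions and module names here are hypothetical.

```python
# Illustrative sketch of disentangled conditioning pathways (assumed
# design, not the paper's actual mechanism; dimensions are placeholders).
import torch
import torch.nn as nn

class DisentangledConditioning(nn.Module):
    def __init__(self, text_dim=768, meta_dim=4, hidden=256):
        super().__init__()
        # Separate MLPs keep semantic and control signals apart.
        self.semantic = nn.Sequential(nn.Linear(text_dim, hidden), nn.SiLU(),
                                      nn.Linear(hidden, hidden))
        self.control = nn.Sequential(nn.Linear(meta_dim, hidden), nn.SiLU(),
                                     nn.Linear(hidden, hidden))

    def forward(self, text_emb, metadata):
        # Returned separately so a backbone could route them differently,
        # e.g., semantics to cross-attention, metadata to the timestep embedding.
        return self.semantic(text_emb), self.control(metadata)

cond = DisentangledConditioning()
text_emb = torch.randn(2, 768)                    # pooled text-encoder output
metadata = torch.tensor([[0.0, 0.0, 512.0, 1.0],  # crop x/y, size, flip flag
                         [64.0, 32.0, 512.0, 0.0]])
sem, ctl = cond(text_emb, metadata)
print(sem.shape, ctl.shape)  # torch.Size([2, 256]) torch.Size([2, 256])
```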
High-level synthesis (HLS) produces hardware automatically by scheduling and assigning resources based on an input control/data-flow graph. One particular aspect of HLS for the digital signal processing (DSP) architecture is the heterogeneous assignment problem (HAP), which maps operations onto the different types of functional units available in electronic design automation tools to build efficient implementations. An optimal solution to this assignment problem can be found by formulating it as an integer linear program (ILP) and using a solver. However, given the slow nature of this process, heuristics tend to be used instead, leading to sub-optimal designs. This paper revisits the classical ILP formulation of the HAP with time constraints for the DSP architecture by identifying redundant constraints. It proves theoretically, and demonstrates experimentally, that removing these constraints does not affect the obtained solution. This technique achieves speedups of more than 100× in solver runtime and reductions of more than 50× in solver memory usage. This work also proposes an updated heuristic that keeps reducing the latency of a path instead of finding a new critical path after each new node assignment. Further runtime reductions (more than another 10×) due to fewer critical-path searches are observed while returning similar results.
2024-11-04
IEEE Workshop on Signal Processing Systems (published)
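For readers unfamiliar with the HAP, a generic textbook-style ILP with a time constraint looks like the following. The notation is ours (x_{i,k} assigns operation i to functional-unit type k, c_{i,k} and d_{i,k} are its cost and latency on that unit, t_i its start time, T the deadline); the paper's exact formulation, and the constraints it proves redundant, may differ.

```latex
% Generic time-constrained HAP as an ILP (notation ours, not the paper's).
\begin{align*}
\min \quad & \sum_{i \in V} \sum_{k \in K} c_{i,k}\, x_{i,k}
  && \text{(total resource cost)} \\
\text{s.t.} \quad & \sum_{k \in K} x_{i,k} = 1 && \forall i \in V
  \;\; \text{(each operation gets exactly one FU type)} \\
& t_j \ge t_i + \sum_{k \in K} d_{i,k}\, x_{i,k} && \forall (i,j) \in E
  \;\; \text{(data-flow precedence)} \\
& t_i + \sum_{k \in K} d_{i,k}\, x_{i,k} \le T && \forall i \in V
  \;\; \text{(latency bound)} \\
& x_{i,k} \in \{0,1\}, \quad t_i \ge 0
\end{align*}
```

The paper's contribution, in these terms, is showing that a subset of such constraints can be dropped without changing the optimum, which shrinks the model the solver must handle.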
Working memory is a central cognitive ability crucial for intelligent decision-making. Recent experimental and computational work studying working memory has primarily used categorical (i.e., one-hot) inputs, rather than ecologically relevant, multidimensional naturalistic ones. Moreover, studies have primarily investigated working memory during single or few cognitive tasks. As a result, an understanding of how naturalistic object information is maintained in working memory in neural networks is still lacking. To bridge this gap, we developed sensory-cognitive models, comprising a convolutional neural network (CNN) coupled with a recurrent neural network (RNN), and trained them on nine distinct N-back tasks using naturalistic stimuli. By examining the RNN's latent space, we found that: (1) Multi-task RNNs represent both task-relevant and irrelevant information simultaneously while performing tasks; (2) The latent subspaces used to maintain specific object properties in vanilla RNNs are largely shared across tasks, but highly task-specific in gated RNNs such as GRU and LSTM; (3) Surprisingly, RNNs embed objects in new representational spaces in which individual object features are less orthogonalized relative to the perceptual space; (4) The transformation of working memory encodings (i.e., embedding of visual inputs in the RNN latent space) into memory was shared across stimuli, yet the transformations governing the retention of a memory in the face of incoming distractor stimuli were distinct across time. Our findings indicate that goal-driven RNNs employ chronological memory subspaces to track information over short time spans, enabling testable predictions with neural data.
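A minimal sketch of the CNN-coupled-with-RNN architecture described above, with hypothetical dimensions and a toy match/non-match readout; the authors' actual encoders, task heads, and training setup will differ.

```python
# Illustrative sensory-cognitive model: CNN encoder feeding a gated RNN
# over a stimulus sequence (all sizes are assumed, not the paper's).
import torch
import torch.nn as nn

class SensoryCognitiveModel(nn.Module):
    def __init__(self, feat_dim=128, hidden=256, n_classes=2):
        super().__init__()
        # Small CNN stands in for the visual encoder.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, n_classes)  # match / non-match

    def forward(self, frames):  # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        states, _ = self.rnn(feats)   # the latent space analyzed in the paper
        return self.readout(states)   # per-timestep N-back decision

model = SensoryCognitiveModel()
logits = model(torch.randn(4, 9, 3, 64, 64))  # 9-step stimulus sequence
print(logits.shape)  # torch.Size([4, 9, 2])
```

Analyses like those in the abstract (shared vs. task-specific subspaces) would operate on the `states` tensor, e.g., by fitting linear decoders for each object property per task.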
Recent advances in artificial intelligence (AI) have produced systems capable of increasingly sophisticated performance on cognitive tasks. However, AI systems still struggle in critical ways: unpredictable and novel environments (robustness), lack of transparency in their reasoning (explainability), challenges in communication and commitment (cooperation), and risks due to potential harmful actions (safety). We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom. Drawing from cognitive and social sciences, we define wisdom as the ability to navigate intractable problems - those that are ambiguous, radically uncertain, novel, chaotic, or computationally explosive - through effective task-level and metacognitive strategies. While AI research has focused on task-level strategies, metacognition - the ability to reflect on and regulate one's thought processes - is underdeveloped in AI systems. In humans, metacognitive strategies such as recognizing the limits of one's knowledge, considering diverse perspectives, and adapting to context are essential for wise decision-making. We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety. By focusing on developing wise AI, we suggest an alternative to aligning AI with specific human values - a task fraught with conceptual and practical difficulties. Instead, wise AI systems can thoughtfully navigate complex situations, account for diverse human values, and avoid harmful actions. We discuss potential approaches to building wise AI, including benchmarking metacognitive abilities and training AI systems to employ wise reasoning. Prioritizing metacognition in AI research will lead to systems that act not only intelligently but also wisely in complex, real-world situations.