Publications
Aperiodic brain activity and response to anesthesia vary in disorders of consciousness
The potential of automatic code generation through Model-Driven Engineering (MDE) frameworks has yet to be realized. Beyond their ability to help software professionals write more accurate, reusable code, MDE frameworks could make programming accessible to a new class of domain experts. However, domain experts have been slow to embrace these tools, as they must first learn to specify their applications' requirements using the concrete syntax (i.e., textual or graphical) of a new and unified domain-specific language. Conversational interfaces (chatbots) could smooth the learning process and offer a more interactive way for domain experts to specify their application requirements and generate the desired code. If integrated with MDE frameworks, chatbots may offer domain experts a richer domain vocabulary without sacrificing the agnosticism that unified modelling frameworks provide. In this paper, we discuss the challenges of integrating chatbots within MDE frameworks and then examine a specific application: the auto-generation of smart contract code from conversational syntax. We demonstrate how this can be done and evaluate our approach through a user experience survey assessing the usability and functionality of the chatbot framework. The paper concludes by drawing attention to the potential benefits of leveraging Large Language Models (LLMs) in this context.
2023-07-01
2023 IEEE International Conference on Software Services Engineering (SSE) (published)
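The core idea above, turning a structured requirement extracted from a conversation into generated contract code, can be illustrated with a minimal sketch. This is not the paper's implementation: the names (`Intent`, `TEMPLATES`, `generate_contract`) and the pseudo-contract syntax are hypothetical stand-ins for the chatbot's intent extraction and the MDE framework's code generation.

```python
# Hypothetical sketch: map a parsed conversational intent to a
# smart-contract code template. All names and the template syntax
# are illustrative assumptions, not the paper's actual framework.

from dataclasses import dataclass

@dataclass
class Intent:
    """A structured requirement extracted from a chatbot conversation."""
    action: str     # e.g. "transfer"
    asset: str      # e.g. "token"
    condition: str  # e.g. "sender_balance >= amount"

TEMPLATES = {
    "transfer": (
        "function transfer_{asset}(sender, receiver, amount):\n"
        "    require {condition}\n"
        "    balances[sender] -= amount\n"
        "    balances[receiver] += amount\n"
    ),
}

def generate_contract(intent: Intent) -> str:
    """Fill the template that matches the user's stated action."""
    return TEMPLATES[intent.action].format(
        asset=intent.asset, condition=intent.condition)

code = generate_contract(
    Intent("transfer", "token", "sender_balance >= amount"))
print(code)
```

The point of the sketch is the separation of concerns: the chatbot only has to produce a structured `Intent`, while code generation stays declarative and template-driven, which is what lets the domain expert avoid learning the concrete syntax directly.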
Curriculum frameworks and educational programs in artificial intelligence for medical students, residents, and practicing physicians: a scoping review protocol.
OBJECTIVE
The aim of this scoping review is to synthesize knowledge from the literature on curriculum frameworks and current educational programs that focus on the teaching and learning of artificial intelligence (AI) for medical students, residents, and practicing physicians.
INTRODUCTION
To advance the implementation of AI in clinical practice, physicians need to have a better understanding of AI and how to use it within clinical practice. Consequently, medical education must introduce AI topics and concepts into the curriculum. Curriculum frameworks are educational road maps to teaching and learning. Therefore, any existing AI curriculum frameworks must be reviewed and, if none exist, such a framework must be developed.
INCLUSION CRITERIA
This review will include articles that describe curriculum frameworks for teaching and learning AI in medicine, irrespective of country. All types of articles and study designs will be included, except conference abstracts and protocols.
METHODS
This review will follow the JBI methodology for scoping reviews. Keywords will first be identified from relevant articles. Another search will then be conducted using the identified keywords and index terms. The following databases will be searched: MEDLINE (Ovid), Embase (Ovid), Cochrane Central Register of Controlled Trials (CENTRAL), CINAHL (EBSCOhost), and Scopus. Gray literature will also be searched. Articles will be limited to the English and French languages, commencing from the year 2000. The reference lists of all included articles will be screened for additional articles. Data will then be extracted from included articles and the results will be presented in a table.
It is critical to measure and mitigate fairness-related harms caused by AI text generation systems, including stereotyping and demeaning harms. To that end, we introduce FairPrism, a dataset of 5,000 examples of AI-generated English text with detailed human annotations covering a diverse set of harms relating to gender and sexuality. FairPrism aims to address several limitations of existing datasets for measuring and mitigating fairness-related harms, including improved transparency, clearer specification of dataset coverage, and accounting for annotator disagreement and harms that are context-dependent. FairPrism's annotations include the extent of stereotyping and demeaning harms, the demographic groups targeted, and appropriateness for different applications. The annotations also include specific harms that occur in interactive contexts and harms that raise normative concerns when the "speaker" is an AI system. Due to its precision and granularity, FairPrism can be used to diagnose (1) the types of fairness-related harms that AI text generation systems cause, and (2) the potential limitations of mitigation methods, both of which we illustrate through case studies. Finally, the process we followed to develop FairPrism offers a recipe for building improved datasets for measuring and mitigating harms caused by AI systems.
2023-07-01
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (published)
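A diagnostic workflow over a dataset like the one described above can be sketched as filtering examples by their harm annotations. The field names below (`stereotyping`, `demeaning`, `target_group`) are hypothetical and do not reflect FairPrism's actual schema; the sketch only illustrates the kind of severity-based triage such annotations enable.

```python
# Illustrative sketch only: the record fields below are hypothetical,
# not FairPrism's actual annotation schema.

examples = [
    {"text": "...", "stereotyping": 2, "demeaning": 0, "target_group": "group A"},
    {"text": "...", "stereotyping": 0, "demeaning": 0, "target_group": None},
    {"text": "...", "stereotyping": 1, "demeaning": 3, "target_group": "group B"},
]

def flag_harmful(rows, threshold=1):
    """Keep rows whose annotated harm severity meets the threshold."""
    return [r for r in rows
            if r["stereotyping"] >= threshold or r["demeaning"] >= threshold]

flagged = flag_harmful(examples)
print(len(flagged))  # → 2
```

Because the annotations are graded rather than binary, the threshold can be tuned per application, which is what "appropriateness for different applications" makes possible.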
Intrusion detection systems (IDSs) are crucial to security monitoring in the smart grid, where machine-to-machine communications, and with them cyber threats, are increasing. However, the multi-sourced, correlated, and heterogeneous smart grid data pose significant challenges to accurate attack detection by IDSs. To improve attack detection, this paper proposes Reinforcement Learning-based Adaptive Feature Boosting, which leverages a series of AutoEncoders (AEs) to capture critical features from the multi-sourced smart grid data for the classification of normal, fault, and attack events. Multiple AEs extract representative features from different feature sets that are automatically generated through a weighted feature sampling process; each AE-extracted feature set is then used to build a Random Forest (RF) base classifier. In the feature sampling process, Deep Deterministic Policy Gradient (DDPG) is introduced to dynamically determine the feature sampling probability based on the classification accuracy. Critical features that improve the classification accuracy are assigned larger sampling probabilities and increasingly participate in the training of the next AE, increasing the presence of critical features in the event classification over the multi-sourced smart grid data. Considering potentially different alarms among base classifiers, an ensemble classifier is further built to distinguish normal, fault, and attack events. Our proposed approach is evaluated on two realistic datasets collected from Hardware-In-the-Loop (HIL) and WUSTL-IIOT-2021 security testbeds, respectively. The evaluation on the HIL security dataset shows that our approach achieves a classification accuracy of 97.28%, an effective 5.5% increase over vanilla Adaptive Feature Boosting. Moreover, on the WUSTL-IIOT-2021 dataset the proposed approach not only accurately and stably selects critical features, with a significant difference in sampling probabilities between critical and uncritical features (greater than 0.08 versus less than 0.01), but also outperforms the other best-performing approaches, improving the Matthews Correlation Coefficient (MCC) by 8.03%.
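The feedback loop described above, where features appearing in accurate classifiers earn larger sampling probabilities, can be sketched in simplified form. This is an assumption-laden toy: a plain accuracy-driven reweighting stands in for the DDPG agent, and a placeholder scoring function stands in for AE feature extraction plus RF training.

```python
# Simplified sketch of weighted feature sampling with accuracy-driven
# probability updates. The reweighting rule below is a stand-in for
# the paper's DDPG agent; evaluate_subset() is a stand-in for
# AE + Random Forest training.

import random
random.seed(0)

n_features = 10
probs = [1.0 / n_features] * n_features  # uniform sampling probabilities

def evaluate_subset(subset):
    """Placeholder: pretend features 0-2 are the 'critical' ones and
    accuracy is the fraction of sampled features that are critical."""
    return sum(1 for f in subset if f < 3) / max(len(subset), 1)

for _ in range(200):
    # Sample a feature subset according to the current probabilities.
    subset = [f for f in range(n_features) if random.random() < 5 * probs[f]]
    acc = evaluate_subset(subset)
    # Features present in accurate subsets get larger probabilities.
    for f in subset:
        probs[f] += 0.01 * acc
    total = sum(probs)
    probs = [p / total for p in probs]

# Critical features accumulate higher sampling probability over time.
print(sum(probs[:3]), sum(probs[7:]))
```

Even this crude update rule separates critical from uncritical features, which mirrors the large probability gap (>0.08 vs. <0.01) the paper reports; DDPG replaces the hand-tuned increment with a learned policy.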
Deep learning-based algorithms have been very successful in skeleton-based human activity recognition. Skeleton data contain the 2-D or 3-D coordinates of human body joints. Most existing skeleton-based activity recognition methods focus on designing new deep architectures to learn discriminative features, treating all body joints as equally important for recognition. However, the importance of joints varies as an activity proceeds within a video and across different activities. In this work, we hypothesize that selecting relevant joints prior to recognition can enhance the performance of existing deep learning-based recognition models. We propose a spatial hard attention finding method that removes uninformative and/or misleading joints at each frame. We formulate the joint selection problem as a Markov decision process and employ deep reinforcement learning to train the proposed spatial-attention-aware agent; no extra labels are needed for the agent's training. The agent takes a sequence of features extracted from a skeleton video as input and outputs a sequence of probabilities for the joints. The proposed method can be considered a general framework that can be integrated with existing skeleton-based activity recognition methods to improve their performance. We obtain very competitive activity recognition results on three commonly used human activity recognition datasets.
2023-07-01
IEEE Transactions on Systems, Man, and Cybernetics: Systems (published)
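The hard-attention step described above, turning per-joint probabilities into a binary keep/drop decision per frame, can be sketched as follows. The keep-probabilities here are fixed numbers standing in for the trained RL agent's output, and zeroing dropped joints is one plausible masking choice, not necessarily the paper's.

```python
# Minimal sketch of spatial hard attention over skeleton joints.
# keep_prob stands in for the trained agent's per-joint output;
# zeroing dropped joints is an illustrative masking choice.

n_joints = 5
# One frame: (x, y) coordinates for each joint.
frame = [(0.1, 0.2), (0.4, 0.1), (0.9, 0.8), (0.3, 0.3), (0.7, 0.5)]
# Agent's output: probability that each joint is informative.
keep_prob = [0.9, 0.2, 0.8, 0.1, 0.6]

def apply_hard_attention(joints, probs, threshold=0.5):
    """Zero out joints the agent deems uninformative (hard attention)."""
    return [xy if p >= threshold else (0.0, 0.0)
            for xy, p in zip(joints, probs)]

masked = apply_hard_attention(frame, keep_prob)
print(masked)  # joints 1 and 3 are masked before recognition
```

Because the masking happens before the recognition network sees the frame, any existing skeleton-based recognizer can consume `masked` unchanged, which is what makes the method a plug-in framework rather than a new architecture.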