
Xiaojie Xu

Alumni

Publications

Scalar Invariant Networks with Zero Bias
Just like weights, bias terms are learnable parameters of many popular machine learning models, including neural networks. Biases are thought to enhance the representational power of neural networks, enabling them to solve a variety of tasks in computer vision. However, we argue that biases can be disregarded for some image-related tasks such as image classification, by considering the intrinsic distribution of images in the input space and desired model properties from first principles. Our findings suggest that zero-bias neural networks can perform comparably to biased networks for practical image classification tasks. We demonstrate that zero-bias neural networks possess a valuable property called scalar (multiplication) invariance. This means that the prediction of the network remains unchanged when the contrast of the input image is altered. We extend scalar invariance to more general cases, enabling formal verification of certain convex regions of the input space. Additionally, we prove that zero-bias neural networks are fair in predicting the zero image. Unlike state-of-the-art models that may exhibit bias toward certain labels, zero-bias networks have uniform belief in all labels. We believe dropping bias terms can be considered as a geometric prior in designing neural network architecture for image classification, which shares the spirit of adopting convolutions as the translational invariance prior. The robustness and fairness advantages of zero-bias neural networks may also indicate a promising path towards trustworthy and ethical AI.
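The scalar invariance property follows from a simple identity: for a positive scalar c, ReLU(c·Wx) = c·ReLU(Wx) when there is no bias term, so rescaling the input only rescales the logits and leaves the argmax prediction unchanged. A minimal sketch (not the paper's code) with a hypothetical two-layer bias-free ReLU network and random weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights for a tiny zero-bias MLP: two linear layers, ReLU, no bias.
W1 = rng.standard_normal((16, 8))
W2 = rng.standard_normal((4, 16))

def relu(z):
    return np.maximum(z, 0.0)

def zero_bias_net(x):
    # No "+ b" anywhere, so for c > 0: relu(c * W1 @ x) == c * relu(W1 @ x).
    # The logits scale linearly in c, and the argmax (prediction) is unchanged.
    return W2 @ relu(W1 @ x)

x = rng.standard_normal(8)
for c in (0.1, 1.0, 7.5):  # simulate changing the "contrast" of the input
    logits = zero_bias_net(c * x)
    print(f"scale {c}: prediction {np.argmax(logits)}")
```

Adding any nonzero bias to either layer breaks the identity, which is why the argument applies specifically to zero-bias architectures.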
Towards Reliable Neural Specifications
Nham Le
Zhaoyue Wang
Arie Gurfinkel
We propose a new family of specifications called neural representation as specification, which uses the intrinsic information of neural networks, namely neural activation patterns (NAPs), rather than input data to specify the correctness and/or robustness of neural network predictions. We present a simple statistical approach to mining dominant neural activation patterns. We analyze NAPs from a statistical point of view and find that a single NAP can cover a large number of training and testing data points, whereas ad hoc data-as-specification only covers the given reference data point. To show the effectiveness of discovered NAPs, we formally verify important properties, such as that various types of misclassifications never happen for a given NAP and that there is no ambiguity between different NAPs. We show that by using NAPs we can verify predictions over a significant region of the input space. In this work, we abstract the state of each neuron to only two values, activated and deactivated, by leveraging NAPs. We would like to explore more refined abstractions, such as {(−∞, 0], (0, 1], (1, ∞]}, in future work.
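The statistical mining step described above can be sketched as follows: record each neuron's binary state (activated vs. deactivated) over many inputs of a class, and keep only neurons whose state agrees on at least a threshold fraction of samples. This is a minimal illustration with simulated activations, not the authors' implementation; the shapes, threshold, and helper names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical hidden-layer activations for 100 inputs of one class;
# in practice these come from a trained network's ReLU layer.
acts = rng.standard_normal((100, 32)) + 1.2

def mine_nap(activations, delta=0.9):
    """Keep neurons whose binary state (activated vs. deactivated)
    agrees on at least a delta fraction of the samples."""
    on = (activations > 0).mean(axis=0)  # activation frequency per neuron
    nap = {}
    nap.update({int(i): 1 for i in np.where(on >= delta)[0]})      # reliably activated
    nap.update({int(i): 0 for i in np.where(on <= 1 - delta)[0]})  # reliably deactivated
    return nap

def satisfies(x_act, nap):
    # An input satisfies the NAP if every constrained neuron matches its state.
    return all((x_act[i] > 0) == bool(state) for i, state in nap.items())

nap = mine_nap(acts)
print(len(nap), "of 32 neurons constrained by the mined NAP")
```

Because a NAP constrains only a subset of neurons to a coarse on/off state, many distinct inputs can satisfy the same NAP, which is what lets one NAP cover a large region of the input space rather than a single reference point.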