Mu Nan, Muquan Yu, Weijian Mai, Jacob S. Prince, Hossein Adeli, Rui Zhang, Jiahang Cao, Benjamin Becker, John A. Pyles, Margaret M. Henderson, Chunfeng Song, Nikolaus Kriegeskorte, Michael J. Tarr, Xiaoqing Hu, Andrew F. Luo 7d ago

Meta-learning In-Context Enables Training-Free Cross-Subject Brain Decoding

A meta-learning approach for decoding brain signals without per-subject training.

Xiangru Jian, Hao Xu, Wei Pang, Xinjian Zhao, Chengyu Tao, Qixin Zhang, Xikun Zhang, Chao Zhang, Guanzhi Deng, Alex Xue, Juan Du, Tianshu Yu, Garth Tarr, Linqi Song, Qiuzhuang Sun, Dacheng Tao 7d ago

FORGE: Fine-grained Multimodal Evaluation for Manufacturing Scenarios

A benchmark dataset and evaluation suite for multimodal LLMs in manufacturing scenarios.

Mohamed Ehab (Faculty of Computer Science, October University for Modern Science & Arts, Giza, Egypt), Ali Hamdi (Faculty of Computer Science, October University for Modern Science & Arts, Giza, Egypt), Khaled Shaban (Department of Computer Science and Engineering, Qatar University, Doha, Qatar) 7d ago

CAMO: A Class-Aware Minority-Optimized Ensemble for Robust Language Model Evaluation on Imbalanced Data

CAMO is an ensemble technique for imbalanced text classification that optimizes minority class performance through hierarchical voting, confidence calibration, and uncertainty estimation.
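The summary names three components: minority-optimized voting, confidence calibration, and uncertainty estimation. A minimal NumPy sketch of how such a combination could look for one example; the function name, the class-weighting scheme, and the temperature-based calibration are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def camo_vote(probas, class_weights, temperature=1.0):
    """Minority-weighted soft vote over an ensemble (illustrative sketch).

    probas: (n_models, n_classes) array -- each model's predicted class
            probabilities for a single example.
    class_weights: (n_classes,) array -- larger values boost minority
            classes (hypothetical weighting scheme).
    temperature: sharpens/softens the averaged distribution, standing in
            for confidence calibration.
    Returns (predicted_class, uncertainty), where uncertainty is the
    normalized entropy of the final distribution, in [0, 1].
    """
    avg = probas.mean(axis=0)                      # simple soft vote
    logits = np.log(avg + 1e-12) / temperature     # temperature calibration
    calibrated = np.exp(logits) / np.exp(logits).sum()
    weighted = calibrated * class_weights          # minority-class boost
    weighted /= weighted.sum()
    entropy = -(weighted * np.log(weighted + 1e-12)).sum()
    uncertainty = entropy / np.log(len(weighted))  # normalize entropy
    return int(weighted.argmax()), float(uncertainty)
```

With uniform class weights this reduces to a plain soft vote; raising the minority class's weight can flip borderline predictions toward it, which is the behavior the summary's "minority-optimized" framing suggests.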

Mohammad Siavashi, Mariano Scazzariello, Gerald Q. Maguire Jr., Dejan Kostić, Marco Chiesa 7d ago

Blink: CPU-Free LLM Inference by Delegating the Serving Stack to GPU and SmartNIC

Blink is an LLM serving architecture that removes the host CPU from the critical path by delegating orchestration and token control to GPU and SmartNIC, improving inference performance and datacenter resource utilization.

Giulio Valentino Dalla Riva, Matteo Dalla Riva 7d ago

Intensity Dot Product Graphs

Intensity Dot Product Graphs extend random dot product graphs by drawing latent positions from a Poisson point process.
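For context, the baseline model being extended is the random dot product graph, whose standard definition is sketched below; the paper's Poisson-process intensity variant is only named in the summary, so its exact form is not reproduced here:

```latex
% Random dot product graph (standard definition): each node $i$ carries a
% latent position $x_i \in \mathcal{X} \subset \mathbb{R}^d$, where
% $x^{\top} y \in [0,1]$ for all $x, y \in \mathcal{X}$, and edges form
% independently with
\Pr\bigl[\, i \sim j \,\bigr] \;=\; x_i^{\top} x_j , \qquad i < j .
```

The intensity variant presumably replaces the fixed latent positions $x_1, \dots, x_n$ with the points of a Poisson point process on $\mathcal{X}$, but the summary does not specify the construction.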

Yuanjian Xu, Tianze Sun, Changwei Xu, XinLong Zhao, Jianing Hao, Ran Chen, Yang Liu, Ruijie Xu, Stephen Chen, Guang Zhang 7d ago

Rethinking Data Mixing from the Perspective of Large Language Models

Studies data-mixing strategies for LLM training, questioning how domains are defined, whether human-chosen mixtures align with model needs, and how domain weighting affects generalization.