Angelika Romanou, Mark Ibrahim, Candace Ross, Chantal Shaib, Kerem Oktar, Samuel J. Bell, Anaelia Ovalle, Jesse Dodge, Antoine Bosselut, Koustuv Sinha, Adina Williams 4/7/2026

Brittlebench: Quantifying LLM robustness via prompt sensitivity

A framework for measuring LLM robustness to prompt variations, typos, and alternative phrasings in real-world inputs.
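To make the idea of prompt-sensitivity measurement concrete, here is a minimal sketch (not the paper's benchmark): generate typo perturbations of a prompt by swapping adjacent characters, then report the fraction of variants on which a model's answer changes. The `typo_variants` and `sensitivity` names, and the toy model, are illustrative assumptions.

```python
import random

def typo_variants(prompt, n=5, seed=0):
    """Generate n simple typo perturbations of a prompt by swapping
    adjacent characters (one kind of variation a robustness benchmark
    might test)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        chars = list(prompt)
        i = rng.randrange(len(chars) - 1)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
        out.append("".join(chars))
    return out

def sensitivity(model, prompt, n=5):
    """Fraction of perturbed prompts whose answer differs from the
    original answer; `model` is any prompt -> answer callable."""
    base = model(prompt)
    return sum(model(v) != base for v in typo_variants(prompt, n)) / n

# Toy "model" answering by prompt length parity: adjacent swaps preserve
# length, so its sensitivity to this perturbation is zero.
model = lambda p: len(p) % 2
print(sensitivity(model, "What is the capital of France?"))  # 0.0
```

Real evaluations would of course use richer perturbations (paraphrases, formatting changes) and compare task accuracy, not just raw answer agreement.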

Zijin Gu, Tatiana Likhomanenko, Vimal Thilak, Jason Ramapuram, Navdeep Jaitly 4/7/2026

Path-Constrained Mixture-of-Experts

Research on sparse Mixture-of-Experts architectures proposing an expert-path perspective for understanding how tokens are routed to experts across layers.
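As background for the expert-path idea, the following sketch shows standard top-k MoE gating (not the paper's method) and how stacking per-layer routing decisions yields a per-token "path" of expert ids; all names and sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def topk_route(x, W_router, k=1):
    """Generic top-k gating: score each token against every expert and
    keep the k highest-scoring expert ids per token."""
    logits = x @ W_router                       # (tokens, experts)
    return np.argsort(logits, axis=-1)[:, -k:]  # chosen expert ids

# Toy setup: 4 tokens, hidden size 8, 4 experts per layer, 3 layers.
tokens = rng.normal(size=(4, 8))
routers = [rng.normal(size=(8, 4)) for _ in range(3)]

# A token's "expert path" is the sequence of experts it is routed to
# across layers (here top-1 per layer).
paths = np.stack(
    [topk_route(tokens, W, k=1).squeeze(-1) for W in routers], axis=1
)
print(paths.shape)  # (4, 3): one expert id per token per layer
```

In a real MoE the hidden state is transformed between layers, so routing at layer l depends on earlier experts; this sketch only illustrates the path bookkeeping.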

Yufei Xu, Fanxu Meng, Fan Jiang, Yuxuan Wang, Ruijie Zhou, Zhaohui Wang, Jiexi Wu, Zhixin Pan, Xiaojuan Tang, Wenjie Pei, Tongxuan Liu, Di Yin, Xing Sun, Muhan Zhang 4/7/2026

HISA: Efficient Hierarchical Indexing for Fine-Grained Sparse Attention

HISA: a hierarchical indexing system for efficient sparse attention in LLMs, reducing the indexer bottleneck in token-level sparse attention mechanisms.
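To illustrate the indexer bottleneck in question, here is a generic token-level sparse attention sketch (not HISA itself): a cheap scoring pass first selects the top-k keys per query, and attention then runs only over the selected keys. The indexer step is the dense argsort over all keys, which is exactly what a hierarchical index would aim to accelerate.

```python
import numpy as np

def sparse_attention(q, k_mat, v, keep=4):
    """Token-level sparse attention sketch: score all keys per query
    (the 'indexer' step), then run softmax attention only over the
    top-`keep` selected keys."""
    scores = q @ k_mat.T / np.sqrt(q.shape[-1])   # (Q, K) indexer scores
    idx = np.argsort(scores, axis=-1)[:, -keep:]  # top-keep key ids per query
    out = np.empty_like(q)
    for i, sel in enumerate(idx):
        s = scores[i, sel]
        w = np.exp(s - s.max())
        w /= w.sum()                              # softmax over kept keys only
        out[i] = w @ v[sel]
    return out

rng = np.random.default_rng(1)
q, k_mat, v = (rng.normal(size=(6, 16)) for _ in range(3))
print(sparse_attention(q, k_mat, v, keep=4).shape)  # (6, 16)
```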

Gabriel U. Talasso, Meghdad Kurmanji, Allan M. de Souza, Nicholas D. Lane, Leandro A. Villas 4/7/2026

Task-Centric Personalized Federated Fine-Tuning of Language Models

A federated learning approach for task-centric, personalized fine-tuning of language models on private, distributed data.
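For context, the basic aggregation step in federated fine-tuning is FedAvg; the sketch below shows the standard size-weighted parameter average (the baseline such personalized methods build on, not this paper's specific algorithm).

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Standard FedAvg aggregation: average client model parameters
    weighted by each client's local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(3)
# Three clients each fine-tune the same (toy) parameter vector locally
# on private data, then only the parameters are shared and averaged.
clients = [rng.normal(size=4) for _ in range(3)]
sizes = [100, 50, 50]
global_update = fedavg(clients, sizes)
print(global_update.shape)  # (4,)
```

Personalized variants typically keep some parameters client-local (e.g. task-specific adapter weights) and aggregate only the shared ones.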

Zheng Zhang, Cuong C. Nguyen, David Rosewarne, Kevin Wells, Gustavo Carneiro 4/7/2026

Fatigue-Aware Learning to Defer via Constrained Optimisation

A learning-to-defer method for human-AI cooperation that models human fatigue via constrained optimisation to decide when the AI should defer decisions to a human.
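A toy illustration of the fatigue-aware deferral idea (an assumption-laden sketch, not the paper's constrained-optimisation formulation): defer only when the AI's expected error exceeds the human's, where the assumed human error rate rises with fatigue, so the same hard case may be deferred to a rested human but not to a fatigued one.

```python
def defer_decision(ai_conf, fatigue, defer_cost=0.1):
    """Defer to the human only when the AI's expected error exceeds the
    (fatigue-dependent) human error rate plus a fixed deferral cost.
    The linear fatigue model and all constants are illustrative."""
    ai_err = 1.0 - ai_conf
    human_err = 0.05 + 0.4 * fatigue  # assumed: error grows with fatigue
    return ai_err > human_err + defer_cost

# A rested human gets the hard case; a fatigued one does not.
print(defer_decision(ai_conf=0.6, fatigue=0.1))  # True
print(defer_decision(ai_conf=0.6, fatigue=0.8))  # False
```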

Ken M. Nakanishi 4/7/2026

Screening Is Enough

Introduces the Multiscreen attention mechanism for language models, an alternative to softmax attention that enables absolute relevance scoring in transformers.

Haitham Kanj, Seonho Kim, Kiryung Lee 4/7/2026

Sparse Max-Affine Regression

A sparse gradient descent algorithm for variable selection in convex piecewise-linear (max-affine) regression models.
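To make the setting concrete, a max-affine model is f(x) = max_j (a_j·x + b_j), a convex piecewise-linear function. The sketch below alternates a gradient step on the squared loss with hard-thresholding of each slope vector to its s largest-magnitude entries, a generic iterative-hard-thresholding style of sparse gradient descent; the specific update rule, step size, and data are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

def max_affine(X, A, b):
    """Max-affine model: f(x) = max_j (A[j] @ x + b[j])."""
    return (X @ A.T + b).max(axis=1)

# Hypothetical sparse ground truth: 2 affine pieces whose slope vectors
# use only the first s = 3 of d = 10 coordinates.
d, k, n, s = 10, 2, 500, 3
A_true = np.zeros((k, d))
A_true[:, :s] = rng.normal(size=(k, s))
b_true = rng.normal(size=k)
X = rng.normal(size=(n, d))
y = max_affine(X, A_true, b_true)

A = rng.normal(size=(k, d)) * 0.1
b = np.zeros(k)
lr = 0.05
mse0 = np.mean((max_affine(X, A, b) - y) ** 2)  # loss at initialization
for _ in range(300):
    z = X @ A.T + b
    j = z.argmax(axis=1)              # active piece for each sample
    r = z[np.arange(n), j] - y        # residuals
    for p in range(k):
        m = j == p
        if m.any():                   # gradient step on samples where p is active
            A[p] -= lr * (r[m, None] * X[m]).mean(axis=0)
            b[p] -= lr * r[m].mean()
        keep = np.argsort(np.abs(A[p]))[-s:]
        mask = np.zeros(d, dtype=bool)
        mask[keep] = True
        A[p] *= mask                  # hard-threshold slopes to s entries
mse = np.mean((max_affine(X, A, b) - y) ** 2)
print(mse0, mse)
```

The thresholding step performs the variable selection: after every iteration each piece's slope vector has at most s nonzero coordinates.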