DACS mechanism for multi-agent LLM orchestration, isolating per-agent context via a registry and focus-session modes to prevent context pollution.
GSSA-ViT framework using 3D Gaussian splatting for arbitrary-resolution weather forecasting and downscaling of atmospheric fields.
Unified framework viewing LLM post-training methods (SFT, preference optimization, RL, distillation) through off-policy and on-policy learning perspectives.
Uses deep reinforcement learning to automatically design quantum circuits for variational imaginary time evolution on NISQ devices.
Studies data mixing strategies for LLM training, questioning domain definitions, human-model alignment, and impact of domain weighting on generalization.
Discusses risks of LLM-generated peer reviews and automated editorial processes, proposing RAG-XAI detection framework for identifying machine-generated content.
DSCA method for lifelong vision-language model editing via dynamic subspace concept alignment, preventing degradation from sequential edits.
Decomposes long-context reasoning in LLMs into atomic skills, automatically identifying and improving fundamental capabilities for complex reasoning.
SearchAD large-scale dataset with 423k frames for retrieval of rare images in autonomous driving, drawn from 11 established datasets.
CATMIL method for segmenting small brain structures in MRI using component-adaptive reweighting and lesion-level supervision.
First comprehensive survey of abductive reasoning in LLMs, defining taxonomy and exploring inference of plausible explanations from observations.
PrivFedTalk privacy-aware federated framework for personalized talking-head generation using diffusion models with identity-stable adapters.
LINE uses LLMs iteratively to explain individual neuron concepts in vision models without predefined vocabularies, enabling interpretability of neural networks.
DeepForestSound multi-species acoustic detector for biodiversity monitoring in African tropical forests using semi-supervised learning pipeline.
Training-free open-vocabulary semantic segmentation using global context awareness with pretrained vision and vision-language models without additional training.
Trust-adaptive differential privacy framework for data-driven systems balancing privacy and utility under heterogeneous user trust levels.
Method for 2-bit LLM quantization via optimal codebook initialization, enabling extreme compression for edge deployment with O(1) lookup dequantization.
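The core idea behind codebook-based low-bit quantization can be sketched generically: with 2 bits, every weight maps to one of four codewords, and dequantization is a single table lookup. A minimal sketch follows; the tiny 1-D k-means used here to place codewords is a placeholder, not the paper's optimal initialization scheme.

```python
# Sketch: 2-bit weight quantization with a 4-entry codebook (generic,
# not the paper's method). Each weight maps to one of 2^2 = 4 codewords;
# dequantization is an O(1) table lookup per weight.

def build_codebook(weights, iters=10):
    """Tiny 1-D k-means (k=4) to place codewords; the paper proposes a
    smarter optimal initialization, which this sketch does not implement."""
    lo, hi = min(weights), max(weights)
    code = [lo + (hi - lo) * (i + 0.5) / 4 for i in range(4)]  # uniform init
    for _ in range(iters):
        buckets = [[] for _ in range(4)]
        for w in weights:
            j = min(range(4), key=lambda i: abs(w - code[i]))
            buckets[j].append(w)
        code = [sum(b) / len(b) if b else code[i]
                for i, b in enumerate(buckets)]
    return code

def quantize(weights, code):
    return [min(range(4), key=lambda i: abs(w - code[i])) for w in weights]

def dequantize(indices, code):
    return [code[i] for i in indices]  # O(1) lookup per weight

weights = [-1.2, -0.9, -0.1, 0.05, 0.8, 1.1, -1.0, 0.1]
code = build_codebook(weights)
idx = quantize(weights, code)
recon = dequantize(idx, code)
```

Storing `idx` instead of `weights` costs 2 bits per value plus a 4-entry table, which is what makes extreme edge compression viable.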
Tempo framework compresses long videos for multimodal LLMs by query-aware selection of frames, addressing context limits and lost-in-middle problems.
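The selection step can be illustrated with a minimal sketch: score each frame embedding against the query embedding and keep the top-k, preserving temporal order. This is generic query-aware selection, not Tempo's actual scoring or compression pipeline, and the embeddings are illustrative.

```python
import math

# Sketch of query-aware frame selection: keep the k frames whose
# embeddings are most similar to the query embedding.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def select_frames(frame_embs, query_emb, k):
    ranked = sorted(range(len(frame_embs)),
                    key=lambda i: cosine(frame_embs[i], query_emb),
                    reverse=True)
    return sorted(ranked[:k])  # restore temporal order of chosen frames

frames = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
query = [0.0, 1.0]
chosen = select_frames(frames, query, 2)
```

Dropping low-scoring frames both fits the context window and keeps query-relevant content away from the lost-in-the-middle region.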
Quantum-inspired ARIMA methodology for time series analysis using variational quantum circuits and swap-test-driven autocorrelation.
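The swap-test connection rests on a standard identity: measuring the ancilla of a swap test yields |0⟩ with probability P(0) = 1/2 + 1/2·|⟨a|b⟩|², so the squared overlap of two amplitude-encoded lag vectors is recoverable from that probability. A classical simulation of the statistic (real amplitudes assumed; not the paper's circuit):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def swap_test_p0(a, b):
    """Probability of measuring |0> on the swap-test ancilla:
    P(0) = 1/2 + 1/2 * |<a|b>|^2, computed classically here."""
    a, b = normalize(a), normalize(b)
    overlap = sum(x * y for x, y in zip(a, b))
    return 0.5 + 0.5 * overlap ** 2

def lag_overlap_sq(x, k):
    """Squared normalized (un-centered) lag-k correlation, recovered
    from the swap-test statistic on amplitude-encoded lag vectors."""
    return 2 * swap_test_p0(x[:-k], x[k:]) - 1

x = [1.0, 0.8, 0.6, 0.5, 0.4, 0.3]
r1 = lag_overlap_sq(x, 1)
```

Identical states give P(0) = 1, orthogonal states give 1/2, so the statistic cleanly separates the overlap range.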
Diffusion model for virtual staining in histopathology that preserves cellular structures while translating immunohistochemistry images.
Study showing medical multimodal LLMs underperform traditional deep learning on image classification despite pretraining advantages.
Convolutional neural decoders for quantum error correction to accelerate decoding of quantum low-density parity-check codes.
Transformer-based floor plan generation using differentiable architectural loss functions to optimize room layouts beyond training data patterns.
RL primitive (Dataset Policy Gradient) optimizing synthetic data generators to produce targeted training examples for fine-tuning LLMs on differentiable metrics.
Explainable AI framework for onboard satellite fault detection with semantically annotated encodings for neural anomaly detectors.
RL framework extending verifiable-reward training to general reasoning tasks in LLMs using natural instruction data for causal and temporal understanding.
Unified evaluation platform for prompt injection attacks and defenses, addressing benchmark gaps in comparing robustness across diverse tasks.
Study of language generation under differential privacy constraints, proving privacy can be achieved without qualitative cost for countable language collections.
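As background for the DP setting (not the paper's construction, which is a theoretical result about language generation), the standard tool for privately selecting one item from a discrete set is the exponential mechanism: each candidate is chosen with probability proportional to exp(ε·u/2Δ), where u is its utility and Δ the utility's sensitivity.

```python
import math
import random

# Background sketch: the exponential mechanism for private selection
# from a discrete candidate set. Candidates and utilities are illustrative.

def exponential_mechanism(candidates, utility, eps, sensitivity=1.0,
                          rng=random):
    weights = [math.exp(eps * utility(c) / (2 * sensitivity))
               for c in candidates]
    r = rng.random() * sum(weights)
    acc = 0.0
    for c, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return c
    return candidates[-1]

rng = random.Random(0)
words = ["cat", "dog", "bird"]
u = {"cat": 3.0, "dog": 1.0, "bird": 0.0}
picks = [exponential_mechanism(words, u.get, eps=2.0, rng=rng)
         for _ in range(1000)]
```

Higher ε concentrates mass on high-utility outputs; as ε → 0 the choice approaches uniform, which is the privacy–utility trade-off the paper shows can be avoided qualitatively for countable language collections.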
Analysis of on-policy distillation failure modes in LLM training, identifying length inflation and truncation collapse as destabilizing factors.
Survey of tabular data generation comparing GANs, diffusion models, and LLMs across sample quality, privacy, and controllability dimensions.
Meta-learning approach with uncertainty quantification for limited-data task learning, addressing out-of-distribution scenarios in safety-critical settings.
Comprehensive survey of generative AI covering large language models, architectures, deployment protocols, and real-world applications as of early 2026.
Continuous online learning framework for activity recognition systems addressing model drift and domain shift in long-term deployments.
Privacy-preserving face recognition using information-theoretic Privacy Funnel model with end-to-end trainable representation learning.
Multi-agent LLM framework acting as a generative adversarial network for synthesizing tabular data in low-data regimes, targeting healthcare applications.
Federated learning security method detecting backdoor attacks using data distribution inference to filter poisoned model updates.
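A much simpler baseline conveys the filtering idea: compare each client update's direction to the mean update and drop outliers. This cosine filter is illustrative only; the paper's contribution is inferring client data distributions rather than thresholding directions.

```python
import math

# Simplified sketch of server-side update filtering in federated
# learning: discard client updates pointing away from the mean update.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def filter_updates(updates, threshold=0.0):
    n = len(updates)
    dim = len(updates[0])
    mean = [sum(u[i] for u in updates) / n for i in range(dim)]
    return [u for u in updates if cosine(u, mean) > threshold]

benign = [[1.0, 1.0], [0.9, 1.1], [1.1, 0.9]]
poisoned = [[-1.0, -1.0]]  # backdoor update opposing the benign direction
kept = filter_updates(benign + poisoned)
```

Directional filters fail against subtle backdoors that mimic benign statistics, which is the gap distribution-inference defenses aim to close.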
Causal structure learning method for linear systems (LV-SEM-ME) addressing unobserved variables and measurement error simultaneously.
AdaProb: efficient machine unlearning method using adaptive probability to remove specific data with reduced computational overhead.
RiTTA: text-to-audio generation model that improves modeling of event relations and temporal dynamics in audio synthesis.
OpenGLT: comprehensive benchmark for evaluating graph neural networks on graph-level tasks across multiple domains.
Orthogonal representation learning method for estimating causal quantities from high-dimensional observational data with theoretical guarantees.
SALSA-RL: stability analysis framework for deep reinforcement learning agents to assess safe behavior in continuous action spaces.
Federated search mechanism for retrieval-augmented generation across distributed knowledge sources to reduce LLM hallucinations.
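The merge step of federated retrieval can be sketched generically: query each source, normalize scores per source so heterogeneous scales are comparable, then take a global top-k. Source names and scoring here are illustrative, not the paper's protocol.

```python
# Sketch of federated retrieval for RAG: fan a query out to several
# knowledge sources, min-max normalize each source's scores, merge top-k.

def normalize(results):
    """Min-max normalize (doc, score) pairs within one source."""
    if not results:
        return []
    scores = [s for _, s in results]
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0
    return [(d, (s - lo) / span) for d, s in results]

def federated_search(sources, query, k=3):
    merged = []
    for search in sources:  # each source: query -> [(doc, score)]
        merged.extend(normalize(search(query)))
    merged.sort(key=lambda ds: ds[1], reverse=True)
    return [d for d, _ in merged[:k]]

# Hypothetical sources with incompatible raw score scales:
wiki = lambda q: [("w1", 10.0), ("w2", 2.0)]
pubmed = lambda q: [("p1", 0.9), ("p2", 0.1)]
top = federated_search([wiki, pubmed], "query", k=2)
```

Per-source normalization prevents one source's raw scale from drowning out the others, so grounding evidence can come from any of the distributed stores.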
ShuffleGate: unified gating mechanism for feature selection, model compression, and importance estimation in recommender systems.
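The shuffling intuition has a classical counterpart worth sketching: permutation importance, where shuffling an informative feature column should degrade the model while shuffling an irrelevant one should not. ShuffleGate itself learns gates end-to-end; this sketch only shares the core idea, and the model and data are synthetic.

```python
import random

# Sketch of shuffle-based feature importance (permutation importance):
# importance of feature j = loss increase after shuffling column j.

def mse(model, X, y):
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, rng):
    base = mse(model, X, y)
    scores = []
    for j in range(len(X[0])):
        col = [x[j] for x in X]
        rng.shuffle(col)
        Xp = [x[:j] + [c] + x[j + 1:] for x, c in zip(X, col)]
        scores.append(mse(model, Xp, y) - base)  # loss increase
    return scores

rng = random.Random(0)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [2 * x[0] for x in X]   # only feature 0 matters
model = lambda x: 2 * x[0]  # a model that ignores feature 1
imp = permutation_importance(model, X, y, rng)
```

A learned gate replaces the discrete shuffle with a differentiable relaxation, which is what lets one mechanism serve selection, compression, and importance estimation jointly.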
AEQ-RVAE-ST: recurrent variational autoencoder with progressive training for quasi-periodic time series generation.
Study of adversarial robustness in tabular foundation models like TabPFN and TabICL, examining test-time attacks and in-context defenses.
DiffGradCAM improves class activation maps for CNNs by addressing adversarial vulnerabilities in gradient-based explanation methods.
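For context, the baseline being hardened is vanilla Grad-CAM: channel weights are the spatially averaged gradients of the class score, and the map is the ReLU of the weighted sum of activation channels. A minimal sketch on toy 2x2 activation maps (this is the standard Grad-CAM recipe, not DiffGradCAM's robust variant):

```python
# Sketch of vanilla Grad-CAM on precomputed activations and gradients.
# Shapes: activations/gradients are [channel][row][col] lists.

def grad_cam(activations, gradients):
    h, w = len(activations[0]), len(activations[0][0])
    # Channel weight = global average pool of that channel's gradient.
    weights = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    cam = [[0.0] * w for _ in range(h)]
    for a, wt in zip(activations, weights):
        for i in range(h):
            for j in range(w):
                cam[i][j] += wt * a[i][j]
    return [[max(0.0, v) for v in row] for row in cam]  # ReLU

acts = [[[1.0, 0.0], [0.0, 0.0]],   # channel 0
        [[0.0, 2.0], [0.0, 0.0]]]   # channel 1
grads = [[[1.0, 1.0], [1.0, 1.0]],  # positive evidence
         [[-1.0, -1.0], [-1.0, -1.0]]]  # negative evidence
cam = grad_cam(acts, grads)
```

Because the weights come straight from gradients, small adversarial input perturbations can flip them and scramble the map, which is the vulnerability DiffGradCAM targets.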
BTC-LLM: sub-1-bit quantization framework for LLMs using learnable transformations and binary codebooks for extreme compression.
Data-driven interpolation method for functions on smooth manifolds using Laplace-Beltrami operators and Voronoi tessellations with diffusion processes.
Predicts case suffixes with start/end timestamps in business processes using sequence models, enabling resource capacity planning.