Diffusion model for virtual staining in histopathology that preserves cellular structures while translating immunohistochemistry images.
Study showing medical multimodal LLMs underperform traditional deep learning on image classification despite pretraining advantages.
Convolutional neural decoders accelerating the decoding of quantum low-density parity-check codes for quantum error correction.
Transformer-based floor plan generation using differentiable architectural loss functions to optimize room layouts beyond training data patterns.
RL primitive (Dataset Policy Gradient) optimizing synthetic data generators to produce targeted training examples for fine-tuning LLMs on differentiable metrics.
Explainable AI framework for onboard satellite fault detection with semantically annotated encodings for neural anomaly detectors.
RL framework extending verifiable-reward training to general reasoning tasks in LLMs using natural instruction data for causal and temporal understanding.
Unified evaluation platform for prompt injection attacks and defenses, addressing benchmark gaps in comparing robustness across diverse tasks.
Study of language generation under differential privacy constraints, proving privacy can be achieved without qualitative cost for countable language collections.
Analysis of on-policy distillation failure modes in LLM training, identifying length inflation and truncation collapse as destabilizing factors.
Survey of tabular data generation comparing GANs, diffusion models, and LLMs across sample quality, privacy, and controllability dimensions.
Meta-learning approach with uncertainty quantification for limited-data task learning, addressing out-of-distribution scenarios in safety-critical settings.
Comprehensive survey of generative AI covering large language models, architectures, deployment protocols, and real-world applications as of early 2026.
Continuous online learning framework for activity recognition systems addressing model drift and domain shift in long-term deployments.
Privacy-preserving face recognition using information-theoretic Privacy Funnel model with end-to-end trainable representation learning.
Multi-agent LLM framework acting as generative adversarial network for synthesizing tabular data in low-data regimes, targeting healthcare applications.
Federated learning security method detecting backdoor attacks using data distribution inference to filter poisoned model updates.
Causal structure learning method for linear systems (LV-SEM-ME) addressing unobserved variables and measurement error simultaneously.
AdaProb: efficient machine unlearning method using adaptive probability to remove specific data with reduced computational overhead.
RiTTA: text-to-audio generation model that improves modeling of event relations and temporal dynamics in audio synthesis.
OpenGLT: comprehensive benchmark for evaluating graph neural networks on graph-level tasks across multiple domains.
Orthogonal representation learning method for estimating causal quantities from high-dimensional observational data with theoretical guarantees.
SALSA-RL: stability analysis framework for deep reinforcement learning agents to assess safe behavior in continuous action spaces.
Federated search mechanism for retrieval-augmented generation across distributed knowledge sources to reduce LLM hallucinations.
ShuffleGate: unified gating mechanism for feature selection, model compression, and importance estimation in recommender systems.
AEQ-RVAE-ST: recurrent variational autoencoder with progressive training for quasi-periodic time series generation.
Study of adversarial robustness in tabular foundation models like TabPFN and TabICL, examining test-time attacks and in-context defenses.
DiffGradCAM improves class activation maps for CNNs by addressing adversarial vulnerabilities in gradient-based explanation methods.
BTC-LLM: sub-1-bit quantization framework for LLMs using learnable transformations and binary codebooks for extreme compression.
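The sub-1-bit codebook machinery is specific to BTC-LLM, but the binarization primitive such frameworks build on is standard: replace a weight tensor by a sign pattern times a scale, where the closed-form scale alpha = mean(|W|) minimizes the Frobenius reconstruction error (XNOR-Net-style). A minimal sketch of that primitive only, not BTC-LLM's actual algorithm:

```python
import numpy as np

def binarize(W):
    """Binarize a weight matrix to {-alpha, +alpha}.

    alpha = mean(|W|) is the analytic scale minimizing
    ||W - alpha * sign(W)||_F for a fixed sign pattern.
    """
    alpha = np.mean(np.abs(W))   # per-tensor scale
    B = np.sign(W)               # 1-bit codes
    B[B == 0] = 1.0              # deterministic tie-break for exact zeros
    return alpha * B, B, alpha

W = np.array([[0.3, -0.7], [1.2, -0.1]])
W_hat, B, alpha = binarize(W)
```

Learnable transformations and shared binary codebooks, as in the paper, then push the effective bit-width below the 1 bit per weight this baseline costs.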
Data-driven interpolation method for functions on smooth manifolds using Laplace-Beltrami operators and Voronoi tessellations with diffusion processes.
Sequence models predicting case suffixes with start/end timestamps in business processes to support resource capacity planning.
Investigates Kolmogorov-Arnold Networks as interpretable alternatives to black-box models for clinical tabular data classification.
Neural diffusion method for estimating transfer entropy in time series addressing curse of dimensionality and convergence issues.
Adaptive privacy budget allocation framework for mobile edge crowdsensing balancing privacy, utility, and device overhead.
Applies XAI techniques (Grad-CAM, SHAP) to interpret the PhaseNet deep neural network for microseismic event detection.
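For context, the Grad-CAM combination step this entry relies on is standard: each channel of a convolutional layer's activations is weighted by its global-average-pooled gradient, the weighted maps are summed, and a ReLU keeps only positive class evidence. A minimal NumPy sketch with synthetic activations and gradients (not tied to PhaseNet or seismic data):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Standard Grad-CAM heatmap.

    activations, gradients: (C, H, W) arrays taken at the target
    conv layer (gradients of the class score w.r.t. activations).
    """
    weights = gradients.mean(axis=(1, 2))             # alpha_k, shape (C,)
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum -> (H, W)
    cam = np.maximum(cam, 0)                          # ReLU: positive evidence only
    return cam / cam.max() if cam.max() > 0 else cam  # normalize to [0, 1]

rng = np.random.default_rng(0)
A = rng.random((8, 4, 4))   # stand-in activation maps
G = rng.random((8, 4, 4))   # stand-in gradients of the class score
cam = grad_cam(A, G)
```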
Quantitative bounds analysis for permutation-invariant embeddings using sorting-based projections relevant to graph deep learning.
Theoretical analysis of interactions between chicken swarm optimization-based particle rejuvenation and KLD-adaptive sampling in particle filters.
Generative model unifying the adversarial and flow-based families, offering native one-step and multi-step generation trained via an adversarial objective.
Research on high-dimensional Bayesian optimization showing simple Bayesian linear regression outperforms complex BO methods after geometric transformation.
Neural operator framework for solving PDEs on spherical domains using Green's function formulation preserving rotational geometry.
Case study applying LLMs to structured financial fraud detection data with focus on interpretability and feature analysis.
Comparative evaluation of embedding techniques for financial news sentiment analysis in resource-constrained environments.
First empirical study of machine unlearning in hybrid quantum-classical neural networks with adaptation of classical unlearning methods.
Empirical study of tabular foundation models versus classical ML for healthcare applications under class imbalance in critical care.
Tree-structured advantage redistribution method for group-based RL improving sample efficiency in LLM alignment on reasoning tasks.
Sample-efficient reinforcement learning algorithm for Value-at-Risk constrained optimization with safety guarantees during training.
Benchmark for evaluating multimodal LLMs on multi-criteria route planning reasoning tasks in heterogeneous graphs.
Extension of the TabPFN foundation model to handle multimodal tabular data combining images, text, and structured features.
Theoretical analysis explaining Adam optimizer's empirical advantages over SGD through second-moment normalization properties.
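The second-moment normalization at issue is the division by sqrt(v_hat) in Adam's update, which rescales each coordinate by its running gradient magnitude and makes the effective step roughly scale-invariant. A bare-bones single step using the standard Adam formulas (an illustration of the mechanism, not this paper's analysis):

```python
import numpy as np

def adam_step(theta, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update with bias-corrected moment estimates."""
    m = b1 * m + (1 - b1) * g        # first moment: running mean of gradients
    v = b2 * v + (1 - b2) * g**2     # second moment: running mean of squares
    m_hat = m / (1 - b1**t)          # bias correction (t is 1-indexed)
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta = np.zeros(2)
m, v = np.zeros(2), np.zeros(2)
g = np.array([100.0, 0.01])          # gradient scales differing by 10^4
theta, m, v = adam_step(theta, g, m, v, t=1)
```

Despite the four-orders-of-magnitude spread in gradient scale, both coordinates move by almost exactly lr, since at t=1 the step reduces to lr * g / (|g| + eps). A plain SGD step would instead move the coordinates in proportion to their raw gradients.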
JAX framework for gradient-based training of spiking neural networks using differentiable ODE solving with exact gradients.