Wan 2.7 – AI Video Generator for Text-to-Video and Image-to-Video
Product announcement for Wan 2.7 AI video generator supporting text-to-video and image-to-video with 4K output.
Analysis of AI-generated spam vulnerability reports overwhelming open source maintainers like cURL's Daniel Stenberg, exploring AI's impact on open source security.
Open source tool providing persistent terminal context for AI agents. Maintains directory, environment variables, and nvm state across sessions via MCP protocol.
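The core idea of persistent terminal context can be sketched as a session object that remembers working directory and environment between commands, unlike one-shot subprocess calls. The class name and API below are illustrative, not the tool's actual interface.

```python
import os
import shlex
import subprocess

class PersistentSession:
    """Minimal sketch of a persistent shell session: the working
    directory and environment variables survive between commands.
    (Hypothetical API, not the tool's real MCP interface.)"""

    def __init__(self):
        self.cwd = os.getcwd()
        self.env = dict(os.environ)

    def run(self, command: str) -> str:
        parts = shlex.split(command)
        # Handle state-changing builtins ourselves so they persist.
        if parts and parts[0] == "cd":
            target = parts[1] if len(parts) > 1 else os.path.expanduser("~")
            self.cwd = os.path.abspath(os.path.join(self.cwd, target))
            return ""
        if len(parts) == 2 and parts[0] == "export" and "=" in parts[1]:
            key, value = parts[1].split("=", 1)
            self.env[key] = value
            return ""
        # Everything else runs in the remembered cwd/env.
        result = subprocess.run(
            parts, cwd=self.cwd, env=self.env,
            capture_output=True, text=True,
        )
        return result.stdout
```

A real implementation would also need to persist shell state that only the shell itself can report (e.g. nvm-managed PATH changes), which is why the tool speaks to the agent over MCP rather than spawning fresh shells.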
Headline noting energy and cooling constraints on AI scaling; no substantive content provided.
Discussion post asking whether prompt engineering and LLM-driven development constitute software engineering; community debate on AI-assisted coding.
TurboAgent is an LLM-driven multi-agent framework enabling autonomous end-to-end turbomachinery aerodynamic design through coordinated geometry, prediction, and optimization stages.
Empirical study decomposing LLM-based agent competence to determine which capabilities derive from the language model versus explicit architectural structure.
DOVE benchmark evaluates LLM cultural value alignment using open-ended generation to address limitations of multiple-choice evaluation formats.
Survey synthesizing blockchain and AI integration for securing intelligent networks, covering ledger design, detection, and agentic workflows.
Formal proof that continuous wrapper defenses cannot protect LLMs from all prompt injection attacks, characterizing where every defense must fail.
Empirical study showing 52-88% of chain-of-thought tokens in reasoning models are generated after the answer is already recoverable, revealing the detection-extraction gap phenomenon.
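The detection-extraction gap can be illustrated with a toy probe: find the earliest chain-of-thought prefix from which the final answer is already recoverable, then measure what fraction of tokens follow it. The substring check and the example transcript below are stand-ins for the paper's learned probes.

```python
# Toy chain of thought, tokenized by whitespace. The answer (51)
# first appears well before the reasoning trace ends.
cot = ("We need 17 times 3 . 17 times 3 is 51 . Let me verify : "
       "17 plus 17 plus 17 is 51 . So the answer is 51 .").split()
answer = "51"

# Earliest prefix length after which the answer is recoverable
# (here: a literal token match; real probes are learned).
first = next(i for i, tok in enumerate(cot) if tok == answer) + 1

# Fraction of chain-of-thought tokens generated after that point.
fraction_after = (len(cot) - first) / len(cot)
```

On this toy trace about 63% of the tokens come after the answer is first recoverable, in the middle of the 52-88% range the study reports for real reasoning models.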
WRAP++ enhances LLM pretraining through synthetic data rephrasing that captures cross-document relationships beyond single-document web page rewriting.
Comprehensive survey of generative AI and LLMs covering model families, deployment protocols, and real-world applications as of early 2026.
AdaProb proposes efficient machine unlearning via adaptive probability to address residual information and computational overhead in removing specific data from trained models.
Study of stealthy visual jailbreak attacks on mobile Vision-Language agents that exploit discrepancies between LVLM perception and human vision without user detection.
First empirical study of machine unlearning in variational quantum circuits and quantum-augmented neural networks, adapting classical unlearning methods to quantum settings.
Q-Probe: Agentic multi-scale probing approach for high-resolution image quality assessment using MLLMs with reinforcement learning for human preference alignment.
Prediction Arena: Benchmark evaluating AI model decision-making by enabling autonomous trading on live prediction markets with real capital and objective ground truth.
BLEG: Framework combining LLMs with Graph Neural Networks to enhance fMRI brain network analysis by addressing feature sparsity and domain knowledge limitations.
Decoupled offline-online fault injection framework using LLMs to generate diverse fault scenarios for testing autonomous vision systems on edge devices.
Analysis showing LLMs achieve benchmark gains without broader capability improvements due to benchmark-aligned training data limiting generalization.
Flow Learners framework for learning PDE solutions combining physics-informed constraints with generative AI paradigm for scalable scientific computing.
Empirical study on emotional prompt engineering for LLMs, exploring effects of four distinct emotions at varying intensity levels on model performance and behavior.
Study decomposing the spectral edge lifecycle during grokking, showing gradient-to-weight-decay transition where edge becomes compression axis critical for model generalization.
Analysis of latent geometric structure in LLM representations through emotion processing, investigating how emotional information is encoded and organized.
Cross-city transfer learning using optimal transport for region correspondence and improved prediction in label-scarce cities with incompatible geographic partitions.
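Region correspondence via optimal transport can be sketched in miniature: with uniform region weights and equal region counts, balanced OT reduces to a one-to-one assignment minimizing total transport cost. The toy centroids below are illustrative; real cities have unequal, incompatible partitions that need full (soft) OT couplings.

```python
from itertools import permutations
import math

# Toy region centroids (x, y) for a source and a target city.
source = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
target = [(0.1, 0.9), (0.9, 0.1), (0.0, 0.1)]

def cost(i: int, j: int) -> float:
    # Transport cost: Euclidean distance between region centroids.
    return math.dist(source[i], target[j])

# For toy sizes, solve the assignment exactly by brute force;
# best[i] is the target region matched to source region i.
best = min(
    permutations(range(len(target))),
    key=lambda p: sum(cost(i, j) for i, j in enumerate(p)),
)
```

Each source region is matched to its geometrically nearest unclaimed target region, which is the hard-assignment analogue of the OT coupling used for cross-city feature transfer.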
Five-year SAHELI project applying restless multi-armed bandit algorithms to optimize healthcare worker resource scheduling for maternal and child health program engagement.
SauerkrautLM-Doom-MultiVec: 1.3M parameter specialized model for real-time DOOM gameplay outperforming LLMs 92,000x larger, using ModernBERT with hash embeddings and attention pooling.
Quantum-classical hybrid framework comparing computational paradigms for crime pattern analysis and classification on imbalanced datasets.
Graph foundation model for wireless network resource allocation using deep learning to solve optimization problems more efficiently than classical iterative algorithms.
Event-centric world modeling framework with memory-augmented retrieval for autonomous agent decision-making that balances computational efficiency with physical groundedness.
Physics-residual neural network framework for industrial time series forecasting that combines data-driven learning with physical constraints for non-stationary systems.
Context-aware hybrid attention mechanism for LLMs that dynamically allocates between full and sparse attention based on task demands to reduce computational complexity.
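The full-vs-sparse allocation idea can be sketched with a toy router: short sequences get full attention, long ones a sliding-window sparse pattern. The length-based rule below stands in for the paper's learned, task-dependent allocation.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def hybrid_attention(q, k, v, window=4, full_threshold=8):
    """Toy hybrid attention: full attention for sequences up to
    full_threshold tokens, sliding-window sparse attention beyond
    that. (Illustrative routing rule, not the paper's mechanism.)"""
    n, d = q.shape
    scores = q @ k.T / np.sqrt(d)
    if n > full_threshold:
        # Mask out keys beyond the local window (sparse pattern).
        idx = np.arange(n)
        far = np.abs(idx[:, None] - idx[None, :]) > window
        scores = np.where(far, -np.inf, scores)
    return softmax(scores) @ v
```

The sparse branch drops the cost of the score matrix from O(n^2) to O(n * window) in the entries that matter, which is the complexity saving such hybrid schemes target.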
Curriculum learning strategy for diffusion model training that schedules images by complexity to improve training efficiency without modifying model architecture.
Sparse prompting method for continual learning on edge devices with reduced memory and computational overhead.
Techniques for accelerating autoregressive video generation training while managing error accumulation.
Spectral theory analysis of gradient descent optimization in ReLU networks, explaining why non-convex optimization works.
GAN-based model with domain adaptation for automated layout generation in poster design using new dataset.
Reinforcement learning with reward machines for optimizing sleep scheduling in mobile networks.
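A reward machine is a finite-state machine over abstract events that emits rewards on transitions, letting an RL agent learn from temporally extended specifications. The states, events, and reward values below are illustrative for base-station sleep scheduling, not taken from the paper.

```python
class RewardMachine:
    """Minimal reward machine: transitions maps (state, event)
    to (next_state, reward)."""

    def __init__(self, transitions, initial):
        self.transitions = transitions
        self.state = initial

    def step(self, event: str) -> float:
        self.state, reward = self.transitions[(self.state, event)]
        return reward

# Sleep scheduling sketch: reward sleeping under low load,
# penalize being asleep when high load arrives.
rm = RewardMachine({
    ("awake",  "low_load"):  ("asleep", +1.0),  # good time to sleep
    ("awake",  "high_load"): ("awake",   0.0),  # stay on, no bonus
    ("asleep", "low_load"):  ("asleep", +1.0),  # keep saving energy
    ("asleep", "high_load"): ("awake",  -1.0),  # woken late: penalty
}, initial="awake")
```

The RL agent observes the machine's state alongside the network state, so the reward signal can depend on history (e.g. how long a cell has slept) without hand-crafting a Markovian reward.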
Bayesian optimization methods for efficient black-box optimization in mixed-variable scientific problems.
MUSIC multimodal LLM for multi-subject in-context image generation, addressing subject identity preservation in text-to-image synthesis.
GIRL latent world-model framework combining generative models with information-theoretic control for long-horizon model-based RL.
Replay Suppression Diagnostic protocol for safe RL under delayed harm, addressing environment-level memory for safety.
Constraint-aware heuristics for heterogeneous LLM allocation and serving under latency, accuracy, and budget constraints.
Cluster Attention mechanism for graph transformers improving receptive field while preserving graph-structure inductive biases.
SYN-DIGITS synthetic control framework for calibrating LLM-based digital twin simulations to match real human behavior.
Sparse Sum-of-Squares functional modeling framework for analytical belief propagation in Markov process models.
Stochastic Attention generative framework for synthetic patient generation from small longitudinal clinical cohorts.
LLM training as lossy compression framework explaining how LLMs learn by retaining task-relevant information from training data.