## LLM Papers
Updated on 2025.06.28
Publish Date | Title | Authors | PDF | Code |
---|---|---|---|---|
2025-06-26 | From Web Search towards Agentic Deep Research: Incentivizing Search with Reasoning Agents | Weizhi Zhang et.al. | 2506.18959 | null |
2025-06-26 | Where to find Grokking in LLM Pretraining? Monitor Memorization-to-Generalization without Test | Ziyue Li et.al. | 2506.21551 | null |
2025-06-26 | Enhancing User Engagement in Socially-Driven Dialogue through Interactive LLM Alignments | Jiashuo Wang et.al. | 2506.21497 | null |
2025-06-26 | Double-Checker: Enhancing Reasoning of Slow-Thinking LLMs via Self-Critical Fine-Tuning | Xin Xu et.al. | 2506.21285 | null |
2025-06-26 | HumanOmniV2: From Understanding to Omni-Modal Reasoning with Context | Qize Yang et.al. | 2506.21277 | null |
2025-06-26 | Complexity-aware fine-tuning | Andrey Goncharov et.al. | 2506.21220 | null |
2025-06-26 | Unveiling Causal Reasoning in Large Language Models: Reality or Mirage? | Haoang Chi et.al. | 2506.21215 | null |
2025-06-26 | $T^3$: Multi-level Tree-based Automatic Program Repair with Large Language Models | Quanming Liu et.al. | 2506.21211 | null |
2025-06-26 | MT2-CSD: A New Dataset and Multi-Semantic Knowledge Fusion Method for Conversational Stance Detection | Fuqiang Niu et.al. | 2506.21053 | null |
2025-06-26 | Large Language Models Acing Chartered Accountancy | Jatin Gupta et.al. | 2506.21031 | null |
2025-06-26 | STEP Planner: Constructing cross-hierarchical subgoal tree as an embodied long-horizon task planner | Zhou Tianxing et.al. | 2506.21030 | null |
2025-06-26 | LLM-guided Chemical Process Optimization with a Multi-Agent Approach | Tong Zeng et.al. | 2506.20921 | null |
2025-06-26 | FaSTA$^*$: Fast-Slow Toolpath Agent with Subroutine Mining for Efficient Multi-turn Image Editing | Advait Gupta et.al. | 2506.20911 | null |
2025-06-25 | No Free Lunch: Rethinking Internal Feedback for LLM Reasoning | Yanzhi Zhang et.al. | 2506.17219 | null |
2025-06-25 | Confucius3-Math: A Lightweight High-Performance Reasoning LLM for Chinese K-12 Mathematics Learning | Lixin Wu et.al. | 2506.18330 | null |
2025-06-25 | Thought Anchors: Which LLM Reasoning Steps Matter? | Paul C. Bogdan et.al. | 2506.19143 | null |
2025-06-25 | Semantic Caching for Improving Web Affordability | Hafsa Akbar et.al. | 2506.20420 | null |
2025-06-25 | Breaking the Boundaries of Long-Context LLM Inference: Adaptive KV Management on a Single Commodity GPU | He Sun et.al. | 2506.20187 | null |
2025-06-25 | Inside you are many wolves: Using cognitive models to interpret value trade-offs in LLMs | Sonia K. Murthy et.al. | 2506.20666 | null |
2025-06-25 | The Decrypto Benchmark for Multi-Agent Reasoning and Theory of Mind | Andrei Lupu et.al. | 2506.20664 | null |
2025-06-25 | Memento: Note-Taking for Your Future Self | Chao Wan et.al. | 2506.20642 | null |
2025-06-25 | Video Perception Models for 3D Scene Synthesis | Rui Huang et.al. | 2506.20601 | null |
2025-06-25 | Case-based Reasoning Augmented Large Language Model Framework for Decision Making in Realistic Safety-Critical Driving Scenarios | Wenbin Gan et.al. | 2506.20531 | null |
2025-06-25 | Asymmetric REINFORCE for off-Policy Reinforcement Learning: Balancing positive and negative rewards | Charles Arnal et.al. | 2506.20520 | null |
2025-06-25 | ReCode: Updating Code API Knowledge with Reinforcement Learning | Haoze Wu et.al. | 2506.20495 | null |
2025-06-25 | Generative AI for Vulnerability Detection in 6G Wireless Networks: Advances, Case Study, and Future Directions | Shuo Yang et.al. | 2506.20488 | null |
2025-06-25 | Automatic Demonstration Selection for LLM-based Tabular Data Classification | Shuchu Han et.al. | 2506.20451 | null |
2025-06-25 | An Agentic System for Rare Disease Diagnosis with Traceable Reasoning | Weike Zhao et.al. | 2506.20430 | null |
2025-06-25 | SV-LLM: An Agentic Approach for SoC Security Verification using Large Language Models | Dipayan Saha et.al. | 2506.20415 | null |
2025-06-25 | Tabular Feature Discovery With Reasoning Type Exploration | Sungwon Han et.al. | 2506.20357 | null |
2025-06-25 | Enterprise Large Language Model Evaluation Benchmark | Liya Wang et.al. | 2506.20274 | null |
2025-06-25 | Enhancing Large Language Models through Structured Reasoning | Yubo Dong et.al. | 2506.20241 | null |
2025-06-25 | SEED: A Structural Encoder for Embedding-Driven Decoding in Time Series Prediction with LLMs | Fengze Li et.al. | 2506.20167 | null |
2025-06-25 | A Modular Multitask Reasoning Framework Integrating Spatio-temporal Models and LLMs | Kethmi Hirushini Hettige et.al. | 2506.20073 | null |
2025-06-25 | Omniwise: Predicting GPU Kernels Performance with LLMs | Zixian Wang et.al. | 2506.20886 | null |
2025-06-25 | Uncovering Hidden Violent Tendencies in LLMs: A Demographic Analysis via Behavioral Vignettes | Quintin Myers et.al. | 2506.20822 | null |
2025-06-25 | MultiFinRAG: An Optimized Multimodal Retrieval-Augmented Generation (RAG) Framework for Financial Question Answering | Chinmay Gondhalekar et.al. | 2506.20821 | null |
2025-06-25 | Dynamic Context-Aware Prompt Recommendation for Domain-Specific AI Applications | Xinye Tang et.al. | 2506.20815 | null |
2025-06-25 | Towards Probabilistic Question Answering Over Tabular Data | Chen Shen et.al. | 2506.20747 | null |
2025-06-25 | Test-time Scaling Techniques in Theoretical Physics – A Comparison of Methods on the TPBench Dataset | Zhiqi Gao et.al. | 2506.20729 | null |
2025-06-24 | ReDit: Reward Dithering for Improved LLM Policy Optimization | Chenxing Wei et.al. | 2506.18631 | null |
2025-06-24 | Understanding Reasoning in Thinking Language Models via Steering Vectors | Constantin Venhoff et.al. | 2506.18167 | null |
2025-06-24 | KAG-Thinker: Interactive Thinking and Deep Reasoning in LLMs via Knowledge-Augmented Generation | Dalong Zhang et.al. | 2506.17728 | null |
2025-06-24 | AnTKV: Anchor Token-Aware Sub-Bit Vector Quantization for KV Cache in Large Language Models | Zeyu Li et.al. | 2506.19505 | null |
2025-06-24 | Mem4Nav: Boosting Vision-and-Language Navigation in Urban Environments with a Hierarchical Spatial-Cognition Long-Short Memory System | Lixuan He et.al. | 2506.19433 | null |
2025-06-24 | JoyAgents-R1: Joint Evolution Dynamics for Versatile Multi-LLM Agents with Reinforcement Learning | Ai Han et.al. | 2506.19846 | null |
2025-06-24 | MAM: Modular Multi-Agent Framework for Multi-Modal Medical Diagnosis via Role-Specialized Collaboration | Yucheng Zhou et.al. | 2506.19835 | null |
2025-06-24 | KnowRL: Exploring Knowledgeable Reinforcement Learning for Factuality | Baochang Ren et.al. | 2506.19807 | null |
2025-06-24 | KnowML: Improving Generalization of ML-NIDS with Attack Knowledge Graphs | Xin Fan Guo et.al. | 2506.19802 | null |
2025-06-24 | Why Do Open-Source LLMs Struggle with Data Analysis? A Systematic Empirical Study | Yuqi Zhu et.al. | 2506.19794 | null |
2025-06-24 | Automatic Prompt Optimization for Knowledge Graph Construction: Insights from an Empirical Study | Nandana Mihindukulasooriya et.al. | 2506.19773 | null |
2025-06-24 | SRFT: A Single-Stage Method with Supervised and Reinforcement Fine-Tuning for Reasoning | Yuqian Fu et.al. | 2506.19767 | null |
2025-06-24 | Breaking Barriers: Do Reinforcement Post Training Gains Transfer To Unseen Domains? | Chuxuan Hu et.al. | 2506.19733 | null |
2025-06-24 | ECCoT: A Framework for Enhancing Effective Cognition via Chain of Thought in Large Language Model | Zhenke Duan et.al. | 2506.19599 | null |
2025-06-24 | KnowMap: Efficient Knowledge-Driven Task Adaptation for LLMs | Kelin Fu et.al. | 2506.19527 | null |
2025-06-24 | Commonsense Generation and Evaluation for Dialogue Systems using Large Language Models | Marcos Estecha-Garitagoitia et.al. | 2506.19483 | null |
2025-06-24 | Can Large Language Models Capture Human Annotator Disagreements? | Jingwei Ni et.al. | 2506.19467 | null |
2025-06-24 | KunLunBaizeRAG: Reinforcement Learning Driven Inference Performance Leap for Large Language Models | Cheng Li et.al. | 2506.19466 | null |
2025-06-24 | RecLLM-R1: A Two-Stage Training Paradigm with Reinforcement Learning and Chain-of-Thought | Yu Xie et.al. | 2506.19235 | null |
2025-06-24 | Augmenting Multi-Agent Communication with State Delta Trajectory | Yichen Tang et.al. | 2506.19209 | null |
2025-06-24 | Persona-Assigned Large Language Models Exhibit Human-Like Motivated Reasoning | Saloni Dash et.al. | 2506.20020 | null |
2025-06-24 | Inference Scaled GraphRAG: Improving Multi Hop Question Answering on Knowledge Graphs | Travis Thompson et.al. | 2506.19967 | null |
2025-06-24 | Prover Agent: An Agent-based Framework for Formal Mathematical Proofs | Kaito Baba et.al. | 2506.19923 | null |
2025-06-23 | RAPID: Long-Context Inference with Retrieval-Augmented Speculative Decoding | Guanzheng Chen et.al. | 2502.20330 | link |
2025-06-23 | RealSR-R1: Reinforcement Learning for Real-World Image Super-Resolution with Vision-Language Chain-of-Thought | Junbo Qiao et.al. | 2506.16796 | link |
2025-06-23 | SLR: An Automated Synthesis Framework for Scalable Logical Reasoning | Lukas Helff et.al. | 2506.15787 | null |
2025-06-23 | CommVQ: Commutative Vector Quantization for KV Cache Compression | Junyan Li et.al. | 2506.18879 | null |
2025-06-23 | ReasonFlux-PRM: Trajectory-Aware PRMs for Long Chain-of-Thought Reasoning in LLMs | Jiaru Zou et.al. | 2506.18896 | null |
2025-06-23 | OMEGA: Can LLMs Reason Outside the Box in Math? Evaluating Exploratory, Compositional, and Transformative Generalization | Yiyou Sun et.al. | 2506.18880 | null |
2025-06-23 | LongWriter-Zero: Mastering Ultra-Long Text Generation via Reinforcement Learning | Yuhao Wu et.al. | 2506.18841 | null |
2025-06-23 | Understanding Software Engineering Agents: A Study of Thought-Action-Result Trajectories | Islem Bouzenia et.al. | 2506.18824 | null |
2025-06-23 | Existing LLMs Are Not Self-Consistent For Simple Tasks | Zhenru Lin et.al. | 2506.18781 | null |
2025-06-23 | Programming by Backprop: LLMs Acquire Reusable Algorithmic Abstractions During Code Training | Jonathan Cook et.al. | 2506.18777 | null |
2025-06-23 | MedTVT-R1: A Multimodal LLM Empowering Medical Reasoning and Diagnosis | Yuting Zhang et.al. | 2506.18512 | null |
2025-06-23 | Comparative Evaluation of ChatGPT and DeepSeek Across Key NLP Tasks: Strengths, Weaknesses, and Domain-Specific Performance | Wael Etaiwi et.al. | 2506.18501 | null |
2025-06-23 | MeRF: Motivation-enhanced Reinforcement Finetuning for Large Reasoning Models | Junjie Zhang et.al. | 2506.18485 | null |
2025-06-23 | TReB: A Comprehensive Benchmark for Evaluating Table Reasoning Capabilities of Large Language Models | Ce Li et.al. | 2506.18421 | null |
2025-06-23 | Evaluating Causal Explanation in Medical Reports with LLM-Based and Human-Aligned Metrics | Yousang Cho et.al. | 2506.18387 | null |
2025-06-23 | LOGICPO: Efficient Translation of NL-based Logical Problems to FOL using LLMs and Preference Optimization | Koushik Viswanadha et.al. | 2506.18383 | null |
2025-06-23 | Dynamic Knowledge Exchange and Dual-diversity Review: Concisely Unleashing the Potential of a Multi-Agent Research Team | Weilun Yu et.al. | 2506.18348 | null |
2025-06-23 | Less Data Less Tokens: Multilingual Unification Learning for Efficient Test-Time Reasoning in LLMs | Kang Chen et.al. | 2506.18341 | null |
2025-06-23 | TranslationCorrect: A Unified Framework for Machine Translation Post-Editing with Predictive Error Assistance | Syed Mekael Wasti et.al. | 2506.18337 | null |
2025-06-23 | LLM-Integrated Digital Twins for Hierarchical Resource Allocation in 6G Networks | Majumder Haider et.al. | 2506.18293 | null |
2025-06-23 | RLPR: Extrapolating RLVR to General Domains without Verifiers | Tianyu Yu et.al. | 2506.18254 | null |
2025-06-23 | Distilling Tool Knowledge into Language Models via Back-Translated Traces | Xingyue Huang et.al. | 2506.19171 | null |
2025-06-23 | Command-V: Pasting LLM Behaviors via Activation Profiles | Barry Wang et.al. | 2506.19140 | null |
2025-06-23 | Human-Aligned Faithfulness in Toxicity Explanations of LLMs | Ramaravind K. Mothilal et.al. | 2506.19113 | null |
2025-06-23 | Baba is LLM: Reasoning in a Game with Dynamic Rules | Fien van Wetten et.al. | 2506.19095 | null |
2025-06-23 | Language Models Might Not Understand You: Evaluating Theory of Mind via Story Prompting | Nathaniel Getachew et.al. | 2506.19089 | null |
2025-06-23 | MFTCXplain: A Multilingual Benchmark Dataset for Evaluating the Moral Reasoning of LLMs through Hate Speech Multi-hop Explanation | Jackson Trager et.al. | 2506.19073 | null |
2025-06-23 | Mirage of Mastery: Memorization Tricks LLMs into Artificially Inflated Self-Knowledge | Sahil Kale et.al. | 2506.18998 | null |
2025-06-23 | SWE-SQL: Illuminating LLM Pathways to Solve User SQL Issues in Real-World Applications | Jinyang Li et.al. | 2506.18951 | null |
2025-06-22 | Integrating LLMs and Digital Twins for Adaptive Multi-Robot Task Allocation in Construction | Min Deng et.al. | 2506.18178 | null |
2025-06-22 | Programming Quantum Computers with Large Language Models | Elena R. Henderson et.al. | 2506.18125 | null |
2025-06-22 | Mental Health Equity in LLMs: Leveraging Multi-Hop Question Answering to Detect Amplified and Silenced Perspectives | Batool Haider et.al. | 2506.18116 | null |
2025-06-22 | InspireDebate: Multi-Dimensional Subjective-Objective Evaluation-Guided Reasoning and Optimization for Debating | Fuyu Wang et.al. | 2506.18102 | null |
2025-06-22 | Deep Research Agents: A Systematic Examination And Roadmap | Yuxuan Huang et.al. | 2506.18096 | null |
2025-06-22 | SegChange-R1: Augmented Reasoning for Remote Sensing Change Detection via Large Language Models | Fei Zhou et.al. | 2506.17944 | null |
2025-06-22 | Evolving Prompts In-Context: An Open-ended, Self-replicating Perspective | Jianyu Wang et.al. | 2506.17930 | null |
2025-06-22 | Leveraging Large Language Model for Intelligent Log Processing and Autonomous Debugging in Cloud AI Platforms | Cheng Ji et.al. | 2506.17900 | null |
2025-06-22 | How Alignment Shrinks the Generative Horizon | Chenghao Yang et.al. | 2506.17871 | null |
2025-06-21 | Bayesian Social Deduction with Graph-Informed Language Models | Shahab Rahimirad et.al. | 2506.17788 | null |
2025-06-21 | PAGENT: Learning to Patch Software Engineering Agents | Haoran Xue et.al. | 2506.17772 | null |
2025-06-21 | Towards a Unified Textual Graph Framework for Spectral Reasoning via Physical and Chemical Information Fusion | Jiheng Liang et.al. | 2506.17761 | null |
2025-06-21 | Resource-Friendly Dynamic Enhancement Chain for Multi-Hop Question Answering | Binquan Ji et.al. | 2506.17692 | null |
2025-06-21 | Measuring and Augmenting Large Language Models for Solving Capture-the-Flag Challenges | Zimo Ji et.al. | 2506.17644 | null |
2025-06-21 | Answer-Centric or Reasoning-Driven? Uncovering the Latent Memory Anchor in LLMs | Yang Wu et.al. | 2506.17630 | null |
2025-06-21 | CLiViS: Unleashing Cognitive Map through Linguistic-Visual Synergy for Embodied Visual Reasoning | Kailing Li et.al. | 2506.17629 | null |
2025-06-21 | Scene-R1: Video-Grounded Large Language Models for 3D Scene Reasoning without 3D Annotations | Zhihao Yuan et.al. | 2506.17545 | null |
2025-06-21 | DuaShepherd: Integrating Stepwise Correctness and Potential Rewards for Mathematical Reasoning | Yuanhao Wu et.al. | 2506.17533 | null |
2025-06-21 | Do LLMs Know When to Flip a Coin? Strategic Randomization through Reasoning and Experience | Lingyu Yang et.al. | 2506.18928 | null |
2025-06-20 | Domain Specific Benchmarks for Evaluating Multimodal Large Language Models | Khizar Anjum et.al. | 2506.12958 | null |
2025-06-20 | Towards AI Search Paradigm | Yuchen Li et.al. | 2506.17188 | null |
2025-06-20 | When Can Model-Free Reinforcement Learning be Enough for Thinking? | Josiah P. Hanna et.al. | 2506.17124 | null |
2025-06-20 | Towards Advanced Mathematical Reasoning for LLMs via First-Order Logic Theorem Proving | Chuxue Cao et.al. | 2506.17104 | null |
2025-06-20 | Chain-of-Thought Prompting Obscures Hallucination Cues in Large Language Models: An Empirical Evaluation | Jiahao Cheng et.al. | 2506.17088 | null |
2025-06-20 | Tower+: Bridging Generality and Translation Specialization in Multilingual LLMs | Ricardo Rei et.al. | 2506.17080 | null |
2025-06-20 | From Concepts to Components: Concept-Agnostic Attention Module Discovery in Transformers | Jingtong Su et.al. | 2506.17052 | null |
2025-06-20 | Latent Concept Disentanglement in Transformer-based Language Models | Guan Zhe Hong et.al. | 2506.16975 | null |
2025-06-20 | LaVi: Efficient Large Vision-Language Models via Internal Feature Modulation | Tongtian Yue et.al. | 2506.16691 | null |
2025-06-20 | Distilling On-device Language Models for Robot Planning with Minimal Human Intervention | Zachary Ravichandran et.al. | 2506.17486 | null |
2025-06-20 | Aha Moment Revisited: Are VLMs Truly Capable of Self Verification in Inference-time Scaling? | Mingyuan Wu et.al. | 2506.17417 | null |
2025-06-19 | Serving Large Language Models on Huawei CloudMatrix384 | Pengfei Zuo et.al. | 2506.12708 | null |
2025-06-19 | KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider | Jiahao Wang et.al. | 2506.02634 | link |
2025-06-19 | MultiFinBen: A Multilingual, Multimodal, and Difficulty-Aware Benchmark for Financial LLM Evaluation | Xueqing Peng et.al. | 2506.14028 | null |
2025-06-19 | LazyEviction: Lagged KV Eviction with Attention Pattern Observation for Efficient Long Reasoning | Haoyue Zhang et.al. | 2506.15969 | null |
2025-06-19 | SemAgent: A Semantics Aware Program Repair Agent | Anvith Pabba et.al. | 2506.16650 | null |
2025-06-19 | LLM-based Satisfiability Checking of String Requirements by Consistent Data and Checker Generation | Boqi Chen et.al. | 2506.16639 | null |
2025-06-19 | Robust Reward Modeling via Causal Rubrics | Pragya Srivastava et.al. | 2506.16507 | null |
2025-06-19 | SparseLoRA: Accelerating LLM Fine-Tuning with Contextual Sparsity | Samir Khaki et.al. | 2506.16500 | null |
2025-06-19 | ML-Master: Towards AI-for-AI via Integration of Exploration and Reasoning | Zexi Liu et.al. | 2506.16499 | null |
2025-06-19 | Grounding Language Models with Semantic Digital Twins for Robotic Planning | Mehreen Naeem et.al. | 2506.16493 | null |
2025-06-19 | How Far Can Off-the-Shelf Multimodal Large Language Models Go in Online Episodic Memory Question Answering? | Giuseppe Lando et.al. | 2506.16450 | null |
2025-06-19 | Drag-and-Drop LLMs: Zero-Shot Prompt-to-Weights | Zhiyuan Liang et.al. | 2506.16406 | null |
2025-06-19 | TrajSceneLLM: A Multimodal Perspective on Semantic GPS Trajectory Analysis | Chunhou Ji et.al. | 2506.16401 | link |
2025-06-19 | OJBench: A Competition Level Code Benchmark For Large Language Models | Zhexu Wang et.al. | 2506.16395 | null |
2025-06-19 | From LLM-anation to LLM-orchestrator: Coordinating Small Models for Data Labeling | Yao Lu et.al. | 2506.16393 | null |
2025-06-19 | RiOT: Efficient Prompt Refinement with Residual Optimization Tree | Chenyi Zhou et.al. | 2506.16389 | link |
2025-06-19 | Large Language Models in Argument Mining: A Survey | Hao Li et.al. | 2506.16383 | null |
2025-06-19 | SHREC and PHEONA: Using Large Language Models to Advance Next-Generation Computational Phenotyping | Sarah Pungitore et.al. | 2506.16359 | null |
2025-06-19 | Explainable Rule Application via Structured Prompting: A Neural-Symbolic Approach | Albert Sadowski et.al. | 2506.16335 | link |
2025-06-19 | SGIC: A Self-Guided Iterative Calibration Framework for RAG | Guanhua Chen et.al. | 2506.16172 | null |
2025-06-19 | Under the Shadow of Babel: How Language Shapes Reasoning in LLMs | Chenxi Wang et.al. | 2506.16151 | null |
2025-06-19 | GRPO-CARE: Consistency-Aware Reinforcement Learning for Multimodal Reasoning | Yi Chen et.al. | 2506.16141 | link |
2025-06-19 | Seeing is Fixing: Cross-Modal Reasoning with Multimodal LLMs for Visual Software Issue Fixing | Kai Huang et.al. | 2506.16136 | null |
2025-06-19 | AutoV: Learning to Retrieve Visual Prompt for Large Vision-Language Models | Yuan Zhang et.al. | 2506.16112 | null |
2025-06-19 | Human-Centered Shared Autonomy for Motor Planning, Learning, and Control Applications | MH Farhadi et.al. | 2506.16044 | null |
2025-06-19 | DynScaling: Efficient Verifier-free Inference Scaling via Dynamic and Integrated Sampling | Fei Wang et.al. | 2506.16043 | null |
2025-06-19 | SimuPanel: A Novel Immersive Multi-Agent System to Simulate Interactive Expert Panel Discussion | Xiangyang He et.al. | 2506.16010 | null |
2025-06-19 | Privacy-Preserving LLM Interaction with Socratic Chain-of-Thought Reasoning and Homomorphically Encrypted Vector Databases | Yubeen Bae et.al. | 2506.17336 | link |
2025-06-19 | LMR-BENCH: Evaluating LLM Agent’s Ability on Reproducing Language Modeling Research | Shuo Yan et.al. | 2506.17335 | null |
2025-06-19 | Large Language Models for Spreadsheets: Benchmarking Progress and Evaluating Performance with FLARE | Simon Thorne et.al. | 2506.17330 | null |
2025-06-18 | Medha: Efficiently Serving Multi-Million Context Length LLM Inference Requests Without Approximations | Amey Agrawal et.al. | 2409.17264 | null |
2025-06-18 | Ring-lite: Scalable Reasoning via C3PO-Stabilized Reinforcement Learning for LLMs | Ling Team et.al. | 2506.14731 | null |
2025-06-18 | AIn’t Nothing But a Survey? Using Large Language Models for Coding German Open-Ended Survey Responses on Survey Motivation | Leah von der Heyde et.al. | 2506.14634 | null |
2025-06-18 | Probabilistic Aggregation and Targeted Embedding Optimization for Collective Moral Reasoning in Large Language Models | Chenchen Yuan et.al. | 2506.14625 | link |
2025-06-18 | eLLM: Elastic Memory Management Framework for Efficient LLM Serving | Jiale Xu et.al. | 2506.15155 | null |
2025-06-18 | CC-LEARN: Cohort-based Consistency Learning | Xiao Ye et.al. | 2506.15662 | null |
2025-06-18 | Revisiting Compositional Generalization Capability of Large Language Models Considering Instruction Following Ability | Yusuke Sakai et.al. | 2506.15629 | null |
2025-06-18 | Managing Complex Failure Analysis Workflows with LLM-based Reasoning and Acting Agents | Aline Dobrovsky et.al. | 2506.15567 | null |
2025-06-18 | Lessons from Training Grounded LLMs with Verifiable Rewards | Shang Hong Sim et.al. | 2506.15522 | null |
2025-06-18 | Optimizing Web-Based AI Query Retrieval with GPT Integration in LangChain A CoT-Enhanced Prompt Engineering Approach | Wenqi Guan et.al. | 2506.15512 | null |
2025-06-18 | SPARE: Single-Pass Annotation with Reference-Guided Evaluation for Automatic Process Supervision and Reward Modelling | Md Imbesat Hassan Rizvi et.al. | 2506.15498 | link |
2025-06-18 | RE-IMAGINE: Symbolic Benchmark Synthesis for Reasoning Evaluation | Xinnuo Xu et.al. | 2506.15455 | null |
2025-06-18 | AgentGroupChat-V2: Divide-and-Conquer Is What LLM-Based Multi-Agent System Need | Zhouhong Gu et.al. | 2506.15451 | link |
2025-06-18 | DeVisE: Behavioral Testing of Medical Large Language Models | Camila Zurdo Tagliabue et.al. | 2506.15339 | null |
2025-06-18 | Cohort Discovery: A Survey on LLM-Assisted Clinical Trial Recruitment | Shrestha Ghosh et.al. | 2506.15301 | null |
2025-06-18 | MinosEval: Distinguishing Factoid and Non-Factoid for Tailored Open-Ended QA Evaluation with LLMs | Yongqi Fan et.al. | 2506.15215 | link |
2025-06-18 | ProtoReasoning: Prototypes as the Foundation for Generalizable Reasoning in LLMs | Feng He et.al. | 2506.15211 | null |
2025-06-18 | Learning-Time Encoding Shapes Unlearning in LLMs | Ruihan Wu et.al. | 2506.15076 | link |
2025-06-18 | HEAL: An Empirical Study on Hallucinations in Embodied Agents Driven by Large Language Models | Trishna Chakraborty et.al. | 2506.15065 | null |
2025-06-18 | Truncated Proximal Policy Optimization | Tiantian Fan et.al. | 2506.15050 | null |
2025-06-18 | Language Models can perform Single-Utterance Self-Correction of Perturbed Reasoning | Sam Silver et.al. | 2506.15894 | null |
2025-06-18 | Fractional Reasoning via Latent Steering Vectors Improves Inference Time Compute | Sheng Liu et.al. | 2506.15882 | null |
2025-06-18 | MEM1: Learning to Synergize Memory and Reasoning for Efficient Long-Horizon Agents | Zijian Zhou et.al. | 2506.15841 | null |
2025-06-18 | Context Matters! Relaxing Goals with LLMs for Feasible 3D Scene Planning | Emanuele Musumeci et.al. | 2506.15828 | null |
2025-06-18 | Veracity: An Open-Source AI Fact-Checking System | Taylor Lynn Curtis et.al. | 2506.15794 | null |
2025-06-18 | ETrace: Event-Driven Vulnerability Detection in Smart Contracts via LLM-Based Trace Analysis | Chenyang Peng et.al. | 2506.15790 | null |
2025-06-17 | Unified Software Engineering agent as AI Software Engineer | Leonhard Applis et.al. | 2506.14683 | null |
2025-06-17 | Massive Supervised Fine-tuning Experiments Reveal How Data, Layer, and Training Factors Shape LLM Alignment Quality | Yuto Harada et.al. | 2506.14681 | null |
2025-06-17 | Revisiting Chain-of-Thought Prompting: Zero-shot Can Be Stronger than Few-shot | Xiang Cheng et.al. | 2506.14641 | null |
2025-06-17 | NetRoller: Interfacing General and Specialized Models for End-to-End Autonomous Driving | Ren Xin et.al. | 2506.14589 | link |
2025-06-17 | Automatic Qiskit Code Refactoring Using Large Language Models | José Manuel Suárez et.al. | 2506.14535 | null |
2025-06-17 | M2BeamLLM: Multimodal Sensing-empowered mmWave Beam Prediction with Large Language Models | Can Zheng et.al. | 2506.14532 | null |
2025-06-17 | SIRI-Bench: Challenging VLMs’ Spatial Intelligence through Complex Reasoning Tasks | Zijian Song et.al. | 2506.14512 | null |
2025-06-17 | LLM-Powered Swarms: A New Frontier or a Conceptual Stretch? | Muhammad Atta Ur Rahman et.al. | 2506.14496 | null |
2025-06-17 | How Far Can LLMs Improve from Experience? Measuring Test-Time Learning Ability in LLMs with Human Comparison | Jiayin Wang et.al. | 2506.14448 | null |
2025-06-17 | Excessive Reasoning Attack on Reasoning LLMs | Wai Man Si et.al. | 2506.14374 | null |
2025-06-17 | ELLIS Alicante at CQs-Gen 2025: Winning the critical thinking questions shared task: LLM-based question generation and selection | Lucile Favero et.al. | 2506.14371 | null |
2025-06-17 | A Vision for Geo-Temporal Deep Research Systems: Towards Comprehensive, Transparent, and Reproducible Geo-Temporal Information Synthesis | Bruno Martins et.al. | 2506.14345 | null |
2025-06-17 | ADRD: LLM-Driven Autonomous Driving Based on Rule-based Decision Systems | Fanzhi Zeng et.al. | 2506.14299 | null |
2025-06-17 | Large Language Model Empowered Design of Fluid Antenna Systems: Challenges, Frameworks, and Case Studies for 6G | Chao Wang et.al. | 2506.14288 | null |
2025-06-17 | Improving LoRA with Variational Learning | Bai Cong et.al. | 2506.14280 | null |
2025-06-17 | Don’t throw the baby out with the bathwater: How and why deep learning for ARC | Jack Cole et.al. | 2506.14276 | null |
2025-06-17 | Re-Initialization Token Learning for Tool-Augmented Large Language Models | Chenghao Li et.al. | 2506.14248 | null |
2025-06-17 | Reinforcement Learning with Verifiable Rewards Implicitly Incentivizes Correct Reasoning in Base LLMs | Xumeng Wen et.al. | 2506.14245 | null |
2025-06-17 | Causes in neuron diagrams, and testing causal reasoning in Large Language Models. A glimpse of the future of philosophy? | Louis Vervoort et.al. | 2506.14239 | null |
2025-06-17 | Xolver: Multi-Agent Reasoning with Holistic Experience Learning Just Like an Olympiad Team | Md Tanzib Hosain et.al. | 2506.14234 | null |
2025-06-17 | MIST: Towards Multi-dimensional Implicit Bias and Stereotype Evaluation of LLMs via Theory of Mind | Yanlin Li et.al. | 2506.14161 | null |
2025-06-17 | S$^4$C: Speculative Sampling with Syntactic and Semantic Coherence for Efficient Inference of Large Language Models | Tao He et.al. | 2506.14158 | null |
2025-06-17 | InsertRank: LLMs can reason over BM25 scores to Improve Listwise Reranking | Rahul Seetharaman et.al. | 2506.14086 | null |
2025-06-17 | AI-Facilitated Analysis of Abstracts and Conclusions: Flagging Unsubstantiated Claims and Ambiguous Pronouns | Evgeny Markhasin et.al. | 2506.13172 | null |
2025-06-17 | AgentOrchestra: A Hierarchical Multi-Agent Framework for General-Purpose Task Solving | Wentao Zhang et.al. | 2506.12508 | null |
2025-06-17 | LongSpec: Long-Context Lossless Speculative Decoding with Efficient Drafting and Verification | Penghui Yang et.al. | 2502.17421 | link |
2025-06-17 | Cost-Efficient Serving of LLM Agents via Test-Time Plan Caching | Qizheng Zhang et.al. | 2506.14852 | null |
2025-06-17 | Revisiting Reinforcement Learning for LLM Reasoning from A Cross-Domain Perspective | Zhoujun Cheng et.al. | 2506.14965 | link |
2025-06-17 | Structured Moral Reasoning in Language Models: A Value-Grounded Evaluation Framework | Mohna Chakraborty et.al. | 2506.14948 | null |
2025-06-17 | CALM: Contextual Analog Logic with Multimodality | Maxwell J. Jacobson et.al. | 2506.14936 | null |
2025-06-17 | MDBench: A Synthetic Multi-Document Reasoning Benchmark Generated with Knowledge Guidance | Joseph J. Peper et.al. | 2506.14927 | null |
2025-06-16 | Steering LLM Thinking with Budget Guidance | Junyan Li et.al. | 2506.13752 | link |
2025-06-16 | Evaluating Large Language Models for Phishing Detection, Self-Consistency, Faithfulness, and Explainability | Shova Kuikel et.al. | 2506.13746 | link |
2025-06-16 | TimeMaster: Training Time-Series Multimodal LLMs to Reason via Reinforcement Learning | Junru Zhang et.al. | 2506.13705 | null |
2025-06-16 | Lost in the Mix: Evaluating LLM Understanding of Code-Switched Text | Amr Mohamed et.al. | 2506.14012 | null |
2025-06-16 | Arctic Long Sequence Training: Scalable And Efficient Training For Multi-Million Token Sequences | Stas Bekman et.al. | 2506.13996 | link |
2025-06-16 | How Does LLM Reasoning Work for Code? A Survey and a Call to Action | Ira Ceka et.al. | 2506.13932 | null |
2025-06-16 | Spec2RTL-Agent: Automated Hardware Code Generation from Complex Specifications Using LLM Agent Systems | Zhongzhi Yu et.al. | 2506.13905 | null |
2025-06-16 | Investigating the interaction of linguistic and mathematical reasoning in language models using multilingual number puzzles | Antara Raaghavi Bhattacharya et.al. | 2506.13886 | null |
2025-06-16 | Balancing Knowledge Delivery and Emotional Comfort in Healthcare Conversational Systems | Shang-Chi Tsai et.al. | 2506.13692 | null |
2025-06-16 | An LLM’s Apology: Outsourcing Awkwardness in the Age of AI | Twm Stone et.al. | 2506.13685 | link |
2025-06-16 | LocationReasoner: Evaluating LLMs on Real-World Site Selection Reasoning | Miho Koda et.al. | 2506.13841 | link |
2025-06-16 | EvolvTrip: Enhancing Literary Character Understanding with Temporal Theory-of-Mind Graphs | Bohao Yang et.al. | 2506.13641 | link |
2025-06-16 | An Empirical Study of LLM-as-a-Judge: How Design Choices Impact Evaluation Reliability | Yusuke Yamauchi et.al. | 2506.13639 | null |
2025-06-16 | FreeQ-Graph: Free-form Querying with Semantic Consistent Scene Graph for 3D Scene Understanding | Chenlu Zhan et.al. | 2506.13629 | null |
2025-06-16 | CAMS: A CityGPT-Powered Agentic Framework for Urban Human Mobility Simulation | Yuwei Du et.al. | 2506.13599 | null |
2025-06-16 | Understand the Implication: Learning to Think for Pragmatic Understanding | Settaluri Lakshmi Sravanthi et.al. | 2506.13559 | null |
2025-06-16 | Implicit and Explicit Research Quality Score Probabilities from ChatGPT | Mike Thelwall et.al. | 2506.13525 | null |
2025-06-16 | BOW: Bottlenecked Next Word Exploration | Ming Shen et.al. | 2506.13502 | null |
2025-06-16 | Unveiling the Learning Mind of Language Models: A Cognitive Framework and Empirical Study | Zhengyu Hu et.al. | 2506.13464 | null |
2025-06-16 | From Promise to Peril: Rethinking Cybersecurity Red and Blue Teaming in the Age of LLMs | Alsharif Abuadbba et.al. | 2506.13434 | null |
2025-06-16 | RealHiTBench: A Comprehensive Realistic Hierarchical Table Benchmark for Evaluating LLM-Based Table Analysis | Pengzuo Wu et.al. | 2506.13405 | null |
2025-06-16 | Decompositional Reasoning for Graph Retrieval with Large Language Models | Valentin Six et.al. | 2506.13380 | null |
2025-06-16 | Socratic RL: A Novel Framework for Efficient Knowledge Acquisition through Iterative Reflection and Viewpoint Distillation | Xiangfan Wu et.al. | 2506.13358 | null |
2025-06-16 | StoryBench: A Dynamic Benchmark for Evaluating Long-Term Memory with Multi Turns | Luanbo Wan et.al. | 2506.13356 | null |
2025-06-16 | Direct Reasoning Optimization: LLMs Can Reward And Refine Their Own Reasoning for Open-Ended Tasks | Yifei Xu et.al. | 2506.13351 | null |
2025-06-16 | Verifying the Verifiers: Unveiling Pitfalls and Potentials in Fact Verifiers | Wooseok Seo et.al. | 2506.13342 | link |
2025-06-16 | Towards Pervasive Distributed Agentic Generative AI – A State of The Art | Gianni Molinari et.al. | 2506.13324 | null |
2025-06-16 | Thought Crime: Backdoors and Emergent Misalignment in Reasoning Models | James Chua et.al. | 2506.13206 | null |
2025-06-16 | Breaking Thought Patterns: A Multi-Dimensional Reasoning Framework for LLMs | Xintong Tang et.al. | 2506.13192 | null |
2025-06-16 | Enhancing Large Language Models with Reliable Knowledge Graphs | Qinggang Zhang et.al. | 2506.13178 | null |
2025-06-16 | Rethinking Test-Time Scaling for Medical AI: Model and Task-Aware Strategies for LLMs and VLMs | Gyutaek Oh et.al. | 2506.13102 | null |
2025-06-16 | Discerning What Matters: A Multi-Dimensional Assessment of Moral Competence in LLMs | Daniel Kilov et.al. | 2506.13082 | null |
2025-06-16 | MotiveBench: How Far Are We From Human-Like Motivational Reasoning in Large Language Models? | Xixian Yong et.al. | 2506.13065 | null |
2025-06-16 | Metis-RISE: RL Incentivizes and SFT Enhances Multimodal Reasoning Model Learning | Haibo Qiu et.al. | 2506.13056 | null |
2025-06-16 | Just Go Parallel: Improving the Multilingual Capabilities of Large Language Models | Muhammad Reza Qorib et.al. | 2506.13044 | null |
2025-06-16 | Knowledge Graph Fusion with Large Language Models for Accurate, Explainable Manufacturing Process Planning | Danny Hoang et.al. | 2506.13026 | null |
2025-06-16 | Towards a Cascaded LLM Framework for Cost-effective Human-AI Decision-Making | Claudio Fanconi et.al. | 2506.11887 | null |
2025-06-15 | I Know What You Said: Unveiling Hardware Cache Side-Channels in Local Large Language Model Inference | Zibo Gao et.al. | 2505.06738 | null |
2025-06-15 | SmartHome-Bench: A Comprehensive Benchmark for Video Anomaly Detection in Smart Homes Using Multi-Modal Large Language Models | Xinyi Zhao et.al. | 2506.12992 | link |
2025-06-15 | Multi-document Summarization through Multi-document Event Relation Graph Reasoning in LLMs: a case study in Framing Bias Mitigation | Yuanyuan Lei et.al. | 2506.12978 | null |
2025-06-15 | Scaling Test-time Compute for LLM Agents | King Zhu et.al. | 2506.12928 | null |
2025-06-15 | PersonaFeedback: A Large-scale Human-annotated Benchmark For Personalization | Meiling Tao et.al. | 2506.12915 | null |
2025-06-15 | SciDA: Scientific Dynamic Assessor of LLMs | Junting Zhou et.al. | 2506.12909 | null |
2025-06-15 | WereWolf-Plus: An Update of Werewolf Game setting Based on DSGBench | Xinyuan Xia et.al. | 2506.12841 | null |
2025-06-15 | Mastering Da Vinci Code: A Comparative Study of Transformer, LLM, and PPO-based Agents | LeCheng Zhang et.al. | 2506.12801 | null |
2025-06-15 | MCTS-Refined CoT: High-Quality Fine-Tuning Data for LLM-Based Repository Issue Resolution | Yibo Wang et.al. | 2506.12728 | null |
2025-06-15 | Humanity’s Last Code Exam: Can Advanced LLMs Conquer Human’s Hardest Code Competition? | Xiangyang Li et.al. | 2506.12713 | link |
2025-06-15 | Building Trustworthy AI by Addressing its 16+2 Desiderata with Goal-Directed Commonsense Reasoning | Alexis R. Tudor et.al. | 2506.12667 | null |
2025-06-15 | GTA: Grouped-head latenT Attention | Luoyang Sun et.al. | 2506.17286 | null |
2025-06-14 | Synthetic Socratic Debates: Examining Persona Effects on Moral Decision and Persuasion Dynamics | Jiarui Liu et.al. | 2506.12657 | null |
2025-06-14 | Towards Building General Purpose Embedding Models for Industry 4.0 Agents | Christodoulos Constantinides et.al. | 2506.12607 | null |
2025-06-14 | OneEval: Benchmarking LLM Knowledge-intensive Reasoning over Diverse Knowledge Bases | Yongrui Chen et.al. | 2506.12577 | null |
2025-06-14 | RealFactBench: A Benchmark for Evaluating Large Language Models in Real-World Fact-Checking | Shuo Yang et.al. | 2506.12538 | null |
2025-06-14 | Detection, Classification, and Mitigation of Gender Bias in Large Language Models | Xiaoqing Cheng et.al. | 2506.12527 | null |
2025-06-14 | Graph of Verification: Structured Verification of LLM Reasoning with Directed Acyclic Graphs | Jiwei Fang et.al. | 2506.12509 | null |
2025-06-14 | From Outcomes to Processes: Guiding PRM Learning from ORM for Inference-Time Alignment | Bin Xie et.al. | 2506.12446 | null |
2025-06-14 | Advances in LLMs with Focus on Reasoning, Adaptability, Efficiency and Ethics | Asifullah khan et.al. | 2506.12365 | null |
2025-06-14 | QiMeng-Attention: SOTA Attention Operator is generated by SOTA Attention Algorithm | Qirui Zhou et.al. | 2506.12355 | null |
2025-06-14 | Information Suppression in Large Language Models: Auditing, Quantifying, and Characterizing Censorship in DeepSeek | Peiran Qiu et.al. | 2506.12349 | null |
2025-06-14 | Med-U1: Incentivizing Unified Medical Reasoning in LLMs via Large-scale Reinforcement Learning | Xiaotian Zhang et.al. | 2506.12307 | null |
2025-06-14 | Unveiling Confirmation Bias in Chain-of-Thought Reasoning | Yue Wan et.al. | 2506.12301 | null |
2025-06-14 | The SWE-Bench Illusion: When State-of-the-Art LLMs Remember Instead of Reason | Shanchao Liang et.al. | 2506.12286 | null |
2025-06-14 | ClusterKV: Manipulating LLM KV Cache in Semantic Space for Recallable Compression | Guangda Liu et.al. | 2412.03213 | link |
2025-06-13 | Beyond Homogeneous Attention: Memory-Efficient LLMs via Fourier-Approximated KV Cache | Xiaoran Liu et.al. | 2506.11886 | null |
2025-06-13 | Lag-Relative Sparse Attention In Long Context Training | Manlai Liang et.al. | 2506.11498 | null |
2025-06-13 | Efficient Long-Context LLM Inference via KV Cache Clustering | Jie Hu et.al. | 2506.11418 | null |
2025-06-13 | Accelerating Diffusion Large Language Models with SlowFast Sampling: The Three Golden Principles | Qingyan Wei et.al. | 2506.10848 | link |
2025-06-13 | Investigating the Potential of Large Language Model-Based Router Multi-Agent Architectures for Foundation Design Automation: A Task Classification and Expert Selection Study | Sompote Youwai et.al. | 2506.13811 | null |
2025-06-13 | From Emergence to Control: Probing and Modulating Self-Reflection in Language Models | Xudong Zhu et.al. | 2506.12217 | link |
2025-06-13 | Supernova Event Dataset: Interpreting Large Language Model’s Personality through Critical Event Analysis | Pranav Agarwal et.al. | 2506.12189 | null |
2025-06-13 | Instruction Tuning and CoT Prompting for Contextual Medical QA with LLMs | Chenqian Le et.al. | 2506.12182 | null |
2025-06-13 | code_transformed: The Influence of Large Language Models on Code | Yuliang Xu et.al. | 2506.12014 | null |
2025-06-13 | Tracing LLM Reasoning Processes with Strategic Games: A Framework for Planning, Revision, and Resource-Constrained Decision Making | Xiaopeng Yuan et.al. | 2506.12012 | null |
2025-06-13 | How Visual Representations Map to Language Feature Space in Multimodal LLMs | Constantin Venhoff et.al. | 2506.11976 | null |
2025-06-13 | Feedback Friction: LLMs Struggle to Fully Incorporate External Feedback | Dongwei Jiang et.al. | 2506.11930 | null |
2025-06-13 | LiveCodeBench Pro: How Do Olympiad Medalists Judge LLMs in Competitive Programming? | Zihan Zheng et.al. | 2506.11928 | null |
2025-06-13 | TreeRL: LLM Reinforcement Learning with On-Policy Tree Search | Zhenyu Hou et.al. | 2506.11902 | link |
2025-06-13 | MapQaTor: An Extensible Framework for Efficient Annotation of Map-Based QA Datasets | Mahir Labib Dihan et.al. | 2412.21015 | link |
2025-06-12 | SwiftSpec: Ultra-Low Latency LLM Decoding by Scaling Asynchronous Speculative Decoding | Ziyi Zhang et.al. | 2506.11309 | null |
2025-06-11 | SAFEFLOW: A Principled Protocol for Trustworthy and Transactional Autonomous Agent Systems | Peiran Li et.al. | 2506.07564 | null |
2025-06-10 | Draft-based Approximate Inference for LLMs | Kevin Galim et.al. | 2506.08373 | link |
2025-06-10 | Activated LoRA: Fine-tuned LLMs for Intrinsics | Kristjan Greenewald et.al. | 2504.12397 | link |
2025-06-09 | Graph-KV: Breaking Sequence via Injecting Structural Biases into Large Language Models | Haoyu Wang et.al. | 2506.07334 | null |
2025-06-09 | MoQAE: Mixed-Precision Quantization for Long-Context LLM Inference via Mixture of Quantization-Aware Experts | Wei Tao et.al. | 2506.07533 | null |
2025-06-08 | Paged Attention Meets FlexAttention: Unlocking Long-Context Efficiency in Deployed Inference | Thomas Joshi et.al. | 2506.07311 | null |
2025-06-08 | MiniKV: Pushing the Limits of LLM Inference via 2-Bit Layer-Discriminative KV Cache | Akshat Sharma et.al. | 2411.18077 | null |
2025-06-07 | Parallel CPU-GPU Execution for LLM Inference on Constrained GPUs | Jiakun Fan et.al. | 2506.03296 | null |
2025-06-06 | Saffron-1: Towards an Inference Scaling Paradigm for LLM Safety Assurance | Ruizhong Qiu et.al. | 2506.06444 | link |
2025-06-05 | Dynamic Context Tuning for Retrieval-Augmented Generation: Enhancing Multi-Turn Planning and Tool Adaptation | Jubin Abhishek Soni et.al. | 2506.11092 | null |
2025-06-05 | SparseMM: Head Sparsity Emerges from Visual Concept Responses in MLLMs | Jiahui Wang et.al. | 2506.05344 | link |
2025-06-05 | Inference-Time Hyper-Scaling with KV Cache Compression | Adrian Łańcucki et.al. | 2506.05345 | null |
2025-06-05 | Unleashing Hour-Scale Video Training for Long Video-Language Understanding | Jingyang Lin et.al. | 2506.05332 | null |
2025-06-05 | MobiEdit: Resource-efficient Knowledge Editing for Personalized On-device LLMs | Zhenyan Lu et.al. | 2506.13772 | null |
2025-06-05 | ReCalKV: Low-Rank KV Cache Compression via Head Reordering and Offline Calibration | Xianglong Yan et.al. | 2505.24357 | null |
2025-06-05 | Efficiently Serving Large Multimodal Models Using EPD Disaggregation | Gursimran Singh et.al. | 2501.05460 | link |
2025-06-04 | Homogeneous Keys, Heterogeneous Values: Exploiting Local KV Cache Asymmetry for Long-Context LLMs | Wanyun Cui et.al. | 2506.05410 | null |
2025-06-04 | AhaKV: Adaptive Holistic Attention-Driven KV Cache Eviction for Efficient Inference of Large Language Models | Yifeng Gu et.al. | 2506.03762 | null |
2025-06-04 | AdaDecode: Accelerating LLM Decoding with Adaptive Layer Parallelism | Zhepei Wei et.al. | 2506.03700 | link |
2025-06-04 | HashEvict: A Pre-Attention KV Cache Eviction Strategy using Locality-Sensitive Hashing | Minghui Liu et.al. | 2412.16187 | null |
2025-06-04 | KVPR: Efficient LLM Inference with I/O-Aware KV Cache Partial Recomputation | Chaoyi Jiang et.al. | 2411.17089 | link |
2025-06-03 | A$^2$ATS: Retrieval-Based KV Cache Reduction via Windowed Rotary Position Embedding and Query-Aware Vector Quantization | Junhui He et.al. | 2502.12665 | null |
2025-06-03 | SCOPE: Optimizing Key-Value Cache Compression in Long-context Generation | Jialong Wu et.al. | 2412.13649 | link |
2025-06-02 | Memory Access Characterization of Large Language Models in CPU Environment and its Potential Impacts | Spencer Banasik et.al. | 2506.01827 | null |
2025-06-02 | SwiftKV: Fast Prefill-Optimized Inference with Knowledge-Preserving Model Transformation | Aurick Qiao et.al. | 2410.03960 | null |
2025-06-02 | SuffixDecoding: Extreme Speculative Decoding for Emerging AI Applications | Gabriele Oliaro et.al. | 2411.04975 | link |
2025-06-01 | Earley-Driven Dynamic Pruning for Efficient Structured Decoding | Xintong Sun et.al. | 2506.01151 | null |
2025-06-01 | A Survey of LLM $\times$ DATA | Xuanhe Zhou et.al. | 2505.18458 | link |
2025-05-31 | Accelerating Diffusion LLMs via Adaptive Parallel Decoding | Daniel Israel et.al. | 2506.00413 | null |
2025-05-31 | QuickVideo: Real-Time Long Video Understanding with System Algorithm Co-Design | Benjamin Schneider et.al. | 2505.16175 | link |
2025-05-31 | KVTuner: Sensitivity-Aware Layer-Wise Mixed-Precision KV Cache Quantization for Efficient and Nearly Lossless LLM Inference | Xing Li et.al. | 2502.04420 | link |
2025-05-30 | HELM: Hyperbolic Large Language Models via Mixture-of-Curvature Experts | Neil He et.al. | 2505.24722 | link |
2025-05-30 | Are Optimal Algorithms Still Optimal? Rethinking Sorting in LLM-Based Pairwise Ranking with Batching and Caching | Juan Wisznia et.al. | 2505.24643 | null |
2025-05-30 | SkyLB: A Locality-Aware Cross-Region Load Balancer for LLM Inference | Tian Xia et.al. | 2505.24095 | null |
2025-05-30 | RaaS: Reasoning-Aware Attention Sparsity for Efficient LLM Reasoning | Junhao Hu et.al. | 2502.11147 | null |
2025-05-30 | Learn from the Past: Fast Sparse Indexing for Large Language Model Decoding | Feiyu Yao et.al. | 2506.15704 | null |
2025-05-29 | EFIM: Efficient Serving of LLMs for Infilling Tasks with Improved KV Cache Reuse | Tianyu Guo et.al. | 2505.21889 | link |
2025-05-29 | Wireless Agentic AI with Retrieval-Augmented Multimodal Semantic Perception | Guangyuan Liu et.al. | 2505.23275 | null |
2025-05-29 | EmbAdvisor: Adaptive Cache Management for Sustainable LLM Serving | Yuyang Tian et.al. | 2505.23970 | null |
2025-05-29 | KVzip: Query-Agnostic KV Cache Compression with Context Reconstruction | Jang-Hyun Kim et.al. | 2505.23416 | link |
2025-05-28 | Towards Efficient Key-Value Cache Management for Prefix Prefilling in LLM Inference | Yue Zhu et.al. | 2505.21919 | null |
2025-05-28 | Mustafar: Promoting Unstructured Sparsity for KV Cache Pruning in LLM Inference | Donghyeon Joo et.al. | 2505.22913 | link |
2025-05-28 | Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding | Chengyue Wu et.al. | 2505.22618 | null |
2025-05-28 | Scaling Reasoning without Attention | Xueliang Zhao et.al. | 2505.22425 | null |
2025-05-28 | InComeS: Integrating Compression and Selection Mechanisms into LLMs for Efficient Model Editing | Shuaiyi Li et.al. | 2505.22156 | null |
2025-05-28 | gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling | Tianyu Guo et.al. | 2504.14775 | link |
2025-05-27 | Hardware-Efficient Attention for Fast Decoding | Ted Zadouri et.al. | 2505.21487 | null |
2025-05-27 | SpecExtend: A Drop-in Enhancement for Speculative Decoding of Long Sequences | Jungyoub Cha et.al. | 2505.20776 | link |
2025-05-27 | TailorKV: A Hybrid Framework for Long-Context Inference via Tailored KV Cache Optimization | Dingyu Yao et.al. | 2505.19586 | link |
2025-05-27 | EPIC: Efficient Position-Independent Caching for Serving Large Language Models | Junhao Hu et.al. | 2410.15332 | null |
2025-05-26 | HAMburger: Accelerating LLM Inference via Token Smashing | Jingyu Liu et.al. | 2505.20438 | null |
2025-05-26 | O$^2$-Searcher: A Searching-based Agent Model for Open-Domain Open-Ended Question Answering | Jianbiao Mei et.al. | 2505.16582 | link |
2025-05-26 | RAP: Runtime-Adaptive Pruning for LLM Inference | Huanrong Liu et.al. | 2505.17138 | null |
2025-05-26 | SLOT: Sample-specific Language Model Optimization at Test-time | Yang Hu et.al. | 2505.12392 | link |
2025-05-26 | PRESERVE: Prefetching Model Weights and KV-Cache in Distributed LLM Serving | Ahmet Caner Yüzügüler et.al. | 2501.08192 | null |
2025-05-26 | UniICL: An Efficient Unified Framework Unifying Compression, Selection, and Generation | Jun Gao et.al. | 2405.17062 | null |
2025-05-26 | BurstGPT: A Real-world Workload Dataset to Optimize LLM Serving Systems | Yuxin Wang et.al. | 2401.17644 | link |
2025-05-25 | Accelerating Adaptive Retrieval Augmented Generation via Instruction-Driven Representation Reduction of Retrieval Overlaps | Jie Ou et.al. | 2505.12731 | null |
2025-05-24 | Efficient and Workload-Aware LLM Serving via Runtime Layer Swapping and KV Cache Resizing | Zhaoyuan Su et.al. | 2506.02006 | null |
2025-05-24 | Lookahead Q-Cache: Achieving More Consistent KV Cache Eviction via Pseudo Query | Yixuan Wang et.al. | 2505.20334 | null |
2025-05-24 | PM-KVQ: Progressive Mixed-precision KV Cache Quantization for Long-CoT LLMs | Tengxuan Liu et.al. | 2505.18610 | link |
2025-05-24 | PersonaX: A Recommendation Agent Oriented User Modeling Framework for Long Behavior Sequence | Yunxiao Shi et.al. | 2503.02398 | link |
2025-05-23 | FlashForge: Ultra-Efficient Prefix-Aware Attention for LLM Decoding | Zhibin Wang et.al. | 2505.17694 | null |
2025-05-23 | Guided by Gut: Efficient Test-Time Scaling with Reinforced Intrinsic Confidence | Amirhosein Ghasemabadi et.al. | 2505.20325 | null |
2025-05-23 | NSNQuant: A Double Normalization Approach for Calibration-Free Low-Bit Vector Quantization of KV Cache | Donghyun Son et.al. | 2505.18231 | null |
2025-05-23 | Titanus: Enabling KV Cache Pruning and Quantization On-the-Fly for LLM Acceleration | Peilin Chen et.al. | 2505.17787 | link |
2025-05-23 | ThinkLess: A Training-Free Inference-Efficient Method for Reducing Reasoning Redundancy | Gengyang Li et.al. | 2505.15684 | null |
2025-05-23 | Hogwild! Inference: Parallel LLM Generation via Concurrent Attention | Gleb Rodionov et.al. | 2504.06261 | link |
2025-05-22 | Hunyuan-TurboS: Advancing Large Language Models through Mamba-Transformer Synergy and Adaptive Chain-of-Thought | Tencent Hunyuan Team et.al. | 2505.15431 | null |
2025-05-22 | Zebra-Llama: Towards Extremely Efficient Hybrid Models | Mingyu Yang et.al. | 2505.17272 | null |
2025-05-22 | T1: A Tool-Oriented Conversational Dataset for Multi-Turn Agentic Planning | Amartya Chakraborty et.al. | 2505.16986 | null |
2025-05-22 | NQKV: A KV Cache Quantization Scheme Based on Normal Distribution Characteristics | Zhihang Cai et.al. | 2505.16210 | null |
2025-05-22 | HCRMP: A LLM-Hinted Contextual Reinforcement Learning Framework for Autonomous Driving | Zhiwen Chen et.al. | 2505.15793 | null |
2025-05-21 | Not All Models Suit Expert Offloading: On Local Routing Consistency of Mixture-of-Expert Models | Jingcong Liang et.al. | 2505.16056 | link |
2025-05-21 | A Federated Splitting Framework for LLMs: Security, Efficiency, and Adaptability | Zishuai Zhang et.al. | 2505.15683 | link |
2025-05-21 | FlowKV: Enhancing Multi-Turn Conversational Coherence in LLMs via Isolated Key-Value Cache Management | Xiang Liu et.al. | 2505.15347 | null |
2025-05-21 | LiveVLM: Efficient Online Video Understanding via Streaming-Oriented KV Cache and Retrieval | Zhenyu Ning et.al. | 2505.15269 | null |
2025-05-21 | AutoData: A Multi-Agent System for Open Web Data Collection | Tianyi Ma et.al. | 2505.15859 | link |
2025-05-21 | Effective and Efficient Schema-aware Information Extraction Using On-Device Large Language Models | Zhihao Wen et.al. | 2505.14992 | null |
2025-05-21 | Can LLMs Maintain Fundamental Abilities under KV Cache Compression? | Xiang Liu et.al. | 2502.01941 | null |
2025-05-20 | Reasoning Path Compression: Compressing Generation Trajectories for Efficient LLM Reasoning | Jiwon Song et.al. | 2505.13866 | link |
2025-05-20 | SkyMemory: A LEO Edge Cache for Transformer Inference Optimization and Scale Out | Thomas Sandholm et.al. | 2505.14427 | null |
2025-05-20 | Log-Augmented Generation: Scaling Test-Time Reasoning with Reusable Computation | Peter Baile Chen et.al. | 2505.14398 | null |
2025-05-20 | CE-LSLM: Efficient Large-Small Language Model Inference and Communication via Cloud-Edge Collaboration | Pengyan Zhu et.al. | 2505.14085 | null |
2025-05-20 | KeyDiff: Key Similarity-Based KV Cache Eviction for Long-Context LLM Inference in Resource-Constrained Environments | Junyoung Park et.al. | 2504.15364 | null |
2025-05-20 | Scaling Test-Time Inference with Policy-Optimized, Dynamic Retrieval-Augmented Generation via KV Caching and Decoding | Sakhinana Sagar Srinivas et.al. | 2504.01281 | null |
2025-05-20 | Online Scheduling for LLM Inference with KV Cache Constraints | Patrick Jaillet et.al. | 2502.07115 | null |
2025-05-19 | FreeKV: Boosting KV Cache Retrieval for Efficient LLM Inference | Guangda Liu et.al. | 2505.13109 | null |
2025-05-19 | AD-AGENT: A Multi-agent Framework for End-to-end Anomaly Detection | Tiankai Yang et.al. | 2505.12594 | link |
2025-05-19 | SubGCache: Accelerating Graph-based RAG with Subgraph-level KV Cache | Qiuyu Zhu et.al. | 2505.10951 | null |
2025-05-19 | FreqKV: Frequency Domain Key-Value Compression for Efficient Context Window Extension | Jushi Kai et.al. | 2505.00570 | null |
2025-05-18 | KVmix: Gradient-Based Layer Importance-Aware Mixed-Precision Quantization for KV Cache | Fei Li et.al. | 2506.08018 | null |
2025-05-16 | Semantic Caching of Contextual Summaries for Efficient Question-Answering with Language Models | Camille Couturier et.al. | 2505.11271 | null |
2025-05-16 | Accurate KV Cache Quantization with Outlier Tokens Tracing | Yi Su et.al. | 2505.10938 | link |
2025-05-16 | KVShare: An LLM Service System with Efficient and Effective Multi-Tenant KV Cache Reuse | Huan Yang et.al. | 2503.16525 | null |
2025-05-14 | SALM: A Multi-Agent Framework for Language Model-Driven Social Network Simulation | Gaurav Koley et.al. | 2505.09081 | link |
2025-05-14 | Oaken: Fast and Efficient LLM Serving with Online-Offline Hybrid KV Cache Quantization | Minsu Kim et.al. | 2503.18599 | null |
2025-05-13 | Enhancing Cache-Augmented Generation (CAG) with Adaptive Contextual Compression for Scalable Knowledge Integration | Rishabh Agrawal et.al. | 2505.08261 | null |
2025-05-13 | Gradual Binary Search and Dimension Expansion : A general method for activation quantization in LLMs | Lucas Maisonnave et.al. | 2504.13989 | null |
2025-05-12 | SpecRouter: Adaptive Routing for Multi-Level Speculative Decoding in Large Language Models | Hang Wu et.al. | 2505.07680 | null |
2025-05-12 | Cache-Efficient Posterior Sampling for Reinforcement Learning with LLM-Derived Priors Across Discrete and Continuous Domains | Ibne Farabi Shihab et.al. | 2505.07274 | null |
2025-05-12 | Comet: Accelerating Private Inference for Large Language Model by Predicting Activation Sparsity | Guang Yan et.al. | 2505.07239 | null |
2025-05-12 | PrefillOnly: An Inference Engine for Prefill-only Workloads in Large Language Model Applications | Kuntai Du et.al. | 2505.07203 | null |
2025-05-11 | Ecco: Improving Memory Bandwidth and Capacity for LLMs via Entropy-aware Cache Compression | Feng Cheng et.al. | 2505.06901 | null |
2025-05-09 | Sparse Attention Remapping with Clustering for Efficient LLM Decoding on PIM | Zehao Fan et.al. | 2505.05772 | null |
2025-05-08 | A Survey on Inference Engines for Large Language Models: Perspectives on Optimization and Efficiency | Sihyeong Park et.al. | 2505.01658 | link |
2025-05-05 | RetroInfer: A Vector-Storage Approach for Scalable Long-Context LLM Inference | Yaoqi Chen et.al. | 2505.02922 | null |
2025-05-05 | Large Language Model Partitioning for Low-Latency Inference at the Edge | Dimitrios Kafetzis et.al. | 2505.02533 | null |
2025-05-01 | Spill The Beans: Exploiting CPU Cache Side-Channels to Leak Tokens from Large Language Models | Andrew Adiletta et.al. | 2505.00817 | null |
2025-05-01 | QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving | Yujun Lin et.al. | 2405.04532 | link |
2025-04-29 | CachePrune: Neural-Based Attribution Defense Against Indirect Prompt Injection Attacks | Rui Wang et.al. | 2504.21228 | null |
2025-04-28 | semi-PD: Towards Efficient LLM Serving via Phase-Wise Disaggregated Computation and Unified Storage | Ke Hong et.al. | 2504.19867 | null |
2025-04-25 | ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference | Hanshi Sun et.al. | 2410.21465 | link |
2025-04-24 | L3: DIMM-PIM Integrated Architecture and Coordination for Scalable Long-Context LLM Inference | Qingyuan Liu et.al. | 2504.17584 | null |
2025-04-22 | SeaLLM: Service-Aware and Latency-Optimized Resource Sharing for Large Language Model Inference | Yihao Zhao et.al. | 2504.15720 | null |
2025-04-22 | Optimizing SLO-oriented LLM Serving with PD-Multiplexing | Weihao Cui et.al. | 2504.14489 | null |
2025-04-21 | Splitwiser: Efficient LM inference with constrained resources | Asad Aali et.al. | 2505.03763 | link |
2025-04-21 | LServe: Efficient Long-sequence LLM Serving with Unified Sparse Attention | Shang Yang et.al. | 2502.14866 | link |
2025-04-21 | FlashInfer: Efficient and Customizable Attention Engine for LLM Inference Serving | Zihao Ye et.al. | 2501.01005 | link |
2025-04-21 | Reimagining Memory Access for LLM Inference: Compression-Aware Memory Controller Design | Rui Xie et.al. | 2503.18869 | null |
2025-04-20 | Understanding and Optimizing Multi-Stage AI Inference Pipelines | Abhimanyu Rajeshkumar Bambhaniya et.al. | 2504.09775 | null |
2025-04-19 | Improving the Serving Performance of Multi-LoRA Large Language Models via Efficient LoRA and KV Cache Management | Hang Zhang et.al. | 2505.03756 | null |
2025-04-18 | LogicTree: Structured Proof Exploration for Coherent and Rigorous Logical Reasoning with Large Language Models | Kang He et.al. | 2504.14089 | null |
2025-04-18 | HPU: High-Bandwidth Processing Unit for Scalable, Cost-effective LLM Inference via GPU Co-processing | Myunghyun Rhee et.al. | 2504.16112 | null |
2025-04-16 | Cost-Efficient LLM Serving in the Cloud: VM Selection with KV Cache Offloading | Kihyun Kim et.al. | 2504.11816 | link |
2025-04-16 | Shared Disk KV Cache Management for Efficient Multi-Instance Inference in RAG-Powered LLMs | Hyungwoo Lee et.al. | 2504.11765 | null |
2025-04-15 | Optimizing LLM Inference: Fluid-Guided Online Scheduling with Memory Constraints | Ruicheng Ao et.al. | 2504.11320 | link |
2025-04-14 | AlayaDB: The Data Foundation for Efficient and Effective Long-context LLM Inference | Yangshen Deng et.al. | 2504.10326 | null |
2025-04-14 | KeepKV: Eliminating Output Perturbation in KV Cache Compression for Efficient LLMs Inference | Yuxuan Tian et.al. | 2504.09936 | null |
2025-04-13 | Efficient LLM Serving on Hybrid Real-time and Best-effort Requests | Wan Borui et.al. | 2504.09590 | null |
2025-04-11 | Scaling Up On-Device LLMs via Active-Weight Swapping Between DRAM and Flash | Fucheng Jia et.al. | 2504.08378 | null |
2025-04-11 | Boosting Universal LLM Reward Design through Heuristic Reward Observation Space Evolution | Zen Kit Heng et.al. | 2504.07596 | null |
2025-04-10 | Apt-Serve: Adaptive Request Scheduling on Hybrid Cache for Scalable LLM Inference Serving | Shihong Gao et.al. | 2504.07494 | link |
2025-04-10 | Marconi: Prefix Caching for the Era of Hybrid LLMs | Rui Pan et.al. | 2411.19379 | null |
2025-04-10 | UniCAIM: A Unified CAM/CIM Architecture with Static-Dynamic KV Cache Pruning for Efficient Long-Context LLM Inference | Weikai Xu et.al. | 2504.07479 | null |
2025-04-09 | Saliency-driven Dynamic Token Pruning for Large Language Models | Yao Tao et.al. | 2504.04514 | null |
2025-04-08 | Unifying KV Cache Compression for Large Language Models with LeanKV | Yanqi Zhang et.al. | 2412.03131 | null |
2025-04-08 | SPIRe: Boosting LLM Inference Throughput with Speculative Decoding | Sanjit Neelam et.al. | 2504.06419 | null |
2025-04-08 | HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference | Shuzhang Zhong et.al. | 2504.05897 | link |
2025-04-08 | Accelerating LLM Inference Throughput via Asynchronous KV Cache Prefetching | Yanhao Dong et.al. | 2504.06319 | null |
2025-04-07 | AccLLM: Accelerating Long-Context LLM Inference Via Algorithm-Hardware Co-Design | Yanbiao Liang et.al. | 2505.03745 | null |
2025-04-03 | CacheBlend: Fast Large Language Model Serving for RAG with Cached Knowledge Fusion | Jiayi Yao et.al. | 2405.16444 | link |
2025-04-03 | HyperRAG: Enhancing Quality-Efficiency Tradeoffs in Retrieval-Augmented Generation with Reranker KV-Cache Reuse | Yuwei An et.al. | 2504.02921 | null |
2025-04-03 | LLM Library Learning Fails: A LEGO-Prover Case Study | Ian Berlot-Attwell et.al. | 2504.03048 | null |
2025-04-02 | MagicDec: Breaking the Latency-Throughput Tradeoff for Long Context Generation with Speculative Decoding | Ranajoy Sadhukhan et.al. | 2408.11049 | link |
2025-04-01 | SentenceKV: Efficient LLM Inference via Sentence-Level Semantic KV Caching | Yuxuan Zhu et.al. | 2504.00970 | null |
2025-04-01 | Beyond Quacking: Deep Integration of Language Models and RAG into DuckDB | Anas Dorbani et.al. | 2504.01157 | null |
2025-04-01 | Knowledge-Aware Iterative Retrieval for Multi-Agent Systems | Seyoung Song et.al. | 2503.13275 | null |
2025-03-31 | Rethinking Key-Value Cache Compression Techniques for Large Language Model Serving | Wei Gao et.al. | 2503.24000 | link |
2025-03-30 | PQCache: Product Quantization-based KVCache for Long Context LLM Inference | Hailin Zhang et.al. | 2407.12820 | null |
2025-03-30 | Cocktail: Chunk-Adaptive Mixed-Precision Quantization for Long-Context LLM Inference | Wei Tao et.al. | 2503.23294 | null |
2025-03-27 | Solving AI Foundational Model Latency with Telco Infrastructure | Sebastian Barros et.al. | 2504.03708 | null |
2025-03-27 | WindowKV: Task-Adaptive Group-Wise KV Cache Window Selection for Efficient LLM Inference | Youhui Zuo et.al. | 2503.17922 | link |
2025-03-25 | LogQuant: Log-Distributed 2-Bit Quantization of KV Cache with Superior Accuracy Preservation | Han Chen et.al. | 2503.19950 | link |
2025-03-24 | Jenga: Effective Memory Management for Serving LLM with Heterogeneity | Chen Zhang et.al. | 2503.18292 | null |
2025-03-24 | Mitigating KV Cache Competition to Enhance User Experience in LLM Inference | Haiying Shen et.al. | 2503.13773 | null |
2025-03-24 | EconoServe: Maximizing Multi-Resource Utilization with SLO Guarantees in LLM Serving | Haiying Shen et.al. | 2411.06364 | null |
2025-03-24 | xKV: Cross-Layer SVD for KV-Cache Compression | Chi-Chih Chang et.al. | 2503.18893 | link |
2025-03-21 | MKG-Rank: Enhancing Large Language Models with Knowledge Graph for Multilingual Medical Question Answering | Feiyang Li et.al. | 2503.16131 | null |
2025-03-20 | Plug-and-Play 1.x-Bit KV Cache Quantization for Video Large Language Models | Keda Tao et.al. | 2503.16257 | null |
2025-03-20 | SpeCache: Speculative Key-Value Caching for Efficient Generation of LLMs | Shibo Jie et.al. | 2503.16163 | null |
2025-03-17 | AccelGen: Heterogeneous SLO-Guaranteed High-Throughput LLM Inference Serving for Diverse Applications | Haiying Shen et.al. | 2503.13737 | null |
2025-03-16 | CAKE: Cascading and Adaptive KV Cache Eviction with Layer Preferences | Ziran Qin et.al. | 2503.12491 | null |
2025-03-12 | PRISM: Efficient Long-Range Reasoning With Short-Context LLMs | Dulhan Jayalath et.al. | 2412.18914 | null |
2025-03-11 | FastCache: Optimizing Multimodal LLM Serving through Lightweight KV-Cache Compression Framework | Jianian Zhu et.al. | 2503.08461 | null |
2025-03-09 | Seesaw: High-throughput LLM Inference via Model Re-sharding | Qidong Su et.al. | 2503.06433 | null |
2025-03-07 | DeFT: Decoding with Flash Tree-attention for Efficient Tree-structured LLM Inference | Jinwei Yao et.al. | 2404.00242 | null |
2025-03-06 | LVLM-Compress-Bench: Benchmarking the Broader Impact of Large Vision-Language Model Compression | Souvik Kundu et.al. | 2503.04982 | null |
2025-03-06 | Beyond RAG: Task-Aware KV Cache Compression for Comprehensive Knowledge Reasoning | Giulio Corallo et.al. | 2503.04973 | null |
2025-03-06 | Markov Chain of Thought for Efficient Mathematical Reasoning | Wen Yang et.al. | 2410.17635 | null |
2025-03-05 | Enhancing Memory Efficiency in Large Language Model Training Through Chronos-aware Pipeline Parallelism | Xinyuan Lin et.al. | 2503.03182 | null |
2025-03-03 | WeightedKV: Attention Scores Weighted Key-Value Cache Merging for Large Language Models | Jian Yuan et.al. | 2503.01330 | null |
2025-03-01 | Progressive Sparse Attention: Algorithm and System Co-design for Efficient Attention in LLM Serving | Qihui Zhou et.al. | 2503.00392 | null |
2025-02-27 | Dynamic Parallel Tree Search for Efficient LLM Reasoning | Yifu Ding et.al. | 2502.16235 | null |
2025-02-27 | ThinK: Thinner Key Cache by Query-Driven Pruning | Yuhui Xu et.al. | 2407.21018 | null |
2025-02-24 | ELMo-Tune-V2: LLM-Assisted Full-Cycle Auto-Tuning to Optimize LSM-Based Key-Value Stores | Viraj Thakkar et.al. | 2502.17606 | link |
2025-02-24 | Round Attention: A Novel Round-Level Attention Mechanism to Accelerate LLM Inference | Yaohua Tang et.al. | 2502.15294 | null |
2025-02-24 | The Lottery LLM Hypothesis, Rethinking What Abilities Should LLM Compression Preserve? | Zhenheng Tang et.al. | 2502.17535 | null |
2025-02-22 | AIBrix: Towards Scalable, Cost-Effective Large Language Model Inference Infrastructure | The AIBrix Team et.al. | 2504.03648 | null |
2025-02-20 | SpinQuant: LLM quantization with learned rotations | Zechun Liu et.al. | 2405.16406 | null |
2025-02-20 | Compute Or Load KV Cache? Why Not Both? | Shuowei Jin et.al. | 2410.03065 | null |
2025-02-17 | Does RAG Really Perform Bad For Long-Context Processing? | Kun Luo et.al. | 2502.11444 | null |
2025-02-12 | The Early Bird Catches the Leak: Unveiling Timing Side Channels in LLM Serving Systems | Linke Song et.al. | 2409.20002 | null |
2025-02-11 | HexGen-2: Disaggregated Generative Inference of LLMs in Heterogeneous Environment | Youhe Jiang et.al. | 2502.07903 | null |
2025-02-10 | MARM: Unlocking the Future of Recommendation Systems through Memory Augmentation and Scalable Complexity | Xiao Lv et.al. | 2411.09425 | null |
2025-02-08 | ProMoE: Fast MoE-based LLM Serving using Proactive Caching | Xiaoniu Song et.al. | 2410.22134 | null |
2025-02-07 | fMoE: Fine-Grained Expert Offloading for Large Mixture-of-Experts Serving | Hanfei Yu et.al. | 2502.05370 | null |
2025-02-05 | Accessible and Portable LLM Inference by Compiling Computational Graphs into SQL | Wenbo Sun et.al. | 2502.02818 | null |
2025-02-05 | QRazor: Reliable and Effortless 4-bit LLM Quantization by Significant Data Razoring | Dongyoung Lee et.al. | 2501.13331 | null |
2025-02-05 | Cache-Craft: Managing Chunk-Caches for Efficient Retrieval-Augmented Generation | Shubham Agarwal et.al. | 2502.15734 | null |
2025-02-04 | LightTransfer: Your Long-Context LLM is Secretly a Hybrid Model with Effortless Adaptation | Xuan Zhang et.al. | 2410.13846 | link |
2025-02-02 | RotateKV: Accurate and Robust 2-Bit KV Cache Quantization for LLMs via Outlier-Aware Adaptive Rotations | Zunhai Su et.al. | 2501.16383 | null |
2025-02-01 | QSpec: Speculative Decoding with Complementary Quantization Schemes | Juntao Zhao et.al. | 2410.11305 | null |
2025-01-30 | State Stream Transformer (SST): Emergent Metacognitive Behaviours Through Latent State Persistence | Thea Aviss et.al. | 2501.18356 | null |
2025-01-29 | vAttention: Dynamic Memory Management for Serving LLMs without PagedAttention | Ramya Prabhu et.al. | 2405.04437 | link |
2025-01-27 | PrefixQuant: Eliminating Outliers by Prefixed Tokens for Large Language Models Quantization | Mengzhao Chen et.al. | 2410.05265 | link |
2025-01-25 | Task-KV: Task-aware KV Cache Optimization via Semantic Differentiation of Attention Heads | Xingyang He et.al. | 2501.15113 | null |
2025-01-24 | Locality-aware Fair Scheduling in LLM Serving | Shiyi Cao et.al. | 2501.14312 | null |
2025-01-24 | Serving Long-Context LLMs at the Mobile Edge: Test-Time Reinforcement Learning-based Model Caching and Inference Offloading | Minrui Xu et.al. | 2501.14205 | null |
2025-01-24 | EchoLM: Accelerating LLM Serving with Real-time Knowledge Distillation | Yifan Yu et.al. | 2501.12689 | null |
2025-01-23 | A Training-free Sub-quadratic Cost Transformer Model Serving Framework With Hierarchically Pruned Attention | Heejun Lee et.al. | 2406.09827 | null |
2025-01-22 | Yi-Lightning Technical Report | Alan Wake et.al. | 2412.01253 | null |
2025-01-17 | BatchLLM: Optimizing Large Batched LLM Inference with Global Prefix Sharing and Throughput-oriented Token Batching | Zhen Zheng et.al. | 2412.03594 | null |
2025-01-12 | Mell: Memory-Efficient Large Language Model Serving via Multi-GPU KV Cache Management | Liu Qianli et.al. | 2501.06709 | null |
2025-01-06 | The Power of Negative Zero: Datatype Customization for Quantized Large Language Models | Yuzong Chen et.al. | 2501.04052 | link |
2025-01-02 | MSWA: Refining Local Attention with Multi-Scale Window Attention | Yixing Xu et.al. | 2501.01039 | null |
2025-01-02 | A Survey on Large Language Model Acceleration based on KV Cache Management | Haoyang Li et.al. | 2412.19442 | link |
2024-12-31 | RetrievalAttention: Accelerating Long-Context LLM Inference via Vector Retrieval | Di Liu et.al. | 2409.10516 | link |
2024-12-23 | Deliberation in Latent Space via Differentiable Cache Augmentation | Luyang Liu et.al. | 2412.17747 | null |
2024-12-21 | SYMPHONY: Improving Memory Management for LLM Inference Workloads | Saurabh Agarwal et.al. | 2412.16434 | null |
2024-12-21 | MemServe: Context Caching for Disaggregated LLM Serving with Elastic Memory Pool | Cunchen Hu et.al. | 2406.17565 | null |
2024-12-19 | DroidSpeak: KV Cache Sharing for Cross-LLM Communication and Multi-LLM Serving | Yuhan Liu et.al. | 2411.02820 | null |
2024-12-18 | MagicPIG: LSH Sampling for Efficient LLM Generation | Zhuoming Chen et.al. | 2410.16179 | link |
2024-12-18 | Semantic Convergence: Harmonizing Recommender Systems via Two-Stage Alignment and Behavioral Semantic Tokenization | Guanghan Li et.al. | 2412.13771 | null |
2024-12-17 | A System for Microserving of LLMs | Hongyi Jin et.al. | 2412.12488 | null |
2024-12-16 | CSR: Achieving 1 Bit Key-Value Cache via Sparse Representation | Hongxuan Zhang et.al. | 2412.11741 | null |
2024-12-13 | KVDirect: Distributed Disaggregated LLM Inference | Shiyang Chen et.al. | 2501.14743 | null |
2024-12-12 | PowerInfer-2: Fast Large Language Model Inference on a Smartphone | Zhenliang Xue et.al. | 2406.06282 | null |
2024-12-05 | A Little Goes a Long Way: Efficient Long Context Training and Inference with Partial Contexts | Suyu Ge et.al. | 2410.01485 | null |
2024-11-27 | FastSwitch: Optimizing Context Switching Efficiency in Fairness-aware Large Language Model Serving | Ao Shen et.al. | 2411.18424 | null |
2024-11-24 | Chameleon: Adaptive Caching and Scheduling for Many-Adapter LLM Inference Environments | Nikoleta Iliakopoulou et.al. | 2411.17741 | null |
2024-11-21 | InstCache: A Predictive Cache for LLM Serving | Longwei Zou et.al. | 2411.13820 | null |
2024-11-14 | Not All Heads Matter: A Head-Level KV Cache Compression Method with Integrated Retrieval and Reasoning | Yu Fu et.al. | 2410.19258 | link |
2024-11-08 | Eigen Attention: Attention in Low-Rank Space for KV Cache Compression | Utkarsh Saxena et.al. | 2408.05646 | link |
2024-11-02 | NEO: Saving GPU Memory Crisis with CPU Offloading for Online LLM Inference | Xuanlin Jiang et.al. | 2411.01142 | null |
2024-10-31 | ALISE: Accelerating Large Language Model Serving with Speculative Scheduling | Youpeng Zhao et.al. | 2410.23537 | null |
2024-10-29 | LoongServe: Efficiently Serving Long-Context Large Language Models with Elastic Sequence Parallelism | Bingyang Wu et.al. | 2404.09526 | link |
2024-10-25 | Fast Inference for Augmented Large Language Models | Rana Shahout et.al. | 2410.18248 | null |
2024-10-24 | Read-ME: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design | Ruisi Cai et.al. | 2410.19123 | link |
2024-10-23 | Harnessing Your DRAM and SSD for Sustainable and Accessible LLM Inference with Mixed-Precision and Multi-level Caching | Jie Peng et.al. | 2410.14740 | null |
2024-10-23 | ExpertFlow: Optimized Expert Activation and Token Allocation for Efficient Mixture-of-Experts Inference | Xin He et.al. | 2410.17954 | null |
2024-10-21 | Do Large Language Models Need a Content Delivery Network? | Yihua Cheng et.al. | 2409.13761 | link |
2024-10-16 | COMET: Towards Practical W4A4KV4 LLMs Serving | Lian Liu et.al. | 2410.12168 | null |
2024-10-09 | LayerKV: Optimizing Large Language Model Serving with Layer-wise KV Cache Management | Yi Xiong et.al. | 2410.00428 | null |
2024-10-08 | KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark of Long Context Capable Approaches | Jiayi Yuan et.al. | 2407.01527 | link |
2024-10-07 | Fast State Restoration in LLM Serving with HCache | Shiwei Gao et.al. | 2410.05004 | null |
2024-10-07 | KV-Compress: Paged KV-Cache Compression with Variable Compression Rates per Attention Head | Isaac Rehg et.al. | 2410.00161 | link |
2024-10-04 | LoRC: Low-Rank Compression for LLMs KV Cache with a Progressive Compression Strategy | Rongzhi Zhang et.al. | 2410.03111 | null |
2024-10-04 | Prefixing Attention Sinks can Mitigate Activation Outliers for Large Language Model Quantization | Seungwoo Son et.al. | 2406.12016 | null |
2024-10-03 | Preble: Efficient Distributed Prompt Scheduling for LLM Serving | Vikranth Srivatsa et.al. | 2407.00023 | link |
2024-10-01 | Self-controller: Controlling LLMs with Multi-round Step-by-step Self-awareness | Xiao Peng et.al. | 2410.00359 | null |
2024-09-23 | Steward: Natural Language Web Automation | Brian Tang et.al. | 2409.15441 | link |
2024-09-23 | BlockLLM: Multi-tenant Finer-grained Serving for Large Language Models | Bodun Hu et.al. | 2404.18322 | null |
2024-09-23 | SEAL: Suite for Evaluating API-use of LLMs | Woojeong Kim et.al. | 2409.15523 | null |
2024-09-21 | LLM-dCache: Improving Tool-Augmented LLMs with GPT-Driven Localized Data Caching | Simranjit Singh et.al. | 2406.06799 | null |
2024-09-11 | Inf-MLLM: Efficient Streaming Inference of Multimodal Large Language Models on a Single GPU | Zhenyu Ning et.al. | 2409.09086 | null |
2024-09-04 | SparQ Attention: Bandwidth-Efficient LLM Inference | Luka Ribar et.al. | 2312.04985 | link |
2024-08-05 | SLO-aware GPU Frequency Scaling for Energy Efficient LLM Inference Serving | Andreas Kosmas Kakolyris et.al. | 2408.05235 | null |
2024-08-04 | TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding | Hanshi Sun et.al. | 2404.11912 | link |
2024-08-01 | Intermittent Semi-working Mask: A New Masking Paradigm for LLMs | Mingcong Lu et.al. | 2408.00539 | null |
2024-08-01 | ChunkAttention: Efficient Self-Attention with Prefix-Aware KV Cache and Two-Phase Partition | Lu Ye et.al. | 2402.15220 | link |
2024-07-22 | vTensor: Flexible Virtual Tensor Management for Efficient LLM Serving | Jiale Xu et.al. | 2407.15309 | link |
2024-07-22 | Dissecting Multiplication in Transformers: Insights into LLMs | Luyu Qiu et.al. | 2407.15360 | link |
2024-07-21 | Model Tells You Where to Merge: Adaptive KV Cache Merging for LLMs on Long-Context Tasks | Zheng Wang et.al. | 2407.08454 | null |
2024-07-18 | QJL: 1-Bit Quantized JL Transform for KV Cache Quantization with Zero Overhead | Amir Zandieh et.al. | 2406.03482 | link |
2024-07-11 | Bifurcated Attention: Accelerating Massively Parallel Decoding with Shared Prefixes in LLMs | Ben Athiwaratkun et.al. | 2403.08845 | null |
2024-07-09 | Mooncake: A KVCache-centric Disaggregated Architecture for LLM Serving | Ruoyu Qin et.al. | 2407.00079 | link |
2024-06-30 | Cost-Efficient Large Language Model Serving for Multi-turn Conversations with CachedAttention | Bin Gao et.al. | 2403.19708 | null |
2024-06-28 | InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management | Wonbeom Lee et.al. | 2406.19707 | null |
2024-06-19 | VELO: A Vector Database-Assisted Cloud-Edge Collaborative LLM QoS Optimization Framework | Zhi Yao et.al. | 2406.13399 | null |
2024-06-16 | EE-LLM: Large-Scale Training and Inference of Early-Exit Large Language Models with 3D Parallelism | Yanxi Chen et.al. | 2312.04916 | link |
2024-06-08 | QCQA: Quality and Capacity-aware grouped Query Attention | Vinay Joshi et.al. | 2406.10247 | null |
2024-06-06 | SGLang: Efficient Execution of Structured Language Model Programs | Lianmin Zheng et.al. | 2312.07104 | link |
2024-05-31 | Cached Model-as-a-Resource: Provisioning Large Language Model Agents for Edge Intelligence in Space-air-ground Integrated Networks | Minrui Xu et.al. | 2403.05826 | null |
2024-05-13 | Hydragen: High-Throughput LLM Inference with Shared Prefixes | Jordan Juravsky et.al. | 2402.05099 | link |
2024-04-15 | Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models | Siyan Zhao et.al. | 2404.09529 | link |
2024-03-26 | ALISA: Accelerating Large Language Model Inference via Sparsity-Aware KV Caching | Youpeng Zhao et.al. | 2403.17312 | null |
2024-03-18 | FastDecode: High-Throughput GPU-Efficient LLM Serving using Heterogeneous Pipelines | Jiaao He et.al. | 2403.11421 | null |
2024-03-12 | GPT-4V(ision) is a Generalist Web Agent, if Grounded | Boyuan Zheng et.al. | 2401.01614 | link |
2024-03-11 | Large Language Models as Tool Makers | Tianle Cai et.al. | 2305.17126 | link |
2024-02-16 | When Large Language Model Agents Meet 6G Networks: Perception, Grounding, and Alignment | Minrui Xu et.al. | 2401.07764 | null |
2024-02-04 | LLM-Enhanced Data Management | Xuanhe Zhou et.al. | 2402.02643 | link |
2024-01-16 | GMLake: Efficient and Transparent GPU Memory Defragmentation for Large-scale DNN Training with Virtual Memory Stitching | Cong Guo et.al. | 2401.08156 | link |
2024-01-16 | LLMs for Test Input Generation for Semantic Caches | Zafaryab Rasool et.al. | 2401.08138 | null |
2023-06-09 | S$^{3}$: Increasing GPU Utilization during Generative Inference for Higher Throughput | Yunho Jin et.al. | 2306.06000 | null |