Rini Vasan

AI Product Marketing Manager

Blog Posts

  • Fast internet search for agents with Redis & Tavily
    Tyler Hutcherson
    Noah Nefsky
    Sofia Guzowski
    Tech
    Sep 12, 2025
  • Vector Similarity: Why It Matters for Real-World AI Applications
    Rini Vasan
    Tech
    Aug 11, 2025
  • It’s official: We’re the #1 AI agent data storage tool
    Rini Vasan
    Tech
    Aug 08, 2025
  • Get faster LLM inference and cheaper responses with LMCache and Redis
    Rini Vasan
    Yihua Cheng
    Tech
    Jul 28, 2025
  • Build faster AI memory with Cognee & Redis
    Rini Vasan
    Hande Kafkas
    Tech
    Jul 08, 2025
  • LLM Chunking: How to Improve Retrieval & Accuracy at Scale
    Rini Vasan
    Tech
    Jun 20, 2025
  • Scale your LLM gateway with LiteLLM & Redis
    Rini Vasan
    Tech
    Jun 12, 2025
  • Faster AI workflows with Unstructured & Redis
    Rini Vasan
    Maria Khalusova
    Tech
    Apr 22, 2025
  • From zero to RAG: Building your first RAG pipeline with RedisVL
    Rini Vasan
    How To and Tutorials
    Feb 03, 2025