Robert Shelton

AI Engineer at Redis

Blog Posts

  • Retrieval optimizer: Custom data
    Robert Shelton · Tech · Jul 21, 2025
  • Retrieval optimizer: Bayesian optimization
    Robert Shelton · Tech · Jul 21, 2025
  • Retrieval optimizer: Grid search
    Robert Shelton · Tech · Jul 21, 2025
  • What’s the best embedding model for semantic caching?
    Robert Shelton · Tech · Feb 04, 2025
  • Level up RAG apps with Redis Vector Library
    Robert Shelton, Justin Cechmanek · Tech · Oct 09, 2024
  • Get better RAG responses with Ragas
    Robert Shelton · Tech · Sep 26, 2024