Redis cache store.
Deployment note: Take care to use a *dedicated Redis cache* rather than pointing this at your existing Redis server. It won't cope well with mixed usage patterns and it won't expire cache entries by default.
Redis cache server setup guide: redis.io/topics/lru-cache
- Supports vanilla Redis, hiredis, and Redis::Distributed.
- Supports Memcached-like sharding across Redises with Redis::Distributed.
- Fault tolerant. If the Redis server is unavailable, no exceptions are raised. Cache fetches are all misses and writes are dropped.
- Local cache. Hot in-memory primary cache within block/middleware scope.
- read_multi and write_multi support for Redis mget/mset. Use Redis::Distributed 4.0.1+ for distributed mget support.
- delete_matched support for Redis KEYS globs.
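For illustration, a minimal standalone usage sketch. It assumes the redis gem is installed, and the URL and namespace are placeholders pointing at a dedicated Redis cache instance:
require "active_support"
require "active_support/cache"

# URL and namespace are placeholders; use a dedicated Redis cache instance.
cache = ActiveSupport::Cache::RedisCacheStore.new(
  url: "redis://localhost:6379/0",
  namespace: "myapp-cache"
)

cache.write("greeting", "hello")
cache.fetch("answer") { 42 }              # computes and stores 42 on a miss
cache.read_multi("greeting", "answer")    # => { "greeting" => "hello", "answer" => 42 }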
Methods
- cleanup
- clear
- decrement
- delete_matched
- increment
- inspect
- new
- read_multi
- redis
- supports_cache_versioning?
Constants
DEFAULT_ERROR_HANDLER = -> (method:, returning:, exception:) do
  if logger
    logger.error { "RedisCacheStore: #{method} failed, returned #{returning.inspect}: #{exception.class}: #{exception.message}" }
  end
end
DEFAULT_REDIS_OPTIONS = { connect_timeout: 20, read_timeout: 1, write_timeout: 1, reconnect_attempts: 0 }
MAX_KEY_BYTESIZE = 1024
Keys are truncated with their own SHA2 digest if they exceed 1kB.
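For illustration, a custom error handler with the same keyword signature can be supplied via the error_handler: option when constructing the store. This is a sketch: the URL is a placeholder and MyErrorTracker is a hypothetical class, not part of this API:
# Sketch: forward cache failures to an error tracker instead of only logging.
# MyErrorTracker is a hypothetical class used for illustration.
report_cache_errors = ->(method:, returning:, exception:) do
  MyErrorTracker.notify(exception, cache_method: method, returning: returning)
end

cache = ActiveSupport::Cache::RedisCacheStore.new(
  url: "redis://localhost:6379/0",   # placeholder URL
  error_handler: report_cache_errors
)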
Attributes
[R] max_key_bytesize
[R] redis_options
Class Public methods
new(namespace: nil, compress: true, compress_threshold: 1.kilobyte, coder: DEFAULT_CODER, expires_in: nil, race_condition_ttl: nil, error_handler: DEFAULT_ERROR_HANDLER, **redis_options)
Creates a new Redis cache store.
Handles four options: :redis block, :redis instance, single :url string, and multiple :url strings.
Option Class Result
:redis Proc -> options[:redis].call
:redis Object -> options[:redis]
:url String -> Redis.new(url: …)
:url Array -> Redis::Distributed.new([{ url: … }, { url: … }, …])
No namespace is set by default. Provide one if the Redis cache server is shared with other apps: namespace: 'myapp-cache'.
Compression is enabled by default with a 1kB threshold, so cached values larger than 1kB are automatically compressed. Disable by passing compress: false, or change the threshold by passing compress_threshold: 4.kilobytes.
No expiry is set on cache entries by default. Redis is expected to be configured with an eviction policy that automatically deletes least-recently or -frequently used keys when it reaches max memory. See redis.io/topics/lru-cache for cache server setup.
Race condition TTL is not set by default. This can be used to avoid “thundering herd” cache writes when hot cache entries are expired. See ActiveSupport::Cache::Store#fetch for more.
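In a Rails application these options are typically set through config.cache_store; a sketch, with all values illustrative:
# config/environments/production.rb (values are illustrative)
config.cache_store = :redis_cache_store, {
  url: ENV["REDIS_CACHE_URL"],        # e.g. "redis://cache.example.com:6379/0"
  namespace: "myapp-cache",
  expires_in: 1.hour,
  race_condition_ttl: 5.seconds,
  compress_threshold: 4.kilobytes
}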
📝 Source code
# File activesupport/lib/active_support/cache/redis_cache_store.rb, line 172
def initialize(namespace: nil, compress: true, compress_threshold: 1.kilobyte, coder: DEFAULT_CODER, expires_in: nil, race_condition_ttl: nil, error_handler: DEFAULT_ERROR_HANDLER, **redis_options)
@redis_options = redis_options
@max_key_bytesize = MAX_KEY_BYTESIZE
@error_handler = error_handler
super namespace: namespace,
compress: compress, compress_threshold: compress_threshold,
expires_in: expires_in, race_condition_ttl: race_condition_ttl,
coder: coder
end
supports_cache_versioning?()
Advertise cache versioning support.
📝 Source code
# File activesupport/lib/active_support/cache/redis_cache_store.rb, line 70
def self.supports_cache_versioning?
true
end
Instance Public methods
cleanup(options = nil)
Cache Store API implementation.
Removes expired entries. Handled natively by Redis least-recently-/least-frequently-used expiry, so manual cleanup is not supported.
📝 Source code
# File activesupport/lib/active_support/cache/redis_cache_store.rb, line 304
def cleanup(options = nil)
super
end
clear(options = nil)
Clear the entire cache on all Redis servers. Safe to use on shared servers if the cache is namespaced.
Failsafe: Raises errors.
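For example, a namespaced store clears only its own keys rather than flushing the whole database (URL and namespace are illustrative):
cache = ActiveSupport::Cache::RedisCacheStore.new(
  url: "redis://localhost:6379/0",   # placeholder URL
  namespace: "myapp-cache"
)

# Deletes keys matching "myapp-cache:*" via delete_matched rather than FLUSHDB,
# so other applications sharing the server are unaffected.
cache.clear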
📝 Source code
# File activesupport/lib/active_support/cache/redis_cache_store.rb, line 312
def clear(options = nil)
failsafe :clear do
if namespace = merged_options(options)[:namespace]
delete_matched "*", namespace: namespace
else
redis.with { |c| c.flushdb }
end
end
end
decrement(name, amount = 1, options = nil)
Cache Store API implementation.
Decrement a cached value. This method uses the Redis decr atomic operator and can only be used on values written with the :raw option. Calling it on a value not stored with :raw will initialize that value to zero.
Failsafe: Raises errors.
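A counter sketch covering both decrement and increment; the key name is illustrative and cache is a RedisCacheStore instance:
# Counters must be written with raw: true so Redis incrby/decrby can operate on them.
cache.write("page_views", 0, raw: true)
cache.increment("page_views")      # => 1
cache.increment("page_views", 5)   # => 6
cache.decrement("page_views", 2)   # => 4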
📝 Source code
# File activesupport/lib/active_support/cache/redis_cache_store.rb, line 285
def decrement(name, amount = 1, options = nil)
instrument :decrement, name, amount: amount do
failsafe :decrement do
options = merged_options(options)
key = normalize_key(name, options)
redis.with do |c|
c.decrby(key, amount).tap do
write_key_expiry(c, key, options)
end
end
end
end
end
delete_matched(matcher, options = nil)
Cache Store API implementation.
Supports Redis KEYS glob patterns:
h?llo matches hello, hallo and hxllo
h*llo matches hllo and heeeello
h[ae]llo matches hello and hallo, but not hillo
h[^e]llo matches hallo, hbllo, ... but not hello
h[a-b]llo matches hallo and hbllo
Use \ to escape special characters if you want to match them verbatim.
See redis.io/commands/KEYS for more.
Failsafe: Raises errors.
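For example, with an illustrative key pattern:
# Deletes every cached entry whose key starts with "views/posts/".
# The argument is a Redis KEYS glob, not a regular expression.
cache.delete_matched("views/posts/*")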
📝 Source code
# File activesupport/lib/active_support/cache/redis_cache_store.rb, line 233
def delete_matched(matcher, options = nil)
instrument :delete_matched, matcher do
unless String === matcher
raise ArgumentError, "Only Redis glob strings are supported: #{matcher.inspect}"
end
redis.with do |c|
pattern = namespace_key(matcher, options)
cursor = "0"
# Fetch keys in batches using SCAN to avoid blocking the Redis server.
nodes = c.respond_to?(:nodes) ? c.nodes : [c]
nodes.each do |node|
begin
cursor, keys = node.scan(cursor, match: pattern, count: SCAN_BATCH_SIZE)
node.del(*keys) unless keys.empty?
end until cursor == "0"
end
end
end
end
increment(name, amount = 1, options = nil)
Cache Store API implementation.
Increment a cached value. This method uses the Redis incr atomic operator and can only be used on values written with the :raw option. Calling it on a value not stored with :raw will initialize that value to zero.
Failsafe: Raises errors.
📝 Source code
# File activesupport/lib/active_support/cache/redis_cache_store.rb, line 262
def increment(name, amount = 1, options = nil)
instrument :increment, name, amount: amount do
failsafe :increment do
options = merged_options(options)
key = normalize_key(name, options)
redis.with do |c|
c.incrby(key, amount).tap do
write_key_expiry(c, key, options)
end
end
end
end
end
inspect()
📝 Source code
# File activesupport/lib/active_support/cache/redis_cache_store.rb, line 197
def inspect
instance = @redis || @redis_options
"#<#{self.class} options=#{options.inspect} redis=#{instance.inspect}>"
end
read_multi(*names)
Cache Store API implementation.
Read multiple values at once. Returns a hash of requested keys -> fetched values.
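For example, with illustrative keys; missing keys are simply absent from the returned hash:
cache.write("color", "red")
cache.write("shape", "circle")

cache.read_multi("color", "shape", "size")
# => { "color" => "red", "shape" => "circle" }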
📝 Source code
# File activesupport/lib/active_support/cache/redis_cache_store.rb, line 206
def read_multi(*names)
if mget_capable?
instrument(:read_multi, names, options) do |payload|
read_multi_mget(*names).tap do |results|
payload[:hits] = results.keys
end
end
else
super
end
end
redis()
📝 Source code
# File activesupport/lib/active_support/cache/redis_cache_store.rb, line 184
def redis
@redis ||= begin
pool_options = self.class.send(:retrieve_pool_options, redis_options)
if pool_options.any?
self.class.send(:ensure_connection_pool_added!)
::ConnectionPool.new(pool_options) { self.class.build_redis(**redis_options) }
else
self.class.build_redis(**redis_options)
end
end
end