How Memcached Works

This article discusses Memcached internals: memory allocation, slab classes, memory waste, clustering, and item size limits.

What is Memcached?

From memcached.org:

Memcached is Free & open source, high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load. Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering.

How Memcached Works

  • Memcached is given an amount of memory for storing data items via the -m option. In ElastiCache this is configured with the max_cache_memory parameter in the parameter group; the value of max_cache_memory cannot be changed. This memory is not requested from the operating system up front when Memcached starts, but is allocated as needed.
  • Memcached splits this memory into small, fixed-size parts called pages. A page in Memcached has a fixed size of 1 MB. (source code reference)
  • Memcached uses the concept of slabs to manage memory. On startup, Memcached defines a set of slab classes.
  • Each slab class has its own chunk size. The chunk size of each slab class is determined by the following ElastiCache parameters:
    • chunk_size (default: 48 bytes)
    • chunk_size_growth_factor (default: 1.25)
    • slab_chunk_max (default: 524288 bytes)
  • When an item is stored, Memcached finds the slab class whose chunk size fits the key-value data plus its additional metadata. If the data does not fit in that slab class's chunk, the next larger slab class is used (see the sketch after this list).
  • If a slab class has no free chunks left on its pages, Memcached allocates another page and assigns it to the same slab class.
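
As a rough illustration of the chunk-selection step, the Python sketch below picks the smallest slab class whose chunk can hold a key, a value, and roughly 48 bytes of per-item metadata. The chunk sizes and the 48-byte overhead are the defaults shown in the table later in this article; the function is an approximation for illustration, not Memcached's actual code.

```python
# Minimal sketch (not memcached's actual code): pick the smallest slab class
# whose chunk size can hold key + value + ~48 bytes of item metadata.
# Chunk sizes are the first few default classes from the table further below.

CHUNK_SIZES = [96, 120, 152, 192, 240, 304, 384, 480, 600, 752, 944, 1184]
ITEM_OVERHEAD = 48  # approximate per-item metadata

def pick_slab_class(key: bytes, value: bytes) -> int:
    needed = len(key) + len(value) + ITEM_OVERHEAD
    for class_id, chunk in enumerate(CHUNK_SIZES, start=1):
        if needed <= chunk:
            return class_id
    raise ValueError("item larger than the largest class in this sketch")

# A 10-byte key with a 100-byte value needs ~158 bytes, so it lands in
# class 4 (192-byte chunks), leaving 192 - 158 = 34 bytes unused in the chunk.
print(pick_slab_class(b"user:42:pf", b"x" * 100))  # -> 4
```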

Memory Waste in Memcached

  • There are two sources of potential memory waste in Memcached (the calculation after this list illustrates both):
    • Once a chunk is used to store an item, the remaining free space in that chunk cannot be used to store another item or part of another item.
    • Since Memcached allocates memory one page (1 MB) at a time, a whole page is dedicated to a specific slab class even if only one chunk on that page is in use.
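
The numbers below give a back-of-the-envelope feel for both effects, using the default 752-byte chunk of slab class 10 and an assumed item that needs about 600 bytes in total; they are illustrative, not measured.

```python
# Back-of-the-envelope waste calculation (illustrative numbers, not measured).
PAGE = 1024 * 1024
chunk = 752            # slab class 10 chunk size under default settings
item_total = 600       # assumed key + value + metadata for one item

per_chunk_waste = chunk - item_total               # 152 bytes unusable in every such chunk
chunks_per_page = PAGE // chunk                    # 1394 chunks fit in one page
page_tail_waste = PAGE - chunks_per_page * chunk   # 288 leftover bytes at the end of the page

print(per_chunk_waste, chunks_per_page, page_tail_waste)

# A single item on an otherwise empty page still reserves the whole 1 MB
# for this slab class, so other classes cannot use that memory:
print(PAGE - item_total)
```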

Clustering in memcached

  • A Memcached cluster is just a group of nodes.
  • There is no communication or replication between nodes.
  • There are no master nodes or slave/replica nodes.
  • The servers rely on the client (the application using Memcached) to hash each key and determine which node in the cluster holds the value it needs (see the sketch after this list).
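
A minimal sketch of what a client library does is shown below. The node names are hypothetical, and real clients are more sophisticated (for example, ketama-style consistent hashing remaps only a fraction of keys when a node is added or removed); the point is only that node selection happens on the client side.

```python
# Minimal sketch of client-side node selection with simple modulo hashing.
# Real memcached clients usually use consistent hashing instead.
import hashlib

NODES = ["cache-node-1:11211", "cache-node-2:11211", "cache-node-3:11211"]  # hypothetical

def node_for_key(key: str) -> str:
    digest = hashlib.md5(key.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(NODES)
    return NODES[index]

print(node_for_key("user:42:profile"))  # the same key always maps to the same node
```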

Limits

  • Max key size is 250 bytes (source code reference)
  • Max item size is 1 MB by default (a simple client-side check is sketched below)
    • 1 MB (from the start)
    • 128 MB (since version ?)
    • 1 GB (since version ?)
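
If it helps, here is a small, hypothetical client-side guard that reflects these limits; the constants come from the bullets above, and the 1 MB value assumes the default item size limit.

```python
# Hypothetical client-side guard for the limits above (not part of any library).
MAX_KEY_BYTES = 250
MAX_ITEM_BYTES = 1024 * 1024  # default; larger values are configurable on newer versions

def validate(key: bytes, value: bytes) -> None:
    if len(key) > MAX_KEY_BYTES:
        raise ValueError(f"key is {len(key)} bytes; memcached allows at most {MAX_KEY_BYTES}")
    if len(value) > MAX_ITEM_BYTES:
        raise ValueError(f"value is {len(value)} bytes; default max item size is 1 MB")

validate(b"session:abc123", b"x" * 1000)  # passes silently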

Memcached Slab Classes

The table below lists the Memcached slab classes and their chunk sizes under the default settings (default parameter group); a short script that reproduces these sizes follows the table.

Slab Class | Chunk Size (bytes) | Chunks per Page | Usable Space per Chunk (bytes)
1  | 96     | 10922 | 48
2  | 120    | 8738  | 72
3  | 152    | 6898  | 104
4  | 192    | 5461  | 144
5  | 240    | 4369  | 192
6  | 304    | 3449  | 256
7  | 384    | 2730  | 336
8  | 480    | 2184  | 432
9  | 600    | 1747  | 552
10 | 752    | 1394  | 704
11 | 944    | 1110  | 896
12 | 1184   | 885   | 1136
13 | 1480   | 708   | 1432
14 | 1856   | 564   | 1808
15 | 2320   | 451   | 2272
16 | 2904   | 361   | 2856
17 | 3632   | 288   | 3584
18 | 4544   | 230   | 4496
19 | 5680   | 184   | 5632
20 | 7104   | 147   | 7056
21 | 8880   | 118   | 8832
22 | 11104  | 94    | 11056
23 | 13880  | 75    | 13832
24 | 17352  | 60    | 17304
25 | 21696  | 48    | 21648
26 | 27120  | 38    | 27072
27 | 33904  | 30    | 33856
28 | 42384  | 24    | 42336
29 | 52984  | 19    | 52936
30 | 66232  | 15    | 66184
31 | 82792  | 12    | 82744
32 | 103496 | 10    | 103448
33 | 129376 | 8     | 129328
34 | 161720 | 6     | 161672
35 | 202152 | 5     | 202104
36 | 252696 | 4     | 252648
37 | 315872 | 3     | 315824
38 | 394840 | 2     | 394792
39 | 524288 | 2     | 524240

Usable space per chunk = Chunk Size - 48 bytes of per-item metadata.
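
For reference, the short script below approximates how these chunk sizes are derived: start at chunk_size plus roughly 48 bytes of item metadata, multiply by chunk_size_growth_factor, round up to an 8-byte boundary, and pin the last class to slab_chunk_max. It is a simplification of Memcached's slab initialization, not the actual code, but with the default parameters it reproduces the table above.

```python
# Simplified derivation of the default slab class chunk sizes.
# The 48-byte item header is an assumption that matches the table above.

CHUNK_ALIGN = 8          # chunks are aligned to 8-byte boundaries
ITEM_HEADER = 48         # assumed per-item metadata overhead
PAGE_SIZE = 1024 * 1024  # one slab page is 1 MB

def slab_chunk_sizes(chunk_size=48, growth_factor=1.25, slab_chunk_max=524288):
    size = ITEM_HEADER + chunk_size            # smallest chunk: metadata + chunk_size
    sizes = []
    while size <= slab_chunk_max / growth_factor:
        if size % CHUNK_ALIGN:                 # round up to an 8-byte boundary
            size += CHUNK_ALIGN - (size % CHUNK_ALIGN)
        sizes.append(size)
        size = int(size * growth_factor)
    sizes.append(slab_chunk_max)               # the largest class is pinned to slab_chunk_max
    return sizes

# Print class id, chunk size, chunks per page, and usable space per chunk.
for class_id, size in enumerate(slab_chunk_sizes(), start=1):
    print(f"{class_id:2d}  {size:6d}  {PAGE_SIZE // size:5d}  {size - ITEM_HEADER:6d}")
```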

Large Item Size in Memcached

  • Before version 1.4.29, the max-item-size parameter was tied to the largest chunk size, so an item had to fit in a single chunk; newer versions can split a large item across multiple chunks.

Benefits of Upgrading to Memcached 1.5.10

  • Cumulative fixes, such as the ASCII multiget fix (CVE-2017-9951, fixed in Memcached 1.4.39) and limiting crawls for the metadumper.
  • Better connection management by closing connections at the connection limit.
  • Improved memory management for items larger than the largest chunk size.
    • Before version 1.5.0, an item larger than the largest chunk size is stored entirely in chunks of the largest slab class (around 512 KB each). For example, storing a 700 KB item uses two chunks in slab class 39 (chunk size 524288); since only about 700 KB of those two chunks is used, roughly 300 KB of memory is wasted.
    • Starting with version 1.5.0, Memcached can use chunks from multiple slab classes for a single item, which minimizes this waste (see the rough calculation after this list).
  • Better performance and lower memory overhead by reducing per-item memory requirements by a few bytes.
    • In version 1.4.39 (Release Notes): saves four bytes per item if the client flags are set to 0.
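
As a rough illustration of the large-item change described above, the arithmetic below compares the two behaviours for the 700 KB example; the pre-1.5.0 path pads the item out to whole 512 KB chunks, while the 1.5.0+ path (simplified here) lets the final partial chunk come from a smaller slab class.

```python
# Rough arithmetic for the 700 KB example above (illustrative only; ignores item headers).
KB = 1024
LARGEST_CHUNK = 524288          # slab class 39 chunk size
item = 700 * KB                 # 716800 bytes, larger than one chunk

# Pre-1.5.0 behaviour: every chunk of the large item comes from the largest class.
chunks_needed = -(-item // LARGEST_CHUNK)       # ceiling division -> 2 chunks
allocated_old = chunks_needed * LARGEST_CHUNK   # 1048576 bytes
print((allocated_old - item) // KB)             # ~324 KB wasted

# 1.5.0+ behaviour (simplified): the final partial chunk can come from a
# smaller slab class that is just big enough for the remainder.
remainder = item - LARGEST_CHUNK                # 192512 bytes left after one full chunk
tail_chunk = 202152                             # smallest default class >= remainder
allocated_new = LARGEST_CHUNK + tail_chunk
print((allocated_new - item) // KB)             # ~9 KB wasted
```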