• gc_batch_size: The batch size used for removing expired sessions during garbage collection. This defaults to 25, which is the maximum size of a single BatchWriteItem operation. This value should also take your provisioned throughput into account, as well as the timing of your garbage collection.
• gc_operation_delay: The delay (in seconds) between service operations performed during garbage collection. This defaults to 0. Increasing this value allows you to throttle your own requests in an attempt to stay within your provisioned throughput capacity during garbage collection.
• max_lock_wait_time: Maximum time (in seconds) that the session handler should wait to acquire a lock before giving up. This defaults to 10 and is only used with the PessimisticLockingStrategy.
• min_lock_retry_microtime: Minimum time (in microseconds) that the session handler should wait between attempts to acquire a lock. This defaults to 10000 and is only used with the PessimisticLockingStrategy.
• max_lock_retry_microtime: Maximum time (in microseconds) that the session handler should wait between attempts to acquire a lock. This defaults to 50000 and is only used with the PessimisticLockingStrategy.
• dynamodb_client: The DynamoDbClient object that should be used for performing DynamoDB operations. If you register the session handler from a client object using the registerSessionHandler() method, this defaults to the client you are registering it from. If you use the SessionHandler::factory() method, you are required to provide an instance of DynamoDbClient (see the sketch following this list).
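If you need to construct the handler yourself rather than registering it from a client, the following sketch shows the SessionHandler::factory() form with an explicit DynamoDbClient. It assumes the SDK's 2.x-style factory() methods; the credentials and region values are placeholders.
require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;
use Aws\DynamoDb\Session\SessionHandler;

// Create the client explicitly (placeholder credentials and region).
$dynamoDb = DynamoDbClient::factory(array(
    'key'    => 'YOUR_AWS_ACCESS_KEY_ID',
    'secret' => 'YOUR_AWS_SECRET_KEY',
    'region' => 'us-east-1',
));

// When using SessionHandler::factory(), the client must be provided.
$sessionHandler = SessionHandler::factory(array(
    'dynamodb_client' => $dynamoDb,
    'table_name'      => 'sessions',
));

// Attach the handler to PHP's session system.
$sessionHandler->register();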
To configure the Session Handler, you must specify the configuration options when you instantiate the handler. The following code is an example with many of the configuration options specified.
$sessionHandler = $dynamoDb->registerSessionHandler(array(
    'table_name'               => 'sessions',
    'hash_key'                 => 'id',
    'session_lifetime'         => 3600,
    'consistent_read'          => true,
    'locking_strategy'         => null,
    'automatic_gc'             => 0,
    'gc_batch_size'            => 50,
    'max_lock_wait_time'       => 15,
    'min_lock_retry_microtime' => 5000,
    'max_lock_retry_microtime' => 50000,
));
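Once registered, the handler plugs into PHP's normal session workflow, so the rest of your application code does not change. A brief sketch (the session key used here is illustrative):
// Reads the session item from DynamoDB (or starts a new session).
session_start();

// Work with $_SESSION as usual.
$_SESSION['visits'] = isset($_SESSION['visits']) ? $_SESSION['visits'] + 1 : 1;

// Writes the session item back to DynamoDB and releases any lock
// held by a locking strategy.
session_write_close();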
Pricing
Aside from data storage and data transfer fees, the costs associated with using Amazon DynamoDB are calculated
based on the provisioned throughput capacity of your table (see the Amazon DynamoDB pricing details).
Throughput is measured in units of Write Capacity and Read Capacity. The Amazon DynamoDB homepage says:
A unit of Write Capacity enables you to perform one write per second for items of up to 1KB in size. Similarly, a
unit of Read Capacity enables you to perform one strongly consistent read per second (or two eventually
consistent reads per second) of items of up to 1KB in size. Larger items will require more capacity. You can
calculate the number of units of read and write capacity you need by estimating the number of reads or writes
you need to do per second and multiplying by the size of your items (rounded up to the nearest KB).
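As a hypothetical illustration of that calculation (the traffic and item-size figures below are invented, not recommendations):
// Assume 100 session reads/sec, 100 session writes/sec, and 2 KB session items.
$readsPerSecond  = 100;
$writesPerSecond = 100;
$itemSizeKb      = 2; // item size rounded up to the nearest KB

// 1 write capacity unit = 1 write/sec of an item up to 1 KB.
$writeCapacityUnits = $writesPerSecond * $itemSizeKb; // 200

// 1 read capacity unit = 1 strongly consistent read/sec of an item up to 1 KB,
// or 2 eventually consistent reads/sec.
$stronglyConsistentReadUnits   = $readsPerSecond * $itemSizeKb;       // 200
$eventuallyConsistentReadUnits = ($readsPerSecond * $itemSizeKb) / 2; // 100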
Ultimately, the throughput and costs required for your sessions table will correlate with your expected traffic and session size. The following table explains the number of read and write operations that are performed on your DynamoDB table for each of the session functions.
Read via session_start() (using NullLockingStrategy)
• 1 read operation (only 0.5 if consistent_read is false).
• (Conditional) 1 write operation to delete the session if it is expired.