Feature #272

tpcall cache

Added by Madars almost 4 years ago. Updated over 3 years ago.

Status: Closed
Start date: 03/25/2018
Priority: Normal
Due date: -
Assignee: -
% Done: -
Target version: -


We could use LMDB to implement a cache for service calls. The cache lookup would happen before the tpcall() entry. The cache DB would contain the following data:
1) service name,
2) key (e.g. a format string over UBF fields),
3) partial data to cache (list of UBF fields to cache).

Thus if the result is found in cache, we return it from the cache. The cache settings will be stored in the common-config ini file. We shall support different kinds of buffers. For UBF we operate with fields (cache storage options: full buffer, or a list of fields; in case of a list of fields, we merge the cached fields into the request buffer). For other buffer types the key could be a regular expression or byte positions, and the storage either the full message or the specified positions.

There could be an API function like tpcachectl(char *db, char *data, long flags); cache driving is handled by the @TPCACHENNN service (where NNN is the node id). The server could receive an ATMI buffer (data) and a service name (some field from Exfields), and we need a command code for delete. If, for example, the data is empty for the delete command, then we shall delete all data from the DB. Say the binary is named "tpcachesv"; we can run it in multiple copies. tpcachesv shall subscribe to the "@CACHEUPDATE" event, process the incoming buffer and store it locally.

The cache rules are processed by cache daemon processes, run by CPMSRV. The process name is "tpcached".

expiry -> drop the cache record after X milliseconds.

# List of caches to process by daemon (i.e. timeout)
# Cache files are defined by sub-sections

# Simple cache

# Time limited cache (milliseconds)

# Limited slots cache
# Optional:
# Reset cache file at boot.
# Notify nodes
# Send nodes 1,4,24 event updates that we have cache changes.

# lru, hits, fifo -> require a limit. lru, hits and fifo are exclusive; only one should be present.

# We shall support caches by process sub-sections, so that different caches can be used by different cctags.
# If rsprule is not present -> save approved responses only. If present, then evaluate the expression:
                    "rsprule":"EX_TPERRNO==11 && EX_TPURCODE==5"
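Putting the scattered comments above together, a hypothetical common-config fragment could look like the sketch below. Section names, parameter names and values are illustrative assumptions only, not the final syntax:

```ini
[@cachedb/db01]
# LMDB file location for this cache database
resource=/opt/app/db/db01
# drop records after 30000 ms (expiry)
expiry=30000
# limited slots cache; lru/hits/fifo require this, and are exclusive
limit=1000
lru=y
# reset cache file at boot
bootreset=y
# notify other nodes of cache changes
broadcast=y

[@cache/SOMESVC]
cache={"keyfmt":"$(T_STRING_FLD)","save":"T_STRING_2_FLD,T_LONG_FLD","cachedb":"db01","rsprule":"EX_TPERRNO==11 && EX_TPURCODE==5"}
```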

We also need to think about which process fills up the cache. I would say this could be the client process. So firstly, if TPNOCACHE is not set, it checks the cache (if initialized); if the record is not found, it calls the server, and if the answer succeeds, it sets up the cache record. We might set it up twice if concurrent calls are running, but I guess this is not a problem.
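The client-side flow above (check cache, on miss call the service, store the successful answer) can be sketched as standalone C. The in-memory cache, `call_service()` and the local `TPNOCACHE` constant are stand-ins for the real LMDB storage, tpcall() and ATMI flag, purely for illustration:

```c
#include <stdio.h>
#include <string.h>

/* Minimal in-memory stand-in for the LMDB-backed cache. */
#define MAX_REC 16
struct rec { char key[64]; char val[64]; };
static struct rec cache[MAX_REC];
static int nrec = 0;

static const char *cache_get(const char *key)
{
    for (int i = 0; i < nrec; i++)
        if (0 == strcmp(cache[i].key, key))
            return cache[i].val;
    return NULL;
}

static void cache_put(const char *key, const char *val)
{
    /* upsert: overwrite existing record (see comment #2) */
    for (int i = 0; i < nrec; i++)
        if (0 == strcmp(cache[i].key, key)) {
            snprintf(cache[i].val, sizeof cache[i].val, "%s", val);
            return;
        }
    if (nrec < MAX_REC) {
        snprintf(cache[nrec].key, sizeof cache[nrec].key, "%s", key);
        snprintf(cache[nrec].val, sizeof cache[nrec].val, "%s", val);
        nrec++;
    }
}

/* Stand-in for the real service invocation via tpcall(). */
static int svc_calls = 0;
static const char *call_service(const char *key)
{
    (void)key;
    svc_calls++;
    return "RESPONSE";
}

/* Local stand-in for the ATMI TPNOCACHE flag value (illustrative). */
#define TPNOCACHE 0x1

/* The flow from the description: check cache first (unless TPNOCACHE),
 * on miss call the service and cache the successful answer. */
const char *cached_call(const char *key, long flags)
{
    if (!(flags & TPNOCACHE)) {
        const char *hit = cache_get(key);
        if (hit)
            return hit;
    }
    const char *rsp = call_service(key);
    if (rsp) /* only successful answers are cached */
        cache_put(key, rsp);
    return rsp;
}
```

Note that two concurrent misses would both call the service and both upsert the record, which is exactly the harmless double-store mentioned above.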

The xadmin command line tool shall allow dropping the cache. The command line interface could look like:

xadmin cachedel -b <db-name> [-k <key_string>] [-r]   (with -r, perform regexp match -> loop over the whole db)

On return we shall indicate the number of records deleted.

Another command is needed to show the cache:

xadmin cacheshow [-b <db name>] [-k <key_string>] [-l] [-d], where -d shows hex dumps of the records. If not set, only info about the number of cached records is shown. -l should list the records with statistics info (last usage date, fifo order).


Related: Feature #292: Cache refresh schedule (status: New)


#1 Updated by Madars almost 4 years ago

  • Description updated (diff)

#2 Updated by Madars almost 4 years ago

- Locally insert/delete. The inserter will overwrite existing rec (upsert)
- Message is broadcasted to event server (broadcast=y)
- Events are following: @CACHENNN_INSERT, @CACHENNN_DELETE
- Optionally we can configure from which nodes to consume events like: (@CACHE011_.*|@CACHE010_INSERT) (this would accept from node 11 any event, from node 10 only insert)
- Cache admin server advertises the corresponding insert/delete services
- Event server shall perform only one tpcall per unique service name

#3 Updated by Madars almost 4 years ago

Cache events shall be:

@CP/.*/.* -> Listen on any PUT event
@C./.*/.* -> Listen on any event

@CP/1//TESTSVC -> just put, no flags
@CD/22/F/TESTSVC -> Full delete

So format is:

@C<P|D>/<Publisher Node Id>/<Flags>/<Service Name of Cache>

#4 Updated by Madars almost 4 years ago

For split-brain issues in cluster mode, we could allow saving multiple keys. Then we try to fetch them all, and return the youngest result to the user. The caller could delete the older records.

This should be configurable at cachedb level, via some flag like "timesync".
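The youngest-record selection can be sketched as a simple comparison over the duplicate records; the `dup_rec` struct and field names are hypothetical stand-ins for whatever the cache record header would carry:

```c
#include <stddef.h>

/* Duplicate cache record for the same key, one per cluster node
 * (as may happen after a split brain). */
struct dup_rec { long tstamp_utc; int node_id; };

/* Return the index of the youngest record by UTC timestamp, or -1 if
 * there are no records; the caller could then delete the older ones. */
int pick_youngest(const struct dup_rec *recs, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (best < 0 || recs[i].tstamp_utc > recs[best].tstamp_utc)
            best = i;
    return best;
}
```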

#5 Updated by Madars almost 4 years ago

Cache invalidate strategies by call:

1. We need invalidate-cache settings. Flags would contain "inval". If set, then an optional invalsvc -> service cache to invalidate. The key would be the destination data key. We shall also specify invalidx -> index of the destination cache number.

2. We also need a "recache" expression. If the expression is true, the lookup shall ignore cached data and overwrite the cache data.

#6 Updated by Madars almost 4 years ago

tpcached -> shall provide the boot-reset function, and monitor the database rules (record expiry) at a given interval.

tpcachesv -> listens to the events subscribed to, performs cache updates and deletes. The service is "@TPCACHENNN". If the message is other than an event, then it is processed by command codes: we provide the backend for xadmin, where we delete a full db or just a key (tpcall), or provide the record listings in caches with an optional data buffer. Records are delivered to xadmin one by one in conversational mode. This way we do not need any additional API to manage cache records; the user calls it by itself to manage the cache.

#7 Updated by Madars over 3 years ago

If we trigger an invalidate of another service's cache, we might still want to continue with caching this record.

So we call one service, it resets the other service's cache, and also saves this service's value in its own cache. So if we trigger theirs, we then continue with our own next cache save. Whether to continue with the next cache or not could be an option.

#8 Updated by Madars over 3 years ago

  • Description updated (diff)

#9 Updated by Madars over 3 years ago

  • Description updated (diff)

#10 Updated by Madars over 3 years ago

Test scenarios, test048_cache

1. Simple cache: perform one call, get the tstamp. Perform another call; the timestamp shall be the same. Perform another call to get a second record into the cache. Use xadmin cs and xadmin cd to dump the results. Before the test the db shall be cleaned up with xadmin ci <dbname>. Delete records in db -> single key, regexp, full db (xadmin ce variations). Test the refresh rule (so that we call and the cached record is invalidated).
tmshutdown of the service shall cause no lookups in cache...

2. Two-database cache, operate with the second db. First call goes to cache. Shut down test48sv; the call shall fail (does not come from cache). After boot-up the service data shall come from cache (i.e. tpcached has not removed it).

3. Two-database cache, operate with the second db. First call goes to cache. Shut down test48sv; the call shall fail (does not come from cache). After boot-up the first call shall not come from cache, as tpcached has zapped it.

4. Simple cache, use the second database, save negative results. Positive results shall be ignored. The same timestamp-based test. The first DB shall be empty, the second db shall be filled.

5. Timed cache with expiry. Use a different service cache and a different database. Use the flag to drop the db at startup, by ndrxd. Records shall be limited to 10 seconds. Add a few recs, check they exist in cache with `cs', wait 10 sec, they should be removed from cache.

6. Hits tests. Limit the db to 100 recs. Use the loop counter as an integer key. In each loop cycle look up the record in the db as many times as the counter. Do the loop i < 150 times, then wait some time for tpcached to process the limits, and check with cs that the remaining keys are 50..149, as the others have fewer hits. However, as this happens in real time, we shall start with 150... down to 0 to gain the hits, so that an intermediate run of tpcached does not kill the results.

7. The fifo test.

8. Then the LRU test.

9. Then invalidate ours (refresh rule).

10. Then invalidate theirs.


Use different "saveproj" strategies.

Test broadcast functionality:

Test scenarios, test050_domcache - domain cache

Establish cluster:

1. Save data on node1; it should be replicated to node2.

2. Save data on node1; it should be replicated to node2. The data expires; the records shall be removed on both node1 and node2.

3. Save data on node1, delete by key; both nodes must have the data deleted.

#11 Updated by Madars over 3 years ago

For unit tests have a "callcache48" utility for cache test scripting, with the following usage:

./cachecall <SVCNM> <'{"C_STRING_FIELD":"HELLO"}'> <TSTAMP_FIELD> <got cached result: Y|N - validate>

So this will call the target server with a UBF buffer, check the timestamp in TSTAMP_FIELD, and validate whether the data is cached or not (i.e. if the stamp is less than the current tstamp, the record is cached). We may compare strings; just ensure that alphabetic compare is ok for numbers.
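The "alphabetic compare is ok for numbers" assumption holds only if the timestamp strings are zero-padded to a fixed width. A quick standalone check (the format width is an illustrative choice, not the actual field format):

```c
#include <stdio.h>
#include <string.h>

/* Format an epoch timestamp as a fixed-width, zero-padded string so that
 * strcmp() ordering matches numeric ordering (the property the cachecall
 * comparison relies on when treating TSTAMP_FIELD values as strings). */
void tstamp_fmt(char *buf, size_t sz, long long epoch_usec)
{
    snprintf(buf, sz, "%020lld", epoch_usec);
}
```

Without the padding, plain string compare breaks as soon as the number of digits changes (e.g. "999999" > "1000000" alphabetically).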

#12 Updated by Madars over 3 years ago

We shall remove the cache reset at boot from ndrxd.

Instead we shall add an application-specific option to tpcachesv: -m <master_server_id>. If at tpinit() tpcachesv sees that its server id is equal to this number, then it will perform the cache reset at startup (if the bootreset flag is present). We also need to think about doing this only from a cold start, i.e. all servers are down and then we start. A new env variable NDRX_FIRST=1 shall be exported; if not starting from cold, then NDRX_FIRST=0.

So if NDRX_FIRST==1 and -m == server_id, then reset the cache.
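The decision above is small enough to pin down as code; a sketch only, with a hypothetical function name (and note that comment #13 below later replaces this scheme with a dedicated tpcachebtsv binary and NDRX_FULLSTART):

```c
#include <stdlib.h>
#include <string.h>

/* Reset the cache only on a cold full start (NDRX_FIRST=1) and only in
 * the server instance whose id matches the configured -m <master_server_id>. */
int shall_bootreset(int srvid, int master_srvid)
{
    const char *first = getenv("NDRX_FIRST");
    return first && 0 == strcmp(first, "1") && srvid == master_srvid;
}
```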

#13 Updated by Madars over 3 years ago

Well, better a new binary:

-> tpcachebtsv -> tpcache boot service.

If env NDRX_FULLSTART==1, then reset the cache.

#14 Updated by Madars over 3 years ago

Cross-domain checks can be done in the following way:

DOM1 cache svc call, with TPNOCACHELOOK
sleep 1
DOM2 cache svc call, with TPNOCACHELOOK

Then look up the caches; on both machines we shall get the result from DOM2 as it is newer.
-> in the DB we shall get the DUP records.

Look up the cache records with standard cache lookup; the duplicates shall be killed (i.e. timesync is set).

-> then do the test with other DBs, "scandup" enabled, timesync not set.

#15 Updated by Madars over 3 years ago

Also the keygroup db shall have a back reference to the original db, so that if we delete a keygroup record, we must kill all records for its keys in the data storage db.

Also think about using the same db maybe?

Also need to think how to maximise read-only transactions for the lookup!

On 23/02/18 14:43, Madars Vitolins wrote:

We could have keygroupdb="DB name"

also keygroupfmt="$(LKEY_ID)"

When adding data to the database we must append the new key to the key group, if it is not yet in the list.

If doing a lookup and keygroupfmt is defined, then firstly find the keygroup record, validate that the key exists in the group, and only then perform the lookup in the data DB...
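The lookup-order rule above boils down to a membership check before the data-db access. A standalone sketch, where a comma-separated key list stands in for the real keygroup db record (the function name and record encoding are assumptions):

```c
#include <string.h>

/* Returns 1 if 'key' is listed in the keygroup record 'group_keys'
 * (a comma-separated key list), i.e. the data-db lookup may proceed;
 * 0 otherwise. */
int keygroup_allows(const char *group_keys, const char *key)
{
    const char *p = group_keys;
    size_t klen = strlen(key);

    while (p && *p) {
        const char *end = strchr(p, ',');
        size_t len = end ? (size_t)(end - p) : strlen(p);
        if (len == klen && 0 == strncmp(p, key, len))
            return 1;
        p = end ? end + 1 : NULL;
    }
    return 0;
}
```

Deleting a keygroup record would then imply iterating this same list and killing every listed key in the data storage db, per the back-reference note above.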

#16 Updated by Madars over 3 years ago

Changes required:

- default rule => Accept always

- default "save" => "*" - all

- default "rsprule" => Only approved

#17 Updated by Madars over 3 years ago

tpcachebtsv at bootreset shall also delete the db files! Because if the file exists, some changed settings do not apply!

#18 Updated by Madars over 3 years ago

cs    Show cache
     args: cs <cache_db_name>|-d <cache_db_name>

cacheshow    Alias for `cs' 

cd    Dump message in cache
     args: cd -d <dbname> -k <key> [-i interpret_result]

cachedump    Alias for `cd' 

ci    Invalidate cache
     args: ci -d <dbname> [-k <key>][-r use_regexp]

cacheinval    Alias for `ci' 

NDRX> cacheshow db01
Syntax error, too few args (min 2, max 3, got 1)!

#19 Updated by Madars over 3 years ago

Add a standard library function for dropping the database file (ndrx_mdb_drop("directory")) -> this would simply delete "lock.edb" and "data.edb".

#20 Updated by Madars over 3 years ago

New feature: we could schedule an invalidate for a time period. For example, the cached data header would identify a max UTC time until which the cache shall not be used. Only after this period is the record updated, and the cache can be used again.

#21 Updated by Madars over 3 years ago

  • Status changed from New to Resolved

#22 Updated by Madars over 3 years ago

  • Status changed from Resolved to Closed
