Friday, June 14, 2013

SCB threading IV 2012 - caching remote DB

Q: A remote position DB (some opaque data source) is very slow -- it takes 500 ms to return a (big) position object when queried. The read(int key) method is provided by the DB's client-side API and is thread-safe, so multiple threads can call read() concurrently. We need some kind of caching to hide the 500 ms latency, but we want a lazy cache, partly to avoid overloading this busy DB -- we don't want to load all 9 million positions when we are interested in only 80 of them. Our cache should expose an API like a Cache class with a method "public Position lookup(int key) const".

If a lookup thread is blocked due to a cache miss (resulting in a DB read), it should not block other, unrelated lookup threads. Also, if 2 threads call lookup(123) in quick succession and it's a cache miss, the 2nd thread should block, then receive the result as soon as the first thread gets the position back and populates the cache. (Perhaps some kind of notification.)
----------- Analysis -----------
I feel we can address this either with low-level tools or with high-level tools. Perhaps the interviewer is interested in the low-level know-how, or perhaps in the higher-level design techniques and principles. I feel it's safer to assume high volume and slow connections, perhaps a distributed infrastructure. It's generally easier to scale down an over-engineered design than to scale up a simplistic one.
----------- MOM based idea ----------
(Not yet a "design".) If the request volume is too high, we run the risk of creating thousands of blocked threads. MOM to the rescue. The client thread has to be decoupled from the engine thread; otherwise 5000 concurrent requests would map to 5000 engine threads. To decouple, each client thread (one of 5000 or 55000) could block as a MOM consumer, or register an onMsg() callback. On the dumb MOM, every request goes into a queue (the MOM knows nothing about cache hits or misses), and the MOM delivers each request to the engine. A cache hit is simple -- the position data goes out on a response topic. What about a cache miss? We add the key to a task queue (producer-consumer). The consumer side is a thread pool with 1 or more worker threads. Since the read() method is blocking, the worker thread must block. Upon failure, we update the cache with a special error record for key 123; upon success, we update the cache with the position. Either way we publish to the response topic.

Note: if request key 123 is already being processed or sitting in the task queue, it is not added again. The task queue should "see" each key only once (this should be easy to implement).
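The miss-handling task queue with at-most-once keys can be sketched roughly as below. This is a minimal illustration, not the post's actual system: MissQueue, slowRead, submit(), pendingCount() and cached() are all hypothetical names, the "special record" on failure is faked as an error string, and the publish-to-response-topic step is omitted.

```java
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.IntFunction;

// Hypothetical sketch of the cache-miss task queue (producer-consumer).
public class MissQueue {
    private final BlockingQueue<Integer> tasks = new LinkedBlockingQueue<>();
    // keys currently queued or being read -- ensures each key is "seen" once
    private final Set<Integer> queuedOrRunning = ConcurrentHashMap.newKeySet();
    private final ConcurrentHashMap<Integer, Object> cache = new ConcurrentHashMap<>();
    private final IntFunction<Object> slowRead; // stand-in for the DB client's blocking read(int)

    public MissQueue(IntFunction<Object> slowRead) { this.slowRead = slowRead; }

    /** Called on a cache miss; a key enters the task queue at most once. */
    public void submit(int key) {
        if (queuedOrRunning.add(key)) {
            tasks.offer(key);
        }
    }

    public int pendingCount() { return tasks.size(); }

    public Object cached(int key) { return cache.get(key); }

    /** Each pool thread runs this in a loop, blocking on the slow read. */
    public void processOneReadTask() throws InterruptedException {
        int key = tasks.take();
        Object position;
        try {
            position = slowRead.apply(key);   // the 500 ms blocking call
        } catch (RuntimeException e) {
            position = "ERROR:" + key;        // special record marking the failure
        }
        cache.put(key, position);             // future lookups are now cache hits
        queuedOrRunning.remove(key);
        // ...here we would publish 'position' to the MOM response topic (omitted)
    }
}
```

The `queuedOrRunning` set is what keeps duplicate requests for key 123 out of the queue while the first read is still in flight.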
----------- Low-level threading idea ----------
Now let's assume this is a low-level, single-process threading challenge, without MOM. The lookup() method could be called on 50 threads simultaneously. If 2 threads both request key 123 (a cache miss), they must both block. The first reader thread creates a lock tagged with the key. During the read(), the lock is not held. Upon a successful read, the reader thread locks, updates the cache, notifies, unlocks, then returns the Position object. If in the interim a 2nd thread (a "want2read" thread) also has a cache miss on key 123, it looks up the lock tagged with 123, acquires it, then waits on it in a loop. Upon wake-up, it checks the cache; if the position is found, it exits the loop and returns the Position object.

If read() times out or fails, the reader thread would fabricate a special Position object carrying an error message, then proceed exactly as above (lock, update cache, notify, unlock).
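The per-key lock protocol above can be sketched as follows, assuming Java. Position, RemoteDb, and the class name are hypothetical stand-ins; putIfAbsent decides which thread becomes the reader, and for brevity the locks are never removed from the map and the failure case (storing an error Position) is left as a comment.

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of lookup() with one lock object per in-flight key.
public class PositionCache {
    public static class Position {
        public final int key;
        public Position(int key) { this.key = key; }
    }
    public interface RemoteDb { Position read(int key); }

    private final ConcurrentHashMap<Integer, Position> cache = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<Integer, Object> locks = new ConcurrentHashMap<>();
    private final RemoteDb db;

    public PositionCache(RemoteDb db) { this.db = db; }

    public Position lookup(int key) throws InterruptedException {
        Position p = cache.get(key);
        if (p != null) return p;                  // cache hit
        Object myLock = new Object();
        Object existing = locks.putIfAbsent(key, myLock);
        if (existing == null) {                   // we won: we are the reader thread
            Position fetched = db.read(key);      // slow 500 ms call; lock NOT held here
            // on timeout/failure we would fetch a special error Position instead
            synchronized (myLock) {
                cache.put(key, fetched);          // update cache, then notify
                myLock.notifyAll();               // wake every want2read thread
            }
            return fetched;
        }
        synchronized (existing) {                 // we are a want2read thread
            while ((p = cache.get(key)) == null) {
                existing.wait();                  // lock is released while waiting
            }
        }
        return p;
    }
}
```

Note the wait() sits in a loop that re-checks the cache, so a notification that arrives before the waiter enters wait() is not lost, and spurious wake-ups are harmless.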

%%A: We need a thread pool to consume the task queue. The pool threads execute processOneReadTask().

How about a listener solution instead of wait/notify? Suppose 500 threads request position 123; they would all block in wait() -- memory-intensive. In the listener solution, each request packages its local state into a state object and saves it in a global collection, so a few threads are enough. For example, a single thread could wake up upon notification and sequentially complete the tasks of all 500 requests.
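One modern way to realize the listener idea (a sketch under my own assumptions, not the post's design) is a map of key to CompletableFuture: all 500 requests for key 123 share one pending future and attach callbacks to it, instead of 500 threads parking in wait(). AsyncPositionCache and slowRead are hypothetical names; the pool size of 4 is arbitrary.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.IntFunction;

// Hypothetical listener-style cache: callers get a future and register
// callbacks (thenAccept etc.) instead of blocking in wait().
public class AsyncPositionCache {
    private final ConcurrentHashMap<Integer, CompletableFuture<Object>> cache =
            new ConcurrentHashMap<>();
    private final ExecutorService readers = Executors.newFixedThreadPool(4); // a few threads suffice
    private final IntFunction<Object> slowRead; // stand-in for the DB client's blocking read(int)

    public AsyncPositionCache(IntFunction<Object> slowRead) { this.slowRead = slowRead; }

    public CompletableFuture<Object> lookup(int key) {
        // computeIfAbsent guarantees a single DB read per key; later callers
        // receive the SAME future and simply add listeners to it
        return cache.computeIfAbsent(key, k ->
                CompletableFuture.supplyAsync(() -> slowRead.apply(k), readers));
    }

    public void shutdown() { readers.shutdown(); }
}
```

A caller would write `cache.lookup(123).thenAccept(pos -> ...)`, so completing one read fires all registered callbacks sequentially on a single completion thread -- exactly the memory-friendly behavior described above.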
