ConnectR - Cached Lookups
One key feature of many software systems is the ability to cache, in memory, data obtained from a dynamic external source such as a database. Database look-ups are costly in performance terms: they can take tens or hundreds of times longer than an in-memory look-up, and they also place load on the systems involved in the look-up, specifically the database server and the network.
ConnectR does not have this caching mechanism built in; however, you can coax ConnectR into caching database information.
Two of the most common needs for a DB look-up are:
- Checking the existence of a record, e.g. a dictionary entry, before attempting to create it.
- Finding a value in a table based on information you already have, e.g. finding a provider's code based on an NPI.
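Both needs can be served by the same pattern: keep an in-memory table of results and only go to the database on a miss. The sketch below is plain Python rather than ConnectR script syntax, and `fetch_from_db` is a hypothetical stand-in for whatever query your interface actually runs; it only illustrates the caching idea the rest of this note relies on.

```python
# Minimal sketch of a cached DB look-up. fetch_from_db is a hypothetical
# callable representing the real (expensive) database query.
_cache = {}

def cached_lookup(key, fetch_from_db):
    """Return the value for key, querying the database at most once per key."""
    if key not in _cache:
        # Misses are cached too (fetch_from_db may return None), so a
        # repeated look-up for a nonexistent record does not hit the
        # database again.
        _cache[key] = fetch_from_db(key)
    return _cache[key]
```

An existence check then falls out for free: `cached_lookup(key, fetch_from_db) is not None` answers "is this record in the dictionary?" without a second round trip.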
Empirical Performance Analysis
We performed an analysis at a client site that receives roughly 150,000 ADT messages per day. An existing lookup determined whether a "provider" code sent by the Practice Management System existed in the Provider or Referring Provider dictionaries. The savings we achieved by using cached lookups was equivalent to opening the v11 Clinical Desktop 682 times a day, or 6.5 hours of processing time on the ConnectR side.
- For the Clinical Desktop comparison, we measured SQL reads. To pull up an "average" patient's chart (test patients with 10-20 ChartViewer items, 1-5 allergies, 2-6 active medications, 5-10 total orders, 5-10 immunizations, and 5-10 problems), we saw an average of 3,404 reads for the primary SQL stored procedures used to load the Clinical Desktop. The Provider and Referring Provider lookup averaged 15 reads over 20 trials.
- The average number of ADTs per day was 154,817.
- The average duration of the script lookup into the database from ConnectR was 150.85 ms, though this varied: 12-25 ms for 85% of the transactions and 914-928 ms for a random 15% of the transactions.
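The two headline equivalences follow directly from the figures above: daily lookup reads divided by reads per Clinical Desktop load gives the "682 opens" figure, and daily lookups times average lookup duration gives the processing time. A back-of-the-envelope check, using the numbers exactly as reported:

```python
# Reproduce the savings equivalences from the measured figures.
adts_per_day = 154_817          # average ADT messages per day
reads_per_lookup = 15           # average SQL reads per provider lookup
reads_per_desktop_load = 3_404  # average SQL reads to load the Clinical Desktop
lookup_ms = 150.85              # average lookup duration in ConnectR (ms)

daily_reads = adts_per_day * reads_per_lookup
desktop_equivalents = daily_reads / reads_per_desktop_load
processing_hours = adts_per_day * lookup_ms / 1000 / 3600

print(round(desktop_equivalents))     # → 682 Clinical Desktop opens
print(round(processing_hours, 1))     # → 6.5 hours of processing time
```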