Aptana Studio
APSTUD-7286

Implement LRUCache with virtual memory paging scheme in editor infrastructure



      The main issue found initially is that the parse caching does not work well with our multi-parsing strategy.

      When a large document is parsed, say an HTML file with 30 CSS and 10 JS partitions, each of those partitions ends up requesting a new parse, and since our parse cache is an LRU of size 3, the cache is never actually used in this situation.

      On the plus side, once the caches are properly in place, many sub-partitions of a large document may already be cached when it is parsed, which should make reparsing the full HTML faster.
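To make the cache-miss problem concrete, here is a minimal sketch (the ParseCache class and partition keys are illustrative, not the actual Aptana classes): an access-ordered LRU capped at 3 entries, fed 40 distinct partitions per parse, gets zero hits even on a second full parse of the same document.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative stand-in for an LRU parse cache of size 3:
// a LinkedHashMap in access order that evicts its eldest entry.
class ParseCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    ParseCache(int maxEntries) {
        super(16, 0.75f, true); // true = access order, i.e. LRU behavior
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}

public class CacheThrashingDemo {
    public static void main(String[] args) {
        ParseCache<String, String> cache = new ParseCache<>(3);
        int hits = 0;
        // Two full parses of a document with 40 partitions (30 CSS + 10 JS).
        for (int pass = 0; pass < 2; pass++) {
            for (int i = 0; i < 40; i++) {
                String key = "partition-" + i;
                if (cache.get(key) != null) {
                    hits++;
                } else {
                    cache.put(key, "ast-" + i); // cache miss: reparse
                }
            }
        }
        // With 40 distinct partitions and capacity 3, every lookup misses:
        // by the time a partition is requested again, it has been evicted.
        System.out.println("hits = " + hits); // prints "hits = 0"
    }
}
```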

      Ideas on approaches:

      1. A simple approach is making the cache larger (say 256). Alternatively, we could have 2 different caches, one for 'outer' calls and another for 'internal' calls inside the parsing structure, but memory may be a problem here.

      2. Or maybe a single cache: an LRU with a 'desired' max size, where items are only pruned if not accessed for some time (say 10 seconds).

      3. Using a SoftHashMap may be a reasonable replacement too.

      4. Making proper use of the LRUCache implementation we already have, which implements a virtual memory paging scheme: each value may have an associated size, and the LRU is bounded by the total of those sizes. The parsed document length may be a good trade-off between speed and the size of the final generated AST.
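Idea 4 can be sketched roughly as follows. This is a minimal illustration of a size-bounded LRU, not the actual LRUCache API in the codebase; the class and interface names are assumptions. Each entry carries a size (here, the source document length), and eviction removes least-recently-used entries until the total is back under budget, much like paging out the coldest pages.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative size-bounded LRU: capacity is a total size budget,
// not an entry count, with per-entry sizes supplied by a Sizer.
class SizeBoundedLRUCache<K, V> {
    interface Sizer<V> { int sizeOf(V value); }

    private final LinkedHashMap<K, V> map = new LinkedHashMap<>(16, 0.75f, true);
    private final int maxTotalSize;
    private final Sizer<V> sizer;
    private int totalSize = 0;

    SizeBoundedLRUCache(int maxTotalSize, Sizer<V> sizer) {
        this.maxTotalSize = maxTotalSize;
        this.sizer = sizer;
    }

    V get(K key) {
        return map.get(key); // access order: refreshes the LRU position
    }

    void put(K key, V value) {
        V old = map.put(key, value);
        if (old != null) totalSize -= sizer.sizeOf(old);
        totalSize += sizer.sizeOf(value);
        // Evict least-recently-used entries until back under budget.
        Iterator<Map.Entry<K, V>> it = map.entrySet().iterator();
        while (totalSize > maxTotalSize && it.hasNext()) {
            Map.Entry<K, V> eldest = it.next();
            totalSize -= sizer.sizeOf(eldest.getValue());
            it.remove();
        }
    }
}

public class SizeBoundedDemo {
    public static void main(String[] args) {
        // Budget of 10 "units" (e.g. characters of parsed source).
        SizeBoundedLRUCache<String, String> cache =
            new SizeBoundedLRUCache<>(10, String::length);
        cache.put("styles.css", "12345");  // total size 5
        cache.put("app.js", "123456");     // total 11 > 10: evicts styles.css
        System.out.println(cache.get("styles.css")); // prints "null"
    }
}
```

Using the document length as the size is the trade-off mentioned above: it is cheap to compute and roughly proportional to the memory held by the generated AST.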

      Selected approach:

      LRUCache with virtual memory paging scheme + SoftHashMap for pruned entries (which may be restored if still alive).
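The selected approach could look roughly like this. The sketch below is an assumption about the shape of the implementation, not the actual Aptana code: a size-bounded LRU whose pruned entries are demoted to a SoftReference-backed map instead of being discarded, so a still-alive AST can be restored into the hot cache instead of triggering a reparse.

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: hot entries live in a size-bounded LRU; evicted
// entries are kept as soft references and restored on a later lookup
// if the garbage collector has not reclaimed them.
class PagingParseCache<K, V> {
    interface Sizer<V> { int sizeOf(V value); }

    private final LinkedHashMap<K, V> hot = new LinkedHashMap<>(16, 0.75f, true);
    private final Map<K, SoftReference<V>> pruned = new HashMap<>();
    private final int maxTotalSize;
    private final Sizer<V> sizer;
    private int totalSize = 0;

    PagingParseCache(int maxTotalSize, Sizer<V> sizer) {
        this.maxTotalSize = maxTotalSize;
        this.sizer = sizer;
    }

    V get(K key) {
        V value = hot.get(key);
        if (value != null) return value;
        // Hot-cache miss: try to restore a pruned entry if still alive.
        SoftReference<V> ref = pruned.remove(key);
        if (ref != null) {
            value = ref.get();
            if (value != null) put(key, value); // promote back into the LRU
        }
        return value;
    }

    void put(K key, V value) {
        V old = hot.put(key, value);
        if (old != null) totalSize -= sizer.sizeOf(old);
        totalSize += sizer.sizeOf(value);
        Iterator<Map.Entry<K, V>> it = hot.entrySet().iterator();
        while (totalSize > maxTotalSize && it.hasNext()) {
            Map.Entry<K, V> eldest = it.next();
            totalSize -= sizer.sizeOf(eldest.getValue());
            // Demote instead of discarding: keep a soft reference around.
            pruned.put(eldest.getKey(), new SoftReference<>(eldest.getValue()));
            it.remove();
        }
    }
}

public class SoftRestoreDemo {
    public static void main(String[] args) {
        PagingParseCache<String, String> cache =
            new PagingParseCache<>(10, String::length);
        cache.put("main.css", "12345");
        cache.put("app.js", "123456"); // over budget: "main.css" is pruned
        // Restored from the soft map without a reparse, unless the GC
        // cleared it under memory pressure in the meantime.
        System.out.println(cache.get("main.css"));
    }
}
```

Soft references are only cleared under memory pressure, so this keeps the memory bound of the paging LRU while letting the GC, rather than a fixed limit, decide when pruned ASTs finally go away.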


              • Assignee:
                fzadrozny Fabio Zadrozny
                ingo Ingo Muschenetz