operating system - Can't understand Belady's anomaly
Belady's anomaly states that, when using a FIFO page replacement policy, adding more page frames can result in more page faults.

My intuition says we should have fewer, or at most the same number of, page faults when we add more page frames.

If we think of the FIFO queue as a pipe, adding more page frames makes the pipe longer:

    ____o____o  size 4        ________o________o  size 8

So why more page faults? My intuition says that with a longer pipe, it takes longer before pages are pushed out and start faulting again (so with an infinite pipe you'd have no page faults after the initial ones), and with a smaller pipe you'd have many page faults.

What is wrong with this reasoning?
The reason is that when using FIFO, increasing the number of page frames can increase the fault rate for some access patterns, because with more frames, previously requested pages can remain at the bottom of the FIFO queue longer, and so get evicted just before they are needed again.
Consider the third time "3" is requested in the Wikipedia example here: http://en.wikipedia.org/wiki/belady%27s_anomaly
Page faults are marked with an "f".
1 (three page frames):

    Page requests:  3  2  1  0  3  2  4  3  2  1  0  4
    Newest page:    3f 2f 1f 0f 3f 2f 4f 4  4  1f 0f 0
                       3  2  1  0  3  2  2  2  4  1  1
    Oldest page:          3  2  1  0  3  3  3  2  4  4
2 (four page frames):

    Page requests:  3  2  1  0  3  2  4  3  2  1  0  4
    Newest page:    3f 2f 1f 0f 0  0  4f 3f 2f 1f 0f 4f
                       3  2  1  1  1  0  4  3  2  1  0
                          3  2  2  2  1  0  4  3  2  1
    Oldest page:             3  3  3  2  1  0  4  3  2
In the first example (with fewer page frames), there are 9 page faults.

In the second example (with more page frames), there are 10 page faults.
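You can reproduce both counts with a small simulation. This is a minimal sketch of a FIFO page replacement simulator; the function name `fifo_faults` is my own, not from the example:

```python
from collections import deque

def fifo_faults(requests, frames):
    """Simulate FIFO page replacement; return the number of page faults."""
    queue = deque()   # oldest page at the left, newest at the right
    in_cache = set()
    faults = 0
    for page in requests:
        if page in in_cache:
            continue  # hit: FIFO order is NOT updated on a hit
        faults += 1
        if len(queue) == frames:
            in_cache.discard(queue.popleft())  # evict the oldest page
        queue.append(page)
        in_cache.add(page)
    return faults

requests = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(requests, 3))  # → 9
print(fifo_faults(requests, 4))  # → 10
```

The key detail is that a hit does not refresh a page's position in the queue; that is exactly why a page can sit near the bottom of a larger queue and be evicted right before its next use.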
When using FIFO, increasing the size of the cache changes the order in which items are removed. In some cases, this can increase the fault rate.
Belady's anomaly does not describe a general trend of fault rates with respect to cache size. Your reasoning (viewing the cache as a pipe) is, in the general case, not wrong.
In summary: Belady's anomaly points out that it is possible to exploit the fact that larger caches can raise items in the FIFO queue later than smaller caches do, in order to make the larger cache have a higher fault rate under a particular (and possibly rare) access pattern.