gcc - Why does make -j perform better when it is passed a number larger than the number of cores available?


I have a quad-core processor with hyper-threading. When I use `make -j8`, it is faster than `make -j4` (I read the number of cores in Java and then called `make -j<number of cores>`).

I don't understand why `make -j32` is faster than `make -j8` when I have (as reported in Java) 8 cores (hyper-threading doubles the number of physical cores). How is that possible?
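For reference, the core count described above comes from Java's `Runtime.getRuntime().availableProcessors()`, which returns the number of logical CPUs. A shell equivalent (an assumption on Linux with coreutils, not part of the original question) is:

```shell
# Number of logical CPUs as the OS reports them; hyper-threaded
# siblings count double, so a quad-core HT machine reports 8.
nproc
```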

There's more to compiling than CPU speed and the number of cores available: disk bandwidth and memory bandwidth matter a lot too.

In this case, imagine each CPU HT sibling getting 4 processes to execute. It starts one, which blocks on disk IO, so it moves on to the next process. The second one tries to open a second file and blocks on disk IO, and the sibling moves on to the next process. Starting 4 compilers before the first disk IO is ready wouldn't surprise me.

Even once the first one has read in the program source, the compiler must start hunting through directories to find the `#include`d files. Each one requires `open()` calls followed by `read()` calls, any of which can block, and any of which relinquish the sibling for other processes to run.

Now multiply that by 8 siblings -- each HT core runs until it blocks on a memory access, at which point it swaps over to the other sibling, which runs for a while. Once the memory for the first sibling has been fetched into cache, it is the second sibling's turn to stall while waiting for its memory.

There is an upper limit on how much faster compiles can run using `make -j`, but twice the number of CPUs has been a good starting point for me in the past.
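That rule of thumb can be sketched in a couple of lines; `nproc` and the factor of 2 here are a heuristic assumption, not the answer's exact tooling:

```shell
# Start make with twice the number of logical CPUs, so each HT
# sibling has a spare job to switch to while another blocks on IO.
jobs=$(( $(nproc) * 2 ))
echo "starting build with -j$jobs"
# make -j"$jobs"   # commented out: assumes a Makefile in the current directory
```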

