c++ - Which are the best practices for data-intensive reading and writing on a hard disk?


I'm developing a C++ application (running on a Linux box) that does intensive reading of log files and writes derived results to disk. I'd like to know the best practices for optimizing these kinds of applications:

  • Which OS tweaks improve performance?
  • Which programming patterns boost I/O throughput?
  • Is pre-processing the data (converting to binary, compressing it, etc.) a helpful measure?
  • Does chunking/buffering data help performance?
  • Which hardware capabilities should I be aware of?
  • Which practices are best for profiling and measuring performance in these applications?
  • (Add here any concern I'm missing.)

Is there any recommended reading on the basics, or on adapting existing know-how to this problem?

Thanks

Compression may help a lot, and it's simpler than tweaking the OS. Check out the gzip and bzip2 support in the Boost.Iostreams library. It takes a toll on the processor, though.

Measuring these kinds of jobs starts with the time command. If system time is high compared to user time, your program spends a lot of time doing system calls. If wall-clock ("real") time is high compared to system plus user time, it's waiting on the disk or the network. The top command showing less than 100% CPU usage for the program is another sign of an I/O bottleneck.
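For example, using `dd` as a stand-in for the log-processing program (a hypothetical workload; substitute your own binary):

```shell
# Run an I/O-heavy command under time.
# "real" = wall clock; "user"/"sys" = CPU time in user space / kernel.
# If real >> user + sys, the process is blocked on I/O.
time dd if=/dev/zero of=/tmp/io_test.bin bs=1M count=16 status=none
rm -f /tmp/io_test.bin
```

For finer detail than `time`, tools like `iostat` and `strace -c` can show which device or which system calls the time is going to.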

