java - Efficient DB Operations -


Here is a scenario I am researching a solution for at work. We have a table in Postgres that stores events happening on the network. The way it works is: rows are inserted as network events come in, and at the same time older records matching a specific timestamp are deleted to keep the table size limited to 10,000 records - basically the same idea as log rotation. Network events come in bursts of thousands at a time, so the transaction rate is high and causes performance degradation; after some time the server either crashes or becomes very slow. On top of that, the customer is asking us to keep a million records in the table, which is going to accelerate the performance degradation (since we have to keep deleting records matching a specific timestamp) and cause a space management issue. We are using simple JDBC reads/writes on the table. Can the tech community out there suggest a better-performing way to handle the inserts and deletes in this table?
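
For reference, the insert-then-trim pattern described above looks roughly like the following JDBC sketch. The table name network_events, its columns, and the EventWriter class are hypothetical stand-ins, not taken from the actual schema:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.sql.Timestamp;

    public class EventWriter {
        private static final int MAX_ROWS = 10_000;

        public static void insertAndTrim(Connection conn, Timestamp eventTime,
                                         String payload) throws SQLException {
            try (PreparedStatement insert = conn.prepareStatement(
                    "INSERT INTO network_events (event_time, payload) VALUES (?, ?)")) {
                insert.setTimestamp(1, eventTime);
                insert.setString(2, payload);
                insert.executeUpdate();
            }
            // Trim: delete everything older than the MAX_ROWS-th newest row.
            // Under bursts of thousands of events, this per-insert DELETE is
            // what drives the transaction rate up and leaves dead tuples
            // behind for VACUUM to clean up.
            try (PreparedStatement trim = conn.prepareStatement(
                    "DELETE FROM network_events WHERE event_time < "
                    + "(SELECT event_time FROM network_events "
                    + " ORDER BY event_time DESC LIMIT 1 OFFSET ?)")) {
                trim.setInt(1, MAX_ROWS - 1);
                trim.executeUpdate();
            }
        }
    }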

I think I would use partitioned tables, perhaps 10 x the total desired size, inserting into the newest and dropping the oldest partition (see the sketch below).

http://www.postgresql.org/docs/9.0/static/ddl-partitioning.html

This makes the load of "dropping the oldest" much smaller than a query-and-delete.
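
Here is a minimal sketch of that rotation, assuming the same hypothetical network_events table as above. PostgreSQL 9.0 (the version the linked docs cover) only has inheritance-based partitioning, so the sketch creates and drops child tables that INHERIT from the parent; on PostgreSQL 10+ you would use declarative PARTITION BY RANGE instead. The PartitionRotator class and the partition naming are illustrative:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.ArrayDeque;
    import java.util.Deque;

    public class PartitionRotator {
        private final Deque<String> partitions = new ArrayDeque<>();
        private final int maxPartitions;

        public PartitionRotator(int maxPartitions) {
            this.maxPartitions = maxPartitions;
        }

        // Create a fresh child partition for new inserts and, once over the
        // limit, drop the oldest one.
        public void rotate(Connection conn, String suffix) throws SQLException {
            String newPartition = "network_events_" + suffix;
            try (Statement st = conn.createStatement()) {
                // The child inherits the parent's columns; bursts of events
                // get inserted directly into this newest child.
                st.executeUpdate("CREATE TABLE " + newPartition
                        + " () INHERITS (network_events)");
                partitions.addLast(newPartition);

                // DROP TABLE is a cheap metadata operation that frees disk
                // space immediately, unlike DELETE, which touches every row
                // and relies on VACUUM to reclaim space later.
                if (partitions.size() > maxPartitions) {
                    st.executeUpdate("DROP TABLE " + partitions.removeFirst());
                }
            }
        }
    }

With, say, ten partitions, each rotation discards a tenth of the data in one DDL statement instead of hundreds of thousands of row deletes, which matters even more at the million-record size the customer is asking for.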

UPDATE: I agree with nos' comment though - the inserts/deletes may not actually be the bottleneck. Maybe do some investigation first.

