Cassandra CPU load (too high)

Date: 2015-01-20 20:04:50

Tags: cassandra bigdata cql cassandra-2.0

Output from top:

8260 root      20   0 5163m 4.7g 133m S 144.6 30.5   2496:46 java

Most of the time %CPU is > 170.

I am trying to figure out what the problem is. My guess is that either GC or memtable flushing is running too often. GC statistics:

 S0     S1     E      O      P     YGC     YGCT    FGC    FGCT     GCT    LGCC                 GCC 
0.00  16.73  74.74  29.33  59.91  27819  407.186   206   10.729  417.914 Allocation Failure   No GC     
0.00  16.73  99.57  29.33  59.91  27820  407.186   206   10.729  417.914 Allocation Failure   Allocation Failure
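For reference, the column layout above (including the LGCC/GCC columns, i.e. the last and current GC cause) matches `jstat`'s `-gccause` mode; a sketch of the invocation that produces it, assuming `8260` is the Cassandra PID shown by `top`:

```shell
# Print GC utilization percentages plus last/current GC cause for PID 8260,
# sampling once per second (interval in milliseconds).
jstat -gccause 8260 1000
```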

Also, the Cassandra logs show ReplayPosition entries reusing the same commitlog segmentId, and memtables being flushed very frequently:

INFO  [SlabPoolCleaner] 2015-01-20 13:55:48,515 ColumnFamilyStore.java:840 - Enqueuing flush of bid_list: 112838010 (11%) on-heap, 0 (0%) off-heap
INFO  [MemtableFlushWriter:1587] 2015-01-20 13:55:48,516 Memtable.java:325 - Writing Memtable-bid_list@2003093066(23761503 serialized bytes, 211002 ops, 11%/0% of on/off-heap limit)
INFO  [MemtableFlushWriter:1587] 2015-01-20 13:55:49,251 Memtable.java:364 - Completed flushing /root/Cassandra/apache-cassandra-2.1.2/bin/./../data/data/bigdspace/bid_list-27b59f109fa211e498559b0947587867/bigdspace-bid_list-ka-3965-Data.db (4144688 bytes) for commitlog position ReplayPosition(segmentId=1421647511710, position=25289038)
INFO  [SlabPoolCleaner] 2015-01-20 13:56:23,429 ColumnFamilyStore.java:840 - Enqueuing flush of bid_list: 104056985 (10%) on-heap, 0 (0%) off-heap
INFO  [MemtableFlushWriter:1589] 2015-01-20 13:56:23,429 Memtable.java:325 - Writing Memtable-bid_list@1124683519(21909522 serialized bytes, 194778 ops, 10%/0% of on/off-heap limit)
INFO  [MemtableFlushWriter:1589] 2015-01-20 13:56:24,130 Memtable.java:364 - Completed flushing /root/Cassandra/apache-cassandra-2.1.2/bin/./../data/data/bigdspace/bid_list-27b59f109fa211e498559b0947587867/bigdspace-bid_list-ka-3967-Data.db (3830733 bytes) for commitlog position ReplayPosition(segmentId=1421647511710, position=25350445)
INFO  [SlabPoolCleaner] 2015-01-20 13:56:55,493 ColumnFamilyStore.java:840 - Enqueuing flush of bid_list: 95807739 (9%) on-heap, 0 (0%) off-heap
INFO  [MemtableFlushWriter:1590] 2015-01-20 13:56:55,494 Memtable.java:325 - Writing Memtable-bid_list@473510037(20170635 serialized bytes, 179514 ops, 9%/0% of on/off-heap limit)
INFO  [MemtableFlushWriter:1590] 2015-01-20 13:56:56,151 Memtable.java:364 - Completed flushing /root/Cassandra/apache-cassandra-2.1.2/bin/./../data/data/bigdspace/bid_list-27b59f109fa211e498559b0947587867/bigdspace-bid_list-ka-3968-Data.db (3531752 bytes) for commitlog position ReplayPosition(segmentId=1421647511710, position=25373052)

Any help or suggestions would be great. I have also set durable_writes to false for the keyspace. Thanks.

Update: even after restarting all the nodes, young GC (YGC) keeps running on one of the servers although nothing is happening; data loading etc. has been stopped.

1 Answer:

Answer 0 (score: 1)

What type of compaction are you using, size-tiered or leveled? If you are using leveled compaction, you could switch to size-tiered, since you seem to be compacting too often. Increasing the SSTable size for leveled compaction may also help:

sstable_size_in_mb (Default: 160MB) The target size for SSTables that use the leveled compaction strategy. Although SSTable sizes should be less than or equal to sstable_size_in_mb, a compaction can produce a larger SSTable. This happens when data for a given partition key is exceptionally large; the data is not split across two SSTables.

http://www.datastax.com/documentation/cassandra/1.2/cassandra/reference/referenceTableAttributes.html#reference_ds_zyq_zmz_1k__sstable_size_in_mb
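Note that the log paths in the question indicate Cassandra 2.1.2, where compaction options are set through the `compaction` map in CQL rather than the older `compaction_strategy_class` attribute. A sketch of raising the leveled-compaction target size, assuming the keyspace/table from the question's logs (`bigdspace.bid_list`) and an illustrative 320 MB value:

```sql
-- Raise the target SSTable size for leveled compaction (Cassandra 2.1 CQL syntax).
-- 320 is an illustrative value, not a tuned recommendation.
ALTER TABLE bigdspace.bid_list
  WITH compaction = { 'class' : 'LeveledCompactionStrategy',
                      'sstable_size_in_mb' : 320 };
```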

If you are using size-tiered compaction, increase the number of SSTables required before a minor compaction runs. This is set when the table is created, but you can change it with an ALTER command. Example:

ALTER TABLE users WITH
  compaction_strategy_class = 'SizeTieredCompactionStrategy'
  AND min_compaction_threshold = 6;

This compacts once 6 SSTables have been created.
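For completeness, the equivalent change in the Cassandra 2.1 `compaction` map syntax (table name taken from the question's logs, as an illustration) might look like:

```sql
-- Switch to size-tiered compaction and require 6 SSTables before a minor
-- compaction triggers (min_threshold is the 2.1 map-syntax option name).
ALTER TABLE bigdspace.bid_list
  WITH compaction = { 'class' : 'SizeTieredCompactionStrategy',
                      'min_threshold' : 6 };
```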
