MongoDB Capped Collection not deleting documents
I want to cap the collection at 1 MB or 300 records, whichever comes first.
PRIMARY>db.runCommand({"convertToCapped":"cache",'size':1024*1024, 'max':300});
{ "ok" : 1 }
PRIMARY>db.cache.isCapped();
true
So far so good. A while later I came back to check, after the collection had seen some use (new records inserted, and so on):
PRIMARY> db.cache.count();
513
Hmm, what? Last time I checked, 513 > 300. Note that the collection still reports itself as capped, yet the count has grown past 300:
PRIMARY> db.cache.validate();
{
"ns" : "streamified.cache",
"capped" : 1,
"max" : 2147483647,
"firstExtent" : "16:7279e000 ns:streamified..tmp.convertToCapped.cache",
"lastExtent" : "16:7279e000 ns:streamified..tmp.convertToCapped.cache",
"extentCount" : 1,
"datasize" : 858104,
"nrecords" : 513,
"lastExtentSize" : 1052672,
"padding" : 1,
"firstExtentDetails" : {
"loc" : "16:7279e000",
"xnext" : "null",
"xprev" : "null",
"nsdiag" : "streamified..tmp.convertToCapped.cache",
"size" : 1052672,
"firstRecord" : "16:7279e0b0",
"lastRecord" : "16:72871444"
},
"deletedCount" : 1,
"deletedSize" : 186184,
"nIndexes" : 0,
"keysPerIndex" : {
},
"valid" : true,
"errors" : [ ],
"warning" : "Some checks omitted for speed. use {full:true} option to do more thorough scan.",
"ok" : 1
}
I'm no expert on the internals, but the "max" value above looks a bit odd. Beyond that, I'm not sure what could be going wrong...
Ah, I missed that. What a shame; this completely kills my ability to use capped collections, since a bug in Mongoose 2.7.1 prevents me from creating a capped collection at definition time. –
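For reference, a minimal sketch of the usual alternative: creating the collection fresh with db.createCollection, which accepts both size and max. As far as I know, the convertToCapped command only takes a size parameter and ignores max, which would explain the 2147483647 value in the validate() output above. (This sketch assumes the existing "cache" collection can be dropped and recreated.)

PRIMARY> db.cache.drop();
PRIMARY> db.createCollection("cache", { "capped": true, "size": 1024*1024, "max": 300 });
PRIMARY> db.cache.isCapped();
true

With the collection created this way, inserts past either limit (1 MB of data or 300 documents) overwrite the oldest documents instead of growing the count.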