talk about mongodb data loss

This commit is contained in:
Yann Esposito 2015-10-26 18:04:45 +01:00
parent 8d80c53297
commit 3c21335521
2 changed files with 5 additions and 0 deletions

View file

@@ -97,6 +97,7 @@
<p>Nice until you reach the hard limit. At that time it was Mongo 2.6, so there was a <strong>Database Level Lock</strong>.</p>
<p>Yes, I repeat: <strong>Database Level Lock</strong>. Each time anyone read or wrote, nobody else could read or write at the same time.</p>
<p>And even using very expensive clusters, these can't handle the hard limits.</p>
<p>The result: when MongoDB was asked to read and write a lot (even using batches), you started to lose data. If it couldn't write them, it simply destroyed them. Furthermore, the code dealing with tweet insertion into MongoDB was really hard to work with. No proper error handling. In the end, data loss…</p>
<p>There is a lot to say about MongoDB, and a lot has already been written. But the main point is: yes, MongoDB could neither be trusted nor used for intensive data manipulation.</p>
<p>Now, the situation might have changed. But there are better tools for the same job.</p>
<p>When we arrived, many clients had already paid, and many products were supposed to come to life.</p>

View file

@@ -104,6 +104,10 @@ Each time you read or write, nobody could read or write at the same time.
And even using very expensive clusters, these can't handle the hard limits.
The result: when MongoDB was asked to read and write a lot (even using batches), you started to lose data.
If it couldn't write them, it simply destroyed them.
Furthermore, the code dealing with tweet insertion into MongoDB was really hard to work with. No proper error handling. In the end, data loss...
There is a lot to say about MongoDB, and a lot has already been written.
But the main point is: yes,
MongoDB could neither be trusted nor used for intensive data manipulation.
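
The "no proper error handling" failure mode above is worth making concrete. Below is a minimal sketch, assuming a local MongoDB and the pymongo driver; the demo.tweets collection and the documents are invented for illustration. With an unacknowledged write concern (w=0, the default in early MongoDB drivers), the server never reports failures back, so inserts can vanish silently; an acknowledged write with explicit error handling at least makes the loss visible.

```python
# Minimal sketch (assuming a local MongoDB and pymongo) of silent write loss.
from pymongo import MongoClient
from pymongo.errors import PyMongoError
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017")
tweets = client.demo.tweets  # hypothetical collection, for illustration only

# Fire-and-forget: w=0 means the server never acknowledges the write,
# so a failed insert raises no error -- the document is just gone.
unsafe = tweets.with_options(write_concern=WriteConcern(w=0))
unsafe.insert_one({"id": 1, "text": "may vanish under load"})

# Acknowledged write with explicit error handling:
# failures surface as exceptions instead of silent data loss.
try:
    tweets.insert_one({"id": 2, "text": "stored, or the failure is reported"})
except PyMongoError as exc:
    print("insert failed:", exc)  # now you can retry or log it
```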