
Arbitrary takeover of key ranges during a cluster separation can lead to data loss. #59

@GoogleCodeExporter

Description

What steps will reproduce the problem?
1. Start Scalaris with four nodes. Each node should own an equal part of
the keyspace.
2. Suspend all nodes except the boot node by pressing Ctrl-C in the Erlang
shell of each of those nodes.
3. Write a value for some key and verify the read:
ok=cs_api_v2:write("Key", 1).
1=cs_api_v2:read("Key").
4. Resume the suspended nodes (by pressing c + [enter] in each shell).
5. Try to read the key's value:
{fail, not_found}=cs_api_v2:read("Key").
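The steps above can be modeled as a toy majority-quorum read (this is an illustrative sketch, not Scalaris's actual replication code; the replica count and quorum rule are assumptions). During the partition only the boot node's replica accepts the write, so after recombination the empty replicas outvote it:

```python
# Toy model (assumed semantics): a key is stored on REPLICAS replicas and a
# read needs a majority agreeing on one answer. While 3 of 4 nodes are
# suspended, the write lands on only one replica.
from collections import Counter

REPLICAS = 4

def quorum_read(replicas, key):
    # Tally what each replica answers for the key; missing -> "not_found".
    votes = Counter(r.get(key, "not_found") for r in replicas)
    answer, count = votes.most_common(1)[0]
    return answer if count > REPLICAS // 2 else "no_quorum"

replicas = [{} for _ in range(REPLICAS)]
replicas[0]["Key"] = 1  # the write applied inside the partition

# After recombination, 3 empty replicas outvote the 1 that has the value:
print(quorum_read(replicas, "Key"))  # -> not_found
```

This mirrors the observed `{fail, not_found}` even though the earlier write and read both succeeded inside the partition.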

What is the expected output? What do you see instead?
The data is lost after the cluster recombines. You can also get a second
effect when the key is written on each breakaway node by different clients:
after recombination, the nodes can store different values for the key under
the same version. From then on, different clients reading the key can get
different values at the same time.
Evidence:
> cs_api_v2:range_read(0,0).
{ok,[{12561922216592930516936087995162401722,2,false,0,0},
     {182703105677062162248623391711046507450,4,false,0,0},
     {267773697407296778114467043568988560314,1,false,0,0},
     {97632513946827546382779739853104454586,3,false,0,0}]}
That is four different values for the "Key" key.
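The same-version conflict can be sketched as follows (a toy model under assumed semantics, not Scalaris internals; the `(value, version)` tuples and tie-at-version-0 setup are illustrative). Because every replica holds the key at an equal version, version comparison cannot pick a winner, and the answer depends on which replica a client happens to ask:

```python
# Toy model: each breakaway node accepted a write for "Key" from a different
# client, all at the same version, so the replicas tie after recombination.
replicas = [
    {"Key": (value, 0)}      # (value, version) -- every version is equal
    for value in (1, 2, 3, 4)
]

def read(replica, key):
    # Version-based reconciliation would prefer the highest version; here
    # every replica ties at version 0, so whichever replica answers "wins".
    return replica[key][0]

# Different clients hitting different replicas see different values:
print(sorted(read(r, "Key") for r in replicas))  # -> [1, 2, 3, 4]
```

This matches the `range_read` output above: four entries for the one key, distinguishable only by value, not by version.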

What version of the product are you using? On what operating system?
r978

Please provide any additional information below.


Original issue reported on code.google.com by serge.po...@gmail.com on 10 Aug 2010 at 3:31
