Replies: 7 comments 13 replies
-
Hi @tglman, I am asking because I have seen only a single committer for years now. You often say "we" in your posts, but I see only a single supporter, and all the changes visible now are:
-
You wrote: Could you share any design choices that back up those plans?
-
That is true. Could you introduce a contributor (and by this, I mean either a person or a company) who will work on this part to improve its maintenance?
-
This paragraph does not provide any concrete plans. Judging from the commits, there have been only quite minor improvements in this area.
-
Are there any companies that can support and ensure continuous development, and the quality of the effort that will be needed to implement such a large scope?
-
Could you describe the QA measures you are going to use to ensure the quality of current and upcoming releases? I am pretty sure that running tests on GitHub Actions alone is insufficient to ensure such quality.
-
Thanks for the update @tglman. The list of planned changes looks really good, and touches on a number of things our engineering team would like to see. I'd be interested in supporting the Lucene durability work, and our team has a good amount of experience in distributed/consensus systems if you want to collaborate on the Hazelcast replacement. The console and query parser (the current grammar is somewhat "suboptimal") are also areas we'd like to see improved, so happy to help there. A few things that sprang to mind reading the list of planned changes:
Aside from all of these ideas, I'd second the suggestion that data-integrity testing is paramount for rebuilding confidence in the user community. Starting with deploying 3.1 in production, it was quickly evident that the distributed code had never really been used in anger (simply waiting for a period of time killed the servers), and we've experienced genuine data-file corruption in production systems, which is really un-fun to work around. I know 3.2 has improved that area a lot, but having a public testing strategy covering weak or missing areas, plus a fuzzing/stress approach, that people in the community could get behind would be a major benefit to the project.
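The crash/stress testing the comment above argues for can be sketched in a minimal, database-agnostic form. The sketch below is a hypothetical example, not OrientDB's actual test harness: a child process runs a self-checking write workload (each record stores a value twice), the parent kills it at a random moment to simulate a crash or power loss, and a verifier then asserts that every complete, fsync'd record is internally consistent, with at most the final record torn.

```python
import os
import random
import subprocess
import sys
import tempfile
import time

# Self-checking write workload, run as a separate process so it can be
# killed abruptly. Each record is "i,i\n": both fields must match.
WRITER = r"""
import os, sys
path = sys.argv[1]
i = 0
with open(path, "ab", buffering=0) as f:
    while True:
        f.write(f"{i},{i}\n".encode())
        os.fsync(f.fileno())  # force the record to stable storage
        i += 1
"""

def run_trial(path):
    # Start the writer, let it run briefly, then kill it without warning,
    # simulating a crash mid-workload.
    proc = subprocess.Popen([sys.executable, "-c", WRITER, path])
    time.sleep(random.uniform(0.05, 0.2))
    proc.kill()
    proc.wait()

def check(path):
    # Invariant: every complete record (terminated by a newline) must be
    # internally consistent; only the very last record may be torn.
    with open(path, "rb") as f:
        data = f.read()
    lines = data.split(b"\n")
    complete = lines[:-1]  # drop the (possibly partial) final chunk
    for line in complete:
        a, b = line.split(b",")
        assert a == b, f"corrupt record: {line!r}"
    return len(complete)

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        p = os.path.join(d, "log.txt")
        run_trial(p)
        print(f"verified {check(p)} intact records after simulated crash")
```

A real harness would repeat this trial many times with varied kill timings (and ideally fault injection below the filesystem), but even this shape catches the "torn write corrupts earlier data" class of bug the comment describes.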
-
Hi,
I wrote a blog post about the work happening and what is going to be in the next release: https://orientdb.dev/news/status-and-backlog/
Please check it and let me know any feedback. I'm also happy to accept contributions from anyone to help make the next release happen!