I run this scenario:
- Java 17 or 21, Windows or Mac
- Apache Ignite 2.17.0
- Single Ignite data node, single cache. The JVM that runs the node also makes occasional bursts of updates to a single record in the Ignite cache, say 100 updates in a simple for loop, then a 1 second pause, then repeat. Always the same record.
- A thin client runs in a separate JVM and uses a ContinuousQuery to listen to those updates.
Expected behaviour: updates for the same key arrive in the right order, at least with a happens-before relationship between them, or maybe even just on the same thread.
Actual behaviour: updates for the same key arrive on different threads that interleave with each other, so there is no way to observe the proper order of the updates; they are all mixed up.
I think the scenario is pretty simple, but I can share the code if needed.
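Roughly, the setup looks like this (just a sketch, not the exact code; the cache name, address/port and key are placeholders):

```java
import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class Repro {
    // JVM 1: the single data node, which also produces the bursts of updates.
    static void dataNodeWithUpdates() throws InterruptedException {
        Ignite ignite = Ignition.start();
        IgniteCache<Integer, Long> cache = ignite.getOrCreateCache("testCache");

        while (true) {
            for (long i = 0; i < 100; i++)  // burst of 100 updates to the same key
                cache.put(1, i);

            Thread.sleep(1_000);            // 1 second pause, then repeat
        }
    }

    // JVM 2: the thin client listening to those updates via a continuous query.
    static void thinClientListener() throws InterruptedException {
        ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");

        try (IgniteClient client = Ignition.startClient(cfg)) {
            ClientCache<Integer, Long> cache = client.cache("testCache");

            ContinuousQuery<Integer, Long> qry = new ContinuousQuery<>();
            qry.setLocalListener(events -> {
                for (CacheEntryEvent<? extends Integer, ? extends Long> e : events)
                    // Printing the thread name is what shows the interleaving.
                    System.out.println(Thread.currentThread().getName() + ": " + e.getKey() + " -> " + e.getValue());
            });

            cache.query(qry);               // keep the subscription alive
            Thread.sleep(Long.MAX_VALUE);
        }
    }
}
```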
If I change the thin client to a full client, i.e. another full node started in client mode, the situation changes: in my tests the order of updates in the continuous query local listener is now always correct (at least within the same cache key). But I'd prefer the thin one.
I understand that in the thin client a plain ForkJoinPool is used to push the updates down to the local listeners, while in a full client it's more involved.
It seems that I can kinda fix it with this:
clientConfiguration.setAsyncContinuationExecutor(Executors.newSingleThreadExecutor());
But a) the problem was quite unexpected, so it took time to pinpoint it to the thin client, b) it took quite a while to find this solution, and c) all the updates are now coming in on a single thread, regardless of whether they are for the same key or not.
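For reference, this is how the workaround looks wired into the client configuration (same placeholder address as in the sketch above; the default continuation executor it replaces is ForkJoinPool.commonPool()):

```java
import java.util.concurrent.Executors;
import org.apache.ignite.Ignition;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class OrderedThinClient {
    static IgniteClient start() {
        // A single-thread executor serializes all listener notifications, which
        // restores the observable order but removes any parallelism across keys.
        ClientConfiguration cfg = new ClientConfiguration()
            .setAddresses("127.0.0.1:10800")
            .setAsyncContinuationExecutor(Executors.newSingleThreadExecutor());

        return Ignition.startClient(cfg);
    }
}
```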
So, the question is: is this the expected behaviour? Unordered updates for the same key out of the box look more like a bug to me, to be honest. Maybe IgniteStripedThreadPoolExecutor needs to be used somehow for the thin client?
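For what it's worth, the direction I had in mind is something like the sketch below (my own code, not an Ignite API; KeyStripedListener is a made-up name, and it assumes the single-thread async continuation executor above stays in place so that events reach the listener in order): keep the single-threaded delivery, and inside the local listener fan the actual processing out to key-hashed single-thread workers, so each key stays ordered while different keys are handled in parallel.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryUpdatedListener;

public class KeyStripedListener<K, V> implements CacheEntryUpdatedListener<K, V> {
    private final ExecutorService[] stripes;

    public KeyStripedListener(int stripeCount) {
        stripes = new ExecutorService[stripeCount];
        for (int i = 0; i < stripeCount; i++)
            stripes[i] = Executors.newSingleThreadExecutor();
    }

    @Override public void onUpdated(Iterable<CacheEntryEvent<? extends K, ? extends V>> events) {
        for (CacheEntryEvent<? extends K, ? extends V> e : events) {
            K key = e.getKey();
            V val = e.getValue();

            // Same key -> same stripe -> same single thread, so per-key order is preserved.
            int stripe = Math.floorMod(key.hashCode(), stripes.length);

            stripes[stripe].submit(() -> handle(key, val));
        }
    }

    // Placeholder for the real per-update processing.
    private void handle(K key, V val) {
        System.out.println(Thread.currentThread().getName() + ": " + key + " -> " + val);
    }
}
```

It would be used as qry.setLocalListener(new KeyStripedListener<>(8)) together with the setAsyncContinuationExecutor(Executors.newSingleThreadExecutor()) call above, which is essentially the per-key striping I was hoping to get from the client out of the box.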