Transaction questions #100
Replies: 2 comments
-
This is also what I thought. I can't find anywhere in the standard that lets someone do this. Although, my comment here is a bit misleading: I didn't mean avoiding incrementing the transaction ID for transactions that only contain read-only operations. I was actually talking about read-only operations that have an implicit transaction, for example issuing a single SELECT outside an explicit transaction. I know this won't work for higher isolation levels, but given that most applications don't use transactions at all, I didn't want to penalize read-heavy applications for the few that require transactions. That being said, the goal has been to focus on correctness over speed.
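A minimal sketch of the idea in Python (all names are illustrative, not vsql's actual API): a lone read carries an implicit transaction, so it can be served at the current snapshot horizon without consuming a new transaction ID, while explicit transactions still do.

```python
# Toy MVCC store: implicit reads don't allocate a transaction ID.
class Mvcc:
    def __init__(self):
        self.next_xid = 1   # next transaction ID to hand out
        self.rows = []      # (creating_xid, value) pairs

    def begin(self):
        """Explicit transactions still consume a transaction ID."""
        xid = self.next_xid
        self.next_xid += 1
        return xid

    def insert(self, xid, value):
        self.rows.append((xid, value))

    def implicit_read(self):
        """A lone SELECT: read everything below the current horizon,
        without incrementing next_xid."""
        horizon = self.next_xid
        return [v for created, v in self.rows if created < horizon]


db = Mvcc()
xid = db.begin()
db.insert(xid, "hello")
before = db.next_xid
rows = db.implicit_read()   # no transaction ID consumed
```

As the comment above notes, this only holds for the lower isolation levels, where a single-statement read at a stable snapshot is already consistent on its own.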
This is possible but I think unnecessary for a couple of reasons:
Everything is worth discussing and I don't want it to seem like these aren't good ideas, but I think vsql isn't ready for this kind of deep diving.
-
That's totally understandable. Actually, this idea was meant as a thought experiment rather than anything to plan or aim for. I'd still be interested to learn about such attempts to optimize SQL transactions if anyone has tried it before (and I guess someone has). Please speak up if you've read about this, everyone (even if this thread gets closed). Feel free to close this for lack of actionable items.
-
Originally posted by @dumblob. Moved here for easier discussion:
That probably depends on the transaction isolation level. But generally it's impossible due to the nature of SQL transactions, which doesn't allow one to know the whole content of the transaction up front (which is what would make this optimization possible).
But maybe vsql could introduce "server-steered transactions" instead of the typical "client-steered transactions". Server-steered transactions would require the client to send the whole transaction (including the client's business logic inside the transaction, in the form of a set of data dependency graphs) to the server. The server would then examine it to determine whether certain optimizations could be applied (by reading the business logic and trying to prove there is no dependency among the individual read values).
If there is any dependency, the server wouldn't apply any such optimization and would instead call on the client to perform the business-logic step; the client would report back, the server would perform the next lookup, send the result to the client, and so on until the end of the transaction. If there is no dependency, the server would prefetch all the requested values at once, commit the transaction, and send all the prefetched values in one reply to the client, which would then use the received values one by one as it grinds through the business logic.
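The protocol described above can be sketched as follows (Python, all names hypothetical; a thought-experiment sketch, not anything vsql implements): the client submits every read plus a dependency graph up front, and the server prefetches everything in one pass when no read depends on the result of another, falling back to dependency-ordered, step-by-step resolution otherwise.

```python
def execute_steered(store, reads, deps):
    """Toy "server-steered" transaction executor.

    store: the database, a plain dict of key -> value
    reads: {read_name: key} - every read the transaction will perform
    deps:  {read_name: set of read names it depends on}
    """
    if all(not d for d in deps.values()):
        # No inter-read dependencies: prefetch everything at once,
        # commit, and return all values in a single reply.
        return {name: store[key] for name, key in reads.items()}

    # Otherwise resolve reads in dependency order, one "round trip"
    # (server lookup + client business-logic step) at a time.
    results = {}
    pending = dict(reads)
    while pending:
        ready = [n for n in pending if deps[n] <= results.keys()]
        for name in ready:
            results[name] = store[pending.pop(name)]
    return results


store = {"a": 1, "b": 2}
# Independent reads: served in one prefetch pass.
out = execute_steered(store, {"x": "a", "y": "b"}, {"x": set(), "y": set()})
```

In a real system the dependent branch would also ship each intermediate result back to the client and wait for its next request, which is exactly the round-trip cost the prefetch path avoids.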
Hm, thinking about this more - V actually allows one to do this thanks to its AST capabilities. So the client could inspect the transactional part of the AST and derive the dependency graph (at compile time, now that V has an interpreter, yay!) to be submitted to the server. Sounds too cool to be true.
But I'd guess this type of optimization has a lot of potential, as it would finally allow one to write the code naturally instead of thinking about how to structure it: splitting a potentially big transaction into smaller chunks with different isolation presets, tying all the fetched values together manually, and only then performing the rest of the desired business logic. As of now, most such transactions are either inefficient (because programmers are lazy) or unsafe (because programmers are even lazier and, instead of restructuring their code, adjust e.g. the isolation level).
Thoughts?