Just a quick note. I have an application that is generating thousands of inactive sessions, and with the default dedicated server configuration we are having to add more and more memory to our virtual host to support the connections. We estimate that the application may need 45,000 mostly inactive sessions once it is fully rolled out. So, I thought about how much memory would be required to support 45,000 sessions using shared servers. In an earlier post I mentioned how I got the sessions up to about 11,000, so I took the Java program from that post and adjusted memory parameters to support over 45,000. I got it up to over 60,000, so the test was essentially successful. I don’t think I would want to run a system with 60,000 sessions on a single node, but it is nice to see that it is to some degree possible.
I used a 64 gigabyte Linux VM and set these parameters:
sga_max_size=52G
sga_target=52G
shared_pool_size=36G
dispatchers='(PROTOCOL=TCP)(DISPATCHERS=64)'
max_shared_servers=16
shared_servers=16
large_pool_size=512M
Pretty sure the large pool grew dynamically to fill the SGA space not taken up by the shared pool: 52 - 36 = 16 gigabytes of large pool. That matters because with shared servers most of each session's memory (the UGA) is allocated in the large pool rather than in a dedicated server process.
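As a rough sanity check (my own arithmetic, not from any Oracle documentation): if roughly 16 GB of large pool has to hold the session memory for 45,000 sessions, the per-session budget works out to a few hundred KB, which is plausible for mostly idle sessions.

```java
public class SessionMemoryBudget {
    // Rough per-session memory budget when session UGA lives in the large pool.
    static long perSessionBytes(long largePoolBytes, int sessions) {
        return largePoolBytes / sessions;
    }

    public static void main(String[] args) {
        long largePool = 16L * 1024 * 1024 * 1024;   // the ~16 GB left over for the large pool
        long perSession = perSessionBytes(largePool, 45000);
        System.out.println(perSession / 1024 + " KB per session");   // a few hundred KB each
    }
}
```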
Anyway, I don’t have time to write this up carefully now, but I wanted to publish the parameters.
Here is the previous post with the Java program I used to open 1000 connections:
I ended up running 30 of these on 3 servers for a total of 90,000 potential logins and got up to over 63,000.
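The core of such a program is just a loop that opens connections and holds them. This is a minimal sketch, not the actual program from the earlier post; the host, port, service name, and credentials are placeholders, and it assumes the Oracle thin JDBC driver is on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class ManyConnections {
    // Builds an Oracle thin-driver JDBC URL of the form jdbc:oracle:thin:@//host:port/service
    static String buildUrl(String host, int port, String service) {
        return "jdbc:oracle:thin:@//" + host + ":" + port + "/" + service;
    }

    // Opens n sessions and returns them so the caller can keep them open.
    // With shared servers, these mostly idle sessions share a small pool
    // of server processes instead of each getting a dedicated process.
    static List<Connection> openSessions(String url, String user,
                                         String pass, int n) throws SQLException {
        List<Connection> conns = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            conns.add(DriverManager.getConnection(url, user, pass));
        }
        return conns;
    }
}
```

Calling something like openSessions(buildUrl("dbhost", 1521, "orcl"), user, pass, 1000) from 30 copies of the program on 3 servers gives the 90,000 potential logins described above.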
The first time you enable query tracing for a database session using shared servers, only to discover that the .trc file contains query executions for hundreds or thousands of unrelated DML statements. Plus, the off-the-shelf application has no feature to output the series of DML statements in a transaction.
Fun times in the 2000s.
Another fun fact. If one database session locks a table row and fails to reach a commit/rollback point for whatever reason, other sessions wanting to update the same locked table row(s) end up waiting on that lock. As the chaos of huge transactional applications goes, it only takes 100 such waiting transactions to tie up 100 shared servers in a few seconds. The entire application freezes and the phones start ringing.
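The failure mode described above can be sketched as a thread-pool analogy (plain Java, no database; the pool size and request counts are illustrative): a small pool of workers stands in for the shared servers, and one lock held without release stands in for the uncommitted row lock. Once enough requests are parked on the lock, even a trivial request cannot get a worker.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.locks.ReentrantLock;

public class SharedServerFreeze {
    static final int SHARED_SERVERS = 16;   // analogous to shared_servers=16

    // Submits `blocked` requests that all wait on one held lock, then a quick
    // no-op request. Returns true if the quick request completed within
    // waitMillis, false if the pool was frozen by the blocked requests.
    static boolean quickRequestCompletes(int blocked, long waitMillis) {
        ExecutorService pool = Executors.newFixedThreadPool(SHARED_SERVERS);
        ReentrantLock rowLock = new ReentrantLock();
        rowLock.lock();   // the session holding a row lock with no commit/rollback
        try {
            for (int i = 0; i < blocked; i++) {
                // Each of these is a session stuck waiting on the locked row.
                pool.submit(() -> { rowLock.lock(); rowLock.unlock(); });
            }
            Future<?> quick = pool.submit(() -> { /* a fast, unrelated query */ });
            try {
                quick.get(waitMillis, TimeUnit.MILLISECONDS);
                return true;
            } catch (TimeoutException | InterruptedException | ExecutionException e) {
                return false;
            }
        } finally {
            rowLock.unlock();     // the eventual rollback releases everyone
            pool.shutdownNow();
        }
    }
}
```

With blocked set to anything at or above the pool size, the quick request never runs until the lock is released, which is exactly the all-servers-busy freeze described above.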
Fun times in the 2010s.
Thanks for your comment, Tim. Shared servers have tradeoffs for sure. But we have used them for several years on an important system. It is true that sessions hung on locks will tie up all your shared servers. I have seen that. But honestly, if we have 100 sessions hung on a lock, we probably have a bigger issue on that system anyway.
I am not sure about tracing. I would have to check that. I guess it traces the process and not the session? I have not needed tracing much on our production system that uses shared servers, but that sounds like a pain if it dumps all the SQL that goes through that process.
One strange thing: I expected shared servers to be a little slower for many short queries, but they have been faster. Maybe there are advantages to funneling all your SQL through a few processes: caching or locality of reference? Not sure why, but it does run faster, not slower, than dedicated.
Anyway, I appreciate your comment and I agree that there are some negatives with shared servers, but I have seen great benefits in specific situations, so you have to weigh the pros and cons. I wouldn’t use shared servers unless I knew they were needed.