Just a quick note. I have an application that is generating thousands of inactive sessions, and with the default dedicated server configuration we are having to add more and more memory to our virtual host to support the connections. We estimate that the application may need 45,000 mostly inactive sessions once it is fully rolled out, so I thought about how much memory would be required to support 45,000 sessions using shared servers. In an earlier post I mentioned how I got the session count up to about 11,000, so I took the Java program from that post and adjusted memory parameters to support over 45,000. I got it up to over 60,000, so the test was essentially successful. I don't think I would want to run a system with 60,000 sessions on a single node, but it is nice to see that it is to some degree possible.
I used a 64 gigabyte Linux VM and set these parameters:
sga_max_size=52G
sga_target=52G
shared_pool_size=36G
dispatchers='(PROTOCOL=TCP)(DISPATCHERS=64)'
max_shared_servers=16
shared_servers=16
large_pool_size=512M
I am pretty sure that the large pool grew dynamically beyond its 512M setting to fill the SGA space not taken up by the shared pool: 52 GB - 36 GB = a 16 gigabyte large pool.
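If you want to confirm how the pools actually ended up sized, v$sga_dynamic_components shows the current size of each SGA component. Here is a minimal JDBC sketch of that check; the host, service name, and credentials are placeholders, not values from my test:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SgaSizes {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details - substitute your own.
        try (Connection con = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/orcl", "system", "password");
             Statement stmt = con.createStatement();
             // Current size of each dynamically resized SGA component.
             ResultSet rs = stmt.executeQuery(
                 "select component, current_size/1024/1024/1024 gb " +
                 "from v$sga_dynamic_components " +
                 "where component in ('shared pool','large pool')")) {
            while (rs.next()) {
                System.out.printf("%s: %.1f GB%n",
                    rs.getString("component"), rs.getDouble("gb"));
            }
        }
    }
}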
Anyway, I don’t have time to write this up carefully now, but I wanted to publish the parameters.
Here is the previous post with the Java program I used to open 1000 connections:
I ended up running 30 copies of the program on each of 3 servers (90 in all), for a total of 90,000 potential logins, and got up to over 63,000 sessions.
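I am not reproducing the original program here, but a minimal sketch of that kind of connection-holding test might look like the following. The host, service name, and credentials are placeholders, and (SERVER=SHARED) in the connect descriptor is what routes each connection through a dispatcher instead of a dedicated server:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.ArrayList;
import java.util.List;

public class HoldSessions {
    // Placeholder connect descriptor; (SERVER=SHARED) requests a
    // shared server session through one of the dispatchers.
    private static final String URL =
        "jdbc:oracle:thin:@(DESCRIPTION=" +
        "(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521))" +
        "(CONNECT_DATA=(SERVICE_NAME=orcl)(SERVER=SHARED)))";

    public static void main(String[] args) throws Exception {
        List<Connection> conns = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            conns.add(DriverManager.getConnection(URL, "testuser", "password"));
            if ((i + 1) % 100 == 0) {
                System.out.println("Opened " + (i + 1) + " connections");
            }
        }
        // Hold the connections open so the sessions sit mostly inactive,
        // like the application sessions this test is meant to simulate.
        Thread.sleep(Long.MAX_VALUE);
    }
}

Running many copies of something like this across a few client machines is what drove the session count up in the test.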