How to Create a Large SGA Without Relocating sgabeg on 32-bit Oracle?

One of our production data warehouse servers (Oracle 8.1.7.4, Solaris 2.6, sun4u, 12GB memory) had a 256MB shmmax set in /etc/system for years, yet the total SGA was over 2GB:

Total System Global Area 2367844512 bytes
Fixed Size                    73888 bytes
Variable Size             187043840 bytes
Database Buffers         2129920000 bytes
Redo Buffers               50806784 bytes

One day we realized this mistake and changed shmmax to 4GB per the Oracle 8.1.7 Install Guide (A85471_01.pdf). Then we could not start up the instance and got the familiar error:

SVRMGR> startup
ORA-27123: unable to attach to shared memory segment
SVR4 Error: 22: Invalid argument
Additional information: 1
Additional information: 106

How interesting! You could create a greater than 1.75GB SGA without relocating sgabeg (a procedure outlined in Metalink Note:1028623.6). So I did some tests on a small Ultra2 box running Oracle 9.0.1.3 on Solaris 2.6 with only 512MB RAM. What I found was quite surprising. When shmmax is set to 268435456, 1073741824, 1610612736, 1879048192, 2013265920, 2076386194 or 2111934921, the instance can start up OK: [note]

Total System Global Area 2131529288 bytes
Fixed Size                   282184 bytes
Variable Size             218103808 bytes
Database Buffers         1912602624 bytes
Redo Buffers                 540672 bytes

But when it's 2139506468, I get ORA-27123 when I try to start up Oracle. Why is the jump from 2111934921 to 2139506468 significant? Because the SGA is 2131529288, a number between them!

So, here's my conclusion:

********************************************************************************
If shmmax < SGA, you can always start up Oracle even with SGA > 1.75GB; but if
shmmax > SGA, the one-segment rule will be enforced and therefore you can't
have an SGA > 1.75GB unless you relocate sgabeg.
********************************************************************************

There's of course an upper limit to which you can increase the SGA. A 32-bit OS only allows 2^32 or 4GB of virtual address space.
The oracle binary, the stack and many shared libraries have to take some space. When you arbitrarily increase the SGA, you're squeezing the heap (used by the Oracle PGA) down to a smaller value.

The above conclusion basically states that sgabeg loses its meaning when shmmax < SGA: some segments will be created below "sgabeg", achieving the same effect as relocating the SGA beginning address to a lower value, except that multiple segments instead of a single one are created. Oracle Support's "explanation" of why the one-segment rule is not enforced when shmmax < SGA didn't convince me (see TARs 3018781.999 and 13791543.6 for those who can view them).

By the way, there are some wrong or misleading notes on Metalink. Note:221805.1 says a 64-bit OS limits virtual address space to 29GB and therefore you can only create an SGA smaller than 29-14=15GB (14 is sgabeg for 64-bit Oracle). But even for UltraSPARC-I/II, the OS limit on virtual address space is 2^44 or 16TB (see groups.google.com/groups?&selm=8i39dp%24j23%241%40new-usenet.uk.sun.com and groups.google.com/groups?selm=2002510.22527.9963%40cable.prodigy.com). Oracle Install Guide A96167-01 tells users to set shmmax to 4GB without a caveat that this advice is only for 32-bit Oracle, except in the HPUX section, where shmmax is advised to be set to "Available physical memory", the best advice of all!

Yong Huang

________________________
[note] In case you're interested in this test, other /etc/system parameters are (from sysdef | grep -i shm):

sys/shmsys
2111934921  max shared memory segment size (SHMMAX)
         1  min shared memory segment size (SHMMIN)
       100  shared memory identifiers (SHMMNI)
       200  max attached shm segments per process (SHMSEG)

Database parameters are:

db_block_buffers          0
db_block_size             8192
db_cache_size             1912602624
hi_shared_memory_address  0
shared_memory_address     0
shared_pool_reserved_size 3355443
shared_pool_size          67108864

For obvious reasons, starting up an instance with a 2GB SGA on a 512MB RAM Unix box incurs very heavy swapping.
In fact, the instance starting up with a 2GB SGA on this box logs warnings like these in alert.log:

WARNING: EINVAL creating segment of size 0x0000000080086000
fix shm parameters in /etc/system or equivalent

and

WARNING: Not enough physical memory for SHM_SHARE_MMU segment of size 0x0000000040000000 [flag=0x4000]

The latter warning implies ISM (intimate shared memory) is not enabled. Also, /usr/proc/pmap -x shows the multiple shared memory segments for this instance (note that sgabeg, which is 0x80000000, loses its meaning):

Address   Kbytes  Resident  Shared  Private  Permissions             Mapped File
00010000   37688     14848    6800     8048  read/exec               oracle
024EC000     304       216      72      144  read/write/exec         oracle
02538000     912       560       -      560  read/write/exec         [ heap ]
20000000 1032192    251880     456   251424  read/write/exec/shared  [shmid=0x4]
5F000000       8         -       -        -  read/shared             [shmid=0x4]
5F002000     512       512      80      432  read/write/exec/shared  [shmid=0x4]
5F082000       8         8       8        -  read/shared             [shmid=0x4]
5F084000       8         8       8        -  read/write/exec/shared  [shmid=0x4]
80000000     280       280       -      280  read/write/exec/shared  [shmid=0x66]
81000000 1048576     55920    2208    53712  read/write/exec/shared  [shmid=0x3]
EF000000    3904      2648     544     2104  read/exec               libjox9.so
EF3DE000     160       160       -      160  read/write/exec         libjox9.so
EF406000       8         -       -        -  read/write/exec         [ anon ]
EF4E0000       8         8       8        -  read/write/exec/shared  [ anon ]
EF4F0000      16        16      16        -  read/exec               libmp.so.2
EF502000       8         8       -        8  read/write/exec         libmp.so.2
EF510000      88        56      24       32  read/exec               libm.so.1
EF534000       8         8       -        8  read/write/exec         libm.so.1
EF540000       8         8       8        -  read/exec               libkstat.so.1
EF550000       8         8       -        8  read/write/exec         libkstat.so.1
EF560000      24        16       8        8  read/exec               libposix4.so.1
EF574000       8         8       -        8  read/write/exec         libposix4.so.1
EF580000     600       472     456       16  read/exec               libc.so.1
EF624000      32        32       -       32  read/write/exec         libc.so.1
EF62C000       8         8       -        8  read/write/exec         [ anon ]
EF640000      24        24      24        -  read/exec               libaio.so.1
EF654000      16        16       8        8  read/write/exec         libaio.so.1
EF660000       8         8       8        -  read/exec               libsched.so.1
EF670000       8         8       -        8  read/write/exec         libsched.so.1
EF680000     456       352     344        8  read/exec               libnsl.so.1
EF700000      40        40       -       40  read/write/exec         libnsl.so.1
EF70A000      16         -       -        -  read/write/exec         [ anon ]
EF720000      16        16      16        -  read/exec               libc_psr.so.1
EF730000       8         -       -        -  read/write/exec         [ anon ]
EF740000      32        32      32        -  read/exec               libsocket.so.1
EF756000       8         8       -        8  read/write/exec         libsocket.so.1
EF758000       8         -       -        -  read/write/exec         [ anon ]
EF760000       8         8       -        8  read/write/exec         [ anon ]
EF770000       8         8       8        -  read/exec               libskgxp9.so
EF780000       8         8       -        8  read/write/exec         libskgxp9.so
EF790000       8         8       8        -  read/exec               libodmd9.so
EF7A0000       8         8       -        8  read/write/exec         libodmd9.so
EF7B0000       8         8       8        -  read/exec               libdl.so.1
EF7C0000     128       128     128        -  read/exec               ld.so.1
EF7EE000      16        16       -       16  read/write/exec         ld.so.1
EFFFA000      24        24       -       24  read/write              [ stack ]
--------  ------    ------  ------   ------
total Kb 2126232    328408   11280   317128

ipcs -b shows:

T   ID  KEY         MODE         OWNER   GROUP       SEGSZ
Shared Memory:
m    0  0x50000fff  --rw-r--r--  root    root           68
m    1  00000000    --rw-rw-rw-  root    root         4068
m  102  00000000    --rw-r-----  oracle  dba        286720
m    3  00000000    --rw-r-----  oracle  dba    1073741824
m    4  0x350cde58  --rw-r-----  oracle  dba    1057513472

According to Steve Adams (Oracle8i Internal Services, p.91), multiple shared memory segments have a slight negative impact on instance startup and process creation. But all that only tells us that playing this game is impractical and has no real use. The purpose of this experiment is to understand how Oracle creates shared memory segments.

[20060918 note] Bug:1470761 reports the same observation, for Oracle 8.0.6 on SGI.