index.json
[{"content":"Sample Post with an Image Welcome to this sample post! This is an example of how to embed an image stored in your Hugo static directory.\nHere\u0026rsquo;s the Image End of post\n","permalink":"https://v2.amardeepsidhu.com/blog/testpost2/","summary":"\u003ch1 id=\"sample-post-with-an-image\"\u003eSample Post with an Image\u003c/h1\u003e\n\u003cp\u003eWelcome to this sample post! This is an example of how to embed an image stored in your Hugo \u003ccode\u003estatic\u003c/code\u003e directory.\u003c/p\u003e\n\u003ch2 id=\"heres-the-image\"\u003eHere\u0026rsquo;s the Image\u003c/h2\u003e\n\u003cp\u003e\u003cimg alt=\"Sample image\" loading=\"lazy\" src=\"/blog/image.png\"\u003e\u003c/p\u003e\n\u003cp\u003eEnd of post\u003c/p\u003e","title":"Testpost2"},{"content":"","permalink":"https://v2.amardeepsidhu.com/blog/testpost/","summary":"","title":"Testpost"},{"content":"While provisioning compute instances in OCI, you may come across scenarios which need more memory and cores than what the standard shapes provide. Extended memory VMs are meant to solve that problem. Such VMs are able to access cores and memory across a single physical socket and it allows them to go beyond the limits of standard shapes. There are no additional charges for using this specific feature. Customer is charged on the basis of total number of cores and total amount of memory used.\nOn the provisioning screen, the sliders for selecting number of cores and amount of memory allow you to go beyond the limits for standard shapes. Once you cross the limit of a standard shape, the instances becomes an extended memory instance.\nThere are a few things to keep in mind while provisioning such an instance:\nNot all shapes support extended memory instances. Currently it is supported with VM.Standard3.Flex, VM.Standard.E3.Flex and VM.Standard.E4.Flex Extended memory instances don\u0026rsquo;t support burstable feature. Capacity reservations aren\u0026rsquo;t available with Extended memory instances. Preemptible instances don\u0026rsquo;t work with this feature. To take advantage of the extended memory, it is important to make the application NUMA aware when it is deployed on a Extended memory instance. More details about extended memory instances are available in the official documentation page.\n","permalink":"https://v2.amardeepsidhu.com/blog/2023/09/21/extended-memory-vm-instances-in-oci/","summary":"\u003cp\u003eWhile provisioning compute instances in OCI, you may come across scenarios which need more memory and cores than what the standard shapes provide. Extended memory VMs are meant to solve that problem. Such VMs are able to access cores and memory across a single physical socket and it allows them to go beyond the limits of standard shapes. There are no additional charges for using this specific feature. Customer is charged on the basis of total number of cores and total amount of memory used.\u003c/p\u003e","title":"Extended memory VM instances in OCI"},{"content":"Last week, got this issue reported by a DBA that he wasn\u0026rsquo;t able to su to oracle user from root on a Oracle Base Database VM in OCI. The login of opc user worked fine and he could do sudo su to root but he couldn\u0026rsquo;t su to oracle. When he did it just came back to root shell.\n[root@xxx ~]# su - oracle\rLast login: Fri Jan 12 10:20:38 UTC 2023\r[root@xxx ~]# There was nothing relevant in /var/log/messages or /var/log/secure. I tried it for some other user and it worked fine. 
Then I suspected something with the profile of oracle user and voila ! The .bashrc looked like this\n[root@xxx oracle]# pwd\r/home/oracle\r[root@xxx oracle]# more .bashrc\rexit\r[root@xxx oracle]# So the moment it logged in with user oracle, there was an exit command in .bashrc and it used to exit. Problem solved. I don\u0026rsquo;t know who did it but it looks like a mischief done by someone.\n","permalink":"https://v2.amardeepsidhu.com/blog/2023/01/20/cant-su-to-oracle-user/","summary":"\u003cp\u003eLast week, got this issue reported by a DBA that he wasn\u0026rsquo;t able to su to oracle user from root on a Oracle Base Database VM in OCI. The login of opc user worked fine and he could do sudo su to root but he couldn\u0026rsquo;t su to oracle. When he did it just came back to root shell.\u003c/p\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003e[root@xxx ~]# su - oracle\r\nLast login: Fri Jan 12 10:20:38 UTC 2023\r\n[root@xxx ~]#\n\u003c/code\u003e\u003c/pre\u003e\u003cp\u003eThere was nothing relevant in /var/log/messages or /var/log/secure. I tried it for some other user and it worked fine. Then I suspected something with the profile of oracle user and voila ! The .bashrc looked like this\u003c/p\u003e","title":"Can’t su to oracle user"},{"content":"A customer is using an Exadata X8M-2 machine with multiple VMs (hence multiple clusters). I was working on adding a new storage cell to the configuration. After creating griddisks on the new cell and updating cellip.ora on all the VMs, I noticed that none of the clusters was able to see the new griddisks. I checked the usual suspects like if asm_diskstring was set properly, private network subnet mask on new cell was same as the old ones. All looked good. I started searching about the issue and stumbled upon some references mentioning ASM scoped security. I checked on one of the existing cells and that actually was the issue. The existing nodes had it enabled while the new one hadn\u0026rsquo;t. Running this command on an existing cell\ncellcli -e list key detail\rname:\rkey:\tc25a62472a160e28bf15a29c162f1d74\rtype:\tCELL\rname:\tcluster1\rkey:\tfa292e11b31b210c4b7a24c5f1bb4d32\rtype:\tASMCLUSTER\rname:\tcluster2\rkey:\tb67d5587fe728118af47c57ab8da650a\rtype:\tASMCLUSTER We need to enable ASM scoped security on the new cell as well. There are three things that need to be done. We need to copy /etc/oracle/cell/network-config/cellkey.ora from an existing cell to the new cell, assign the key to the cell and then assign keys to the different ASM clusters. We can use these commands to do it\ncellcli -e ASSIGN KEY FOR CELL \u0026#39;c25a62472a160e28bf15a29c162f1d74\u0026#39;\rcellcli -e ASSIGN KEY FOR ASMCLUSTER \u0026#39;cluster1\u0026#39;=\u0026#39;fa292e11b31b210c4b7a24c5f1bb4d32\u0026#39;;\rcellcli -e ASSIGN KEY FOR ASMCLUSTER \u0026#39;cluster2\u0026#39;=\u0026#39;b67d5587fe728118af47c57ab8da650a\u0026#39;; Once this is done, we need to tag the griddisks for appropriate ASM clusters. 
If the griddisks aren\u0026rsquo;t created yet, we can use this command to do it\ncellcli -e CREATE GRIDDISK ALL HARDDISK PREFIX=sales, size=75G, availableTo=\u0026#39;cluster1\u0026#39; If the griddisks are already created, we can use the alter command to make this change\ncellcli -e alter griddisk griddisk0,gridisk1,.....griddisk11 availableTo=\u0026#39;cluster1\u0026#39;; Once this is done, we should be able to see new griddisks as CANDIDATE in v$asm_disk\nComments Comment by Amit on 2023-03-26 03:59:06 +0530 I am also about to add 4 X9M cells to an existing X8M rack (2 DB+9 Cells). Oracle has added the cells to existing Rack and done the cabling but th.e cells are powered down. They just enabled ILOM access to first new cell node. How do I take it from here.\nDo I need to use OEDA to create xml files for new 4 cell nodes? How do I update the IP addresses of the Cells before adding them to the cluster? Do I need to run OEDA install.sh? If I do need OEDA, should I enter info about only new cells or do I need to enter entire rack (2DB + 9Cells + 4newCells)? Won’t it cause any issues if I run install.sh with all this info as the cluster is already configured. If you have any documents or a link that explains this procedure, that would be great. Thanks!\nComment by Sidhu on 2023-09-19 12:21:02 +0530 Yes, you will need to select storage expansion rack. You can enter 0 for number of DB nodes and 4 for number of storage nodes.\nYou will need to make these changes manually. install.sh will not be used here.\nYou don’t need to run install.sh for this. Storage expansion part is mostly handled manually.\nI am not sure if this information exists in consolidation form at one place but I am sure there would be many blog posts and MOS docs describing this scenario.\n","permalink":"https://v2.amardeepsidhu.com/blog/2022/05/09/adding-a-new-cell-to-exadata-with-asm-scoped-security-enabled/","summary":"\u003cp\u003eA customer is using an Exadata X8M-2 machine with multiple VMs (hence multiple clusters). I was working on adding a new storage cell to the configuration. After creating griddisks on the new cell and updating cellip.ora on all the VMs, I noticed that none of the clusters was able to see the new griddisks. I checked the usual suspects like if asm_diskstring was set properly, private network subnet mask on new cell was same as the old ones. All looked good. I started searching about the issue and stumbled upon some references mentioning \u003ca href=\"https://mudasirhakakblog.wordpress.com/2019/03/01/asm-scoped-security/\"\u003eASM scoped security\u003c/a\u003e. I checked on one of the existing cells and that actually was the issue. The existing nodes had it enabled while the new one hadn\u0026rsquo;t. Running this command on an existing cell\u003c/p\u003e","title":"Adding a new cell to Exadata with asm scoped security enabled"},{"content":"A customer who is using an Exadata X8M-2 with multiple VMs had Smokescreen deployed in their company recently and they reported an issue that one of the Smokescreen decoy servers in their DC was seeing traffic from one of the Exadata VMs on a certain port. That was rather confusing as that port was the database listener port on that VM and why would a VM with Oracle RAC deployed try to access any random IP on the listener port. Also it was happening only for this VM. Nothing for so many other VMs.\nWe were just looking at the things and my colleague said that he had seen this IP somewhere and he started looking through the emails. 
In a minute, we found the issue as he found this IP mentioned in one of the emails. This was the VIP of this VM from where the traffic was reported to be originating. While reserving IPs for Smokescreen decoy servers, someone made the mess and re-used the IP that was already used for one of the VIPs of this RAC system !\n","permalink":"https://v2.amardeepsidhu.com/blog/2022/03/21/smokescreen-detects-traffic-from-an-exadata-vm/","summary":"\u003cp\u003eA customer who is using an Exadata X8M-2 with multiple VMs had Smokescreen deployed in their company recently and they reported an issue that one of the Smokescreen decoy servers in their DC was seeing traffic from one of the Exadata VMs on a certain port. That was rather confusing as that port was the database listener port on that VM and why would a VM with Oracle RAC deployed try to access any random IP on the listener port. Also it was happening only for this VM. Nothing for so many other VMs.\u003c/p\u003e","title":"Smokescreen detects traffic from an Exadata VM"},{"content":"I was working on configuring a new database for backup to ZDLRA and hit this issue while testing a controlfile backup via Enterprise Manager -\u0026gt; Schedule backup. It could happen in any environment.\nUnable to connect to the database with SQLPlus, either because the database is down or due to an environment issue such as incorrectly specified...\rIf the database is up, check the database target monitoring properties and verify that the Oracle Home value is correct. The 2nd line clearly tells the problem but since the Cluster Database status in EM was green, so it took me a while to figure it out. Issue turned out to be a missing / in the end of ORACLE_HOME specified in monitoring configuration of the cluster database. The DB home specified was /u01/app/oracle/product/11.2.0.4/dbhome_1 instead of /u01/app/oracle/product/11.2.0.4/dbhome_1 /.\nOn the server the bash_profile has the home set as /u01/app/oracle/product/11.2.0.4/dbhome_1. When I tried to connect as sysdba there, it gave an error TNS : lost contact. Then I set the environment with .oraenv and I was able to connect. /etc/oratab had correct home specified as /u01/app/oracle/product/11.2.0.4/dbhome_1 /. After comparing the value of ORACLE_HOME in these two cases, the issue was identified. Then I updated the ORACLE_HOME value in the target monitoring configuration in Enterprise Manager and it worked as expected.\n","permalink":"https://v2.amardeepsidhu.com/blog/2022/01/06/unable-to-connect-to-the-database-with-sqlplus/","summary":"\u003cp\u003eI was working on configuring a new database for backup to ZDLRA and hit this issue while testing a controlfile backup via Enterprise Manager -\u0026gt; Schedule backup. It could happen in any environment.\u003c/p\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003eUnable to connect to the database with SQLPlus, either because the database is down or due to an environment issue such as incorrectly specified...\r\nIf the database is up, check the database target monitoring properties and verify that the Oracle Home value is correct.\n\u003c/code\u003e\u003c/pre\u003e\u003cp\u003eThe 2nd line clearly tells the problem but since the Cluster Database status in EM was green, so it took me a while to figure it out. Issue turned out to be a missing / in the end of ORACLE_HOME specified in monitoring configuration of the cluster database. 
The DB home specified was /u01/app/oracle/product/11.2.0.4/dbhome_1 instead of /u01/app/oracle/product/11.2.0.4/dbhome_1 \u003cstrong\u003e/\u003c/strong\u003e.\u003c/p\u003e","title":"Unable to connect to the database with SQLPlus"},{"content":"It was actually funny. So thought about posting it that sometimes how we can miss the absolute basics. This customer is using a virtualized Exadata with multiple VMs. One VM hosts the database meant to be used for dbfs and another VM connects to this DB over IB to mount dbfs file system using dbfs_client. One day VMs were rebooted and due to some reason the dbfs filesystem didn\u0026rsquo;t mount on startup. It went on for few days and they couldn\u0026rsquo;t mount it. One day I got a chance to look at it and the error they were facing was:\nFile system already present at specified mount point /dbfs_direct If you are familiar with Unix, this clearly indicates some problem with the directory where it is trying to mount the file system. I checked that and there were some files in /dbfs_direct. I moved those files and it was able to mount. Issue resolved.\nThen after closing the session, I was thinking that what could have happened as this dbfs mount point has been in use for long. Then it struck me. It was being used to take some RMAN backups and the path was hard coded in the scripts. When it didn\u0026rsquo;t mount after the reboot (i don\u0026rsquo;t know why), someone ran that script and whatever directories it didn\u0026rsquo;t find, it probably created that complete directory structure and tried to write to a log file. Once /dbfs_direct had those files, it anyway was not going to mount.\n","permalink":"https://v2.amardeepsidhu.com/blog/2021/06/05/file-system-already-present-at-specified-mount-point-dbfs_direct/","summary":"\u003cp\u003eIt was actually funny. So thought about posting it that sometimes how we can miss the absolute basics. This customer is using a virtualized Exadata with multiple VMs. One VM hosts the database meant to be used for dbfs and another VM connects to this DB over IB to mount dbfs file system using dbfs_client. One day VMs were rebooted and due to some reason the dbfs filesystem didn\u0026rsquo;t mount on startup. It went on for few days and they couldn\u0026rsquo;t mount it. One day I got a chance to look at it and the error they were facing was:\u003c/p\u003e","title":"File system already present at specified mount point /dbfs_direct"},{"content":"To put it in bit of an Indian context, database is not your daughter-in-law that you can blame it for every performance issue that occurs in the environment. But it does happen. Most of the time it is the database that is blamed for all such issues. Many times, the issues are in some other layer like OS, network or storage.\nFaced this issue recently at one of the customer sites where performance in one of the databases went down suddenly. It was a 2 node RAC on 12.1.0.2 running on Linux 7 using some kind of Hitachi SSD storage array. There were no changes as per DBA, application, OS and storage teams. But something must have changed somewhere. Otherwise why would performance degrade just like that. I \u0026amp; my colleague checked some details and found that something happened in the morning a day before. Starting from that point in time, the execution time for all the commonly run queries shot up. Generally speaking, when all the queries are doing bad and you are sure that nothing has been changed on the database side, the reasons could be outside the database. 
But being a DBA, it is not easy to prove that. We took AWRs from good and bad times and the wait events section looked like this:\nNow there is something clearly and terribly wrong with the details in the second snippet and in the first look it appears to be an IO issue. Av Rd(ms) in the File IO Stats section of the AWR reports was also showing really bad numbers for most of the data files, which have been fine two days ago.\nThe conference calls continued and we were not reaching anywhere. Storage team as usual said that everything was fine and there were no issues. Finally the discussion moved to multipathing and the teams started checking in that direction. There were errors like this in /var/log/messages\nmultipathd: asm!.asm_ctl_vbg1: failed to get path uid\rmultipathd: asm!.asm_ctl_vbg6: failed to get path uid\rmultipathd: asm!.asm_ctl_vbg9: failed to get path uid That meant there was a problem with one of the paths from the database nodes to storage. They disabled the bad path for both the DB nodes and voila ! IO performance was back on track. It was multipathing that needed to be fixed.\nSo it is always not the database. It is unfair to always blame the DBA !\nComments Comment by Ravinder on 2021-03-22 17:51:55 +0530 Thanks for sharing this information !\ndo we have way where we can find this is not database issue . Issue is with network or stirage.\nComment by Sidhu on 2021-03-22 19:19:40 +0530 In this case it was kinda straight forward but that is not always the case. System level performance issues can be very complex to diagnose. AWR report and an ASH report are good starting points. You can also use Tanel Poder’s scripts like snapper and ashtop/dashtop and then move from there. He has made multiple videos and blog posts on use of these tools:\nhttps://tanelpoder.com/videos/\nComment by supriyo77 on 2021-03-22 20:18:10 +0530 i had an issue with a db where server RAID battery had a problem.As a result I/O performance degraded and multiple events pop up . issue was identified by iotop command.\nComment by Sidhu on 2021-03-23 10:39:56 +0530 Cool !\nComment by [email protected] on 2021-03-26 12:31:28 +0530 Excellent Amar. Cool and simplified post.\nComment by Sidhu on 2021-03-28 10:23:40 +0530 Thanks Raj !\n","permalink":"https://v2.amardeepsidhu.com/blog/2021/03/22/database-performance-degradation-due-to-multipath-issues/","summary":"\u003cp\u003eTo put it in bit of an Indian context, database is not your daughter-in-law that you can blame it for every performance issue that occurs in the environment. But it does happen. Most of the time it is the database that is blamed for all such issues. Many times, the issues are in some other layer like OS, network or storage.\u003c/p\u003e\n\u003cp\u003eFaced this issue recently at one of the customer sites where performance in one of the databases went down suddenly. It was a 2 node RAC on 12.1.0.2 running on Linux 7 using some kind of Hitachi SSD storage array. There were no changes as per DBA, application, OS and storage teams. But something must have changed somewhere. Otherwise why would performance degrade just like that. I \u0026amp; my colleague checked some details and found that something happened in the morning a day before. Starting from that point in time, the execution time for all the commonly run queries shot up. Generally speaking, when all the queries are doing bad and you are sure that nothing has been changed on the database side, the reasons could be outside the database. But being a DBA, it is not easy to prove that. 
We took AWRs from good and bad times and the wait events section looked like this:\u003c/p\u003e","title":"Database performance degradation due to multipath issues"},{"content":"To be honest, Fernando Simon has already documented all the steps needed in ZDLRA patching . So this post is more like a reference post for me and it points to the links on his blog. One thing he could change though are the post titles. He also agrees ;)\nhttps://twitter.com/amardeep_sidhu/status/1370304085245661192\nZDLRA patching is broadly divided into two parts. First part is where you patch the RA library and Grid \u0026amp; DB homes. Second part includes compute node \u0026amp; storage cell image patch and patches for IB/RoCE switches. Second part is exactly similar to Exadata except that it is bit restricted in terms of image versions that you can use. Only the versions that are certified for ZDLRA can be used. Also the RA library version and the Exadata image version should be compatible with each other. So if you are planning to patch only one part; RA library or the image, make sure that both the components stay compatible. The MOS note that has all these details is 1927416.1. This note should be the first place to go when you are planning to patch a ZDLRA. The steps for upgrade/patch, image patching are given in MOS note 2028931.1. There is another note 2639262.1 that discusses some of the known issues that you may face while doing the patching. It is important to review all three notes before you plan to patch.\nThe RA library patching part can be considered of two different types. This is an important difference. Make sure that you follow the right set of commands. When you are jumping between major versions say going from 12.x to 19.x, it is called an upgrade and the commands are like racli upgrade appliance \u0026ndash;step=1. Fernando talks about this in detail in this post.\nOn the other hand, when you are not jumping between versions; say going from 19.x to 19.x only, it is called patching and the commands are like racli patch appliance \u0026ndash;step=1. Fernando has discussed this in detail in this post.\nThe Exadata bit (image \u0026amp; switches patching) of it is exactly the same as we do in Exadata. Fernando talks about this in this post.\nThe RA library patching bit is pretty much automated and works fine most of the time. If you hit an issue, you may find the solution/workaround documented in one of the MOS notes.\nHappy patching !\nComments Comment by Jamie on 2024-02-09 19:29:01 +0530 Hi Amardeep , Nice article . is it possible to have patching method for ZDLRA as ROLLING ?\n","permalink":"https://v2.amardeepsidhu.com/blog/2021/03/12/zdlra-patching/","summary":"\u003cp\u003eTo be honest, Fernando Simon has already documented all the steps needed in ZDLRA patching . So this post is more like a reference post for me and it points to the links on \u003ca href=\"http://www.fernandosimon.com/blog/\"\u003ehis blog\u003c/a\u003e. One thing he could change though are the post titles. He also agrees ;)\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://twitter.com/amardeep\"\u003ehttps://twitter.com/amardeep\u003c/a\u003e_sidhu/status/1370304085245661192\u003c/p\u003e\n\u003cp\u003eZDLRA patching is broadly divided into two parts. First part is where you patch the RA library and Grid \u0026amp; DB homes. Second part includes compute node \u0026amp; storage cell image patch and patches for IB/RoCE switches. 
Second part is exactly similar to Exadata except that it is bit restricted in terms of image versions that you can use. Only the versions that are certified for ZDLRA can be used. Also the RA library version and the Exadata image version should be compatible with each other. So if you are planning to patch only one part; RA library or the image, make sure that both the components stay compatible. The MOS note that has all these details is 1927416.1. This note should be the first place to go when you are planning to patch a ZDLRA. The steps for upgrade/patch, image patching are given in MOS note 2028931.1. There is another note 2639262.1 that discusses some of the known issues that you may face while doing the patching. It is important to review all three notes before you plan to patch.\u003c/p\u003e","title":"ZDLRA patching"},{"content":"Faced this while running installer for setting up a 2 node RAC setup (version 19.8) on an Oracle SuperCluster. The error reported in the log is:\n[FATAL] [INS-44000] Passwordless SSH connectivity is not setup from the local node node1 to the following nodes:\r[node2]\r[INS-06006] Passwordless SSH connectivity not set up between the following node(s): [node2] From the error it appears that the ssh is not setup between two nodes but actually that is not the case. Here the error message is bit misleading. It turned out to be an issue with scp with openssh version 8.x. Running the setup with -debug option gives the clue:\n\u0026lt;protocol error: filename does not match request\u0026gt; The reason is a new check introduced in openssh version 8.x. It is explained here, here and here. MOS note 2555697.1 also talks about it.\nWorkaround is to pass the -T option to scp to ignore the new checks. You can rename the scp binary to something like scp.original and create a new shell script there like this:\ncd /usr/bin\rmv scp scp.original\rvi scp\r/usr/bin/scp.original -T $*\rchmod 555 scp This time, the install should succeed. You can revert the changes back once the install is done.\nComments Comment by Fritson Louis on 2023-01-08 04:51:52 +0530 Thank you!!\n","permalink":"https://v2.amardeepsidhu.com/blog/2021/03/10/fatal-ins-44000-passwordless-ssh-connectivity-is-not-setup/","summary":"\u003cp\u003eFaced this while running installer for setting up a 2 node RAC setup (version 19.8) on an Oracle SuperCluster. The error reported in the log is:\u003c/p\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003e[FATAL] [INS-44000] Passwordless SSH connectivity is not setup from the local node node1 to the following nodes:\r\n[node2]\r\n[INS-06006] Passwordless SSH connectivity not set up between the following node(s): [node2]\n\u003c/code\u003e\u003c/pre\u003e\u003cp\u003eFrom the error it appears that the ssh is not setup between two nodes but actually that is not the case. Here the error message is bit misleading. It turned out to be an issue with scp with openssh version 8.x. Running the setup with -debug option gives the clue:\u003c/p\u003e","title":"[FATAL] [INS-44000] Passwordless SSH connectivity is not setup"},{"content":"Earlier versions of OEDA didn\u0026rsquo;t allow you to have mixed cells in the configuration i.e. High Capacity (HC) and Extreme Flash (EF). The way to deal with that configuration was that deploy the system with either HC or EF cells and then manually configure the remaining cells.\nI am not sure when did it change but the newer versions allow you have mixed type of cells in a single OEDA configuration. 
Once you select the hardware, there is an additional option called Enable Additional Storage, where you can select the other type of cells. The minimum number of cells has to be three to use this option. Also the cells that are at the bottom of the rack physically should be selected as main storage and the other cells should be added as additional storage as that is how OEDA builds the configuration files.\nOnce this is selected, on the Diskgroups screen, select Diskgroup layout as custom and you can create multiple diskgroups and select cells for each diskgroup (as EF \u0026amp; HC cells can\u0026rsquo;t be part of the same diskgroup).\nOnce the configuration is generated, it can be deployed with OneCommand without any manual intervention. A small feature but makes life easier by getting rid of all the manual steps.\nComments Comment by Aman on 2023-03-25 21:34:06 +0530 Hi Amardeep,\nThanks for the blog. It provided some good info. I have a question. If we need to add 4 X9M cell to an existing installed X8M system, what kind of hardware selection we do in OEDA?\nThanks for your help!\nAman\nComment by Sidhu on 2023-09-19 12:18:07 +0530 You can select a storage expansion rack by inputting 0 DB nodes and 4 storage nodes.\n","permalink":"https://v2.amardeepsidhu.com/blog/2020/10/27/doing-an-exadata-mixed-cells-config-with-oeda/","summary":"\u003cp\u003eEarlier versions of OEDA didn\u0026rsquo;t allow you to have mixed cells in the configuration i.e. High Capacity (HC) and Extreme Flash (EF). The way to deal with that configuration was that deploy the system with either HC or EF cells and then manually configure the remaining cells.\u003c/p\u003e\n\u003cp\u003eI am not sure when did it change but the newer versions allow you have mixed type of cells in a single OEDA configuration. Once you select the hardware, there is an additional option called \u003cstrong\u003eEnable Additional Storage\u003c/strong\u003e, where you can select the other type of cells. The minimum number of cells has to be three to use this option. Also the cells that are at the bottom of the rack physically should be selected as main storage and the other cells should be added as additional storage as that is how OEDA builds the configuration files.\u003c/p\u003e","title":"Doing an Exadata mixed cells config with OEDA"},{"content":"In part 1, we discussed few things that you should take care before implementation of a ZDLRA. In this post, we will discuss few more things that you should review before or at the time of implementation:\nIf you are getting two ZDLRAs (one each for primary and standby sites), there are two ways they can be deployed. One scenario is where all the primary databases (or the database that have no standby) backup to RA at the primary site and then the data is replicated from primary RA to RA at the standby site. This works well for the DBs that have no standby database. For the DBs where there is a standby database, there is a better architecture that can be deployed. In that scenario, primary databases backup to primary RA and the standby databases backup to standby RA. That saves you all the traffic over replication network. Oracle has published a whitepaper on how to do this configuration. Few of the instructions in this paper are a bit dated but it gives a good overall idea of how to do the implementation. Keep an eye on the features supported for different DB versions. An interesting one is that real-time redo shipping from standby databases is supported on 12c+ databases only. 
It is not supported for 11g. There could be other similar things. MOS note 1995866.1 has these details. Depending upon the ZDLRA software version being deployed, it may need a minimum version of EM and the ZDLRA plugin. MOS note 2542836.1 has these details. Make sure after discovering the the primary and standby databases in EM, their primary-standby relationship is reflected. Real-time redo sent to ZDLRA is compressed but the archive logs backup will be compressed only if you use compression in the RMAN command. It is always good to include backup archivelog command with daily incremental job to make sure that no archive log is missed. Many of the environments have separate networks for backup traffic. Make sure the backup traffic to ZDLRA uses DB server\u0026rsquo;s backup network. If that is not the case, you may need to add an explicit route on DB server for ZDLRA client/VIP/scan IPs. There are going to be different users that you will need to use: one OS user for deploying the EM agent, one DB user that will be used to run the backups. Depending upon your environment, it may oracle OS user, SYS DB user or could be some other named user created for this purpose. In next few posts, we will discuss some of the issues I have faced while doing ZDLRA implementation for some customers.\nPS: Fernando Simon has written some brilliant posts related to ZDLRA on his blog. I highly recommend to review all of them. Brilliant stuff.\n","permalink":"https://v2.amardeepsidhu.com/blog/2020/10/06/implementing-zdlra-part-2/","summary":"\u003cp\u003eIn \u003ca href=\"/blog/2020/09/09/implementing-zdlra-part-1/\"\u003epart 1\u003c/a\u003e, we discussed few things that you should take care before implementation of a ZDLRA. In this post, we will discuss few more things that you should review before or at the time of implementation:\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003eIf you are getting two ZDLRAs (one each for primary and standby sites), there are two ways they can be deployed. One scenario is where all the primary databases (or the database that have no standby) backup to RA at the primary site and then the data is replicated from primary RA to RA at the standby site. This works well for the DBs that have no standby database. For the DBs where there is a standby database, there is a better architecture that can be deployed. In that scenario, primary databases backup to primary RA and the standby databases backup to standby RA. That saves you all the traffic over replication network. Oracle has published a whitepaper on how to do this configuration. Few of the instructions in this paper are a bit dated but it gives a good overall idea of how to do the implementation.\u003c/li\u003e\n\u003cli\u003eKeep an eye on the features supported for different DB versions. An interesting one is that real-time redo shipping from standby databases is supported on 12c+ databases only. It is not supported for 11g. There could be other similar things. MOS note 1995866.1 has these details.\u003c/li\u003e\n\u003cli\u003eDepending upon the ZDLRA software version being deployed, it may need a minimum version of EM and the ZDLRA plugin. MOS note 2542836.1 has these details.\u003c/li\u003e\n\u003cli\u003eMake sure after discovering the the primary and standby databases in EM, their primary-standby relationship is reflected.\u003c/li\u003e\n\u003cli\u003eReal-time redo sent to ZDLRA is compressed but the archive logs backup will be compressed only if you use compression in the RMAN command. 
It is always good to include backup archivelog command with daily incremental job to make sure that no archive log is missed.\u003c/li\u003e\n\u003cli\u003eMany of the environments have separate networks for backup traffic. Make sure the backup traffic to ZDLRA uses DB server\u0026rsquo;s backup network. If that is not the case, you may need to add an explicit route on DB server for ZDLRA client/VIP/scan IPs.\u003c/li\u003e\n\u003cli\u003eThere are going to be different users that you will need to use: one OS user for deploying the EM agent, one DB user that will be used to run the backups. Depending upon your environment, it may oracle OS user, SYS DB user or could be some other named user created for this purpose.\u003c/li\u003e\n\u003c/ol\u003e\n\u003cp\u003eIn next few posts, we will discuss some of the issues I have faced while doing ZDLRA implementation for some customers.\u003c/p\u003e","title":"Implementing ZDLRA – Part 2"},{"content":"A quick note about an error I faced while running root.sh on an Exadata machine. The configuration tools failed with the following error:\nError is PRVF-4657 : Name resolution setup check for \u0026#34;db-scan\u0026#34; (IP address: x.x.x.101) failed I did nslookup on the scan name and it all seemed good. So why the error ? After spending another 5 minutes, I looked at /etc/hosts and there was it. Someone had populated /etc/hosts of DB nodes with all the hostnames entries including the scan name. Something like:\nx.x.x.101\tdb-scan.example.com\tdb-scan\rx.x.x.102\tdb-scan.example.com\tdb-scan\rx.x.x.103\tdb-scan.example.com\tdb-scan As /etc/hosts can return only one IP against a hostname whereas for scan, DNS is supposed to return 3 IPs, hence the problem. The solution is to comment out the scan name entries in /etc/hosts on all the db nodes and let the system do the name resolution via the DNS.\n","permalink":"https://v2.amardeepsidhu.com/blog/2020/09/25/prvf-4657-name-resolution-setup-check-for-db-scan-ip-address-x-x-x-101-failed/","summary":"\u003cp\u003eA quick note about an error I faced while running root.sh on an Exadata machine. The configuration tools failed with the following error:\u003c/p\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003eError is PRVF-4657 : Name resolution setup check for \u0026#34;db-scan\u0026#34; (IP address: x.x.x.101) failed\n\u003c/code\u003e\u003c/pre\u003e\u003cp\u003eI did nslookup on the scan name and it all seemed good. So why the error ? After spending another 5 minutes, I looked at /etc/hosts and there was it. Someone had populated /etc/hosts of DB nodes with all the hostnames entries including the scan name. Something like:\u003c/p\u003e","title":"PRVF-4657 : Name resolution setup check for “db-scan” (IP address: x.x.x.101) failed"},{"content":"Zero Data Loss Recovery Appliance (ZDLRA) is Oracle\u0026rsquo;s solution for database backups. It has many advantages over other backup solutions that are available in the market. This post has a brief introduction to ZDLRA and few links for further reading. This is a quick post about few of things that you should keep in mind if you are planning to get a ZDLRA (RA in short). Of course, there is a lot more that is needed while executing the whole plan, but these are some of the basics:\nThe very first thing is capacity planning. Depending upon the number \u0026amp; sizes of the DBs that you plan to backup, you need to choose the required configuration. 
In most cases, an Oracle guy would be doing this for you but you should actively participate in the exercise by providing all the necessary information so that the calculations can be as accurate as possible. Another things that plays an important role in deciding the capacity needed is the retention period i.e. period for which you would like to keep the backups in RA. More the number of days, more is the space that you will need. Another important thing to consider is whether you are getting only one RA (for primary or standby site) or getting two of them i.e. one each for primary and standby site. Both scenarios need different type of configurations (including the bandwidth requirements between primary and standby sites) so it needs to be planned accordingly. One more aspect you need to consider is long term retention. It could be Oracle Cloud object storage or some tape solution. Once you have enabled DB backups to ZDLRA, you will need to stop all other backups. Plan that accordingly. Oracle provides way to run the legacy and ZDLRA backups together but that is for short duration i.e. when you are migrating from legacy backups to ZDLRA. That is not really a way to run 2 backup strategies together for long term. In the next post, will talk about few more things that are important at the time of actual implementation.\n","permalink":"https://v2.amardeepsidhu.com/blog/2020/09/09/implementing-zdlra-part-1/","summary":"\u003cp\u003eZero Data Loss Recovery Appliance (ZDLRA) is Oracle\u0026rsquo;s solution for database backups. It has many advantages over other backup solutions that are available in the market. \u003ca href=\"https://blogs.oracle.com/frankwickham/zero-data-loss-recovery-appliance-zdlra\"\u003eThis post\u003c/a\u003e has a brief introduction to ZDLRA and few links for further reading. This is a quick post about few of things that you should keep in mind if you are planning to get a ZDLRA (RA in short). Of course, there is a lot more that is needed while executing the whole plan, but these are some of the basics:\u003c/p\u003e","title":"Implementing ZDLRA – Part 1"},{"content":"Exadata storage software version 20.1 introduces a new feature called \u0026ldquo;Secure Fabric\u0026rdquo; for KVM based multi cluster deployments (Exadata X8M). It enables network isolation between multiple tenants (i.e. KVM VMs based RAC clusters). This feature aligns with Infiniband Partitioning on OVM based systems. There are customers who in such scenarios want that VMs of one RAC shouldn\u0026rsquo;t be able to see traffic of the other RAC VMs. This feature achieves that. Similar to Pkeys in IB switches, here it uses a double VLAN tagging system where the first tag identiefies the network partition and the second tag is used to denote membership level of the VM. Exadata documention has more details.\nThe minimum Exadata software version needed to enable this feature is 20.1. This release comes with RoCE switches firmware version 7.0(3)I7(8).\nStarting Jun 2020, OEDA supports this configuraion and this feature can be enabled in OEDA itself. To enable it in OEDA, under Cluster Networks click on the Advanced button and you will see the Enable Secure Fabric option.\nOnce this option is enabled, you will see VLANs enabled for the private network. 
While doing the deployment, OneCommand will take care of the configuration needed.\nAs per documentation, at present there is no way to enable it on existing systems except doing a re-deployment.\n","permalink":"https://v2.amardeepsidhu.com/blog/2020/07/17/using-secure-fabric-for-network-isolation-in-kvm-environments-on-exadata/","summary":"\u003cp\u003eExadata storage software version 20.1 introduces a new feature called \u0026ldquo;Secure Fabric\u0026rdquo; for KVM based multi cluster deployments (Exadata X8M). It enables network isolation between multiple tenants (i.e. KVM VMs based RAC clusters). This feature aligns with Infiniband Partitioning on OVM based systems. There are customers who in such scenarios want that VMs of one RAC shouldn\u0026rsquo;t be able to see traffic of the other RAC VMs. This feature achieves that. Similar to Pkeys in IB switches, here it uses a double VLAN tagging system where the first tag identiefies the network partition and the second tag is used to denote membership level of the VM. \u003ca href=\"https://docs.oracle.com/en/engineered-systems/exadata-database-machine/dbmin/exadata-network-requirements.html#GUID-75CC8740-CC7F-4A7B-B69B-B93E927E80EC\"\u003eExadata documention\u003c/a\u003e has more details.\u003c/p\u003e","title":"Using Secure Fabric for network isolation in KVM environments on Exadata"},{"content":"There are two common scenarios when we may need this:\nAn existing DB node has crashed and is unrecoverable (due to some failure and non-availability of any backups. Though some of the things may need to be done even if the backups were available). We have an existing Exadata rack that is virtualized. Now there is a new DB node and the existing clusters need to be extended to include the VMs on this new node. I recently faced the first scenario where a virtualized DB node crashed and wasn\u0026rsquo;t recoverable. A bare metal DB node restore is a relatively simple procedure where we just have to reimage the node, create the needed directories, users etc and add it to the RAC cluster. In case of virtualization, the creation of VMs is an additional step that needs to be done. That makes it slightly more complex.\nSo the scenario is that we have an Exadata quarter rack where DB node1 has issues and needs to be reimaged and reconfigured. There are multiple VMs (so RAC clusters) created. As one of the DB node has gone down, each RAC cluster is running with one less instance. This failed node will need to be cleaned up from the RAC configuration before adding it back. Here are the steps that we need to follow to restore it back:\nReimage the node using an ISO and make it ready for creation of User Domains (aka VMs) Create the required VMs Create the required users, setup ssh with other nodes Clear the failed node configuration from existing RAC clusters Add the newly created VMs back to the respective RAC clusters Now let\u0026rsquo;s discuss these steps in detail.\nReimage : The simplest way to reimage an Exadata node is to connect the ISO (We can download the ISO for the version we need from MOS note 888828.1) using ILOM, set the next boot device to CD-ROM, reboot/reset the node and let it boot from CD-ROM. Most of the installation part is automated and doesn\u0026rsquo;t ask any questions. Once it is done installing, ipconf starts in interactive mode and asks for all the information like Name servers, NTP servers, IP addresses and hostnames for various network interfaces etc. Once done, it will boot into the Linux partition. 
Since we need to virtualize the node, we need to switch it to OVS by running a script called /opt/oracle.SupportTools/switch_to_ovm.sh. It will reboot the node to OVS partition. Next step is to run reclaim /opt/opt/oracle.SupportTools/reclaim.sh -free -reclaim to reclaim the space used for bare metal partition. At this moment we are done with the reimaging part. To use ILOM in a browser and be able to access the console, we need a Java enabled Windows/Linux system. And if there is a firewall between that system and the server, this link lists the ports that need to be allowed in the firewall.\nVMs creation : Next step is the creation of VMs. We will use OneCommand to achieve this. In this case, we had the original XML file used for deployment. Now we need to edit that configuration and remove the existing node\u0026rsquo;s details from it. We can import the XML into OEDA, make the required changes and save the configuration files. This needs to be done carefully as a simple mistake like a duplicate IP may cause issues with the ASM/DBs running on the other node. Once this is done, we can download the OneCommand patch (MOS note 888828.1) and run the create VMs step of OneCommand. As we have only one node in the XML file, so it is not going to touch the existing configuration.\nCreate users : Now we need to create the users on the newly created VMs. OneCommand\u0026rsquo;s create users step can be used here. It will create users on all the VMs. There are some things that we need to do manually here. First thing is to remove binaries from Grid \u0026amp; DB home. As we are going to use addnode.sh to add new nodes to existing RAC clusters, so binaries are going to be copied from an existing node. Then we need to change ownership of Grid \u0026amp; DB home directory tree to oracle:oinstall. Also for each VM, we need to setup passwordless ssh with the respective other VM (\u0026amp; vice versa) that is going to be part of the cluster.\nClear failed node config : Next we need to clear the failed node\u0026rsquo;s configuration from each of the RAC clusters. That is pretty much the standard stuff we do in RAC.\nAdd the new nodes : This again is just the standard addnode stuff we do in RAC.\nI have used the terms VM and Node interchangeably here but the context should make it clear if I am referring to the physical node or a VM. There is another method to do this using OEDACLI and it is documented in Exadata documentation. That automates a lot of these things. Check this link for the details.\nComments Comment by Ayush on 2020-05-12 13:18:25 +0530 Thanks for sharing this. Please confirm what kind of backup/recovery plan can save us from this reimaging?\nComment by Sidhu on 2020-05-28 12:37:24 +0530 It is not that common a occurrence that 2 disks fail at the same time and you get into a situation where you have to reimage and build it all over again but nevertheless it can happen. There are ways to take backups of dom0 and domUs. Exadata documentation talks about it https://docs.oracle.com/en/engineered-systems/exadata-database-machine/dbmmn/managing-oracle-vm-domains.html#GUID-A92B0137-52F4-4D58-BF06-291F79460C1B\n","permalink":"https://v2.amardeepsidhu.com/blog/2020/05/11/exadata-virtualized-db-node-restore/","summary":"\u003cp\u003eThere are two common scenarios when we may need this:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003eAn existing DB node has crashed and is unrecoverable (due to some failure and non-availability of any backups. 
Though some of the things may need to be done even if the backups were available).\u003c/li\u003e\n\u003cli\u003eWe have an existing Exadata rack that is virtualized. Now there is a new DB node and the existing clusters need to be extended to include the VMs on this new node.\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eI recently faced the first scenario where a virtualized DB node crashed and wasn\u0026rsquo;t recoverable. A bare metal DB node restore is a relatively simple procedure where we just have to reimage the node, create the needed directories, users etc and add it to the RAC cluster. In case of virtualization, the creation of VMs is an additional step that needs to be done. That makes it slightly more complex.\u003c/p\u003e","title":"Exadata Virtualized DB node restore"},{"content":"I was patching an Exadata db node from 18.1.5.0.0.180506 to 19.3.2.0.0.191119. It had been more than an hour and dbnodeupdate.sh appeared to be stuck. Trying to ssh to the node was giving \u0026ldquo;connection refused\u0026rdquo; and the console had this output (some output removed for brevity):\n[ 458.006444] upgrade[8876]: [642/676] (72%) installing exadata-sun-computenode-19.3.2.0.0.191119-1...\r\u0026lt;\u0026gt;\r[ 459.991449] upgrade[8876]: Created symlink /etc/systemd/system/multi-user.target.wants/exadata-iscsi-reconcile.service, pointing to /etc/systemd/system/exadata-iscsi-reconcile.service.\r[ 460.011466] upgrade[8876]: Looking for unit files in (higher priority first):\r[ 460.021436] upgrade[8876]: /etc/systemd/system\r[ 460.028479] upgrade[8876]: /run/systemd/system\r[ 460.035431] upgrade[8876]: /usr/local/lib/systemd/system\r[ 460.042429] upgrade[8876]: /usr/lib/systemd/system\r[ 460.049457] upgrade[8876]: Looking for SysV init scripts in:\r[ 460.057474] upgrade[8876]: /etc/rc.d/init.d\r[ 460.064430] upgrade[8876]: Looking for SysV rcN.d links in:\r[ 460.071445] upgrade[8876]: /etc/rc.d\r[ 460.076454] upgrade[8876]: Looking for unit files in (higher priority first):\r[ 460.086461] upgrade[8876]: /etc/systemd/system\r[ 460.093435] upgrade[8876]: /run/systemd/system\r[ 460.100433] upgrade[8876]: /usr/local/lib/systemd/system\r[ 460.107474] upgrade[8876]: /usr/lib/systemd/system\r[ 460.114432] upgrade[8876]: Looking for SysV init scripts in:\r[ 460.122455] upgrade[8876]: /etc/rc.d/init.d\r[ 460.129458] upgrade[8876]: Looking for SysV rcN.d links in:\r[ 460.136468] upgrade[8876]: /etc/rc.d\r[ 460.141451] upgrade[8876]: Created symlink /etc/systemd/system/multi-user.target.wants/exadata-multipathmon.service, pointing to /etc/systemd/system/exadata-multipathmon.service. There was not much that I could do so just waited. Also created an SR with Oracle Support and they also suggested to wait. It started moving after some time and completed successfully. Finally when the node came up, i checked that there was an NFS mount entry in /etc/rc.local and that was what created the problem. For the second node, we commented this out and it was all smooth. Important to comment out all NFS entries during patching to avoid all such issues. I had commented the ones in /etc/fstab but the one in rc.local was an unexpected one.\n","permalink":"https://v2.amardeepsidhu.com/blog/2019/12/21/dbnodeupdate-sh-appears-to-be-stuck/","summary":"\u003cp\u003eI was patching an Exadata db node from 18.1.5.0.0.180506 to 19.3.2.0.0.191119. It had been more than an hour and dbnodeupdate.sh appeared to be stuck. 
Trying to ssh to the node was giving \u0026ldquo;connection refused\u0026rdquo; and the console had this output (some output removed for brevity):\u003c/p\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003e[ 458.006444] upgrade[8876]: [642/676] (72%) installing exadata-sun-computenode-19.3.2.0.0.191119-1...\r\n\u0026lt;\u0026gt;\r\n[ 459.991449] upgrade[8876]: Created symlink /etc/systemd/system/multi-user.target.wants/exadata-iscsi-reconcile.service, pointing to /etc/systemd/system/exadata-iscsi-reconcile.service.\r\n[ 460.011466] upgrade[8876]: Looking for unit files in (higher priority first):\r\n[ 460.021436] upgrade[8876]: /etc/systemd/system\r\n[ 460.028479] upgrade[8876]: /run/systemd/system\r\n[ 460.035431] upgrade[8876]: /usr/local/lib/systemd/system\r\n[ 460.042429] upgrade[8876]: /usr/lib/systemd/system\r\n[ 460.049457] upgrade[8876]: Looking for SysV init scripts in:\r\n[ 460.057474] upgrade[8876]: /etc/rc.d/init.d\r\n[ 460.064430] upgrade[8876]: Looking for SysV rcN.d links in:\r\n[ 460.071445] upgrade[8876]: /etc/rc.d\r\n[ 460.076454] upgrade[8876]: Looking for unit files in (higher priority first):\r\n[ 460.086461] upgrade[8876]: /etc/systemd/system\r\n[ 460.093435] upgrade[8876]: /run/systemd/system\r\n[ 460.100433] upgrade[8876]: /usr/local/lib/systemd/system\r\n[ 460.107474] upgrade[8876]: /usr/lib/systemd/system\r\n[ 460.114432] upgrade[8876]: Looking for SysV init scripts in:\r\n[ 460.122455] upgrade[8876]: /etc/rc.d/init.d\r\n[ 460.129458] upgrade[8876]: Looking for SysV rcN.d links in:\r\n[ 460.136468] upgrade[8876]: /etc/rc.d\r\n[ 460.141451] upgrade[8876]: Created symlink /etc/systemd/system/multi-user.target.wants/exadata-multipathmon.service, pointing to /etc/systemd/system/exadata-multipathmon.service.\n\u003c/code\u003e\u003c/pre\u003e\u003cp\u003eThere was not much that I could do so just waited. Also created an SR with Oracle Support and they also suggested to wait. It started moving after some time and completed successfully. Finally when the node came up, i checked that there was an NFS mount entry in /etc/rc.local and that was what created the problem. For the second node, we commented this out and it was all smooth. Important to comment out all NFS entries during patching to avoid all such issues. I had commented the ones in /etc/fstab but the one in rc.local was an unexpected one.\u003c/p\u003e","title":"dbnodeupdate.sh appears to be stuck"},{"content":"I was installing Database Firewall version 12.2.0.11.0 on a Dell x86 machine (with 5 * 500 GB local HDDs configured in RAID 10) and it got successfully installed. Later on, I came to know that this version doesn\u0026rsquo;t support host monitor functionality on Windows hosts. The latest version that supports that is 12.2.0.10.0. So that was the time to download and install 12.2.0.10.0. The installation started fine but it failed with an error:\nException occured\ranaconda 13.21.263 exception report\rFile \u0026#34;/usr/lib/anaconda/storage/devices.py\u0026#34;,\rOSError: [Errno 2] No such file or directory:\r\u0026#39;/dev/sr0\u0026#39; From the script that it is calling i.e. device.py, I guessed it had something to do with the storage. Maybe it was not able to figure out something that was created by the latest version installation. So I removed the RAID configuration and created it again. 
After this the installation went through without any issues.\n","permalink":"https://v2.amardeepsidhu.com/blog/2019/12/05/avdf-installation-error/","summary":"\u003cp\u003eI was installing Database Firewall version 12.2.0.11.0 on a Dell x86 machine (with 5 * 500 GB local HDDs configured in RAID 10) and it got successfully installed. Later on, I came to know that this version \u003ca href=\"https://docs.oracle.com/cd/E69292_01/doc.122/e41705/host_con.htm#SIGAD40830\"\u003edoesn\u0026rsquo;t support\u003c/a\u003e host monitor functionality on Windows hosts. The latest version that supports that is 12.2.0.10.0. So that was the time to download and install 12.2.0.10.0. The installation started fine but it failed with an error:\u003c/p\u003e","title":"AVDF installation error"},{"content":"It was started by Tim Hall in 2016. This is a Thank you community post. There are so many experts posting on Oracle related forums, doing blog posts, sharing their scripts with everyone. All of you are doing a great job. I would like to mention three names especially:\nTim Hall : Tim is a legend ! I don\u0026rsquo;t consider something a new feature until Tim writes about it :D\nJonathan Lewis : I don\u0026rsquo;t think there is anyone on this planet who has even once worked on a performance problem and hasn\u0026rsquo;t gained something from the knowledge shared by him on forums or in one of the blog posts.\nTanel Poder : His hacking sessions, blog posts and scripts are awesome. And ashtop is amazing man !\nThank you all of you. We learn from everyone of you. Keep rocking !\n","permalink":"https://v2.amardeepsidhu.com/blog/2019/10/10/ogb-appreciation-day-thank-you-community-thanksogb/","summary":"\u003cp\u003eIt was \u003ca href=\"https://oracle-base.com/blog/2019/09/30/ogb-appreciation-day-2019-thanksogb/\"\u003estarted by Tim Hall\u003c/a\u003e in 2016. This is a Thank you community post. There are so many experts posting on Oracle related forums, doing blog posts, sharing their scripts with everyone. All of you are doing a great job. I would like to mention three names especially:\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://oracle-base.com/\"\u003eTim Hall\u003c/a\u003e : Tim is a legend ! I don\u0026rsquo;t consider something a new feature until Tim writes about it :D\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://jonathanlewis.wordpress.com/all-postings/\"\u003eJonathan Lewis\u003c/a\u003e : I don\u0026rsquo;t think there is anyone on this planet who has even once worked on a performance problem and hasn\u0026rsquo;t gained something from the knowledge shared by him on forums or in one of the blog posts.\u003c/p\u003e","title":"OGB Appreciation Day : Thank you community ! (#ThanksOGB)"},{"content":"Use of Exadata storage cells seems to be a very poorly understood concept. A lot of people have confusions about how exactly ASM makes uses of disks from storage cells. Many folks assume there is some sort of RAID configured in the storage layer whereas there is nothing like that. I will try to explain some of the concepts in this post.\nLet\u0026rsquo;s take an example of an Exadata quarter rack that has 2 db and 3 storage nodes (node means a server here). Few things to note:\nThe space for binaries installation on db nodes comes from the local disks installed in db nodes (600GB * 4 (expandable to 8) configured in RAID5). In case you are using OVM, same disks are used for keeping configuration files, Virtual disks for VMs etc. All of the ASM space comes from storage cells. 
The minimum configuration is 3 storage cells. So let\u0026rsquo;s try to understand what makes a storage cell. There are 12 disks in each storage cell (latest X7 cells are coming with 10 TB disks). As I mentioned above that there are 3 storage cells in a minimum configuraiton. So we have a total of 36 disks. There is no RAID configured in the storage layer. All the redundancy is handled at ASM level. So to create a disk group:\nFirst of all cell disks are created on each storage cell. 1 physical disk makes 1 cell disk. So a quarter rack has 36 cell disks. To divide the space in various disk groups (by default only two disk groups are created : DATA \u0026amp; RECO; you can choose how much space to give to each of them) grid disks are created. grid disk is a partition on the cell disk. slice of a disk in other words. Slice from each cell disk must be part of both the disk groups. We can\u0026rsquo;t have something like say DATA has 18 disks out of 36 and the RECO has another 18. That is not supported. Let\u0026rsquo;s say you decide to allocate 5 TB to DATA grid disks and 4 TB to RECO grid disks (out of 10 TB on each disk, approx 9 TB is what you get as usable). So you will divide each cell disk into 2 parts - 5 TB and 4 TB and you would have 36 slices of 5 TB each and 36 slices of 4 TB each. DATA disk group will be created using the 36 5 TB slices where grid disks from each storage cell constitute one failgroup. Similarly RECO disk group will be created using the 36 4 TB slices. What we have discussed above is a quarter rack scenario with High Capacity (HC) disks. There can be somewhat different configurations too:\nInstead of HC disks, you can have the Extreme Flash (EF) configuration which uses flash cards in place of disks. Everything remains the same except the number. Instead of 12 HC disks there will be 8 flash cards. With X3 I think, Oracle introduced an eighth rack configuration. In an eighth rack configuration db nodes come with half the cores (of quarter rack db nodes) and storage cells come with 6 disks in each of the cell. So here you would have only 18 disks in total. Everything else works in the same way. Hope it clarified some of the doubts about grid disks.\n","permalink":"https://v2.amardeepsidhu.com/blog/2019/02/18/understanding-grid-disks-in-exadata/","summary":"\u003cp\u003eUse of Exadata storage cells seems to be a very poorly understood concept. A lot of people have confusions about how exactly ASM makes uses of disks from storage cells. Many folks assume there is some sort of RAID configured in the storage layer whereas there is nothing like that. I will try to explain some of the concepts in this post.\u003c/p\u003e\n\u003cp\u003eLet\u0026rsquo;s take an example of an Exadata quarter rack that has 2 db and 3 storage nodes (node means a server here). Few things to note:\u003c/p\u003e","title":"Understanding grid disks in Exadata"},{"content":"It is actually a dumb one. I was disabling triggers in a schema and ran this SQL to generate the disable statements. 
(Example from here)\nHR@test\u0026gt; select \u0026#39;alter trigger \u0026#39;||trigger_name|| \u0026#39; disable;\u0026#39; from user_triggers where table_name=\u0026#39;PRODUCT\u0026#39;;\r\u0026#39;ALTERTRIGGER\u0026#39;||TRIGGER_NAME||\u0026#39;DISABLE;\u0026#39;\r--------------------------------------------------------------------------------\ralter trigger PRICE_HISTORY_TRIGGERv1 disable;\rHR@test\u0026gt; alter trigger PRICE_HISTORY_TRIGGERv1 disable;\ralter trigger PRICE_HISTORY_TRIGGERv1 disable\r*\rERROR at line 1:\rORA-04080: trigger \u0026#39;PRICE_HISTORY_TRIGGERV1\u0026#39; does not exist\rHR@test\u0026gt; WTF ? It is there but the disable didn\u0026rsquo;t work. I was in hurry, tried to connect through SQL developer and disable and it worked ! Double WTF ! Then i spotted the problem. Someone created it with one letter in the name in small. So to make it work, we need to use double quotes.\nHR@test\u0026gt; alter trigger \u0026#34;PRICE_HISTORY_TRIGGERv1\u0026#34; disable;\rTrigger altered.\rHR@test\u0026gt; One of the reasons why you shouldn\u0026rsquo;t use case sensitive names in Oracle. That is stupid.\n","permalink":"https://v2.amardeepsidhu.com/blog/2019/01/22/ora-04080-trigger-price_history_triggerv1-does-not-exist/","summary":"\u003cp\u003eIt is actually a dumb one. I was disabling triggers in a schema and ran this SQL to generate the disable statements. (Example from \u003ca href=\"http://plsql-tutorial.com/plsql-triggers.htm\"\u003ehere\u003c/a\u003e)\u003c/p\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003eHR@test\u0026gt; select \u0026#39;alter trigger \u0026#39;||trigger_name|| \u0026#39; disable;\u0026#39; from user_triggers where table_name=\u0026#39;PRODUCT\u0026#39;;\r\n\r\n\u0026#39;ALTERTRIGGER\u0026#39;||TRIGGER_NAME||\u0026#39;DISABLE;\u0026#39;\r\n--------------------------------------------------------------------------------\r\nalter trigger PRICE_HISTORY_TRIGGERv1 disable;\r\n\r\nHR@test\u0026gt; alter trigger PRICE_HISTORY_TRIGGERv1 disable;\r\nalter trigger PRICE_HISTORY_TRIGGERv1 disable\r\n*\r\nERROR at line 1:\r\nORA-04080: trigger \u0026#39;PRICE_HISTORY_TRIGGERV1\u0026#39; does not exist\r\n\r\nHR@test\u0026gt;\n\u003c/code\u003e\u003c/pre\u003e\u003cp\u003eWTF ? It is there but the disable didn\u0026rsquo;t work. I was in hurry, tried to connect through SQL developer and disable and it worked ! Double WTF ! Then i spotted the problem. Someone created it with one letter in the name in small. So to make it work, we need to use double quotes.\u003c/p\u003e","title":"ORA-04080: trigger ‘PRICE_HISTORY_TRIGGERV1’ does not exist"},{"content":"This was another issue that I faced while trying to configure GoldenGate in HA mode. ggsci was working fine after normal installation but after configuring it in HA mode and trying to run ggsci, it resulted in this:\n[oragg@node2 product]$ ggsci\rOracle GoldenGate Command Interpreter for Oracle\rVersion 12.3.0.1.4 OGGCORE_12.3.0.1.0_PLATFORMS_180415.0359_FBO\rLinux, x64, 64bit (optimized), Oracle 12c on Apr 16 2018 00:53:30\rOperating system character set identified as UTF-8.\rCopyright (C) 1995, 2018, Oracle and/or its affiliates. All rights reserved.\r2019-01-08 16:28:37.913\rCLSD: An error occurred while attempting to generate a full name. 
Logging may not be active for this process\rAdditional diagnostics: CLSU-00100: operating system function: sclsdgcwd failed with error data: -1\rCLSU-00103: error location: sclsdgcwd2\r(:CLSD00183:)\rGGSCI (node2) 1\u0026gt; No obvious clues in the error message but little searching revealed that it had something to do with permissions. It was on Exadata so i tried to do a strace of ggsci and see if it could give some clues. There we go:\n[oragg@node2 product]$ strace ggsci\r.\r.\rmkdir(\u0026#34;/u01/app/oracle/product/12.1.0.2/dbhome_4/log/exadatadb02\u0026#34;, 01777) = -1 EACCES (Permission denied) That is the Oracle database home, GoldenGate is supposed to use. It is trying to create a directory at the mentioned path and not able to do it. There was another directory called client needed inside this. I created both of them and set the needed permissions \u0026amp; the sticky bit and it worked fine. It was working fine on the other node, so i could check the permissions over there and do the same on this node.\n","permalink":"https://v2.amardeepsidhu.com/blog/2019/01/12/error-while-running-ggsci/","summary":"\u003cp\u003eThis was another issue that I faced while trying to configure GoldenGate in HA mode. ggsci was working fine after normal installation but after configuring it in HA mode and trying to run ggsci, it resulted in this:\u003c/p\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003e[oragg@node2 product]$ ggsci\r\n Oracle GoldenGate Command Interpreter for Oracle\r\n Version 12.3.0.1.4 OGGCORE_12.3.0.1.0_PLATFORMS_180415.0359_FBO\r\n Linux, x64, 64bit (optimized), Oracle 12c on Apr 16 2018 00:53:30\r\n Operating system character set identified as UTF-8.\r\n Copyright (C) 1995, 2018, Oracle and/or its affiliates. All rights reserved.\r\n 2019-01-08 16:28:37.913\r\n CLSD: An error occurred while attempting to generate a full name. Logging may not be active for this process\r\n Additional diagnostics: CLSU-00100: operating system function: sclsdgcwd failed with error data: -1\r\n CLSU-00103: error location: sclsdgcwd2\r\n (:CLSD00183:)\r\n GGSCI (node2) 1\u0026gt;\n\u003c/code\u003e\u003c/pre\u003e\u003cp\u003eNo obvious clues in the error message but little searching revealed that it had something to do with permissions. It was on Exadata so i tried to do a strace of ggsci and see if it could give some clues. There we go:\u003c/p\u003e","title":"Error while running ggsci"},{"content":"I was configuring GoldenGate in HA mode by following this document. Everything worked ok but in the end while running agctl config goldengate to view the configuration of GoldenGate resource, it was failing with the following error:\n[oracle@exadatadb02 ~]$ agctl config goldengate GG_TARGET\rFailed to execute the command \u0026#34;\u0026#34;/u01/app/xag/bin/clsecho\u0026#34; -p xag -f xag -m 5080 \u0026#34;GG_TARGET\u0026#34;\u0026#34; (rc=134), with the message:\rOracle Clusterware infrastructure fatal error in clsecho.bin (OS PID 126367_140570897783808): Internal error (ID (:CLSB00107:)) - Error -1 (ORA-08275) determining Oracle base\r/u01/app/xag/bin/clsecho: line 45: 126367 Aborted (core dumped) ${CRS_HOME}/bin/clsecho.bin \u0026#34;$@\u0026#34;\rFailed to execute the command \u0026#34;\u0026#34;/u01/app/xag/bin/clsecho\u0026#34; -p xag -f xag -m 5081 \u0026#34;/u01/app/oragg/product\u0026#34;\u0026#34; (rc=134), with the message: If you look at the error in bold it sounds kinda obvious that it is not able to figure our where the ORACLE_BASE is. 
But somehow it didn\u0026rsquo;t strike me at that moment. So started looking around. If we look at the command it is running, it runs clsecho. This is simply a shell script which in turn calls $CRS_HOME/bin/clsecho.bin . In the script, it sets various environment variables and that is where the problem was. There are lines like:\nORACLE_BASE=\rexport ORACLE_BASE Nowhere in the script, it is setting the value of ORACLE_BASE. That was causing an issue. I changed the first line to set the ORACLE_BASE location and it worked fine after that. There was another issue i faced with ggsci after doing xag configuration. Will do another blog post on that.\n","permalink":"https://v2.amardeepsidhu.com/blog/2019/01/08/failed-to-execute-the-command-u01-app-xag-bin-clsecho/","summary":"\u003cp\u003eI was configuring GoldenGate in HA mode by following \u003ca href=\"https://www.oracle.com/technetwork/database/features/availability/maa-wp-gg-oracledbm-128760.pdf\"\u003ethis document\u003c/a\u003e. Everything worked ok but in the end while running agctl config goldengate to view the configuration of GoldenGate resource, it was failing with the following error:\u003c/p\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003e[oracle@exadatadb02 ~]$ agctl config goldengate GG_TARGET\r\n Failed to execute the command \u0026#34;\u0026#34;/u01/app/xag/bin/clsecho\u0026#34; -p xag -f xag -m 5080 \u0026#34;GG_TARGET\u0026#34;\u0026#34; (rc=134), with the message:\r\n Oracle Clusterware infrastructure fatal error in clsecho.bin (OS PID 126367_140570897783808): Internal error (ID (:CLSB00107:)) - Error -1 (ORA-08275) determining Oracle base\r\n /u01/app/xag/bin/clsecho: line 45: 126367 Aborted (core dumped) ${CRS_HOME}/bin/clsecho.bin \u0026#34;$@\u0026#34;\r\n Failed to execute the command \u0026#34;\u0026#34;/u01/app/xag/bin/clsecho\u0026#34; -p xag -f xag -m 5081 \u0026#34;/u01/app/oragg/product\u0026#34;\u0026#34; (rc=134), with the message:\n\u003c/code\u003e\u003c/pre\u003e\u003cp\u003eIf you look at the error in bold it sounds kinda obvious that it is not able to figure our where the ORACLE_BASE is. But somehow it didn\u0026rsquo;t strike me at that moment. So started looking around. If we look at the command it is running, it runs clsecho. This is simply a shell script which in turn calls $CRS_HOME/bin/clsecho.bin . In the script, it sets various environment variables and that is where the problem was. There are lines like:\u003c/p\u003e","title":"Failed to execute the command “”/u01/app/xag/bin/clsecho”"},{"content":"This is an Exadata machine running GI version 18.3.0.0.180717 and DB version 12.1.0.2.180717. On one of the DB nodes while running dbca, it doesn\u0026rsquo;t list the diskgroups. it works fine on the other node.\nI cheked the dbca trace and found that the kfod command was failing. 
I tried to run it manually and got the same error:\n[oracle@exadb01 ~]$ /u01/app/18.0.0.0/grid/bin/kfod op=groups verbose=true\rKFOD-00300: OCI error [-1] [OCI error] [Could not fetch details] [-105777048]\rKFOD-00105: Could not open pfile \u0026#39;[email protected]\u0026#39;\r[oracle@exadb01 ~]$ I ran it with strace then:\n[oracle@exadb01 ~]$ strace /u01/app/18.0.0.0/grid/bin/kfod op=groups verbose=true\rexecve(\u0026#34;/u01/app/18.0.0.0/grid/bin/kfod\u0026#34;, [\u0026#34;/u01/app/18.0.0.0/grid/bin/kfod\u0026#34;, \u0026#34;op=groups\u0026#34;, \u0026#34;verbose=true\u0026#34;], [/* 18 vars */]) = 0\rbrk(0) = 0x2641000\r.\r.\r.\r.\r.\ropen(\u0026#34;/u01/app/18.0.0.0/grid/dbs/ab_+ASM1.dat\u0026#34;, O_RDONLY) = -1 EACCES (Permission denied)\rgeteuid() = 1003\ropen(\u0026#34;/u01/app/18.0.0.0/grid/rdbms/mesg/kfodus.msb\u0026#34;, O_RDONLY) = 13\rfcntl(13, F_SETFD, FD_CLOEXEC) = 0\rlseek(13, 0, SEEK_SET) = 0\rread(13, \u0026#34;\\25\\23\\\u0026#34;\\1\\23\\3\\t\\t\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\u0026#34;…, 280) = 280\rlseek(13, 512, SEEK_SET) = 512\rread(13, \u0026#34;\\352\\3\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\u0026#34;…, 512) = 512\rlseek(13, 1024, SEEK_SET) = 1024\rread(13, \u0026#34;.\\1=\\1E\\1M\\1X\\1\\352\\3\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\u0026#34;…, 512) = 512\rlseek(13, 1536, SEEK_SET) = 1536\rread(13, \u0026#34;\\n\\0d\\0\\0\\0D\\0e\\0\\1\\0e\\0f\\0\\1\\0\\230\\0g\\0\\1\\0\\306\\0h\\0\\2\\0\\325\\0\u0026#34;…, 512) = 512\rfstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 3), …}) = 0\rmmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f43f85f2000\rwrite(1, \u0026#34;KFOD-00300: OCI error [-1] [OCI \u0026#34;…, 78KFOD-00300: OCI error [-1] [OCI error] [Could not fetch details] [-132605848]\r) = 78 The text in bold just before the kfod error caught my attention. When I checked actually oracle user wasn\u0026rsquo;t able to read the file. The permissions looked like this:\n[root@exadb01 dbs]# ls -ltr\rtotal 20\r-rw-r--r-- 1 oragrid oinstall 3079 May 14 2015 init.ora\r-rw-r--r-- 1 oragrid oinstall 587 Dec 12 15:33 initbackuppfile.ora\r-rw-rw---- 1 oragrid asmadmin 1656 Dec 20 14:26 ab_+ASM1.dat\r-rw-rw---- 1 oragrid oinstall 1544 Dec 20 14:26 hc_+APX1.dat\r-rw-rw---- 1 oragrid oinstall 1544 Dec 21 16:57 hc_+ASM1.dat\r[root@exadb01 dbs]# Whereas on node2 they were like:\n[oracle@exadb02 dbs]$ ls -ltr\rtotal 16\r-rwxrwxrwx 1 oragrid oinstall 3079 Dec 12 14:52 init.ora\r-rwxrwxrwx 1 oragrid oinstall 1544 Dec 21 16:57 hc_+ASM2.dat\r-rw-rw---- 1 oragrid oinstall 1720 Dec 21 16:57 ab_+ASM2.dat\r-rwxrwxrwx 1 oragrid oinstall 1544 Dec 21 16:57 hc_+APX2.dat\r[oracle@exadb02 dbs]$ Since oracle user isn\u0026rsquo;t member of asmadmin group, it is not able to read the mentioned file. Changing the owner to oragrid:oinstall fixed the issue.\nComments Comment by Martin Decker on 2018-12-27 14:53:34 +0530 Sidhu,\nnormally, the oracle rdbms binary executeable ownership is modified by “setasmgidwrap” tool.\nAfter applying any patches on RDBMS Home, the setasmgidwrap has to be run (with rdbms instance offline)\nas user oragrid:\n/u01/app/18.0.0.0/grid/bin/setasmgidwrap o=$ORACLE_RDBMS_HOME/bin/oracle\nRegards,\nMartin\n","permalink":"https://v2.amardeepsidhu.com/blog/2018/12/26/dbca-doesnt-list-diskgroups/","summary":"\u003cp\u003eThis is an Exadata machine running GI version 18.3.0.0.180717 and DB version 12.1.0.2.180717. 
On one of the DB nodes while running dbca, it doesn\u0026rsquo;t list the diskgroups. it works fine on the other node.\u003c/p\u003e\n\u003cp\u003eI cheked the dbca trace and found that the kfod command was failing. I tried to run it manually and got the same error:\u003c/p\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003e[oracle@exadb01 ~]$ /u01/app/18.0.0.0/grid/bin/kfod op=groups verbose=true\r\nKFOD-00300: OCI error [-1] [OCI error] [Could not fetch details] [-105777048]\r\n\r\nKFOD-00105: Could not open pfile \u0026#39;[email protected]\u0026#39;\r\n[oracle@exadb01 ~]$\n\u003c/code\u003e\u003c/pre\u003e\u003cp\u003eI ran it with strace then:\u003c/p\u003e","title":"dbca doesn’t list diskgroups"},{"content":"It started with an xls sheet (that was called dbm configurator) . Then OEDA (Oracle Exadata Deployment Assistant) was introduced that was a Java based GUI tool to enter all the information needed to configure an Exadata machine. Now with the latest patch released in Oct, OEDA has changed again; to become a web based tool. It is deployed on WebLogic and comes with some new features as well. SuperCluster deployments will continue to use the Java based OEDA tool. The new interface has support for Exadata, ZDLRA and ExaCC. It is backward compatible and can import the XMLs generated by older versions of OEDA. Some of the new features include the ability to configure single instance homes, create more than 2 diskgroups, create more than 1 database homes and databases, allow ILOMs to have a different subnet etc.\nTo configure the OEDA application you need to unzip the contents and run the installWls script with -p switch (that mentions the port). It will deploy the application on WebLogic and give you the URL to access the OEDA. The interface is similar to the older version. Just that it runs in a browser and there are some new features added. MOS note 2460104.1 and the Exadata documentation has more details:\n[Using Oracle Exadata Deployment Assistant](http://Using Oracle Exadata Deployment Assistant)\n","permalink":"https://v2.amardeepsidhu.com/blog/2018/11/21/new-web-based-oeda-for-exadata/","summary":"\u003cp\u003eIt started with an xls sheet (that was called dbm configurator) . Then OEDA (Oracle Exadata Deployment Assistant) was introduced that was a Java based GUI tool to enter all the information needed to configure an Exadata machine. Now with the latest patch released in Oct, OEDA has changed again; to become a web based tool. It is deployed on WebLogic and comes with some new features as well. SuperCluster deployments will continue to use the Java based OEDA tool. The new interface has support for Exadata, ZDLRA and ExaCC. It is backward compatible and can import the XMLs generated by older versions of OEDA. Some of the new features include the ability to configure single instance homes, create more than 2 diskgroups, create more than 1 database homes and databases, allow ILOMs to have a different subnet etc.\u003c/p\u003e","title":"New web based OEDA for Exadata"},{"content":"A colleague faced this while running FMW installer on a Linux machine. The display appeared like this\nThis thread gave a clue that it could have something to do with fonts. 
So I checked what all fonts related stuff was installed.\n[bash][root@someserver ~]# rpm -aq |grep -i font stix-fonts-1.1.0-5.el7.noarch xorg-x11-font-utils-7.5-20.el7.x86_64 xorg-x11-fonts-cyrillic-7.5-9.el7.noarch xorg-x11-fonts-ISO8859-1-75dpi-7.5-9.el7.noarch xorg-x11-fonts-ISO8859-9-100dpi-7.5-9.el7.noarch xorg-x11-fonts-ISO8859-9-75dpi-7.5-9.el7.noarch libXfont-1.5.2-1.el7.x86_64 xorg-x11-fonts-ISO8859-14-100dpi-7.5-9.el7.noarch xorg-x11-fonts-ISO8859-1-100dpi-7.5-9.el7.noarch xorg-x11-fonts-75dpi-7.5-9.el7.noarch xorg-x11-fonts-ISO8859-2-100dpi-7.5-9.el7.noarch libfontenc-1.1.3-3.el7.x86_64 xorg-x11-fonts-ethiopic-7.5-9.el7.noarch xorg-x11-fonts-100dpi-7.5-9.el7.noarch xorg-x11-fonts-misc-7.5-9.el7.noarch fontpackages-filesystem-1.44-8.el7.noarch fontconfig-2.10.95-11.el7.x86_64 xorg-x11-fonts-ISO8859-2-75dpi-7.5-9.el7.noarch xorg-x11-fonts-ISO8859-14-75dpi-7.5-9.el7.noarch xorg-x11-fonts-Type1-7.5-9.el7.noarch xorg-x11-fonts-ISO8859-15-75dpi-7.5-9.el7.noarch [root@someserver ~]#[/bash]\nstix-fonts looked suspicious to me. So I removed that with rpm -e stix-fonts.\nThat actually fixed the issue. After this the Installer window was displaying fine.\n","permalink":"https://v2.amardeepsidhu.com/blog/2017/11/18/garbled-display-while-running-fmw-installer-on-linux/","summary":"\u003cp\u003eA colleague faced this while running FMW installer on a Linux machine. The display appeared like this\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"/blog/wp-content/uploads/2017/11/fmw_installer.jpg\"\u003e\u003cimg loading=\"lazy\" src=\"/blog/wp-content/uploads/2017/11/fmw_installer.jpg\"\u003e\u003c/a\u003e\u003ca href=\"https://stackoverflow.com/questions/46270769/weblogic-12c-12-1-3-installation-on-unix-garbled-character-in-gui-over-xming\"\u003eThis thread\u003c/a\u003e gave a clue that it could have something to do with fonts. So I checked what all fonts related stuff was installed.\u003c/p\u003e\n\u003cp\u003e[bash][root@someserver ~]# rpm -aq |grep -i font\nstix-fonts-1.1.0-5.el7.noarch\nxorg-x11-font-utils-7.5-20.el7.x86_64\nxorg-x11-fonts-cyrillic-7.5-9.el7.noarch\nxorg-x11-fonts-ISO8859-1-75dpi-7.5-9.el7.noarch\nxorg-x11-fonts-ISO8859-9-100dpi-7.5-9.el7.noarch\nxorg-x11-fonts-ISO8859-9-75dpi-7.5-9.el7.noarch\nlibXfont-1.5.2-1.el7.x86_64\nxorg-x11-fonts-ISO8859-14-100dpi-7.5-9.el7.noarch\nxorg-x11-fonts-ISO8859-1-100dpi-7.5-9.el7.noarch\nxorg-x11-fonts-75dpi-7.5-9.el7.noarch\nxorg-x11-fonts-ISO8859-2-100dpi-7.5-9.el7.noarch\nlibfontenc-1.1.3-3.el7.x86_64\nxorg-x11-fonts-ethiopic-7.5-9.el7.noarch\nxorg-x11-fonts-100dpi-7.5-9.el7.noarch\nxorg-x11-fonts-misc-7.5-9.el7.noarch\nfontpackages-filesystem-1.44-8.el7.noarch\nfontconfig-2.10.95-11.el7.x86_64\nxorg-x11-fonts-ISO8859-2-75dpi-7.5-9.el7.noarch\nxorg-x11-fonts-ISO8859-14-75dpi-7.5-9.el7.noarch\nxorg-x11-fonts-Type1-7.5-9.el7.noarch\nxorg-x11-fonts-ISO8859-15-75dpi-7.5-9.el7.noarch\n[root@someserver ~]#[/bash]\u003c/p\u003e","title":"Garbled display while running FMW installer on Linux"},{"content":"Got this while trying to install 11.2.0.4 RAC on Redhat Linux 7.2. root.sh fails with a message like\n[sql]ohasd failed to start Failed to start the Clusterware. Last 20 lines of the alert log follow: 2017-11-09 15:43:37.883: [client(37246)]CRS-2101:The OLR was formatted using version 3.[/sql]\nThis is bug 18370031. 
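Before applying anything, it is worth confirming whether the fix is already present in the Grid home; a small hedged check with opatch (the home path below is just an example):
[bash]# Check the GI home inventory for the fix of bug 18370031 (adjust the path)
GRID_HOME=/u01/app/11.2.0.4/grid
$GRID_HOME/OPatch/opatch lsinventory | grep -i 18370031 || echo "fix for 18370031 not found"[/bash]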
Need to apply the patch before running root.sh.\n","permalink":"https://v2.amardeepsidhu.com/blog/2017/11/18/root-sh-fails-with-crs-2101-the-olr-was-formatted-using-version-3/","summary":"\u003cp\u003eGot this while trying to install 11.2.0.4 RAC on Redhat Linux 7.2. root.sh fails with a message like\u003c/p\u003e\n\u003cp\u003e[sql]ohasd failed to start\nFailed to start the Clusterware. Last 20 lines of the alert log follow:\n2017-11-09 15:43:37.883:\n[client(37246)]CRS-2101:The OLR was formatted using version 3.[/sql]\u003c/p\u003e\n\u003cp\u003eThis is bug 18370031. Need to apply the patch before running root.sh.\u003c/p\u003e","title":"root.sh fails with CRS-2101:The OLR was formatted using version 3"},{"content":"I will be presenting a session titled \u0026ldquo;An 18 pointers guide to setting up an Exadata machine\u0026rdquo; at Cloud Day being organized by North India chapter of AIOUG. Vivek Sharma is doing multiple sessions on various cloud and performance related topics. You can register for the event here\nhttps://www.meraevents.com/event/aioug-nic-cloud-day\n","permalink":"https://v2.amardeepsidhu.com/blog/2017/11/06/presenting-at-cloud-day-event-of-north-india-chapter-of-aioug/","summary":"\u003cp\u003eI will be presenting a session titled \u003cstrong\u003e\u0026ldquo;An 18 pointers guide to setting up an Exadata machine\u0026rdquo;\u003c/strong\u003e at Cloud Day being organized by North India chapter of AIOUG. \u003ca href=\"https://viveklsharma.wordpress.com/\"\u003eVivek Sharma\u003c/a\u003e is doing multiple sessions on various cloud and performance related topics. You can register for the event here\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://www.meraevents.com/event/aioug-nic-cloud-day\"\u003ehttps://www.meraevents.com/event/aioug-nic-cloud-day\u003c/a\u003e\u003c/p\u003e","title":"Presenting at Cloud day event of North India Chapter of AIOUG"},{"content":"If you have installed some one off ksplice fix for kernel on Exadata, remember to uninstall it before you do a kernel upgrade eg regular Exadata patching. As such fixes are kernel version specific so they may not work with the newer version of the kernel.\n","permalink":"https://v2.amardeepsidhu.com/blog/2017/11/05/ksplice-kernel-updates-and-exadata-patching/","summary":"\u003cp\u003eIf you have installed some one off ksplice fix for kernel on Exadata, remember to uninstall it before you do a kernel upgrade eg regular Exadata patching. As such fixes are kernel version specific so they may not work with the newer version of the kernel.\u003c/p\u003e","title":"ksplice kernel updates and Exadata patching"},{"content":"A colleague was working on an ASM issue (Standalone one, Version 11.2.0.3 on AIX) at one of the customer sites. Later on, I also joined him. The issue was that the customer added few news disks to an existing diskgroup. Everything went well and the rebalance kicked in. After some time, something happened and all of a sudden the diskgroup was dismounted. While trying the mount the diskgroup again, it was giving\n[sql]ORA-15032: not all alterations performed ORA-15040: diskgroup is incomplete ORA-15042: ASM disk \u0026ldquo;27\u0026rdquo; is missing from group number \u0026ldquo;2\u0026rdquo;[/sql]\nHere is the relevant text from the ASM alert log\n[sql]ORA-27063: number of bytes read/written is incorrect IBM AIX RISC System/6000 Error: 19: No such device Additional information: -1 Additional information: 1048576 WARNING: Write Failed. 
group:2 disk:27 AU:1005 offset:0 size:1048576 Fri Nov 03 10:55:27 2017 Errors in file /u01/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_dbw0_58983380.trc: ORA-27063: number of bytes read/written is incorrect IBM AIX RISC System/6000 Error: 19: No such device Additional information: -1 Additional information: 4096 WARNING: Write Failed. group:2 disk:27 AU:0 offset:16384 size:4096 NOTE: cache initiating offline of disk 27 group DATADG NOTE: process _dbw0_+asm1 (58983380) initiating offline of disk 27.3928481273 (DISK_01) with mask 0x7e in group 2 Fri Nov 03 10:55:27 2017 WARNING: Disk 27 (DISK_01) in group 2 mode 0x7f is now being offlined WARNING: Disk 27 (DISK_01) in group 2 in mode 0x7f is now being taken offline on ASM inst 1 NOTE: initiating PST update: grp = 2, dsk = 27/0xea27ddf9, mask = 0x6a, op = clear ERROR: failed to copy file +DATADG.263, extent 1952 GMON updating disk modes for group 2 at 36 for pid 9, osid 58983380 ERROR: Disk 27 cannot be offlined, since diskgroup has external redundancy. ERROR: too many offline disks in PST (grp 2) ERROR: ORA-15080 thrown in ARB0 for group number 2 Errors in file /u01/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_arb0_57672234.trc: ORA-15080: synchronous I/O operation to a disk failed Fri Nov 03 10:55:27 2017 NOTE: stopping process ARB0 WARNING: Disk 27 (DISK_01) in group 2 mode 0x7f offline is being aborted WARNING: Offline of disk 27 (DISK_01) in group 2 and mode 0x7f failed on ASM inst 1 NOTE: halting all I/Os to diskgroup 2 (DATADG) Fri Nov 03 10:55:28 2017 NOTE: cache dismounting (not clean) group 2/0xDEB72D47 (DATADG) NOTE: messaging CKPT to quiesce pins Unix process pid: 62128816, image: [email protected] (B000) NOTE: dbwr not being msg\u0026rsquo;d to dismount Fri Nov 03 10:55:28 2017 NOTE: LGWR doing non-clean dismount of group 2 (DATADG) NOTE: LGWR sync ABA=124.7138 last written ABA 124.7138 NOTE: cache dismounted group 2/0xDEB72D47 (DATADG) SQL\u0026gt; alter diskgroup DATADG dismount force /* ASM SERVER */ [/sql]\nAt this stage disk 27 was not readable even with dd. So that means something is wrong with the disk. Since it is an external redundancy diskgroup not much can be done until the disk becomes available.\nSpeaking to the storage team cleared the air. One that the disk had gone offline at storage level so that is why even dd was not able to read it. Two that all these disks were thin provisioned (over provisioning of the storage space to improve the utilization; similar to over provisioning of CPU cores in the Virtualization world) from the storage. This particular disk 27 was meant for some other purpose but got wrongly allocated to this diskgroup. The actual space available in the pool (of this disk) was less than what was needed. The moment disks were added to the diskgroup, the rebalance kicked in and ASM started writing data to the disk. Within few minutes space became full and the storage software took the disk offline. Since ASM couldn\u0026rsquo;t write to the disk, the diskgroup was dismounted.\nFortunately, in the same pool, there was another disk that was still unused. So the storage guy dropped that disk and it freed up some space in the pool. He brought this disk 27 online after that. Diskgroup got mounted and the rebalance kicked in again. Finally, we dropped this disk and the rebalance started again. 
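While a rebalance like this is running, its progress can be watched from v$asm_operation; a minimal sketch (run against the ASM instance as the grid owner):
[bash]# Check rebalance progress from the ASM instance
sqlplus -s "/ as sysasm" <<'EOF'
set pages 100 lines 200
-- One row per active disk group operation; OPERATION shows REBAL while a
-- rebalance is running and EST_MINUTES is the remaining time estimate.
-- No rows returned means no ASM operation is currently active.
select group_number, operation, state, power, sofar, est_work, est_minutes
  from v$asm_operation;
EOF[/bash]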
Once the rebalance completed, disk was free to be taken offline.\nComments Comment by Neerav on 2017-11-04 08:52:43 +0530 Great blog sir – Neerav\nComment by sachin on 2017-11-04 10:50:01 +0530 Good one\nComment by Sidhu on 2017-11-06 17:20:46 +0530 Cheers !\nComment by Sidhu on 2017-11-06 17:20:55 +0530 Thank you !\n","permalink":"https://v2.amardeepsidhu.com/blog/2017/11/03/ora-15040-ora-15042-with-external-redundancy-diskgroup/","summary":"\u003cp\u003eA colleague was working on an ASM issue (Standalone one, Version 11.2.0.3 on AIX) at one of the customer sites. Later on, I also joined him. The issue was that the customer added few news disks to an existing diskgroup. Everything went well and the rebalance kicked in. After some time, something happened and all of a sudden the diskgroup was dismounted. While trying the mount the diskgroup again, it was giving\u003c/p\u003e","title":"ORA-15040 ORA-15042 with EXTERNAL redundancy Diskgroup"},{"content":"Scenario : Setting up a physical standby from Exadata to a non-Exadata single instance. tnsping from standby to primary works fine but tnsping from primary to standby fails with:\n[sql]TNS-12543: TNS:destination host unreachable[/sql]\nI am able to ssh standby from primary, can ping as well but tnsping doesn\u0026rsquo;t work. From the error description we can figure out that something is blocking the access. In this case it was iptables that was enabled on the standby server.\nStopping the service resolved the issue.\n[bash]service iptables stop chkconfig iptables off[/bash]\nThe error is an obvious one but sometimes it just doesn\u0026rsquo;t strike you that it could be something simple like that.\n","permalink":"https://v2.amardeepsidhu.com/blog/2017/07/15/tns-12543-tns-destination-host-unreachable/","summary":"\u003cp\u003eScenario : Setting up a physical standby from Exadata to a non-Exadata single instance. tnsping from standby to primary works fine but tnsping from primary to standby fails with:\u003c/p\u003e\n\u003cp\u003e[sql]TNS-12543: TNS:destination host unreachable[/sql]\u003c/p\u003e\n\u003cp\u003eI am able to ssh standby from primary, can ping as well but tnsping doesn\u0026rsquo;t work. From the error description we can figure out that something is blocking the access. In this case it was iptables that was enabled on the standby server.\u003c/p\u003e","title":"TNS-12543: TNS:destination host unreachable"},{"content":"Hit this silly issue in one of the data guard environments today. Primary is a 2 node RAC running 11.2.0.4 and standby is also a 2 node RAC. Archive logs from node2 aren\u0026rsquo;t shipping and the error being reported is\n[sql]ORA-12154: TNS:could not resolve the connect identifier specified[/sql]\nWe tried usual things like going to $TNS_ADMIN, checking the entry in tnsnames.ora and then also trying to connect using sqlplus sys@target as sysdba. Everything seemed to be good but logs were not shipping and the same problem was being reported repeatedly. As everything on node1 was working fine so it looked even more weird.\nFrom the error it is clear that the issue is with tnsnames entry. Finally found the issue after some 30 mins. It was an Oracle EBS environment so the TNS_ADMIN was set to the standard $ORACLE_HOME/network/admin/*hostname* path (on both the nodes). On node1 there was no tnsnames.ora file in $ORACLE_HOME/network/admin so it was connecting to the standby using the Apps tnsnames.ora which was having the correct entry for standby. 
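When chasing this kind of problem, a quick check on each node of which copies of tnsnames.ora exist and which of them actually contain the standby alias saves a lot of guessing; a rough sketch (the alias name is a placeholder):
[bash]# Run on each node; STANDBY_ALIAS is a placeholder for the real TNS alias
ALIAS=STANDBY_ALIAS
echo "TNS_ADMIN=$TNS_ADMIN"
for f in "$TNS_ADMIN/tnsnames.ora" "$ORACLE_HOME/network/admin/tnsnames.ora"; do
  [ -f "$f" ] && { echo "== $f"; grep -i "$ALIAS" "$f" || echo "   alias not found"; }
done[/bash]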
On node2 there was a file called tnsnames.ora in $ORACLE_HOME/network/admin but it was not having any entry for standby. It was trying to connect using that file (the default tns path) and failing with ORA-12154. Once we removed that file, it started using the Apps tnsnames.ora and logs started shipping.\nComments Comment by Corlins on 2022-04-28 21:29:07 +0530 Found this post super helpful after spending 2 days troubleshooting standby tns issues in ebs environment. Thanks for sharing this.\n","permalink":"https://v2.amardeepsidhu.com/blog/2017/05/31/ora-12154-in-data-guard-environment/","summary":"\u003cp\u003eHit this silly issue in one of the data guard environments today. Primary is a 2 node RAC running 11.2.0.4 and standby is also a 2 node RAC. Archive logs from node2 aren\u0026rsquo;t shipping and the error being reported is\u003c/p\u003e\n\u003cp\u003e[sql]ORA-12154: TNS:could not resolve the connect identifier specified[/sql]\u003c/p\u003e\n\u003cp\u003eWe tried usual things like going to $TNS_ADMIN, checking the entry in tnsnames.ora and then also trying to connect using sqlplus sys@target as sysdba. Everything seemed to be good but logs were not shipping and the same problem was being reported repeatedly. As everything on node1 was working fine so it looked even more weird.\u003c/p\u003e","title":"ORA-12154 in Data Guard environment"},{"content":"Long story short, faced this issue while running OneCommand for one Exadata system. The root.sh step (Initialize Cluster Software) was failing with the following error on the screen\nChecking file root_dm01dbadm02.in.oracle.com_2017-04-27_18-13-27.log on node dm01dbadm02.somedomain.com Error: Error running root scripts, please investigate\u0026hellip; Collecting diagnostics\u0026hellip; Errors occurred. Send /u01/onecommand/linux-x64/WorkDir/Diag-170427_181710.zip to Oracle to receive assistance.\nDoesn’t make much sense. So let us check the log file of this step\n2017-04-27 18:17:10,463 [INFO][ OCMDThread][ ClusterUtils:413] Checking file root_dm01dbadm02.somedomain.com_2017-04-27_18-13-27.log on node inx321dbadm02.somedomain.com 2017-04-27 18:17:10,464 [INFO][ OCMDThread][ OcmdException:62] Error: Error running root scripts, please investigate\u0026hellip; 2017-04-27 18:17:10,464 [FINE][ OCMDThread][ OcmdException:63] Throwing OcmdException\u0026hellip; message:Error running root scripts, please investigate\u0026hellip;\nSo we need to go to root.sh log file now. That shows\nFailed to create voting files on disk group RECOC1. Change to configuration failed, but was successfully rolled back. CRS-4000: Command Replace failed, or completed with errors. Voting file add failed 2017/04/27 18:16:37 CLSRSC-261: Failed to add voting disksDied at /u01/app/12.1.0.2/grid/crs/install/crsinstall.pm line 2068. The command \u0026lsquo;/u01/app/12.1.0.2/grid/perl/bin/perl -I/u01/app/12.1.0.2/grid/perl/lib -I/u01/app/12.1.0.2/grid/crs/install /u01/app/12.1.0.2/grid/crs/install/root crs.pl \u0026rsquo; execution failed\nMakes some senses but we can’t understand what happened while creating Voting files on RECOC1. 
Let us check ASM alert log also\nNOTE: Creating voting files in diskgroup RECOC1 Thu Apr 27 18:16:36 2017 NOTE: Voting File refresh pending for group 1/0x39368071 (RECOC1) Thu Apr 27 18:16:36 2017 NOTE: Attempting voting file creation in diskgroup RECOC1 NOTE: voting file allocation (replicated) on grp 1 disk RECOC1_CD_00_DM01CELADM01 NOTE: voting file allocation on grp 1 disk RECOC1_CD_00_DM01CELADM01 NOTE: voting file allocation (replicated) on grp 1 disk RECOC1_CD_00_DM01CELADM02 NOTE: voting file allocation on grp 1 disk RECOC1_CD_00_DM01CELADM02 NOTE: voting file allocation (replicated) on grp 1 disk RECOC1_CD_00_DM01CELADM03 NOTE: voting file allocation on grp 1 disk RECOC1_CD_00_DM01CELADM03 ERROR: Voting file allocation failed for group RECOC1 Thu Apr 27 18:16:36 2017 Errors in file /u01/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_ora_228588.trc: ORA-15274: Not enough failgroups (5) to create voting files\nSo we can see the issue here. We can look at the above trace file also for more detail.\nNow to why did this happen ?\nThe RECOC1 is a HIGH redundancy disk group which means that if we want to place Voting files there, it should have at least 5 failure groups. In this configuration there are only 3 cells and that doesn’t meet the minimum failure groups condition (1 cell = 1 failgroup in Exadata).\nNow to how did it happen ?\nThis one was an Exadata X3 half rack and we planned to deploy it (for testing purpose) as 2 quarter racks : 1st cluster with db1, db2 + cell1, cell2, cell3 and 2nd cluster with db3, db4 + cell4, cell5, cell6, cell7. All the disk groups to be in High redundancy.\nBefore a certain 12.x Exadata software version it was not even possible to have all disk groups in High redundancy in a quarter rack as to have Voting disk in a High redundancy disk group you need to have a minimum of 5 failure groups (as mentioned above). In a quarter rack you have only 3 fail groups. With a certain 12.x Exadata software version a new feature quorum disks was introduced which made is possible to have that configuration. Read this link for more details. Basically we take a slice of disk from each DB node and add it to the disk group where you want to have the Voting file. 3 cells + 2 disks from DB nodes makes it 5 so all is good.\nNow while starting with the deployment we noticed that db node1 was having some hardware issues. As we needed the machine for testing so we decided to build the first cluster with 1 db node only. So the final configuration of 1st cluster had 1 db node + 3 cells. We imported the XML back in OEDA, modified the cluster 1 configuration to 1 db node and generated the configuration files. That is where the problem started. The RECO disk group still was High redundancy but as we had only 1 db node at this stage so the configuration was not even a candidate for quorum disks. Hence the above error. Changing DBFS_DG to Normal redundancy fixed the issue as when DBFS_DG is Normal redundancy, OneCommand will place the Voting files there.\nIdeally it shouldn’t happened as OEDA shouldn’t allow a configuration that is not doable. The case here is that as originally the configuration was having 2 db nodes + 3 cells so High redundancy for all disk groups was allowed in OEDA. While modifying the configuration when one db node was removed from the cluster, OEDA probably didn\u0026rsquo;t run the redundancy check on disk groups and it allowed the go past that screen. 
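For anyone hitting something similar, it is easy to check up front how many failure groups each disk group really has (a high redundancy disk group needs at least 5 of them to hold the voting files) and where the voting files currently live. A rough sketch, assuming the grid owner environment:
[bash]# Count failure groups per disk group from the ASM instance
sqlplus -s "/ as sysasm" <<'EOF'
set pages 100 lines 200
-- A high redundancy disk group needs at least 5 failure groups for voting files
select dg.name, count(distinct d.failgroup) as failgroups
  from v$asm_disk d
  join v$asm_diskgroup dg on dg.group_number = d.group_number
 group by dg.name
 order by dg.name;
EOF

# Voting file locations as seen by the clusterware (crsctl from the GI home)
crsctl query css votedisk[/bash]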
If you try to create a new configuration with 1 db node + 3 cells, it will not allow you to choose High redundancy for all disk groups. DBFS will remain in Normal redundancy. You can\u0026rsquo;t change that.\n","permalink":"https://v2.amardeepsidhu.com/blog/2017/04/28/failed-to-create-voting-files-on-disk-group-recoc1/","summary":"\u003cp\u003eLong story short, faced this issue while running OneCommand for one Exadata system. The root.sh step (Initialize Cluster Software) was failing with the following error on the screen\u003c/p\u003e\n\u003cp\u003eChecking file root_dm01dbadm02.in.oracle.com_2017-04-27_18-13-27.log on node dm01dbadm02.somedomain.com\nError: Error running root scripts, please investigate\u0026hellip;\nCollecting diagnostics\u0026hellip;\nErrors occurred. Send /u01/onecommand/linux-x64/WorkDir/Diag-170427_181710.zip to Oracle to receive assistance.\u003c/p\u003e\n\u003cp\u003eDoesn’t make much sense. So let us check the log file of this step\u003c/p\u003e\n\u003cp\u003e2017-04-27 18:17:10,463 [INFO][ OCMDThread][ ClusterUtils:413] Checking file root_dm01dbadm02.somedomain.com_2017-04-27_18-13-27.log on node inx321dbadm02.somedomain.com\n2017-04-27 18:17:10,464 [INFO][ OCMDThread][ OcmdException:62] Error: Error running root scripts, please investigate\u0026hellip;\n2017-04-27 18:17:10,464 [FINE][ OCMDThread][ OcmdException:63] Throwing OcmdException\u0026hellip; message:Error running root scripts, please investigate\u0026hellip;\u003c/p\u003e","title":"Failed to create voting files on disk group RECOC1"},{"content":"Hit this silly issue while doing an Exadata deployment for a customer. Step 1 was giving the following error:\nERROR: 192.168.99.102 configured on dm01celadm01.example.com as dm01dbadm02 does not match expected value dm01dbadm02.example.com\nI wasn\u0026rsquo;t able to make sense of it for quite some time until a colleague pointed out that the reverse lookup entries should be done for FQDN only. As it is clear in the above message reverse lookup of the IP 192.168.99.102 returns dm01dbadm02 instead of dm01dbadm02.example.com. Fixing this in DNS resolved the issue.\nActually the customer had done reverse lookup entries for both the hostname and FQDN. As the DNS can return the results in any order, so the error message was bit random. Whenever the the hostname was returned first, Step 1 gave an error. But when the FQDN was the first thing returned, there was no error in Step 1 for that IP.\n","permalink":"https://v2.amardeepsidhu.com/blog/2017/04/10/onecommand-step-1-error/","summary":"\u003cp\u003eHit this silly issue while doing an Exadata deployment for a customer. Step 1 was giving the following error:\u003c/p\u003e\n\u003cp\u003eERROR: 192.168.99.102 configured on dm01celadm01.example.com as dm01dbadm02 does not match expected value dm01dbadm02.example.com\u003c/p\u003e\n\u003cp\u003eI wasn\u0026rsquo;t able to make sense of it for quite some time until a colleague pointed out that the reverse lookup entries should be done for FQDN only. As it is clear in the above message reverse lookup of the IP 192.168.99.102 returns dm01dbadm02 instead of dm01dbadm02.example.com. Fixing this in DNS resolved the issue.\u003c/p\u003e","title":"OneCommand Step 1 error"},{"content":"I was trying to do a 2 node RAC setup on Solaris 11.3 where Oracle Solaris Cluster 4.3 was already configured. 
Installed was running but the Cluster Node Information screen was appearing like this\nThe install log shows this:\nINFO: Checking cluster configuration details\nINFO: Found Vendor Clusterware. Fetching Cluster Configuration\nINFO: Executing [/tmp/OraInstall2017-03-28_12-50-48PM/ext/bin/lsnodes]\nwith environment variables {TERM=xterm, LC_COLLATE=, SHLVL=3, JAVA_HOME=, XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt, SSH_CLIENT=172.16.64.55 56370 22, LC_NUMERIC=, LC_MESSAGES=, MAIL=/var/mail/oracle, PWD=/export/software/grid/grid, XTERM_VERSION=XTerm(320), WINDOWID=2097165, LOGNAME=oracle, _=*50727*/export/software/grid/grid/install/.oui, NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat, SSH_CONNECTION=172.16.64.55 56370 172.16.72.18 22, OLDPWD=/export/oracle, LC_CTYPE=, CLASSPATH=, PATH=/usr/bin:/usr/ccs/bin:/usr/bin:/bin:/export/software/grid/grid/install, LC_ALL=, DISPLAY=localhost:10.0, LC_MONETARY=, USER=oracle, HOME=/export/oracle, XTERM_SHELL=/bin/bash, XAUTHORITY=/tmp/ssh-xauth-mlq21a/xauthfile, A__z=\u0026quot;*SHLVL, XTERM_LOCALE=en_US.UTF-8, TZ=localtime, LC_TIME=, LANG=en_US.UTF-8}\nINFO: Starting Output Reader Threads for process /tmp/OraInstall2017-03-28_12-50-48PM/ext/bin/lsnodes\nINFO: The process /tmp/OraInstall2017-03-28_12-50-48PM/ext/bin/lsnodes exited with code 9\nSo we can see the problem. lsnodes is not able to list the nodes. Let us try to run that command manually.\n-bash-4.1$ export PATH=PATH=/usr/bin:/usr/ccs/bin:/usr/bin:/bin:/export/software/grid/grid/install\n-bash-4.1$ /tmp/OraInstall2017-03-28_12-50-48PM/ext/bin/lsnodes\nld.so.1: lsnodes: fatal: libskgxn2.so: open failed: No such file or directory\nKilled\n-bash-4.1$\nSo looks like it is not able to find this library called libskgxn2.so. If we do a find for this file name we can see that it is present in this directory /usr/cluster/lib/sparcv9/libskgxn2.so .\nSome googling and MOS searches revealed that it expects the library to be present at /opt/ORCLcluster/lib. This directory doesn\u0026rsquo;t exist here. As a workaround we can create this directory manually and create symbolic link to file libskgxn2.so\nThe lsnodes command worked fine after this workaround and installer also shows both the nodes listed.\n","permalink":"https://v2.amardeepsidhu.com/blog/2017/03/28/oracle-rac-12-1-lsnodes-exited-with-code-9/","summary":"\u003cp\u003eI was trying to do a 2 node RAC setup on Solaris 11.3 where Oracle Solaris Cluster 4.3 was already configured. Installed was running but the Cluster Node Information screen was appearing like this\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"/blog/wp-content/uploads/2017/03/error.jpg\"\u003e\u003cimg alt=\"error\" loading=\"lazy\" src=\"/blog/wp-content/uploads/2017/03/error_thumb.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eThe install log shows this:\u003c/p\u003e\n\u003cp\u003eINFO: Checking cluster configuration details\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eINFO: Found Vendor Clusterware. 
Fetching Cluster Configuration\u003c/strong\u003e\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eINFO: Executing [/tmp/OraInstall2017-03-28_12-50-48PM/ext/bin/lsnodes]\u003c/strong\u003e\u003c/p\u003e\n\u003cp\u003ewith environment variables {TERM=xterm, LC_COLLATE=, SHLVL=3, JAVA_HOME=, XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt, SSH_CLIENT=172.16.64.55 56370 22, LC_NUMERIC=, LC_MESSAGES=, MAIL=/var/mail/oracle, PWD=/export/software/grid/grid, XTERM_VERSION=XTerm(320), WINDOWID=2097165, LOGNAME=oracle, _=*50727*/export/software/grid/grid/install/.oui, NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat, SSH_CONNECTION=172.16.64.55 56370 172.16.72.18 22, OLDPWD=/export/oracle, LC_CTYPE=, CLASSPATH=, PATH=/usr/bin:/usr/ccs/bin:/usr/bin:/bin:/export/software/grid/grid/install, LC_ALL=, DISPLAY=localhost:10.0, LC_MONETARY=, USER=oracle, HOME=/export/oracle, XTERM_SHELL=/bin/bash, XAUTHORITY=/tmp/ssh-xauth-mlq21a/xauthfile, A__z=\u0026quot;*SHLVL, XTERM_LOCALE=en_US.UTF-8, TZ=localtime, LC_TIME=, LANG=en_US.UTF-8}\u003c/p\u003e","title":"Oracle RAC 12.1 – lsnodes exited with code 9"},{"content":"Just a stupid error. Posting it so that someone else googling for the same thing can get a clue.\nAn ASM instance running with default parameters (no pfile, no spfile). Updated spfile for the instance with asmcmd spset command and bounced crs. After reboot also, it still wasn\u0026rsquo;t using spfile. Got puzzled and checked GPnP settings again. All looked good. Then in alert log came across this\n[text]ERROR: SPFile in diskgroup \u0026lt;\u0026gt; does not match the specified spfile +DATA/asm/asmparameterfile/registry.253.769187275[/text]\nThe problem was that while copying the spfile path the complete name didn\u0026rsquo;t get copied. The last character got missed. So the filename that it was looking for wasn\u0026rsquo;t there. Updating GPnP with correct filename and bouncing crs resolved the issue.\n","permalink":"https://v2.amardeepsidhu.com/blog/2016/09/20/error-spfile-in-diskgroup-does-not-match-the-specified-spfile/","summary":"\u003cp\u003eJust a stupid error. Posting it so that someone else googling for the same thing can get a clue.\u003c/p\u003e\n\u003cp\u003eAn ASM instance running with default parameters (no pfile, no spfile). Updated spfile for the instance with asmcmd spset command and bounced crs. After reboot also, it still wasn\u0026rsquo;t using spfile. Got puzzled and checked GPnP settings again. All looked good. Then in alert log came across this\u003c/p\u003e\n\u003cp\u003e[text]ERROR: SPFile in diskgroup \u0026lt;\u0026gt; does not match the specified spfile +DATA/asm/asmparameterfile/registry.253.769187275[/text]\u003c/p\u003e","title":"ERROR: SPFile in diskgroup \u003c\u003e does not match the specified spfile"},{"content":"So this customer has an Exadata quarter rack and they have an IB listener configured on both DB nodes (for DB connections from a multi-racked Exalogic system). We were adding a new DB node to this rack. So just followed the standard procedure of creating users, directories etc on the new node, setting up ssh equivalence and running addNode.sh. All went fine but root.sh failed. Little looking into the logs revealed that it failed while running srvctl start listener –n \u0026lt;node_name\u0026gt;\nIf we manually run this command, it will immediately reveal what the problem is. It is not able to start IB listener on the new node as the IB VIP doesn\u0026rsquo;t yet exist. 
It could happen for any of the additional networks added.\nThere is a MOS note that describes this exact situation but the solution that it gives is to remove the additional listener, complete addNode.sh \u0026amp; root.sh and add the additional listener back. That wasn’t possible in this case. After little bit of googling I stumbled upon this post by Jeremy Schneider. His colleague solved this problem with a very simple and clever workaround. Before root.sh prepares to run srvctl start listener command, run the add VIP command from another Window . Additional network would have already got added when root.sh runs on the new node.\nTo be able to perform this trick, you have to have the hosts file updated with the new VIP name and IP and be ready with the command to add the VIP. While root.sh is running, it will show a message like “there is already an active cluster, restarting to join”, immediately start trying to run srvctl add vip command in another window. The moment CRS, comes up the command will succeed. Immediately after that root.sh is going to run srvctl start listener command, and this time it shouldn\u0026rsquo;t fail as the VIP is already added.\nAnother small mistake we made was not updating the cellip.ora on the new node before running root.sh. That caused the root.sh to fail as it couldn’t talk to ASM running on existing cell nodes. Updating cellip.ora with the existing storage node IPs fixed the problem.\n","permalink":"https://v2.amardeepsidhu.com/blog/2016/09/13/addnode-sh-failed-root-sh-and-ib-listener/","summary":"\u003cp\u003eSo this customer has an Exadata quarter rack and they have an IB listener configured on both DB nodes (for DB connections from a multi-racked Exalogic system). We were adding a new DB node to this rack. So just followed the standard procedure of creating users, directories etc on the new node, setting up ssh equivalence and running addNode.sh. All went fine but root.sh failed. Little looking into the logs revealed that it failed while running \u003cstrong\u003esrvctl start listener –n \u0026lt;node_name\u0026gt;\u003c/strong\u003e\u003c/p\u003e","title":"addNode.sh, failed root.sh and IB listener"},{"content":"So if you are filling an OEDA for Exadata deployment there are few things you should take care of. Most of the screens are self explanatory but there are some bits where one should focus little more. I am running the Aug version of it and the screenshots below are from that version.\nOn the Define customer networks screen, the client network is the actual network where your data is going to flow. So typically it is going to be bonded (for high availability) and depending upon the network in your data center you have to select one out of 1/10 G copper and 10 G optical.\nIf you are going to use trunk VLANs for your client network, remember to enabled it by clicking the Advanced button and then entering the relevant VLAN id.\nAlso if it is going to be an OVM configuration, you may want to have different VMs in different VLAN segments. It will allow you to change VLAN ids for individual VMs on the respective cluster screens like below\nIf all the cores aren\u0026rsquo;t licensed remember to enable Capacity on Demand (COD) on the Identify Compute node OS screen. 
On the Define clusters screen make sure that you enter a unique (across your environment) cluster name.\nThe cluster details screen captures some of the most important details like\nWhether you want to have flash cache in WriteBack mode instead of WriteThrough\nWhether you want to have a role separated install or want to install both GI and Oracle binaries with oracle user itself.\nGI \u0026amp; Database versions and home for binaries. Always good to leave it at the Oracle recommended values as that makes the future maintenance easy and less painful.\nDisk Group names, redundancy and the space allocation.\nDefault database name and type (OLTP or DW).\nOf course it is important to carefully fill the information in all the screens but the above ones are some of them which should be filled very carefully after capturing the required information from other teams, if needed.\n","permalink":"https://v2.amardeepsidhu.com/blog/2016/09/08/oeda-things-to-keep-an-eye-on/","summary":"\u003cp\u003eSo if you are filling an \u003cstrong\u003eOEDA\u003c/strong\u003e for Exadata deployment there are few things you should take care of. Most of the screens are self explanatory but there are some bits where one should focus little more. I am running the Aug version of it and the screenshots below are from that version.\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003e\n\u003cp\u003eOn the \u003cstrong\u003eDefine customer networks\u003c/strong\u003e screen, the client network is the actual network where your data is going to flow. So typically it is going to be bonded (for high availability) and depending upon the network in your data center you have to select one out of 1/10 G copper and 10 G optical.\u003c/p\u003e","title":"OEDA\u0026ndash;Things to keep an eye on"},{"content":"Faced this error while querying v$asm_disk after adding new storage cell IPs to cellip.ora on DB nodes of an existing cluster on Exadata. Query ends with ORA-03113 end-of-file on communication channel and ORA-56841 is reported in $ORA_CRS_HOME/log//diskmon/diskmon.log. Reason in my case was that the new cell was using different subnet for IB. It was pingable from the db nodes but querying v$asm_disk wasn\u0026rsquo;t working. Changing the subnet for IB on new cell to the one on existing cells fixed the issue.\n","permalink":"https://v2.amardeepsidhu.com/blog/2016/05/19/ora-56841-master-diskmon-cannot-connect-to-a-cell/","summary":"\u003cp\u003eFaced this error while querying v$asm_disk after adding new storage cell IPs to cellip.ora on DB nodes of an existing cluster on Exadata. Query ends with \u003cem\u003eORA-03113 end-of-file on communication channel\u003c/em\u003e and \u003cem\u003eORA-56841\u003c/em\u003e is reported in \u003cem\u003e$ORA_CRS_HOME/log/\u003c!-- raw HTML omitted --\u003e/diskmon/diskmon.log\u003c/em\u003e. Reason in my case was that the new cell was using different subnet for IB. It was pingable from the db nodes but querying v$asm_disk wasn\u0026rsquo;t working. Changing the subnet for IB on new cell to the one on existing cells fixed the issue.\u003c/p\u003e","title":"ORA-56841: Master Diskmon cannot connect to a CELL"},{"content":"On a T5 Super Cluster (running 11.2.0.3) I was creating a cascaded standby from an already functional standby using RMAN DUPLICATE and it errored out with\nORA-01671: control file is a backup, cannot make a standby control file A quick search reveals that it is bug 11715084 that affects most of the 11.x versions except 11.2.0.4. 
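For context, the duplicate in question is the standard cascaded-standby case of duplicating from an already running standby over the network; a typical invocation looks roughly like this (TNS aliases are placeholders, not the exact command used here):
[bash]# Sketch only: create a cascaded standby from an existing standby over the network
# (aliases are placeholders; rman prompts for the SYS passwords)
rman target sys@existing_stby auxiliary sys@cascaded_stby
# then at the RMAN prompt:
#   DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE NOFILENAMECHECK;
[/bash]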
There is a one off patch available for most of the versions or one can install the bundle patch that includes the fix for this patch. I applied BP26 and it worked fine after that.\n","permalink":"https://v2.amardeepsidhu.com/blog/2015/12/19/ora-01671-while-creating-cascaded-standby-from-standby-using-rman-duplicate/","summary":"\u003cp\u003eOn a T5 Super Cluster (running 11.2.0.3) I was creating a cascaded standby from an already functional standby using RMAN DUPLICATE and it errored out with\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003eORA-01671: control file is a backup, cannot make a standby control file\n\u003c/code\u003e\u003c/pre\u003e\u003c/blockquote\u003e\n\u003cp\u003eA quick search reveals that it is bug 11715084 that affects most of the 11.x versions except 11.2.0.4. There is a one off patch available for most of the versions or one can install the bundle patch that includes the fix for this patch. I applied BP26 and it worked fine after that.\u003c/p\u003e","title":"ORA-01671 while creating cascaded standby from standby using RMAN DUPLICATE"},{"content":"This was my 6th year at Sangam and as always was good fun. We were a group of 4 people who were traveling from Delhi and we reached Hyderabad on Friday morning. Just wanted to keep a day for visiting Ramoji Film City and also wanted to avoid the rush that morning travel on the conference\u0026rsquo;s starting day brings. So after dropping the luggage at the hotel we hired a taxi and reached Ramoji Film City. It is a huge place and it is tiring to move around checking everything. But fortunately on that day the weather was very pleasant so moving around was good fun. We took a ride what they call as Space Walk and watched few sets where some movies were shot. Also they have a pretty good bird sanctuary over there where they have pretty good number of beautiful birds. Spending time there was nice and fun.\nBy 7 PM or so we were done with everything and started back to hotel. As it was dinner time already so we directly headed to Paradise and had some awesome Biryani.\nSaturday was the first day of the conference. We reached the venue by 8:30 AM and the registration was pretty quick. Before starting of the technical sessions at 10 AM, we had plenty of time to move around, meet folks especially who we know online but had never met in person. For me it was my chance to meet Tim Hall in person for the first time. Simply put Tim is brilliant. His website is an inspiration for many bloggers. It was great meeting Tim in person and striking few conversations about various technologies.\nAlso met Kamran for the first time in person. Been connected to him on social media for quite some time now. It was great catching up with you mate.\nHad last met Francisco in Sangam 10 and this year got a chance to meet him again. The second question (first was how is job ;) ) he asked me was \u0026ldquo;How is your blog going ?\u0026rdquo; So that calls for some focus on blogging again :)\nThen we selected the sessions we wanted to attend and moved to respective rooms. I attended most of the sessions around database technology. There was good variety of sessions but I noticed that Engineered Systems part was missing. There could have been some sessions on Exadata, Exalogic and Super Cluster. That would make an interesting topic and would pull good audience.\nStart of the second day looked little lazy but with time it caught up. There were some free time slots for me as not all sessions were of my interest. 
So got some time to chat around with folks there.\nIn the end there was an excellent motivational session by Dr Rajdeep Manwani that was enjoyed by everyone in the audience. The key takeaway was that you yourself have to do something about the things that you think need to be fixed. Blaming others for the mess-ups in your life isn\u0026rsquo;t going to help.\nThat was the end of the second day and the conference. We met everyone around and decided to leave. Our 9 PM flight to Delhi was delayed by 4 hours and we had plenty of time to reach the airport. So once again we headed to Paradise for dinner. We were having dinner and Kamran and Markus also reached there for checking out some Biryani. Finished our dinner, said goodbye to them and left for the airport. The flight was delayed a little more and we finally reached home at 4 AM the next morning. Long day !\nRead Tim\u0026rsquo;s take on Day1 and Day2.\n","permalink":"https://v2.amardeepsidhu.com/blog/2015/11/24/sangam-15/","summary":"\u003cp\u003eThis was my 6th year at Sangam and as always was good fun. We were a group of 4 people who were traveling from Delhi and we reached Hyderabad on Friday morning. Just wanted to keep a day for visiting \u003ca href=\"http://ramojifilmcity.com/\"\u003eRamoji Film City\u003c/a\u003e and also wanted to avoid the rush that morning travel on the conference\u0026rsquo;s starting day brings. So after dropping the luggage at the hotel we hired a taxi and reached Ramoji Film City. It is a huge place and it is tiring to move around checking everything. But fortunately on that day the weather was very pleasant so moving around was good fun. We took a ride what they call as Space Walk and watched few sets where some movies were shot. Also they have a pretty good bird sanctuary over there where they have pretty good number of beautiful birds. Spending time there was nice and fun.\u003c/p\u003e","title":"Sangam 15"},{"content":"A rather not so great post about an ORA-00600 error I faced on a standby database. Environment was 11.2.0.3 on a Sun Super Cluster machine. MRP process was hitting ORA-00600 while trying to apply a specific archive log.\nThe error message was something like this\nMRP0: Background Media Recovery terminated with error 600\rErrors in file /u01/app/oracle/product/11.2.0.3/diag/diag/rdbms/xxxprd/xxxprd1/trace/xxxprd1_pr00_6342.trc:\rORA-00600: internal error code, arguments: [2619], [539], [], [], [], [], [], [], [], [], [], []\rRecovery interrupted! Some googling and MOS searches revealed that the error was due to a corrupted archive log file. Recopying the archive file from the primary and restarting the recovery resolved the issue. The first argument of the ORA-600 is actually the sequence number of the archive it is trying to apply.\n","permalink":"https://v2.amardeepsidhu.com/blog/2015/08/20/mrp-process-on-standby-stops-with-ora-00600/","summary":"\u003cp\u003eA rather not so great post about an ORA-00600 error I faced on a standby database. Environment was 11.2.0.3 on a Sun Super Cluster machine. 
MRP process was hitting ORA-00600 while trying to apply a specific archive log.\u003c/p\u003e\n\u003cp\u003eThe error message was something like this\u003c/p\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003eMRP0: Background Media Recovery terminated with error 600\r\nErrors in file /u01/app/oracle/product/11.2.0.3/diag/diag/rdbms/xxxprd/xxxprd1/trace/xxxprd1_pr00_6342.trc:\r\nORA-00600: internal error code, arguments: [2619], [539], [], [], [], [], [], [], [], [], [], []\r\nRecovery interrupted!\n\u003c/code\u003e\u003c/pre\u003e\u003cp\u003eSome googling and MOS searches revealed that the error was due to corrupted archive log file. Recopying the archive file from primary and restarting the recovery resolved the issue. The fist argument of the ORA-600 is actually the sequence no of the archive it is trying to apply.\u003c/p\u003e","title":"MRP process on standby stops with ORA-00600"},{"content":"Tim Hall has written some brilliant posts about getting going with writing (blogs, whitepapers etc). This post is the result of inspiration from there only. Tim says that just get started with whatever .\nIf you are into blogging and no so active or even if you aren\u0026rsquo;t you may want to take a look at all the posts to get some inspiration to document the knowledge you gain on day to day basis.\nHere is an index to all the posts by Tim till now\nhttp://oracle-base.com/blog/2015/05/11/writing-tips-why-should-i-bother/\nhttp://oracle-base.com/blog/2015/05/12/writing-tips-how-do-i-start/\nhttp://oracle-base.com/blog/2015/05/13/writing-tips-writing-style/\nhttp://oracle-base.com/blog/2015/05/14/writing-tips-how-do-i-stay-motivated/\nhttp://oracle-base.com/blog/2015/05/15/writing-tips-dealing-with-comments-and-criticism/\nhttp://oracle-base.com/blog/2015/05/18/writing-tips-should-i-go-back-and-rewrite-revise-remove-old-posts/\nhttp://oracle-base.com/blog/2015/05/19/writing-tips-how-often-should-i-write/\nEnjoy !\n","permalink":"https://v2.amardeepsidhu.com/blog/2015/05/19/writing-tips/","summary":"\u003cp\u003eTim Hall has written some brilliant posts about getting going with writing (blogs, whitepapers etc). This post is the result of inspiration from there only. Tim says that just get started with whatever \u003cimg alt=\"Winking smile\" loading=\"lazy\" src=\"/blog/wp-content/uploads/2015/05/wlEmoticon-winkingsmile.png\"\u003e.\u003c/p\u003e\n\u003cp\u003eIf you are into blogging and no so active or even if you aren\u0026rsquo;t you may want to take a look at all the posts to get some inspiration to document the knowledge you gain on day to day basis.\u003c/p\u003e","title":"Writing tips"},{"content":"Many people have asked me this question that how they can learn Exadata ? It starts sounding even more difficult as a lot of people don’t have access to Exadata environments. So thought about writing a small post on the same.\nIt actually is not as difficult as it sounds. There are a lot of really good resources available from where you can learn about Exadata architecture and the things that work differently from any non-Exadata platform. You might be able to do lot more RnD if you have got access to an Exadata environment but don’t worry if you haven\u0026rsquo;t. Without that also there is a lot that you can explore. So here we go:\nI think the best reference that one can start with is Expert Oracle Exadata book by Tanel Poder, Kerry Osborne and Randy Johnson. As a traditional book covers the subject topic by topic from ground up so it makes a fun read. This book is also no different. 
It will teach you a lot. They are already working on the second edition. (See here). Next you can jump to whitepapers on Oracle website Exadata page, blog posts (keep an eye on OraNA.info) and whitepapers written by other folks. There is a lot of useful material out there. You just need to Google a bit. Exadata documentation (not public yet) should be your next stop if you have got access to it. Patch 10386736 on MOS if you have got the access. Try to attend an Oracle Users Group conference if there is one happening in your area. Most likely someone would be presenting on Exadata so you can use that opportunity to learn about it. Also you will get a chance to ask him questions. Lastly if you have an Exadata machine available do all the RnD you can. Happy New Year and Happy Learning !\nComments Comment by Aman\u0026hellip;. on 2015-01-02 15:24:47 +0530 The best way to learn Exadata is to purchase a Quarter Rack. There is nothing like a “hands-on” learning and as an added benefit, one would get access to Support as well so you can learn and ask your questions/doubts directly to support services 😉 . The only issue, a small only, is the price but then again, there is always a price for quality, isn’t it!!\nOkay, so on a serious note, besides the mentioned links, Oracle’s Learning Library(http://oracle.com/goto/oll) has a complete series for Flash based videos to explain various concepts of Exadata. That’s a very good and Free resource to learn Exadata. Also, Uwe Hesse has written some really good stuff to understand the concepts of Exadata on his blog http://uhesse.com .\nAnd last but certainly not the least, attending an Oracle Univ course for the same would give one access to machines to learn the things and play with the technology. Also, the same would make one eligible for the certification.\nAman….\nComment by joseph on 2015-02-02 10:36:29 +0530 the hurdle really is getting access on a exadata machine so you can learn first hand\n","permalink":"https://v2.amardeepsidhu.com/blog/2015/01/02/want-to-learn-exadata/","summary":"\u003cp\u003eMany people have asked me this question that how they can learn Exadata ? It starts sounding even more difficult as a lot of people don’t have access to Exadata environments. So thought about writing a small post on the same.\u003c/p\u003e\n\u003cp\u003eIt actually is not as difficult as it sounds. There are a lot of really good resources available from where you can learn about Exadata architecture and the things that work differently from any non-Exadata platform. You might be able to do lot more RnD if you have got access to an Exadata environment but don’t worry if you haven\u0026rsquo;t. Without that also there is a lot that you can explore. So here we go:\u003c/p\u003e","title":"Want to learn Exadata ?"},{"content":"I was troubleshooting some Windows hangs on my Desktop system running Windows 8 and enabled driver verifier. Today when I tried to start VirtualBox it failed with this error message.\nFailed to load VMMR0.r0 (VERR_LDR_MISMATCH_NATIVE)\nMost of the online forums were asking to reinstall VirtualBox to fix the issue. But one of the thread mentioned that it was being caused by Windows Driver Verifier. I disabled it, restarted Windows and VirtualBox worked like a charm. Didn\u0026rsquo;t have time to do more research as i quickly wanted to test something. 
Maybe we can skip some particular checks in Driver Verifier and VirtualBox can then work.\n","permalink":"https://v2.amardeepsidhu.com/blog/2014/12/03/virtualbox-and-windows-driver-verifier/","summary":"\u003cp\u003eI was troubleshooting some Windows hangs on my Desktop system running Windows 8 and enabled driver verifier. Today when I tried to start VirtualBox it failed with this error message.\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003eFailed to load VMMR0.r0 (VERR_LDR_MISMATCH_NATIVE)\u003c/p\u003e\n\u003c/blockquote\u003e\n\u003cp\u003eMost of the online forums were asking to reinstall VirtualBox to fix the issue. But \u003ca href=\"http://stackoverflow.com/questions/21654596/suddenly-getting-failed-to-load-vmmr0-r0-verr-ldr-mismatch-native-in-virtual\"\u003eone of the threads\u003c/a\u003e mentioned that it was being caused by Windows Driver Verifier. I disabled it, restarted Windows and VirtualBox worked like a charm. Didn\u0026rsquo;t have time to do more research as I quickly wanted to test something. Maybe we can skip some particular checks in Driver Verifier and VirtualBox can then work.\u003c/p\u003e","title":"VirtualBox and Windows driver verifier"},{"content":"A few months ago I contributed a chapter (on Monitoring, Troubleshooting and Performance tuning) to a GoldenGate book on Oracle Press that Robert Freeman was authoring. Thought of posting a small update that the book is now out. My name doesn’t appear on the main page but you will see it in the Acknowledgements section. Below is a screenshot taken from the Amazon preview.\nYou may want to grab a copy if you are using/planning to use Oracle GoldenGate 11g.\nHere is the link to the book page on Amazon. It seems the book is not published in India yet but one can order the imported edition on amazon.in Comments Comment by Puja on 2015-06-28 06:07:12 +0530 Amazing!!!! Congrats 🙂\nComment by Sidhu on 2015-08-20 12:37:01 +0530 Thank you ! 🙂\n","permalink":"https://v2.amardeepsidhu.com/blog/2013/07/18/oracle-goldengate-11g-handbook/","summary":"\u003cp\u003eA few months ago I contributed a chapter (on Monitoring, Troubleshooting and Performance tuning) to a GoldenGate book on Oracle Press that \u003ca href=\"http://robertgfreeman.blogspot.in/\"\u003eRobert Freeman\u003c/a\u003e was authoring. Thought of posting a small update that the book is now out. My name doesn’t appear on the main page \u003cimg alt=\"Sad smile\" loading=\"lazy\" src=\"/blog/wp-content/uploads/2013/07/wlEmoticon-sadsmile.png\"\u003e but you will see it in the Acknowledgements section \u003cimg alt=\"Winking smile\" loading=\"lazy\" src=\"/blog/wp-content/uploads/2013/07/wlEmoticon-winkingsmile.png\"\u003e Below is a screenshot taken from Amazon preview \u003cimg alt=\"Smile\" loading=\"lazy\" src=\"/blog/wp-content/uploads/2013/07/wlEmoticon-smile.png\"\u003e.\u003c/p\u003e\n\u003cp\u003eYou may want to grab a copy if you are using/planning to use Oracle GoldenGate 11g.\u003c/p\u003e","title":"Oracle GoldenGate 11g Handbook"},{"content":"So there is a new toy in the market for database geeks: Oracle has released database 12c. Every social platform is abuzz with the 12c activity. 
So thought that I should also complete the ritual In this post Aman has already summed up many important links.\nMaria Colgan has posted some useful links here.\nAnd here is a link to a slidedeck about Upgrading and Migrating to 12c.\nHappy 12c’ing !\n","permalink":"https://v2.amardeepsidhu.com/blog/2013/06/27/oracle-database-12c/","summary":"\u003cp\u003eSo there is a new toy in the market for database geeks : Oracle has released database 12c. Every social platform is abuzz with the 12c activity. So thought that I should also complete the ritual \u003cimg alt=\"Winking smile\" loading=\"lazy\" src=\"/blog/wp-content/uploads/2013/06/wlEmoticon-winkingsmile.png\"\u003e\u003c/p\u003e\n\u003cp\u003eIn \u003ca href=\"http://blog.aristadba.com/?p=254\"\u003ethis post\u003c/a\u003e Aman has already summed up many important links.\u003c/p\u003e\n\u003cp\u003eMaria Colgan has posted some useful links \u003ca href=\"https://blogs.oracle.com/optimizer/entry/oracle_database_12c_is_here\"\u003ehere\u003c/a\u003e.\u003c/p\u003e\n\u003cp\u003eAnd \u003ca href=\"http://apex.oracle.com/pls/apex/f?p=202202:2:::::P2_SUCHWORT:migrate12c\"\u003ehere is a link\u003c/a\u003e to a slidedeck about Upgrading and Migrating to 12c.\u003c/p\u003e\n\u003cp\u003eHappy 12c’ing !\u003c/p\u003e","title":"Oracle database 12c"},{"content":"Yesterday I was configuring EM 12c for a Sun Super Cluster system. There were a total of 4 LDOMs where I needed to deploy the agent (Setup –\u0026gt; Add targets –\u0026gt; Add targets manually). Out of these 4 everything went fine for 2 LDOMs but for the other two it failed with an error message. It didn’t give much details on the EM screen but rather gave a message to try to secure/start the agent manually. When I tried to do that manually the secure agent part worked fine but the start agent command failed with the following error message:\noracle@app1:~$emctl start agent\nOracle Enterprise Manager Cloud Control 12c Release 2\nCopyright (c) 1996, 2012 Oracle Corporation. All rights reserved.\nStarting agent \u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;. failed.\nHTTP Listener failed at Startup\nPossible port conflict on port(3872): Retrying the operation\u0026hellip;\nFailed to start the agent after 1 attempts. Please check that the port(3872) is available.\nI thought that there was something wrong with the port thing so I cleaned the agent installation, made sure that the port wasn’t being used and did the agent deployment again. This time it again failed with the same message but it reported a different port number ie 1830 agent port no:\noracle@app1:~$emctl start agent\nOracle Enterprise Manager Cloud Control 12c Release 2\nCopyright (c) 1996, 2012 Oracle Corporation. All rights reserved.\nStarting agent \u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;\u0026hellip;. failed.\nHTTP Listener failed at Startup\nPossible port conflict on port(1830): Retrying the operation\u0026hellip;\nFailed to start the agent after 1 attempts. Please check that the port(1830) is available.\nAgain checked few things but found nothing wrong. 
All the LDOMs had similar configuration so what worked for the other two should have worked for these two also.\nBefore starting with the installation I had noted the LDOM hostnames and IPs in a notepad file and had swapped the IPs of two LDOMs (actually these two only ). But later on I found that and corrected. While looking at the notepad file it occurred to me that the same stuff could be wrong in /etc/hosts of the server where EM is deployed. Oh boy that is what it was. While making the entries in /etc/hosts of EM server, I copied it from the notepad and the wrong entries got copied. The IPs for these two LDOMs got swapped with each other and that was causing the whole problem.\ndeinstalled the agent, correct the /etc/hosts and tried to deploy again…all worked well !\n","permalink":"https://v2.amardeepsidhu.com/blog/2013/06/16/agent-deployment-error-in-em-12c/","summary":"\u003cp\u003eYesterday I was configuring EM 12c for a Sun Super Cluster system. There were a total of 4 LDOMs where I needed to deploy the agent (Setup –\u0026gt; Add targets –\u0026gt; Add targets manually). Out of these 4 everything went fine for 2 LDOMs but for the other two it failed with an error message. It didn’t give much details on the EM screen but rather gave a message to try to secure/start the agent manually. When I tried to do that manually the secure agent part worked fine but the start agent command failed with the following error message:\u003c/p\u003e","title":"agent deployment error in EM 12c"},{"content":"Just a quick note about change in the way the compute nodes are patched starting from version 11.2.3.1.1. For earlier versions Oracle provided the minimal pack for patching the compute nodes. Starting with version 11.2.3.1.1 Oracle has discontinued the minimal pack and the updates to compute nodes are done via Unbreakable Linux Network (ULN).\nNow there are three ways to update the compute nodes:\nYou have internet access on the Compute nodes. In this case you can download patch 13741363, complete the one time setup and start the update.\nIn case you don’t have internet access on the Compute nodes you can choose some intermediate system (that has internet access) to create a local repository and then point the Compute nodes to this system to install the updates.\nOracle will also provide all the future updates via an downloadable ISO image file (patch 14245540 for 11.2.3.1.1). You can download that ISO image file, mount it on some local system and point the compute nodes to this system for updating the rpms (the readme has all the details on how to do this).\nSome useful links:\nhttps://blogs.oracle.com/XPSONHA/entry/updating_exadata_compute_nodes_using\nhttps://blogs.oracle.com/XPSONHA/entry/new_channels_for_exadata_11\nMetalink note 1466459.1\n","permalink":"https://v2.amardeepsidhu.com/blog/2012/08/19/updating-to-exadata-11-2-3-1-1/","summary":"\u003cp\u003eJust a quick note about change in the way the compute nodes are patched starting from version 11.2.3.1.1. For earlier versions Oracle provided the minimal pack for patching the compute nodes. Starting with version 11.2.3.1.1 Oracle has discontinued the minimal pack and the updates to compute nodes are done via Unbreakable Linux Network (ULN).\u003c/p\u003e\n\u003cp\u003eNow there are three ways to update the compute nodes:\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003e\n\u003cp\u003eYou have internet access on the Compute nodes. 
In this case you can download patch \u003cstrong\u003e13741363\u003c/strong\u003e, complete the one time setup and start the update.\u003c/p\u003e","title":"Updating to Exadata 11.2.3.1.1"},{"content":"There was an interesting issue at one of the customer sites. Few tables in the database were altered and the dependent objects became invalid. But the attempts to compile the objects using utlrp.sql or manually were failing. In all the cases it was giving the same error:\nSQL\u0026gt; alter function SCOTT.SOME_FUNCTION compile;\ralter function SCOTT.SOME_FUNCTION compile\r*\rERROR at line 1:\rORA-00604: error occurred at recursive SQL level 1\rORA-01422: exact fetch returns more than requested number of rows\rORA-06512: at line 27\rSQL\u0026gt; At first look it sounded like some issue with the dictionary as the error in case of every object (be it a view, function or package) was the same.\nEverybody was trying to compile the invalid objects and surprisingly few VIEWs (that were not getting compiled from SQL*Plus) got compiled from Toad ! But that didn\u0026rsquo;t explain anything. In fact it was more confusing.\nFinally I enabled errorstack for event 1422 and tried to compile a view. Here is the relevant content from the trace file\n----- Error Stack Dump -----\rORA-01422: exact fetch returns more than requested number of rows\r----- Current SQL Statement for this session (sql_id=7kb01v7t6s054) -----\rSELECT SQL_TEXT FROM V$OPEN_CURSOR VOC, V$SESSION VS WHERE VOC.SADDR = VS.SADDR AND AUDSID=USERENV(\u0026#39;sessionid\u0026#39;) AND UPPER(SQL_TEXT) LIKE \u0026#39;ALTER%\u0026#39; I took it to be some system SQL and started searching in that direction and obviously that was of no use.\nIn the mean time another guy almost shouted…”oh there is a trigger to capture DDL operations in the database; it must be that”. And indeed it was. Here is the code that was creating the problem:\nselect sql_text into vsql_text\rfrom v$open_cursor voc, v$session vs\rwhere voc.saddr = vs.saddr\rand audsid=userenv(\u0026#39;sessionid\u0026#39;)\rand upper(sql_text) like \u0026#39;ALTER%\u0026#39;; As v$open_cursor was returning multiple rows, hence the problem !\nMoral is that the errorstack traces do tell a lot (of course if you listen carefully) ;)\n","permalink":"https://v2.amardeepsidhu.com/blog/2012/07/31/ora-01422-while-compiling-objects/","summary":"\u003cp\u003eThere was an interesting issue at one of the customer sites. Few tables in the database were altered and the dependent objects became invalid. But the attempts to compile the objects using utlrp.sql or manually were failing. In all the cases it was giving the same error:\u003c/p\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003eSQL\u0026gt; alter function SCOTT.SOME_FUNCTION compile;\r\n alter function SCOTT.SOME_FUNCTION compile\r\n*\r\nERROR at line 1:\r\nORA-00604: error occurred at recursive SQL level 1\r\nORA-01422: exact fetch returns more than requested number of rows\r\nORA-06512: at line 27\r\n\r\nSQL\u0026gt;\n\u003c/code\u003e\u003c/pre\u003e\u003cp\u003eAt first look it sounded like some issue with the dictionary as the error in case of every object (be it a view, function or package) was the same.\u003c/p\u003e","title":"ORA-01422 while compiling objects"},{"content":"Sometimes you may need to run GoldenGate on some different machine than the one that hosts the database. It is very much possible but some kind of restrictions apply. 
First is that the Endian order of both the systems should be same and the second is the bit width has to be same. For example it is not possible to run GoldenGate on a 32 bit system to read from a database that runs on some 64 bit platform. Assuming that the environemnt satisfies the above two conditions; we can use the LOGSOURCE option of TRANSLOGOPTIONS to achieve this.\nHere we run GG on host goldengate1 (192.168.0.109) and the database from which we want to capture the changes runs on the host goldengate3 (192.168.0.111). Both the systems run 11.2.0.2 on RHEL 5.5. On goldengate3 redo logs are in the mount point /home which has been NFS mounted on goldengate1 as /home_gg3\nFilesystem 1K-blocks Used Available Use% Mounted on\r192.168.0.111:/home 12184800 7962496 3593376 69% /home_gg3 The Extract parameters are as follows:\nEXTRACT ERMT01\rUSERID ggadmin@orcl3, PASSWORD ggadmin\rEXTTRAIL ./dirdat/er\rTRANLOGOPTIONS LOGSOURCE LINUX, PATHMAP /home/oracle/app/oracle/oradata/orcl /home_gg3/oracle/app/oracle/oradata/or\rcl, PATHMAP /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog /home_gg3/oracle/app/oracle/flash_recovery_\rarea/ORCL/archivelog\rTABLE HR.*;\r(The text in the line starting with TRANLOGOPTIONS is a single line) So using PATHMAP we can make GG aware about the actual location of the red logs \u0026amp; archive logs on the remote server and the mapped location on the system where GG is running (It is somewhat like db_file_name_convert option for Data Guards).\nWe fire some DMLs on the source database and then run stats command for the Extract\nGGSCI (goldengate1) 93\u0026gt; stats ermt01 totalsonly *\rSending STATS request to EXTRACT ERMT01 ...\rStart of Statistics at 2012-05-26 05:17:05.\rOutput to ./dirdat/er:\rCumulative totals for specified table(s):\r*** Total statistics since 2012-05-26 04:51:10 ***\rTotal inserts 1.00\rTotal updates 0.00\rTotal deletes 1.00\rTotal discards 0.00\rTotal operations 2.00\r.\r.\r.\rEnd of Statistics.\rGGSCI (goldengate1) 94\u0026gt; For more details have a look at the GG reference guide (Page 402).\nComments Comment by Viral Vaidya on 2012-06-21 20:59:13 +0530 Hi, what changes required if my target server datafile is different and i want to enable DDL replication for tablespace as well?\nComment by An Oracle DB/GG blog - paddukandimalla on 2016-07-05 14:49:09 +0530 I got this requirement and searching in google and your page is in top list , thanks Sidhu ji for this –Paddu Kandimalla\n","permalink":"https://v2.amardeepsidhu.com/blog/2012/05/26/configure-goldengate-extract-to-read-from-remote-logs/","summary":"\u003cp\u003eSometimes you may need to run GoldenGate on some different machine than the one that hosts the database. It is very much possible but some kind of restrictions apply. First is that the Endian order of both the systems should be same and the second is the bit width has to be same. For example it is not possible to run GoldenGate on a 32 bit system to read from a database that runs on some 64 bit platform. Assuming that the environemnt satisfies the above two conditions; we can use the LOGSOURCE option of TRANSLOGOPTIONS to achieve this.\u003c/p\u003e","title":"Configure GoldenGate Extract to read from remote logs"},{"content":"Just a quick note/post about the significance of COMPRESS and TCPBUFSIZE parameter in performance of a GoldenGate Extract Pump process. COMPRESS helps in compressing the outgoing blocks hence helping in better utilization of the bandwidth from source to target. 
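Just for context, a bare-bones pump parameter file using COMPRESS might look something like the sketch below; the process name, host, port and trail path are made up for illustration, PASSTHRU is the usual choice when the pump does no mapping, and TCPBUFSIZE and TCPFLUSHBYTES, covered below, would go on the same RMTHOST line:\nEXTRACT EPMP01\rPASSTHRU\rRMTHOST targethost, MGRPORT 7809, COMPRESS\rRMTTRAIL ./dirdat/rt\rTABLE HR.*;\n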
GG is going to uncompress the blocks before writing them to the remote trail file on the target. Compression ratios of 4:1 or better can be achieved. Of course, use of COMPRESS may result in increased CPU usage on both sides.\nTCPBUFSIZE controls the size of the TCP socket buffer that is going to be used by the Extract. If the bandwidth allows, it will be a good idea to send larger packets. So depending upon the available bandwidth one can experiment with the values of TCPBUFSIZE. At one of the client sites, I saw a great increase in the performance after setting TCPBUFSIZE. The trail file (10 MB size) that was taking almost a minute to transfer started getting through in a few seconds after setting this parameter. Documentation ( http://docs.oracle.com/cd/E35209_01/doc.1121/e29399.pdf page 313) provides the method to calculate the optimum value for TCPBUFSIZE for your environment.\nWhile using TCPBUFSIZE, a value for TCPFLUSHBYTES (at least equal to the value of TCPBUFSIZE) also needs to be set. It is the buffer that collects the data that is going to be transferred to the target.\nThese parameters can be used like the following:\nrmthost, mgrport, compress, tcpbufsize 10000, tcpflushbytes 10000 Also see the metalink note 1071892.1.\nComments Comment by Puja on 2012-05-25 22:09:05 +0530 Thanks! I am trying to understand and implement this one for a new project!\nComment by Sidhu on 2012-05-25 22:19:35 +0530 Great !\nFor starting a very good reference is http://gavinsoorma.com/oracle-goldengate-veridata-web\nAlso do check Troubleshooting \u0026amp; Tuning guide from the GG documentation. Real good stuff there.\n","permalink":"https://v2.amardeepsidhu.com/blog/2012/05/25/tuning-goldengate-extract-pump-performance/","summary":"\u003cp\u003eJust a quick note/post about the significance of the COMPRESS and TCPBUFSIZE parameters in the performance of a GoldenGate Extract Pump process. COMPRESS helps in compressing the outgoing blocks hence helping in better utilization of the bandwidth from source to target. GG is going to uncompress the blocks before writing them to the remote trail file on the target. Compression ratios of 4:1 or better can be achieved. Of course, use of COMPRESS may result in increased CPU usage on both sides.\u003c/p\u003e","title":"Tuning GoldenGate Extract Pump performance"},{"content":"Hybrid Columnar Compression (HCC) is a new awesome feature in Exadata that helps in saving a lot of storage space in your environment. This whitepaper on the Oracle website explains the feature in detail. Also Uwe Hesse has an excellent how-to-use-all-this post on his blog. You can see the compression levels one can achieve by making use of HCC. It is a very simple feature to use but one needs to be aware of a few things before using HCC extensively, as otherwise all your storage calculations may go weird. Here are a few of the things to keep in mind:\nHCC works with direct path loads only; that includes CTAS, running impdp with ACCESS_METHOD=DIRECT or direct path inserts. If you insert data using a normal insert, it will not be HCC compressed.\nIt is most suited for tables that aren\u0026rsquo;t going to be updated once loaded. There are some complications (next point) that arise if some DML is going to be run on HCC compressed data.\nAt the block level, HCC stores data as compression units. A compression unit can be defined as a set of blocks. Now if some rows (stored with HCC) are updated, they need to be decompressed first. Also in that case the database needs to read the whole compression unit, not a single block. 
So once you do some update on the data stored in HCC, it will be moved out of HCC compression. To HCC compress it again you will need to do an alter table table_name move compress for query high (or whichever HCC level you are using; also see Metalink note 1332853.1). So if the tables you are planning to use HCC on undergo frequent DML, HCC may not be best suited for that scenario. Not only will it add the additional overhead of running an alter table move statement every time some updates happen, it may screw up the storage space calculations as well.\n","permalink":"https://v2.amardeepsidhu.com/blog/2011/12/22/dml-and-hcc-exadata/","summary":"\u003cp\u003eHybrid Columnar Compression (HCC) is a new awesome feature in Exadata that helps in saving a lot of storage space in your environment. \u003ca href=\"http://www.oracle.com/technetwork/middleware/bi-foundation/ehcc-twp-131254.pdf\"\u003eThis whitepaper\u003c/a\u003e on the Oracle website explains the feature in detail. Also Uwe Hesse has an excellent \u003cem\u003ehow to use all this\u003c/em\u003e \u003ca href=\"http://uhesse.wordpress.com/2011/01/21/exadata-part-iii-compression/\"\u003epost on his blog\u003c/a\u003e. You can see the compression levels one can achieve by making use of HCC. It is a very simple feature to use but one needs to be aware of a few things before using HCC extensively, as otherwise all your storage calculations may go weird. Here are a few of the things to keep in mind:\u003c/p\u003e","title":"DML and HCC – Exadata"},{"content":"The last post was just like that. It was this GoldenGate issue that woke me up from the deep sleep to do a post after a long time :P .\nWell, it was a simple schema to schema replication setup using GoldenGate. We were using the SCN method (Metalink Doc ID 1276058.1 \u0026amp; 1347191.1) to do the initial load so that there is no overlapping of transactions and the replicat runs with minimum issues. Even after following this method, the replicat was hitting\n[text]2011-10-31 19:25:17 WARNING OGG-01004 Aborted grouped transaction on \u0026lsquo;SCHEMA.TABLE\u0026rsquo;, Database error 1403 ().\n2011-10-31 19:25:17 WARNING OGG-01003 Repositioning to rba 3202590 in seqno 1.\n2011-10-31 19:25:18 WARNING OGG-01154 SQL error 1403 mapping SCHEMA.TABLE TO SCHEMA.TABLE.\n2011-10-31 19:25:18 WARNING OGG-01003 Repositioning to rba 3468713 in seqno 1.[/text]\nIf we managed to bypass this error somehow, it hit:\n[text]2011-10-24 19:58:15 WARNING OGG-00869 OCI Error ORA-00001: unique constraint (SCHEMA.UK) violated (status = 1), SQL \u0026lt;INSERT INTO \u0026ldquo;SCHEMA\u0026rdquo;.\u0026ldquo;TABLE\u0026rdquo; (\n2011-10-24 19:58:15 WARNING OGG-01004 Aborted grouped transaction on \u0026lsquo;SCHEMA.TABLE\u0026rsquo;, Database error 1 (OCI Error ORA-00001: unique constraint (SCHEMA.UK) violated (status = 1), SQL ).\n2011-10-24 19:58:15 WARNING OGG-01003 Repositioning to rba 1502788 in seqno 3.\n2011-10-24 19:58:15 WARNING OGG-01154 SQL error 1 mapping SCHEMA.TABLE to SCHEMA.TABLE OCI Error ORA-00001: unique constraint (SCHEMA.UK) violated (status = 1), SQL .\n2011-10-24 19:58:15 WARNING OGG-01003 Repositioning to rba 1502788 in seqno 3.[/text]\n1403 means that GoldenGate couldn\u0026rsquo;t find the record it wanted to update.\n00001 would mean that the record GoldenGate tried to insert was already there.\nIn our case, as we used the SCN method, none of them was expected. So these weird errors left us totally confused. 
Some guys suggested that expdp was not taking a consistent image and some transactions were getting overlapped (picked up by both expdp \u0026amp; GG extract trail). We took the database down and repeated the exercise but oops ! it hit almost the same errors again. So it was not about consistency for sure.\nTill now we haven\u0026rsquo;t been examining the contents of discard file very seriously. As the errors were pretty simple so we always suspected that some transactions were getting overlapped. Now it was high time to take some help from discard file as well ;) . We took the before/after image of the record from the discard file and checked it in the target database \u0026amp; values in one or two columns were different (that is why GG couldn\u0026rsquo;t find that record). The new values were the actual hint towards the solution [It was a table storing the mail requests and their statuses. This update that GG was trying to run was updating the status from NOT-SENT TO SENT but here on the target the status was already set to \u0026lsquo;ORA-something\u0026hellip;\u0026hellip;\u0026rsquo;]. We got the clue that something must have run on the target itself that spoiled this record and now GG is not able to find it and abending with 1403. select * from dba_jobs cleared it all. While doing the initial load with expdp/impdp, job also got imported and some of them were in not broken state. They were firing according to their schedule and making changes to data in the target. So before GG came to update/insert record the job had already done its game and the replicat was hitting different errors. We did the initial load again (this time by using flashback_scn in the running database), disabled all the jobs and ran the replicat. It went through without any errors.\nSo things to take care of, in such cases:\nDisable all the triggers on the target side (or exclude triggers while running expdp)\nLook for and disable any scheduled jobs (could be dba_jobs, dba_scheduler_jobs or cron)\nHappy GoldenGate\u0026rsquo;ing !\nComments Comment by Arunkumar A on 2018-03-13 12:11:07 +0530 INSERTMISSINGUPDATES add this parameter and start\n","permalink":"https://v2.amardeepsidhu.com/blog/2011/11/04/ogg-01004-aborted-grouped-transaction-on-database-error-1403/","summary":"\u003cp\u003eThe \u003ca href=\"/blog/2011/11/04/expdp-not-consistent/\"\u003elast post\u003c/a\u003e was just like that. It was this GoldenGate issue that woke me up from the deep sleep to do a post after a long time :P .\u003c/p\u003e\n\u003cp\u003eWell it was a simple schema to schema replication setup using GoldenGate. We were using the SCN method (Metalink Doc ID 1276058.1 \u0026amp; 1347191.1) to do the intial load so that there is no overlvapping of transactions and the replicat runs with minimum issues. Even after following this method, the replicat was hitting\u003c/p\u003e","title":"OGG-01004 Aborted grouped transaction on \u003ctable_name\u003e‘, Database error 1403 ()"},{"content":"Came across this small oddity that documentation of 10.2 and 11.2 states that expdp by default takes consistent image of the database. But actually it is not so. You need to use flashback_scn/flashback_time for that. Metalink doc 377218.1 explains the scenario.\nComments Comment by Chris Fischer on 2011-11-04 19:47:12 +0530 I’ve been warning my customers about this for years. 
“Shut down all db access before taking a schema or full expdp!”\n","permalink":"https://v2.amardeepsidhu.com/blog/2011/11/04/expdp-not-consistent/","summary":"\u003cp\u003eCame across this small oddity that documentation of 10.2 and 11.2 states that expdp by default takes consistent image of the database. But actually it is not so. You need to use flashback_scn/flashback_time for that. Metalink doc 377218.1 explains the scenario.\u003c/p\u003e\n\u003ch2 id=\"comments\"\u003eComments\u003c/h2\u003e\n\u003ch3 id=\"comment-by-chris-fischer-on-2011-11-04-194712-0530\"\u003eComment by Chris Fischer on 2011-11-04 19:47:12 +0530\u003c/h3\u003e\n\u003cp\u003eI’ve been warning my customers about this for years. “Shut down all db access before taking a schema or full expdp!”\u003c/p\u003e","title":"expdp not consistent"},{"content":"It was a 10g (10.2.0.5 on HP-UX 11.23 RISC) database which was recently upgraded from 9.2.0.8. The CPU and memory utilization was going really high. After tuning a few of the queries coming up in top, CPU usage came within acceptable limits but the memory usage was still high. There was a total of 16 GB of RAM on the server and the usage was above 90%, constantly. One of the reasons behind the high usage was the increase in the SGA size. It was increased from 2.5 GB (in 9i) to around 5 GB (in 10g). Another major chunk was being eaten by the OS buffer cache. While looking at the memory usage with kmeminfo:[bash]Buffer cache = 1048448 4.0g 25% details with -bufcache[/bash]\nIn HP-UX, the memory allocated to the (dynamic) buffer cache is controlled by two parameters, dbc_min_pct and dbc_max_pct. It can vary between dbc_min_pct and dbc_max_pct percent of the total RAM. They default to 5 and 50 respectively. For a system that is running an Oracle database, a value of 50 for dbc_max_pct is way too high. That means half of the memory is going to be allocated to the OS buffer cache. As Oracle has got its own buffer cache, the OS cache is not of much use. As mentioned in the metalink note 726652.1, the value of dbc_max_pct can be safely lowered without impacting the Oracle database performance. In many of the threads (on the HP website) people have suggested a value of 10 for dbc_max_pct. Not sure if it is more of a rule of thumb, but the same metalink note (726652.1) mentions that if %rcache in sar -b is above 90, your OS buffer cache is adequately sized.\nAfter setting the value of dbc_max_pct to 15 (it will be changed to 10, finally), around 1.6 GB more memory was freed. Also there was no impact on the database or OS performance. Here are a few of the metalink notes and threads on the HP-UX website that talk about these parameters in detail:\nOracle Shadow Processes Are Taking Too Much Memory (Doc ID 434535.1) How OS Buffer Cache Size Affects Db Performance (Doc ID 726652.1) Commonly Misconfigured HP-UX Kernel Parameters (Doc ID 68105.1) http://forums11.itrc.hp.com/service/forums/questionanswer.do?admit=109447626+1306231311459+28353475\u0026amp;threadId=1266914\nhttp://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=727618 http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=467288 http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=750342\nComments Comment by Neeraj on 2011-09-09 21:12:00 +0530 Good stuff! 
Keep it up!\nCheers,\nNeeraj\n","permalink":"https://v2.amardeepsidhu.com/blog/2011/05/25/dbc_min_pct-and-dbc_max_pct-in-hp-ux/","summary":"\u003cp\u003eIt was a 10g (10.2.0.5 on HP-UX 11.23 RISC) database which was recently upgraded from 9.2.0.8. The CPU and memory utilization was going really high. After tuning few of the queries coming in top, CPU usage was coming within accetable limits but the memory usage was still high. There was a total of 16 GB of RAM on the server and the usage was above 90%, constantly. One of the reasons behind high usage was increase in the SGA size. It was increased from 2.5 GB (in 9i) to around 5 GB (in 10g). Another major chunk was being eaten by OS buffer cache. While looking at the memory usage with kmeminfo:[bash]Buffer cache = 1048448 4.0g 25% details with -bufcache[/bash]\u003c/p\u003e","title":"dbc_min_pct and dbc_max_pct in HP-UX"},{"content":"Very simple issue but took some amount of time in troubleshooting so thought about posting it here. May be it proves to be useful for someone.\nScenario was: Oracle is installed from \u0026ldquo;oracle\u0026rdquo; user and all runs well. There is a new OS user \u0026ldquo;test1\u0026rdquo; that also needs to use sqlplus. So granted the necessary permissions on ORACLE_HOME to test1. Tried to connect sqlplus scott/tiger@DB and yes it works. But while trying sqlplus scott/tiger it throws:\n[sql]$ sqlplus scott/tiger\nSQL*Plus: Release 10.2.0.5.0 - Production on Wed May 18 09:32:35 2011\nCopyright (c) 1982, 2010, Oracle. All Rights Reserved.\nERROR: ORA-12547: TNS:lost contact\nEnter user-name: ^C $[/sql]\nDid a lot of troubleshooting including checking tnsnames.ora, sqlnet.ora, listener.ora and so on. Nothing was hitting my mind so finally raised an SR. And it has to do with the permissions of the $ORACLE_HOME/bin/oracle binary. The permissions of oracle executable should be rwsr-s\u0026ndash;x or 6751 but they were not. See below:\n[sql]$ id uid=241(test1) gid=202(users) groups=1(staff),13(dba) $\n$ cd $ORACLE_HOME/bin $ ls -ltr oracle -rwxr-xr-x 1 oracle dba 136803483 Mar 16 20:32 oracle $\n$ chmod 6751 oracle $ ls -ltr oracle -rwsr-s\u0026ndash;x 1 oracle dba 136803483 Mar 16 20:32 oracle $\n$ sqlplus scott/tiger\nSQL*Plus: Release 10.2.0.5.0 - Production on Wed May 18 10:23:27 2011\nCopyright (c) 1982, 2010, Oracle. All Rights Reserved.\nConnected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options\nSQL\u0026gt; show user USER is \u0026ldquo;SCOTT\u0026rdquo; SQL\u0026gt;[/sql]\nComments Comment by sunil on 2011-08-12 00:30:20 +0530 great fix. had a similar issue. thanks for the post.\nComment by Taj on 2011-08-22 17:42:26 +0530 Nice post.\nComment by 北在南方 on 2011-08-22 18:59:30 +0530 I have a database which has been upgraded from 11.2.0.1 to 11.2.0.2 ; when execute “sqlplus /” as a OS user ADMIN which is not belong to the oracle oinstall group,ORA-12547: TNS:lost contact occurs\nwhen do “sqlplus /nolog” and then conn /as sysdba as ORACLE user ,the resault id normal\[email protected]:/home/oracle\u0026gt;sqlplus /nolog\nSQL*Plus: Release 11.2.0.2.0 Production on Sat Aug 20 14:59:50 2011\nCopyright (c) 1982, 2010, Oracle. 
All rights reserved.\n@\u0026gt;conn /as sysdba\nConnected.\nsys@alibank1\u0026gt;exit\nbut when do this as ADMIN uers ,ORA-12547: TNS:lost contact occurs;\[email protected]:/home/admin\u0026gt;tnsping alibank1\nTNS Ping Utility for Linux: Version 11.2.0.2.0 – Production on 20-AUG-2011 14:57:19\nCopyright (c) 1997, 2010, Oracle. All rights reserved.\nUsed parameter files:\nTNS-03505: Failed to resolve name\[email protected]:/home/admin\u0026gt;sqlplus /nolog\nSQL*Plus: Release 11.2.0.2.0 Production on Sat Aug 20 14:57:33 2011\nCopyright (c) 1982, 2010, Oracle. All rights reserved.\n@\u0026gt;conn /as sysdba\nERROR:\nORA-12547: TNS:lost contact\nComment by 北在南方 on 2011-08-22 19:01:25 +0530 I am a oracle dba from chain and looking forward to your advice..\nComment by suhas on 2011-11-28 09:10:28 +0530 Thanks\nproblem is solve\nComment by Gaurav R on 2011-12-16 17:13:44 +0530 Thanks. had a similar issue after upgrading from Oracle10g (10.2.0.5) to Oracle11g.\nComment by Gaurav R on 2011-12-16 17:25:36 +0530 We can also use following commands:\ncd $ORACLE_HOME/bin\nrelink all\nComment by isma on 2012-01-06 08:57:25 +0530 Hi,\nNeed your help.\nEven after perform the command as suggested, I’m still can’t start the listener. Help!!\nroot@ormdevl # chmod 6751 oracle\nroot@ormdevl # ls -ltr oracle\n-rwsr-s–x 1 oracle dba 66435324 Mar 8 2006 oracle\nroot@ormdevl # lsnrctl\nLSNRCTL for Solaris: Version 9.2.0.7.0 – Production on 06-JAN-2012 11:21:02\nCopyright (c) 1991, 2002, Oracle Corporation. All rights reserved.\nWelcome to LSNRCTL, type “help” for information.\nLSNRCTL\u0026gt; start\nStarting /oracle/9.2.0/bin/tnslsnr: please wait…\nld.so.1: tnslsnr: fatal: /oracle/9.2.0/lib/libclntsh.so.9.0: Permission denied\nTNS-12547: TNS:lost contact\nTNS-12560: TNS:protocol adapter error\nTNS-00517: Lost contact\nSolaris Error: 32: Broken pipe\nLSNRCTL\u0026gt;\nComment by Brian Repko on 2012-01-06 09:16:32 +0530 Had the same issue – thanks a ton!!\nComment by gopal on 2012-01-21 09:39:09 +0530 chmod 6751 oracle\n$ ls -ltr oracle\n-rwsr-s–x 1 oracle dba 136803483 Mar 16 20:32 oracle\n$\nResolved my issue. Thanks for your support.\nComment by Meiron on 2012-01-21 23:46:11 +0530 great. chmod 6751 helped a lot.\nComment by Vivek on 2012-03-13 11:26:26 +0530 ORA-12547: TNS:lost contact\nexapmple :\n– $ORACLE_HOME\n– chmod r 777 where oracle is installed\n– chown -R oracle_llg uo2 (changing recursively the owner of u02 where oracle is installed )\n– chown -R oracle_llg:oinstall uo2(changing recursively the group of u02 where oracle is installed)\nthis helped me..\nComment by Elliot on 2012-03-17 00:17:28 +0530 Helped me too. Thanks.\nAny idea on what would have caused the permissions to change? I don’t want it to happen again.\nComment by Sidhu on 2012-03-21 14:59:08 +0530 @Elliot\nMay be they were like this since ever. Or it was working fine before you faced this error one day ?\nComment by Amith on 2012-05-15 05:40:53 +0530 Thanks Amardeep. It worked like a charm.\nCheers\nAmith\nComment by Sidhu on 2012-05-17 11:45:18 +0530 Great ! 🙂\nComment by Amit on 2012-06-09 20:36:02 +0530 Thanks Amardeep.. worked for me\nComment by amanpreet on 2012-06-29 12:21:21 +0530 Thanks Amardeep…. worked for me too….\nComment by satya on 2017-08-01 10:20:01 +0530 Thanks Amardeep,\nComment by Rakesh Chouksey on 2017-09-23 16:25:37 +0530 Thanks Amardeep.. it is really helpful blog. 
Thanks mate.\nComment by Sidhu on 2017-11-06 17:20:20 +0530 Cheers !\n","permalink":"https://v2.amardeepsidhu.com/blog/2011/05/18/ora-12547-tns-lost-contact/","summary":"\u003cp\u003eVery simple issue but took some amount of time in troubleshooting so thought about posting it here. May be it proves to be useful for someone.\u003c/p\u003e\n\u003cp\u003eScenario was: Oracle is installed from \u0026ldquo;oracle\u0026rdquo; user and all runs well. There is a new OS user \u0026ldquo;test1\u0026rdquo; that also needs to use sqlplus. So granted the necessary permissions on ORACLE_HOME to test1. Tried to connect sqlplus scott/tiger@DB and yes it works. But while trying sqlplus scott/tiger it throws:\u003c/p\u003e","title":"ORA-12547: TNS:lost contact"},{"content":"Some time back, I was at a client where the customer complained that no one was able to log in to the database. It was Oracle 10.2.0.4 running on HP-Ux. I logged in to the database and checked the wait events:\n[sql]SQL\u0026gt; @wait\nEVENT COUNT(*) ---\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;- \u0026mdash;\u0026mdash;\u0026mdash;- wait for possible quiesce finish 1 Streams AQ: qmn coordinator idle wait 1 Streams AQ: qmn slave idle wait 1 Streams AQ: waiting for time management or cleanup tasks 1 SQL*Net message to client 1 smon timer 1 pmon timer 1 jobq slave wait 4 rdbms ipc message 11 SQL*Net message from client 27 resmgr:become active 322\n11 rows selected.\nSQL\u0026gt;[/sql]\nTanel\u0026rsquo;s snapper showed something like:\n[sql]SQL\u0026gt; @snapper ash 5 1 all Sampling with interval 5 seconds, 1 times\u0026hellip;\n-- Session Snapper v3.11 by Tanel Poder @ E2SN ( http://tech.e2sn.com )\n---\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026ndash; Active% | SQL_ID | EVENT | WAIT_CLASS ---\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026ndash; 26322% | 4ffu7nb93c2c9 | resmgr:become active | Scheduler 1900% | 2wn958z7gzh57 | resmgr:become active | Scheduler 1400% | 9d9bg2r538nd2 | resmgr:become active | Scheduler 600% | 4d3k70q6y344k | resmgr:become active | Scheduler 500% | d6vwqbw6r2ffk | resmgr:become active | Scheduler 500% | 4tsrz92mmshbw | resmgr:become active | Scheduler 200% | 37td1bbvc1a69 | resmgr:become active | Scheduler 100% | ftdjfxws0s8q9 | resmgr:become active | Scheduler 100% | 41apc1bjqrfbv | resmgr:become active | Scheduler 100% | af9d8aqtkvn02 | resmgr:become active | Scheduler\n-- End of ASH snap 1, end=2011-02-10 11:06:40, seconds=5, samples_taken=23\nPL/SQL procedure successfully completed.\nSQL\u0026gt;[/sql]\nIf we check the description of the wait event, it says:\nThe session is waiting for a resource manager active session slot. This event occurs when the resource manager is enabled and the number of active sessions in the session\u0026rsquo;s current consumer group exceeds the current resource plan\u0026rsquo;s active session limit for the consumer group. 
To reduce the occurrence of this wait event, increase the active session limit for the session\u0026rsquo;s current consumer group.\nBut if we check the resource_limit settings:\n[sql]SQL\u0026gt; show parameter resource\nNAME_COL_PLUS_SHOW_PARAM TYPE VALUE_COL_PLUS_SHOW_PARAM ---\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026ndash; \u0026mdash;\u0026mdash;\u0026mdash;\u0026ndash; \u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026ndash; resource_limit boolean FALSE resource_manager_plan string\nSQL\u0026gt;[/sql]\nWhat ? Resource manager is not enabled. But why all the sessions are waiting for resmgr:become active and nobody is able to login ?\nA bit of googling lead me to this page from where I got the clue.\nGenerally, this wait situation occurs when you execute certain EMCA operations such as the operation for creating the EM repository. As a result of these operations, the systems implicity switches to QUIESCE mode. Therefore, all database connections (except users SYS and SYSTEM) must wait for \u0026ldquo;resmgr:become active\u0026rdquo;. In this case, refer to Note 1044758 and execute the following command if necessary:\nALTER SYSTEM UNQUIESCE;\nI asked around in the DBA team and one of the guys was trying to configure EM for the database due to which system switched tto QUIESCE mode and all the sessions were waiting on resmgr:become active.\nAfter canceling the operation, the wait event was gone and everything was working normally.\n","permalink":"https://v2.amardeepsidhu.com/blog/2011/03/04/waiting-for-resmgr-become-active-cant-login/","summary":"\u003cp\u003eSome time back, I was at a client where the customer complained that no one was able to log in to the database. It was Oracle 10.2.0.4 running on HP-Ux. I logged in to the database and checked the wait events:\u003c/p\u003e\n\u003cp\u003e[sql]SQL\u0026gt; @wait\u003c/p\u003e\n\u003cp\u003eEVENT COUNT(*)\n---\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;- \u0026mdash;\u0026mdash;\u0026mdash;-\nwait for possible quiesce finish 1\nStreams AQ: qmn coordinator idle wait 1\nStreams AQ: qmn slave idle wait 1\nStreams AQ: waiting for time management or cleanup tasks 1\nSQL*Net message to client 1\nsmon timer 1\npmon timer 1\njobq slave wait 4\nrdbms ipc message 11\nSQL*Net message from client 27\nresmgr:become active 322\u003c/p\u003e","title":"waiting for resmgr:become active – can’t login"},{"content":"Last week I had a chance to upgrade a 9.2.0.7 database to 10.2.0.5. The size of the database was around 800 GB. The major applications connecting to the database were developed in Pro*C and Oracle Forms. The upgrade itself pretty smooth but there were few glitches around that needed to be handled. Just thought about documenting all the issues:\nFew users in the database were assigned the CREATE SESSION privilege through a password protected role (That role was the default role for that user). 10.2.0.5 onwards, password protected roles can’t be set as default roles. The alternate is to either disable the password for the role or assign CREATE SESSION directly to the user, not through a role.\nAfter the upgrade, few procedures became invalid and while compiling started giving ORA-00918: COLUMN AMBIGUOUSLY DEFINED. The issue was bug 2846640 which is fixed in 10.2. 
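A made-up illustration of the pattern (the tables and columns here are hypothetical, not from the actual application): an ANSI join where a column present in both tables is selected without a prefix.\nselect dept_id, ename from emp e join dept d on e.dept_id = d.dept_id; -- raises ORA-00918 once the fix for the bug is in place\rselect e.dept_id, ename from emp e join dept d on e.dept_id = d.dept_id; -- qualified version works in both releases\n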
Actually, in few of the queries using ANSI syntax, the developer didn’t qualify the column names with table names. It worked fine in 9i but due to the bug getting fixed in 10g, it started giving ORA-00918. The simple solution is to prefix the column name with the table name.\nFew of the application schema owner users complained that they were not able to modify the procedures/packages in their own schemas. The schemas were not assigned CREATE PROCEDURE privilege but as per documentation, they should be able to modify the existing procedures/packages owned by them. This again is a documentation bug. It worked fine in 9i but in 10g onwards you need to have either a CREATE PROCEDURE or ALTER ANY PROCEDURE privilege (a risky one) to be able to edit the PL/SQL units in your own schema.\nThese were few of the issues encountered, rest of the upgrade was super smooth !\nHappy upgrading !\n","permalink":"https://v2.amardeepsidhu.com/blog/2011/01/29/issues-in-upgrading-from-9i-to-10g/","summary":"\u003cp\u003eLast week I had a chance to upgrade a 9.2.0.7 database to 10.2.0.5. The size of the database was around 800 GB. The major applications connecting to the database were developed in Pro*C and Oracle Forms. The upgrade itself pretty smooth but there were few glitches around that needed to be handled. Just thought about documenting all the issues:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e\n\u003cp\u003eFew users in the database were assigned the CREATE SESSION privilege through a password protected role (That role was the default role for that user). 10.2.0.5 onwards, password protected roles can’t be set as default roles. The alternate is to either disable the password for the role or assign CREATE SESSION directly to the user, not through a role.\u003c/p\u003e","title":"Issues in upgrading from 9i to 10g"},{"content":"Another let-us-help-Google post ;).\nWhile running impdp import in 11g, you hit:\n[sql]ORA-39083: Object type INDEX failed to create with error: ORA-14102: only one LOGGING or NOLOGGING clause may be specified[/sql]\nIt is related to bug 9015411 where DBMS_METADATA.GET_DDL creates incorrect create index statement by dumping both LOGGING and NO LOGGING clauses. Due to this the CREATE INDEX statement, while running impdp fails with the above error. It applies to 11.2.0.1 (Metalink doc id 1066635.1)\nFix is to install the patch, if it is available for your platform. Another workaround is given in this OTN thread i.e. strip the create index statement of storage related information by using TRANSFORM=SEGMENT_ATTRIBUTES:N:INDEX \u0026amp; TRANSFORM=SEGMENT_ATTRIBUTES:N:CONSTRAINT\n","permalink":"https://v2.amardeepsidhu.com/blog/2010/12/04/ora-39083-object-type-index-failed-to-create-with-error/","summary":"\u003cp\u003eAnother let-us-help-Google post ;).\u003c/p\u003e\n\u003cp\u003eWhile running impdp import in 11g, you hit:\u003c/p\u003e\n\u003cp\u003e[sql]ORA-39083: Object type INDEX failed to create with error:\nORA-14102: only one LOGGING or NOLOGGING clause may be specified[/sql]\u003c/p\u003e\n\u003cp\u003eIt is related to bug 9015411 where DBMS_METADATA.GET_DDL creates incorrect create index statement by dumping both LOGGING and NO LOGGING clauses. Due to this the CREATE INDEX statement, while running impdp fails with the above error. 
It applies to 11.2.0.1 (Metalink doc id 1066635.1)\u003c/p\u003e","title":"ORA-39083: Object type INDEX failed to create with error"},{"content":"Yesterday, a friend of mine asked me about an error he was getting while running a schema level export in Oracle 8i:\n[sql]exp system/manager@DB owner=ABC file=ABC.dmp log =ABC.log\nExport: Release 8.1.7.1.0 - Production on Fri Nov 12 04:21:05 2010\n(c) Copyright 2000 Oracle Corporation. All rights reserved.\nEXP-00056: ORACLE error 28002 encountered ORA-28002: the password will expire within 11 days EXP-00056: ORACLE error 24309 encountered ORA-24309: already connected to a server EXP-00000: Export terminated unsuccessfully[/sql]\nI googled and metalink\u0026rsquo;ed (oops\u0026hellip;is there anything like that ? ;) ) a bit and found that it was bug 1654141 where user accounts in grace period cannot perform export. It is fixed in Oracle 9i (version 9.0.0.0 as per metalink). The obvious work around is to change the password and then try again. Thought about posting it here so that Google can give little better results if someone in trouble comes searching for it ;).\nComments Comment by himanshu yadav on 2010-11-14 09:25:13 +0530 wonderful man !\nbut what is the criterian for the error i mean how many days before it expire .. i guess it has sometihng to do password policy ? is it so ?\nComment by Sidhu on 2010-11-14 09:32:52 +0530 Yes, password policy.\nWhen a user’s grace period (the password is expired but the DBA has set a grace period of x number of days, so user would be able to login for those many days, he will get a message about the expired password, though) is going on, he cannot perform the export.\n","permalink":"https://v2.amardeepsidhu.com/blog/2010/11/14/exp-00056-oracle-error-28002-encountered/","summary":"\u003cp\u003eYesterday, a friend of mine asked me about an error he was getting while running a schema level export in Oracle 8i:\u003c/p\u003e\n\u003cp\u003e[sql]exp system/manager@DB owner=ABC file=ABC.dmp log =ABC.log\u003c/p\u003e\n\u003cp\u003eExport: Release 8.1.7.1.0 - Production on Fri Nov 12 04:21:05 2010\u003c/p\u003e\n\u003cp\u003e(c) Copyright 2000 Oracle Corporation. All rights reserved.\u003c/p\u003e\n\u003cp\u003eEXP-00056: ORACLE error 28002 encountered\nORA-28002: the password will expire within 11 days\nEXP-00056: ORACLE error 24309 encountered\nORA-24309: already connected to a server\nEXP-00000: Export terminated unsuccessfully[/sql]\u003c/p\u003e","title":"EXP-00056: ORACLE error 28002 encountered"},{"content":"It has been almost an year since i posted something (useful) here. The last post was also a crappy one :) . Well, it all boils down to sheer laziness ;) . Now, i think the time has come to be regular again. Here i am getting a good start talking about Sangam10, i attended last week. It was a great opportunity to meet so many fellow Oracle professionals and most awesomely to meet \u0026amp; see Jonathan Lewis talk about Performance \u0026amp; Tuning. As expected the whole experience was amazing. It was a 2 day event where Jonathan was delivering 2 half day seminars on SQL Tuning and there were other break out sessions as well. We had planned to go a day in advance so me, Aman, Ankit \u0026amp; Neeraj reached Hyderabad on 2nd Sep.\nJonathan\u0026rsquo;s presentations were simply amazing. His knowledge about how things work (and why they work this way not that) is simply awesome. He is an inspiration for newbies like us and there was so much to learn from him. 
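For reference, the TRANSFORM workaround from the ORA-39083 post above would look something like this on the impdp command line (schema, directory and dump file names are made up for illustration):

[sql]impdp system schemas=ABC directory=DATA_PUMP_DIR dumpfile=abc.dmp \
      transform=segment_attributes:n:index \
      transform=segment_attributes:n:constraint[/sql]

Keep in mind that stripping segment attributes also drops the tablespace and storage clauses for those objects, so the indexes and constraints get created with the importing schema's defaults.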
A few of the quick tips that i picked up from him:\nDon\u0026rsquo;t believe what you read or hear. Make small test cases to test and confirm how things work \u0026amp; how they don\u0026rsquo;t. He said that he has around 2000 test cases on his laptop, some of them ready to be fired on Oracle database 12g ;) .\nAlways document your findings. At a later date you won\u0026rsquo;t be able to remember that something you are stuck in is something you have already faced and solved. If you document things properly, you will always remember a bit of it and can search for it in a minute.\nAlso i got to meet \u0026amp; attend a presentation by my good friend Francisco Munoz Alvarez. I have been in touch with him for more than 2 years but this was the first time i was meeting him in person. His presentation on how to become a good DBA was really awesome. Enjoyed every bit of it.\nTwo of my colleagues, Vivek Sharma and Rahul Dutta, were also presenting, so i got a chance to see their presentations too. Vivek talked about developing scalable applications and Rahul\u0026rsquo;s presentation was about developing an EBS reporting solution using Oracle Streams.\nI attended some part of Mark Rittman\u0026rsquo;s session also. I am not much into data warehousing but Mark is such a respected name that i wanted to be present in his session ;) .\nI also met and attended one of the presentations of Iggy Fernandez. He talked about 52 weeks in the life of a database. I couldn\u0026rsquo;t attend his other presentation on reading execution plans as Vivek was presenting in the same time slot.\nOverall, it was an amazing experience and i am already looking forward to attending Sangam (or whatever it would be called ;) ) 11 !\nRead Aman\u0026rsquo;s post about Sangam 10.\nComments Comment by Vaibhav on 2010-09-12 17:23:26 +0530 Good event man.. not like the cloud computing one we went to from Microsoft 🙂\nComment by Sidhu on 2010-09-12 21:29:38 +0530 haha…it was really good one.\n","permalink":"https://v2.amardeepsidhu.com/blog/2010/09/12/sangam-10/","summary":"\u003cp\u003eIt has been almost a year since i posted something (useful) here. The last post was also a crappy one :) . Well, it all boils down to sheer laziness ;) . Now, i think the time has come to be regular again. Here i am getting a good start talking about \u003ca href=\"http://www.aioug.org/sangam10.php\"\u003eSangam10\u003c/a\u003e, which i attended last week. It was a great opportunity to meet so many fellow Oracle professionals and most awesomely to meet \u0026amp; see \u003ca href=\"http://jonathanlewis.wordpress.com/\"\u003eJonathan Lewis\u003c/a\u003e talk about Performance \u0026amp; Tuning. As expected the whole experience was amazing. It was a 2 day event where Jonathan was delivering 2 half day seminars on SQL Tuning and there were other break out sessions as well. We had planned to go a day in advance so me, \u003ca href=\"http://blog.aristadba.com/\"\u003eAman\u003c/a\u003e, \u003ca href=\"http://ankitkgoel.wordpress.com/\"\u003eAnkit\u003c/a\u003e \u0026amp; \u003ca href=\"http://neerajbhatia.wordpress.com\"\u003eNeeraj\u003c/a\u003e reached Hyderabad on 2nd Sep.\u003c/p\u003e","title":"Sangam 10"},{"content":"2-3 days ago, I came across some code, intended to make a delete faster. Just have a look ;)\n[sql].\n.\n.\nLOOP\nSELECT COUNT (1)\nINTO v_cnt\nFROM table1\nWHERE ROWNUM \u0026lt; 2;\n\nIF v_cnt = 0\nTHEN\nEXIT;\nEND IF;\n\nDELETE FROM table1\nWHERE ROWNUM \u0026lt; 1000;\n\nCOMMIT;\nv_cnt := 0;\nEND LOOP;\n.\n. 
.[/sql]\nComments Comment by Vaibhav on 2009-10-10 07:58:17 +0530 What is the joke here?\nI am DB ignorant of the highest level 🙂\nComment by Sidhu on 2009-12-21 21:38:36 +0530 Hey man…\nSorry for the prompt reply 😛 .\nThe Joke is that he is committing inside the LOOP…which is a disaster in a database.\nComment by maclean on 2010-06-09 22:07:19 +0530 how many rows stored in this table?non-commit loop may cause large undo tablespace or ora-1555, i think we’d better choose a good recurring number and then commit.\n","permalink":"https://v2.amardeepsidhu.com/blog/2009/10/08/delete-delete-faster-faster/","summary":"\u003cp\u003e2-3 days ago, I came across a code, intended to make delete faster. Just have a look ;)\u003c/p\u003e\n\u003cp\u003e[sql].\n.\n.\nLOOP\nSELECT COUNT (1)\nINTO v_cnt\nFROM table1\nWHERE ROWNUM \u0026lt; 2;\u003c/p\u003e\n\u003cp\u003eIF v_cnt = 0\nTHEN\nEXIT;\nEND IF;\u003c/p\u003e\n\u003cp\u003eDELETE FROM table1\nWHERE ROWNUM \u0026lt; 1000;\u003c/p\u003e\n\u003cp\u003eCOMMIT;\nv_cnt := 0;\nEND LOOP;\n.\n.\n.[/sql]\u003c/p\u003e\n\u003ch2 id=\"comments\"\u003eComments\u003c/h2\u003e\n\u003ch3 id=\"comment-by-vaibhav-on-2009-10-10-075817-0530\"\u003eComment by Vaibhav on 2009-10-10 07:58:17 +0530\u003c/h3\u003e\n\u003cp\u003eWhat is the joke here?\u003c/p\u003e","title":"Delete Delete Faster Faster ;)"},{"content":"Yesterday, one of my colleague asked that if he traced a wrap\u0026rsquo;ed PL/SQL procedure, would the SQL statements show up in the trace ? Very simple thing but at that moment i got, sort of into doubt. So i ran a simple test and yes they do show up ;)\n[sql]CREATE OR REPLACE PROCEDURE wrap1 AS v_today DATE; BEGIN SELECT SYSDATE INTO v_today FROM DUAL; END; /\nC:\\\u0026gt;wrap iname=wrap1.sql\nPL/SQL Wrapper: Release 10.2.0.1.0- Production on Fri Sep 18 21:07:49 2009\nCopyright (c) 1993, 2004, Oracle. 
All rights reserved.\nProcessing wrap1.sql to wrap1.plb\nC:\\\u0026gt;more wrap1.plb CREATE OR REPLACE PROCEDURE wrap1 wrapped a000000 b2 abcd abcd abcd abcd abcd abcd abcd abcd abcd abcd abcd abcd abcd abcd abcd 7 65 96 uu93le0yJCtORZedJgcWflZ1Jacwg5nnm7+fMr2ywFwWlvJWfF3AdIsJaWnnbSgIv1JfNsJx doRxO75ucVUAc2fTr+Ii4v+onq/3r8q9yOOsrLAP4yRZW6LbYoWa6q9sd7PG7Nk9cpXs+6Y5 tQR4\n/[/sql]\nAnd here is the output from the trace file, showing the SQL statement:\n[sql]BEGIN wrap1; END;\ncall count cpu elapsed disk query current rows ---\u0026mdash;- \u0026mdash;\u0026mdash; \u0026mdash;\u0026mdash;\u0026ndash; \u0026mdash;\u0026mdash;\u0026mdash;- \u0026mdash;\u0026mdash;\u0026mdash;- \u0026mdash;\u0026mdash;\u0026mdash;- \u0026mdash;\u0026mdash;\u0026mdash;- \u0026mdash;\u0026mdash;\u0026mdash;- Parse 1 0.00 0.00 0 0 0 0 Execute 1 0.03 0.01 0 0 0 1 Fetch 0 0.00 0.00 0 0 0 0 ---\u0026mdash;- \u0026mdash;\u0026mdash; \u0026mdash;\u0026mdash;\u0026ndash; \u0026mdash;\u0026mdash;\u0026mdash;- \u0026mdash;\u0026mdash;\u0026mdash;- \u0026mdash;\u0026mdash;\u0026mdash;- \u0026mdash;\u0026mdash;\u0026mdash;- \u0026mdash;\u0026mdash;\u0026mdash;- total 2 0.03 0.02 0 0 0 1\nMisses in library cache during parse: 1 Optimizer mode: ALL_ROWS Parsing user id: 54 \\\\\\******************************************************************************\nSELECT SYSDATE FROM DUAL[/sql]\nComments Comment by Martin Berger on 2009-09-19 00:37:33 +0530 Amardeep,\nas you are talking about wrap’ed code and have an example with version 10g you might want to have a look at Antons http://technology.amis.nl/blog/4753/unwrapping-10g-wrapped-plsql\nIt’s not a walkthrough, but a good start for any one with a basic IT education.\nMartin\nComment by Amardeep Sidhu on 2009-09-19 21:44:21 +0530 Martin\nThanks for the link. Coincidently, yesterday i was going through the presentation of Pete mentioned in the link 🙂 .\nAmardeep\n","permalink":"https://v2.amardeepsidhu.com/blog/2009/09/18/wraped-code-and-sql-trace/","summary":"\u003cp\u003eYesterday, one of my colleague asked that if he traced a wrap\u0026rsquo;ed PL/SQL procedure, would the SQL statements show up in the trace ? Very simple thing but at that moment i got, sort of into doubt. 
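By the way, the tkprof output in the wrap'ed-code post above doesn't show how the trace was switched on; one common way to do it in 10g (just a sketch, the identifier and trace file name here are arbitrary) is:

[sql]alter session set tracefile_identifier = 'wrap_test';
exec dbms_monitor.session_trace_enable(waits => true, binds => false);
exec wrap1;
exec dbms_monitor.session_trace_disable;
-- then format the file sitting in user_dump_dest, e.g.
-- tkprof orcl_ora_1234_wrap_test.trc wrap_test.txt sys=no[/sql]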
So i ran a simple test and yes they do show up ;)\u003c/p\u003e\n\u003cp\u003e[sql]CREATE OR REPLACE PROCEDURE wrap1\nAS\nv_today DATE;\nBEGIN\nSELECT SYSDATE\nINTO v_today\nFROM DUAL;\nEND;\n/\u003c/p\u003e\n\u003cp\u003eC:\\\u0026gt;wrap iname=wrap1.sql\u003c/p\u003e\n\u003cp\u003ePL/SQL Wrapper: Release 10.2.0.1.0- Production on Fri Sep 18 21:07:49 2009\u003c/p\u003e","title":"wrap’ed code and SQL trace"},{"content":"Today i was refreshing a MVIEW (Oracle 9.2.0.1.0 on Windows 2000) and instead of writing\n[sql]exec dbms_mview.refresh(\u0026lsquo;SCHEMA1.MVIEW1\u0026rsquo;,\u0026lsquo;C\u0026rsquo;); [/sql]\ni wrote\n[sql]exec dbms_mview.refresh(\u0026lsquo;SCHEMA1\u0026rsquo;,\u0026lsquo;MVIEW1\u0026rsquo;,\u0026lsquo;C\u0026rsquo;);[/sql]\nAnd it gave me:\nERROR at line 1:\nORA-30019: Illegal rollback Segment operation in Automatic Undo mode\nORA-06512: at \u0026ldquo;SYS.DBMS_SNAPSHOT\u0026rdquo;, line 794\nORA-06512: at \u0026ldquo;SYS.DBMS_SNAPSHOT\u0026rdquo;, line 851\nORA-06512: at \u0026ldquo;SYS.DBMS_SNAPSHOT\u0026rdquo;, line 832\nORA-06512: at line 1\n[sql]ERROR at line 1: ORA-30019: Illegal rollback Segment operation in Automatic Undo mode ORA-06512: at \u0026ldquo;SYS.DBMS_SNAPSHOT\u0026rdquo;, line 794 ORA-06512: at \u0026ldquo;SYS.DBMS_SNAPSHOT\u0026rdquo;, line 851 ORA-06512: at \u0026ldquo;SYS.DBMS_SNAPSHOT\u0026rdquo;, line 832 ORA-06512: at line 1[/sql]\nwhich has nothing to do with the real error. Take care ! ","permalink":"https://v2.amardeepsidhu.com/blog/2009/07/09/ora-30019-illegal-rollback-segment-operation-in-automatic-undo-mode/","summary":"\u003cp\u003eToday i was refreshing a MVIEW (Oracle 9.2.0.1.0 on Windows 2000) and instead of writing\u003c/p\u003e\n\u003cp\u003e[sql]exec dbms_mview.refresh(\u0026lsquo;SCHEMA1.MVIEW1\u0026rsquo;,\u0026lsquo;C\u0026rsquo;); [/sql]\u003c/p\u003e\n\u003cp\u003ei wrote\u003c/p\u003e\n\u003cp\u003e[sql]exec dbms_mview.refresh(\u0026lsquo;SCHEMA1\u0026rsquo;,\u0026lsquo;MVIEW1\u0026rsquo;,\u0026lsquo;C\u0026rsquo;);[/sql]\u003c/p\u003e\n\u003cp\u003eAnd it gave me:\u003c/p\u003e\n\u003cp\u003eERROR at line 1:\u003c/p\u003e\n\u003cp\u003eORA-30019: Illegal rollback Segment operation in Automatic Undo mode\u003c/p\u003e\n\u003cp\u003eORA-06512: at \u0026ldquo;SYS.DBMS_SNAPSHOT\u0026rdquo;, line 794\u003c/p\u003e\n\u003cp\u003eORA-06512: at \u0026ldquo;SYS.DBMS_SNAPSHOT\u0026rdquo;, line 851\u003c/p\u003e\n\u003cp\u003eORA-06512: at \u0026ldquo;SYS.DBMS_SNAPSHOT\u0026rdquo;, line 832\u003c/p\u003e\n\u003cp\u003eORA-06512: at line 1\u003c/p\u003e\n\u003cp\u003e[sql]ERROR at line 1:\nORA-30019: Illegal rollback Segment operation in Automatic Undo mode\nORA-06512: at \u0026ldquo;SYS.DBMS_SNAPSHOT\u0026rdquo;, line 794\nORA-06512: at \u0026ldquo;SYS.DBMS_SNAPSHOT\u0026rdquo;, line 851\nORA-06512: at \u0026ldquo;SYS.DBMS_SNAPSHOT\u0026rdquo;, line 832\nORA-06512: at line 1[/sql]\u003c/p\u003e","title":"ORA-30019: Illegal rollback Segment operation in Automatic Undo mode"},{"content":"Today i was gathering stats on one schema (10.2.0.3 on AIX 5.3, 64 bit) and it said:\n[sql]ERROR at line 1: ORA-03001: unimplemented feature ORA-06512: at \u0026ldquo;SYS.DBMS_STATS\u0026rdquo;, line 13336 ORA-06512: at \u0026ldquo;SYS.DBMS_STATS\u0026rdquo;, line 13682 ORA-06512: at \u0026ldquo;SYS.DBMS_STATS\u0026rdquo;, line 13760 ORA-06512: at \u0026ldquo;SYS.DBMS_STATS\u0026rdquo;, line 13719 ORA-06512: at line 1[/sql]\nLittle bit of searching on Metalink revealed that i had hit Bug no 6011068 which points to the base Bug 576661 which is related to 
function based indexes. There were 2 function based indexes in the schema. Before talking about the workaround let us re-produce the test case. Here i am doing it on my laptop (10.2.0.1 on Windows XP 32 bit)\n[sql]SCOTT@TESTING \u0026gt;create table test1 as select * from emp;\nTable created.\nSCOTT@TESTING \u0026gt;create index ind1 on test1(comm,1);\nIndex created.\nSCOTT@TESTING \u0026gt;\nSYSTEM@TESTING\u0026gt;exec dbms_stats.gather_schema_stats(\u0026lsquo;SCOTT\u0026rsquo;); BEGIN dbms_stats.gather_schema_stats(\u0026lsquo;SCOTT\u0026rsquo;); END;\n* ERROR at line 1: ORA-03001: unimplemented feature ORA-06512: at \u0026ldquo;SYS.DBMS_STATS\u0026rdquo;, line 13210 ORA-06512: at \u0026ldquo;SYS.DBMS_STATS\u0026rdquo;, line 13556 ORA-06512: at \u0026ldquo;SYS.DBMS_STATS\u0026rdquo;, line 13634 ORA-06512: at \u0026ldquo;SYS.DBMS_STATS\u0026rdquo;, line 13593 ORA-06512: at line 1\nSYSTEM@TESTING\u0026gt;[/sql]\nAs suggested in the metalink article let us set event 3001 before running the GATHER_SCHEMA_STATS command.\n[sql]SYSTEM@TESTING \u0026gt;alter session set tracefile_identifier=stats1;\nSession altered.\nSYSTEM@TESTING \u0026gt;alter session set events \u0026lsquo;3001 trace name ERRORSTACK level 3\u0026rsquo;;\nSession altered.\nSYSTEM@TESTING \u0026gt;exec dbms_stats.gather_schema_stats(\u0026lsquo;SCOTT\u0026rsquo;); BEGIN dbms_stats.gather_schema_stats(\u0026lsquo;SCOTT\u0026rsquo;); END;\n* ERROR at line 1: ORA-03001: unimplemented feature ORA-06512: at \u0026ldquo;SYS.DBMS_STATS\u0026rdquo;, line 13210 ORA-06512: at \u0026ldquo;SYS.DBMS_STATS\u0026rdquo;, line 13556 ORA-06512: at \u0026ldquo;SYS.DBMS_STATS\u0026rdquo;, line 13634 ORA-06512: at \u0026ldquo;SYS.DBMS_STATS\u0026rdquo;, line 13593 ORA-06512: at line 1\nSYSTEM@TESTING \u0026gt;[/sql]\nPart of the trace file reads:\n[sql]ksedmp: internal or fatal error ORA-03001: unimplemented feature Current SQL statement for this session: select /*+ no_parallel_index(t,IND1) dbms_stats cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring no_expand index(t,\u0026ldquo;IND1\u0026rdquo;) */ count(*) as nrw,count(distinct sys_op_lbid(51966,\u0026lsquo;L\u0026rsquo;,t.rowid)) as nlb,count(distinct hextoraw(sys_op_descend(\u0026ldquo;COMM\u0026rdquo;)||sys_op_descend(1))) as ndk,sys_op_countchg(substrb(t.rowid,1,15),1) as clf from \u0026ldquo;SCOTT\u0026rdquo;.\u0026ldquo;TEST1\u0026rdquo; t where \u0026ldquo;COMM\u0026rdquo; is not null or 1 is not null ----- PL/SQL Call Stack \u0026mdash;\u0026ndash; object line object handle number name 65AA77D4 9406 package body SYS.DBMS_STATS 65AA77D4 9919 package body SYS.DBMS_STATS[/sql]\nSo the problem is being caused by the index ind1 we created on (comm,1). This bug has been fixed in 10.2.0.5 and 11.1.0.7. 
The available workaround for other versions is to create index using 1 as character instead of number.\n[sql]SCOTT@TESTING \u0026gt;drop index ind1;\nIndex dropped.\nSCOTT@TESTING \u0026gt;create index ind1 on test1(comm,\u0026lsquo;1\u0026rsquo;);\nIndex created.\nSCOTT@TESTING \u0026gt;[/sql]\nAnd now running GATHER_SCHEMA_STATS:\n[sql]SYSTEM@TESTING \u0026gt;exec dbms_stats.gather_schema_stats(\u0026lsquo;SCOTT\u0026rsquo;);\nPL/SQL procedure successfully completed.\nSYSTEM@TESTING \u0026gt;[/sql]\nComments Comment by sarayu on 2009-07-09 02:29:50 +0530 Why do you ever want to create index as test1(comm,1);\nAny special reasons??\nComment by Coskan on 2009-07-09 06:42:00 +0530 My question is what is this function based index actually doing?\nComment by Sidhu on 2009-07-09 20:53:43 +0530 @ Sarayu \u0026amp; Coskan\nI didn’t create this index ;). But to best of my knowledge it has been created to index NULL entries. Something like this:\nComment by Vaibhav Garg on 2009-07-09 21:34:14 +0530 Wow very neat and clean post.\nDidn’t understand much but it looks great 🙂\nComment by Sidhu on 2009-07-09 22:29:42 +0530 Thanks man ! We never cross each other in this area of topics for technical blogs 😉\n","permalink":"https://v2.amardeepsidhu.com/blog/2009/07/08/gather_schema_stats-ora-03001-unimplemented-feature/","summary":"\u003cp\u003eToday i was gathering stats on one schema (10.2.0.3 on AIX 5.3, 64 bit) and it said:\u003c/p\u003e\n\u003cp\u003e[sql]ERROR at line 1:\nORA-03001: unimplemented feature\nORA-06512: at \u0026ldquo;SYS.DBMS_STATS\u0026rdquo;, line 13336\nORA-06512: at \u0026ldquo;SYS.DBMS_STATS\u0026rdquo;, line 13682\nORA-06512: at \u0026ldquo;SYS.DBMS_STATS\u0026rdquo;, line 13760\nORA-06512: at \u0026ldquo;SYS.DBMS_STATS\u0026rdquo;, line 13719\nORA-06512: at line 1[/sql]\u003c/p\u003e\n\u003cp\u003eLittle bit of searching on Metalink revealed that i had hit Bug no 6011068 which points to the base \u003ca href=\"https://metalink2.oracle.com/metalink/plsql/f?p=130:14:5335018637904207860::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,559389.1,1,1,1,helvetica\"\u003eBug 576661\u003c/a\u003e which is related to function based indexes. There were 2 function based indexes in the schema. Before talking about the workaround let us re-produce the test case. Here i am doing it on my laptop (10.2.0.1 on Windows XP 32 bit)\u003c/p\u003e","title":"GATHER_SCHEMA_STATS \u0026 ORA-03001: unimplemented feature"},{"content":"Today one of my colleague was working on development of a screen in Oracle Forms to give the end user an option to schedule a job using dbms_scheduler. With the hope that i would be able to explain it properly, the whole scenario is like this:\nUser will log in to the application with his username (Lets say USER01) and password (basically every application user is a database user). He is provided with a screen where he can enter details about the job and the code behind the button calls a PL/SQL procedure in the main application schema (lets say APP1) which in turn uses DBMS_SCHEDULER.CREATE_JOB to schedule the new job. The ultimate task of the job is to move data from one table in the first database to a table in the second database using a DB Link. There is a VPD policy applied on all the application users to restrict the view of data. Policy function uses SYS_CONTEXT to fetch some information about the logged in user. The main application user APP1 is exempted from policy and can see the whole data. 
Things seem to work fine till the schedule part. But when the job runs it hits ORA-02070: database does not support operator SYS_CONTEXT in this context as SYS_CONTEXT and DB link doesn\u0026rsquo;t go together.\nI did a bit of troubleshooting and came to know that the job gets created with JOB_CREATOR (a field in DBA_SCHEDULER_JOBS) as the user who is logged in (ie USER001). Now when the job runs from USER001, there is a VPD policy which is going to append a where clause to the query and there is a DB link being used, hence ORA-02070.\nSo the way out would be to schedule and run the job from some user that has no VPD policy applied to it. The best choice would obviously be the main application user; APP1 but as the user logs in with his own username so the job would always be created with JOB_CREATOR as USER001. After a bit of thought provoking an idea hit me:\nCreate a table in the APP1 schema. Now when the user schedules the job, insert the values of the parameters required to schedule the job in the table. Schedule one master job in APP1 schema which would read this table and in turn call DBMS_SCHEDULER.CREATE_JOB to schedule the job required by the user. Now as there is no policy applied on the APP1 database user so the job is not going to hit ORA-02070. The frequency of the master job can be set as per the requirements. To identify which entries in the table have been processed either keep a flag which can be updated or delete the record from the table after scheduling.\nThat is how it clicked in my mind at that time. Suggestions about any other better (or worse ;) ) methods are welcome :)\nPS: About the title: Nothing really was coming into my mind so i picked up the all three words and titled it DBMS_SCHEDULER, DBMS_RLS and SYS_CONTEXT :)\n","permalink":"https://v2.amardeepsidhu.com/blog/2009/06/19/dbms_scheduler-dbms_rls-and-sys_context/","summary":"\u003cp\u003eToday one of my colleague was working on development of a screen in Oracle Forms to give the end user an option to schedule a job using dbms_scheduler. With the hope that i would be able to explain it properly, the whole scenario is like this:\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003eUser will log in to the application with his username (Lets say USER01) and password (basically every application user is a database user).\u003c/li\u003e\n\u003cli\u003eHe is provided with a screen where he can enter details about the job and the code behind the button calls a PL/SQL procedure in the main application schema (lets say APP1) which in turn uses DBMS_SCHEDULER.CREATE_JOB to schedule the new job.\u003c/li\u003e\n\u003cli\u003eThe ultimate task of the job is to move data from one table in the first database to a table in the second database using a DB Link.\u003c/li\u003e\n\u003cli\u003eThere is a VPD policy applied on all the application users to restrict the view of data. Policy function uses SYS_CONTEXT to fetch some information about the logged in user. The main application user APP1 is exempted from policy and can see the whole data.\u003c/li\u003e\n\u003c/ol\u003e\n\u003cp\u003eThings seem to work fine till the schedule part. But when the job runs it hits \u003cem\u003e\u003cstrong\u003eORA-02070: database does not support operator SYS_CONTEXT in this context\u003c/strong\u003e\u003c/em\u003e as SYS_CONTEXT and DB link doesn\u0026rsquo;t go together.\u003c/p\u003e","title":"DBMS_SCHEDULER, DBMS_RLS and SYS_CONTEXT"},{"content":"Since long time i have almost been writing useless posts only. 
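To make the queue-table idea from the DBMS_SCHEDULER/VPD post above a bit more concrete, here is a rough sketch of what the APP1 side could look like; all names, columns and data types are hypothetical, not the actual implementation:

[sql]-- table where the Forms screen inserts the job parameters
create table app1.job_requests (
  job_name        varchar2(30),
  job_action      varchar2(4000),   -- anonymous PL/SQL block to run
  start_date      timestamp with time zone,
  repeat_interval varchar2(200),
  processed       char(1) default 'N'
);

-- procedure executed by the master job, which is itself scheduled under APP1
create or replace procedure app1.process_job_requests as
begin
  for r in (select j.rowid rid, j.* from app1.job_requests j where j.processed = 'N') loop
    dbms_scheduler.create_job(
      job_name        => r.job_name,
      job_type        => 'PLSQL_BLOCK',
      job_action      => r.job_action,
      start_date      => r.start_date,
      repeat_interval => r.repeat_interval,
      enabled         => true);
    -- mark the request as done so it is not picked up again
    update app1.job_requests set processed = 'Y' where rowid = r.rid;
  end loop;
  commit;
end;
/[/sql]

Because the master job executes as APP1, every job it creates is owned by APP1 as well, so the VPD policy (and with it the SYS_CONTEXT/DB link clash) never comes into the picture.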
Now, i guess my blog doesn\u0026rsquo;t even look like an Oracle blog. So thought about posting something related to Oracle ;)\nDay before yesterday a colleague at my workplace asked that she was running an SQL script (which contained a simple DBMS_MVIEW.REFRESH() statement to refresh an MVIEW), it ran successfully but after completion re-ran the last command run in the session. I was also puzzled and checked the SQL script but it contained simple DBMS_MVIEW.REFRESH() statement. Next try revealed that the script actually had a / (slash) in the second line (with no semi-colon at the end of the first line). Something like this (I used dbms_stats instead of dbms_mview):\n[sql]exec dbms_stats.gather_table_stats(user,\u0026lsquo;EMP\u0026rsquo;) /\n[/sql]\nNow this thing, when run in SQL* Plus session can be confusing:\n[sql]SCOTT@TESTING \u0026gt; SCOTT@TESTING \u0026gt;delete emp1; delete emp1 * ERROR at line 1: ORA-00942: table or view does not exist\nSCOTT@TESTING \u0026gt;@c:\\test\nPL/SQL procedure successfully completed.\ndelete emp1 * ERROR at line 1: ORA-00942: table or view does not exist\nSCOTT@TESTING \u0026gt; [/sql]\nThere is no semicolon at the end of the first statement but it executes without that also. So the slash in the 2nd line simply re-executes the last SQL, as expected :) . But it does get confusing !\nComments Comment by Surachart Opun on 2009-06-14 19:09:02 +0530 Thank You… for your idea. That’s good .\n-\u0026gt; slash in a SQL script make so bad or sqlplus 😉\nSQL\u0026gt; delete emp1;\ndelete emp1\n*\nERROR at line 1:\nORA-00942: table or view does not exist\nSQL\u0026gt; exec dbms_stats.gather_table_stats(user,’A’);\nPL/SQL procedure successfully completed.\nSQL\u0026gt; list\n1* delete emp1\nSQL\u0026gt;\nSQL\u0026gt;\nSQL\u0026gt; begin\n2 dbms_stats.gather_table_stats(user,’A’);\n3 end;\n4 /\nPL/SQL procedure successfully completed.\nSQL\u0026gt; list\n1 begin\n2 dbms_stats.gather_table_stats(user,’A’);\n3* end;\n","permalink":"https://v2.amardeepsidhu.com/blog/2009/06/14/take-care-of-a-slash-in-a-sql-script/","summary":"\u003cp\u003eSince long time i have almost been writing useless posts only. Now, i guess my blog doesn\u0026rsquo;t even look like an Oracle blog. So thought about posting something related to Oracle ;)\u003c/p\u003e\n\u003cp\u003eDay before yesterday a colleague at my workplace asked that she was running an SQL script (which contained a simple DBMS_MVIEW.REFRESH() statement to refresh an MVIEW), it ran successfully but after completion re-ran the last command run in the session. I was also puzzled and checked the SQL script but it contained simple DBMS_MVIEW.REFRESH() statement. Next try revealed that the script actually had a / (slash) in the second line (with no semi-colon at the end of the first line). Something like this (I used dbms_stats instead of dbms_mview):\u003c/p\u003e","title":"Take care of a slash in a SQL script"},{"content":"On the wall just outside my office. Who says there is recession ? ;)\nComments Comment by Dan Norris on 2009-02-23 23:40:04 +0530 Forms and Reports? For what? How old is this picture? 🙂\nComment by Neeraj on 2009-02-23 23:54:03 +0530 cool 😀\nbulk joining leads to bulk firing ….\nComment by Puja on 2009-02-24 09:42:06 +0530 Just hope it is not one of those job scams that they pull on poor unemployed youth!\nComment by Francois on 2009-02-24 11:45:51 +0530 In what country is this ?\nComment by Sidhu on 2009-02-24 18:53:01 +0530 @Neeraj hahhahaa\n@Puja Seriously. 
It looks like that case only.\n@Francois New Delhi, India\nComment by Sidhu on 2009-02-24 19:02:50 +0530 @Dan Clicked it yesterday only.\nYour comment was marked spam by Akismet 🙁\nComment by aman\u0026hellip;. on 2009-02-24 19:55:08 +0530 Hmm must be a desperate employer, on one wall, 3 banners for the same post!\nComment by Sidhu on 2009-03-07 12:33:00 +0530 My Cell camera could capture only three. There were a lots actually 😀\n","permalink":"https://v2.amardeepsidhu.com/blog/2009/02/23/required/","summary":"\u003cp\u003eOn the wall just outside my office. Who says there is recession ? ;)\u003c/p\u003e\n\u003cp\u003e\u003cimg alt=\"required\" loading=\"lazy\" src=\"/blog/wp-content/uploads/2009/02/required.jpg\"\u003e\u003c/p\u003e\n\u003ch2 id=\"comments\"\u003eComments\u003c/h2\u003e\n\u003ch3 id=\"comment-by-dan-norris-on-2009-02-23-234004-0530\"\u003eComment by Dan Norris on 2009-02-23 23:40:04 +0530\u003c/h3\u003e\n\u003cp\u003eForms and Reports? For what? How old is this picture? 🙂\u003c/p\u003e\n\u003ch3 id=\"comment-by-neeraj-on-2009-02-23-235403-0530\"\u003eComment by Neeraj on 2009-02-23 23:54:03 +0530\u003c/h3\u003e\n\u003cp\u003ecool 😀\u003c/p\u003e\n\u003cp\u003ebulk joining leads to bulk firing ….\u003c/p\u003e\n\u003ch3 id=\"comment-by-puja-on-2009-02-24-094206-0530\"\u003eComment by Puja on 2009-02-24 09:42:06 +0530\u003c/h3\u003e\n\u003cp\u003eJust hope it is not one of those job scams that they pull on poor unemployed youth!\u003c/p\u003e","title":"Required ;)"},{"content":"Got this in a forward mail. Good one.\nForgiving or punishing the terrorists is left to God. But, fixing their appointment with God is our responsibility - Indian Army\nUpdated statement for this in S/W INDUSTRY\u0026hellip;\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\nForgiving or punishing the Developer is left to Manager. But, fixing their appointment with Manager is our responsibility - Tester\nWe all knew that\u0026hellip;\nbut this one is for the finishing touch!!!\nDamn good.\nForgiving or punishing the Manager is left to Client. But, fixing their appointment with Client is our responsibility - Developer\n;)\nComments Comment by aman on 2009-02-22 17:16:11 +0530 hehehe! good that we are dbas;).\nComment by Sidhu on 2009-02-24 18:52:15 +0530 😛\n","permalink":"https://v2.amardeepsidhu.com/blog/2009/02/22/quote-modified-for-sw-industry/","summary":"\u003cp\u003eGot this in a forward mail. 
Good one.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eForgiving or punishing\u003c/strong\u003e \u003cstrong\u003ethe terrorists\u003c/strong\u003e \u003cstrong\u003eis left to God.\u003c/strong\u003e \u003cstrong\u003eBut,\u003c/strong\u003e \u003cstrong\u003efixing their appointment\u003c/strong\u003e \u003cstrong\u003ewith God\u003c/strong\u003e \u003cstrong\u003eis our responsibility\u003c/strong\u003e \u003cstrong\u003e- Indian Army\u003c/strong\u003e\u003c/p\u003e\n\u003cp\u003eUpdated statement for this in S/W INDUSTRY\u0026hellip;\u003c/p\u003e\n\u003cp\u003e.\u003c/p\u003e\n\u003cp\u003e.\u003c/p\u003e\n\u003cp\u003e.\u003c/p\u003e\n\u003cp\u003e.\u003c/p\u003e\n\u003cp\u003e.\u003c/p\u003e\n\u003cp\u003e.\u003c/p\u003e\n\u003cp\u003e.\u003c/p\u003e\n\u003cp\u003e.\u003c/p\u003e\n\u003cp\u003e.\u003c/p\u003e\n\u003cp\u003e.\u003c/p\u003e\n\u003cp\u003e.\u003c/p\u003e\n\u003cp\u003e.\u003c/p\u003e\n\u003cp\u003e.\u003c/p\u003e\n\u003cp\u003e.\u003c/p\u003e\n\u003cp\u003e.\u003c/p\u003e\n\u003cp\u003e.\u003c/p\u003e\n\u003cp\u003e.\u003c/p\u003e\n\u003cp\u003e.\u003c/p\u003e\n\u003cp\u003e.\u003c/p\u003e\n\u003cp\u003e.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eForgiving or punishing\u003c/strong\u003e \u003cstrong\u003ethe Developer\u003c/strong\u003e \u003cstrong\u003eis left to Manager.\u003c/strong\u003e \u003cstrong\u003eBut,\u003c/strong\u003e \u003cstrong\u003efixing their appointment\u003c/strong\u003e \u003cstrong\u003ewith Manager\u003c/strong\u003e \u003cstrong\u003eis our responsibility\u003c/strong\u003e \u003cstrong\u003e- Tester\u003c/strong\u003e\u003c/p\u003e\n\u003cp\u003eWe all knew that\u0026hellip;\u003c/p\u003e\n\u003cp\u003ebut this one is for the finishing touch!!!\u003c/p\u003e\n\u003cp\u003eDamn good.\u003c/p\u003e","title":"Quote modified for S/W industry"},{"content":"One of my friend today asked me about removing Linux partitions \u0026amp; GRUB (from a dual boot system) and return back to windows alone. Removing Linux involves just formatting/removing the partitions. Now to remove GRUB either do fdisk /mbr from a Windows 98 bootable CD or do fixmbr after booting into repair mode with Windows XP CD. But if you have none then to remove GRUB you will need some utility like this one and if you reboot before doing that it might make GRUB unable to boot into Windows. It will get stuck at GRUB\u0026gt; prompt only. So there is an option: to manually boot the OS you want (ie Windows). A quick search gave link to this thread. It involves few commands on the GRUB prompt:\n[sourcecode language=\u0026lsquo;css\u0026rsquo;]grub\u0026gt; rootnoverify (hd0,0) grub\u0026gt; makeactive grub\u0026gt; chainloader +1 grub\u0026gt; boot[/sourcecode]\nIt will load the NTLDR where your Windows is installed in Partition 1 on HDD 1.\nComments Comment by hsidhu on 2008-12-17 05:59:56 +0530 eh pata si mainu.. eh shyad pehli cheej aa jo mainu pata si jo ethe post hoyi aa.. lol\nComment by Sidhu on 2008-12-20 11:56:20 +0530 haha…cool…\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/12/14/manually-booting-an-os-from-grub/","summary":"\u003cp\u003eOne of my friend today asked me about removing Linux partitions \u0026amp; GRUB (from a dual boot system) and return back to windows alone. Removing Linux involves just formatting/removing the partitions. Now to remove GRUB either do \u003cstrong\u003efdisk /mbr\u003c/strong\u003e from a Windows 98 bootable CD or do \u003cstrong\u003efixmbr\u003c/strong\u003e after booting into repair mode with Windows XP CD. 
But if you have none then to remove GRUB you will need some utility like \u003ca href=\"http://www.ambience.sk/fdisk-master-boot-record-windows-linux-lilo-fixmbr.php\"\u003ethis one\u003c/a\u003e and if you reboot before doing that it might make GRUB unable to boot into Windows. It will get stuck at GRUB\u0026gt; prompt only. So there is an option: to manually boot the OS you want (ie Windows). A quick search gave link to \u003ca href=\"http://www.ntcompatible.com/How_to_remove_GRUB_loader_t28242.html#150012\"\u003ethis thread\u003c/a\u003e. It involves few commands on the GRUB prompt:\u003c/p\u003e","title":"Manually booting an OS from GRUB"},{"content":"Since long time i have been struggling to learn CSS and make my website look great. The efforts did succeed but not in a way i wanted. After playing with CSS for few months i finally switched to Joomla and yes, in a week my website looks pretty cool. CSS automated ;) . Joomla installation is pretty next-next job. Then play around a bit and you are done. Next game is to choose a nice looking template according to your taste and addition of few extensions to make yours tasks easier. (Wordpress \u0026amp; Joomla rock because of these free extensions). So after long time i feel satisfied with the way my website looks. Here are the screenshots of the old and present new look for archives ;)\nJoomla rocks !!!\nComments Comment by Aman\u0026hellip;. on 2008-12-10 15:52:06 +0530 Hey I can see my pic some where in that Friend Connect ;-). Nice site BTW ;-).\nCheers\nAman….\nComment by Sidhu on 2008-12-11 23:29:55 +0530 You are right. Its your pic indeed 😉\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/12/09/my-re-designed-website/","summary":"\u003cp\u003eSince long time i have been struggling to learn CSS and make \u003ca href=\"http://www.amardeepsidhu.com\"\u003emy website\u003c/a\u003e look great. The efforts did succeed but not in a way i wanted. After playing with CSS for few months i finally switched to Joomla and yes, in a week my website looks pretty cool. CSS automated ;) . Joomla installation is pretty next-next job. Then play around a bit and you are done. Next game is to choose a nice looking template according to your taste and addition of few extensions to make yours tasks easier. (Wordpress \u0026amp; Joomla rock because of these free extensions). So after long time i feel satisfied with the way my website looks. Here are the screenshots of the old and present new look for archives ;)\u003c/p\u003e","title":"My re-designed website ;)"},{"content":"A small post to let everybody know that I am alive ;) .\nFew weeks back, in office we were looking at one procedure which was supposed to do a lot but if executed it finished in a sec (or less ;) ). I started looking into it and just opened the procedure and started scrolling to find that the last line of the very first cursor read:\nwhere 1=2;\nWonderful !\nComments Comment by Vaibhav Garg on 2008-12-08 13:12:08 +0530 **You asked for it !!\n1=2: A Proof using Beginning Algebra\nStep 1: Let a=b. Step 2: Then a² = ab, Step 3: a² + a² = a² + ab, Step 4: 2a² = a² + ab, Step 5: 2a² – 2ab = a² + ab – 2ab, Step 6: and 2a² – 2ab = a² – ab. Step 7: This can be written as 2(a² – ab) = 1(a² – ab), Step 8: and cancelling the (a² – ab) from both sides gives 1=2. 
🙂\nComment by Sidhu on 2008-12-09 20:11:33 +0530 Oh man o man…ROFL…\nComment by Monika Sharma on 2008-12-29 02:36:47 +0530 Ek Mathematician ke hote huye aisa response kaise diya ja sakta hai 🙂\n2(a^2-ab)=1(a^2-ab) =\u0026gt;2=1 iff a^2-ab 0\nha ha 🙂\nwaise ye kis procedure ki bat kar raha hai Sidhu,mujhe bhi pata lage !\nComment by Sidhu on 2009-01-31 21:13:42 +0530 Oh my God 😛\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/12/06/where-1-2/","summary":"\u003cp\u003eA small post to let everybody know that I am alive ;) .\u003c/p\u003e\n\u003cp\u003eFew weeks back, in office we were looking at one procedure which was supposed to do a lot but if executed it finished in a sec (or less ;) ). I started looking into it and just opened the procedure and started scrolling to find that the last line of the very first cursor read:\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003ewhere 1=2;\u003c/strong\u003e\u003c/p\u003e\n\u003cp\u003eWonderful !\u003c/p\u003e","title":"where 1=2 ;)"},{"content":"Okay, so kids have stopped crying as they got some milk from their mommy: forums are better now (than when came back after an upgrade). There are reasons that it takes time for the things to be smooth. Fine. But what drives me mad is the way Oracle handled this and now ? The font size and those bloody bullets on left side which tell you if you have read the post before ? Please let me know if someone can interpret what those bullets tell and can read the forum without doing Ctrl + in Mozilla/Chrome and suffering from reading everything in large font size in IE 6 ?\nIs Oracle a bunch of duffers ? Can\u0026rsquo;t they see these two simple things ? And i don\u0026rsquo;t think that these simple changes call for editing some code written in assembly language ?\nGrrrrrrrrrrrrr\u0026hellip;\u0026hellip;\u0026hellip;..\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/09/14/otn-forums-suck/","summary":"\u003cp\u003eOkay, so kids have stopped crying as they got some milk from their mommy: forums are better now (than when came back after an upgrade). There are reasons that it takes time for the things to be smooth. Fine. But what drives me mad is the way Oracle handled this and now ? The font size and those bloody bullets on left side which tell you if you have read the post before ? Please let me know if someone can interpret what those bullets tell and can read the forum without doing Ctrl + in Mozilla/Chrome and suffering from reading everything in large font size in IE 6 ?\u003c/p\u003e","title":"OTN forums suck ?"},{"content":"Well, as everybody has seen, last week they have again upgraded the OTN forums. And sadly it has been handled in a very poor manner. End result is that it is running terribly slow and throwing some errors many times. In my office, i cant even open it properly may be its too heavy or some issue with the internet. Also there has been lots of hues and cries about the new point system and all. That in my opinion is fine except that speed at which forums are running should be fine.\nBut Oracle itself handling an upgrade like this is really bad. It should have happened in a sweet manner. Or no money comes from OTN, so no experts out there ?\nSince the upgrade, i am not feeling like posting and i think number of posts have also gone down.\nLets hope for some improvements\u0026hellip;\nComments Comment by Aman\u0026hellip;. on 2008-08-28 17:50:21 +0530 It presented a sorry face again today. Its not at all opening. 
God only knows it is upgrade or degrade and if it is upgrade,how it is done?\nMay God help OTN!\nAman….\nComment by Daryl on 2008-08-28 18:57:33 +0530 Maybe they need some “Real Application Testing” …\nComment by Sidhu on 2008-08-28 20:30:58 +0530 Aman and Daryl\nReally it has been totally a poor show both the times. I can’t see a reason why they aren’t taking it seriously. Perhaps OTN doesn’t generate any money, that is why. In their language they call it business importance of something…yuck !\nComment by Paul on 2008-09-04 11:06:06 +0530 Not sure if you have been following http://blog.stackoverflow.com … Joel Spolsky and Jeff Atwood have been podcasting through the development process – an interesting exercise in itself. stackoverflow is still in private beta, but it promises to be the ultimate tech Q\u0026amp;A site, open and community driven. If it delivers on the promise, it could even make the forums obsolete if the oracle community makes the switch.\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/08/28/otn-forums-2nd-upgrade-attempt/","summary":"\u003cp\u003eWell, as everybody has seen, last week they have again upgraded the OTN forums. And sadly it has been handled in a very poor manner. End result is that it is running terribly slow and throwing some errors many times. In my office, i cant even open it properly may be its too heavy or some issue with the internet. Also there has been lots of hues and cries about the new point system and all. That in my opinion is fine except that speed at which forums are running should be fine.\u003c/p\u003e","title":"OTN forums – 2nd upgrade attempt"},{"content":"My company uses Lotus Notes for email and Sametime connect as messenger for all the internal communication. Both are stupid applications and are a big resource hogs. Most of the systems are P IV with 512 MB RAM. You just run Lotus and Sametime 7 and it eats up everything. The system moves like a 386 based machine.\nMoreover, i was looking for Lotus short cuts today and found that there is no short cut in Lotus for Send/Receive mail.\nAt our client\u0026rsquo;s site, i have seen IBM people having kept an exe killnotes.exe on their desktop which they use to quickly exit from Lotus. What bullshit ?\nOutlook/Outlook Express rocks, seriously !\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/08/20/lotus-notes-and-sametime/","summary":"\u003cp\u003eMy company uses Lotus Notes for email and Sametime connect as messenger for all the internal communication. Both are stupid applications and are a big resource hogs. Most of the systems are P IV with 512 MB RAM. You just run Lotus and Sametime 7 and it eats up everything. The system moves like a 386 based machine.\u003c/p\u003e\n\u003cp\u003eMoreover, i was looking for Lotus short cuts today and found that there is no short cut in Lotus for \u003cstrong\u003eSend/Receive\u003c/strong\u003e mail.\u003c/p\u003e","title":"Lotus notes and sametime"},{"content":"I was just wondering what kind of maintenance OTN forums are undergoing ? It is 2nd day today and still not available. I checked it yesterday morning and now today morning still under maintenance.\nIs another new design on the way ? ;)\nWhat you say !\nJust noticed that there is an announcement about the maintenance in Community Feedback forum.\nDue to maintenance, forums.oracle.com will be in read-only mode between 6pm PT, Aug. 8 and 6pm PT, Aug 10. 
Search will still be available during that time.\nThanks for your patience during this time!\nLets see when it comes up\u0026hellip;mentioned period is over, i think.\nComments Comment by Aman\u0026hellip;. on 2008-08-10 11:34:21 +0530 Well I didn’t know that its two days since it is down. Hope it comes out to be more stable now and more faster.I hate that point system sort of thing.Rating is fine but point system is just insane IMO.\nAman….\nComment by Aman\u0026hellip;. on 2008-08-10 17:55:45 +0530 Well its nearly 48 hours(and counting) and there are no signs for forums to come back anytime soon.Whta kind of maintenance takes more than 2 days, recovery may be ;-)?\nAman….\nComment by Sidhu on 2008-08-10 21:36:49 +0530 LOL…\nThat is what i am thinking…what exactly they are upto ?\nEither recovery 😉 or some major design change 😉\nLets wait n watch !\nComment by Aman\u0026hellip;. on 2008-08-23 12:15:08 +0530 And the history repeats itself.Forum is down again and this time with a note that it will be for 2 days. Great!\nAman….\nComment by Sidhu on 2008-08-25 07:31:50 +0530 Things are kinda messed up again. Lets see how long it takes to get back to normal.\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/08/10/otn-forums-under-maintenance/","summary":"\u003cp\u003eI was just wondering what kind of maintenance OTN forums are undergoing ? It is 2nd day today and still not available. I checked it yesterday morning and now today morning still under maintenance.\u003c/p\u003e\n\u003cp\u003eIs another new design on the way ? ;)\u003c/p\u003e\n\u003cp\u003eWhat you say !\u003c/p\u003e\n\u003cp\u003eJust noticed that there is \u003ca href=\"http://forums.oracle.com/forums/ann.jspa?annID=808\"\u003ean announcement\u003c/a\u003e about the maintenance in Community Feedback forum.\u003c/p\u003e\n\u003cp\u003e\u003cem\u003eDue to maintenance, forums.oracle.com will be in read-only mode between 6pm PT, Aug. 8 and 6pm PT, Aug 10. Search will still be available during that time.\u003c/em\u003e\u003c/p\u003e","title":"OTN forums under maintenance"},{"content":"Last week i had a chance to conduct my life\u0026rsquo;s first interview. The guy was a DBA with 2 years of experience and it was supposed to be a telephonic call. I checked out his CV and wrote down around 8 questions on a paper, just to make sure that i myself don\u0026rsquo;t get confused during the interview ;) . So i called him in the evening and started with introduction and his present job profile. Then i started with the questions from the projects he had done. He was confident about the stuff he was handling and replied all the questions honestly, saying NO at points where he didn\u0026rsquo;t know or was not involved in something. One of such thing was testing the backups. He said there is no testing done as such.\nIn his CV he had written about Data Guard also. So i asked few data guard questions like what is the difference in working of physical and logical standby ? He was not aware about some of the data types not being supported in logical standby.\nOverall he answered the questions pretty confidently and honestly. 
So at the end, i recommended his induction into the company :)\nHappy Ending !\nBTW from the first experience i can say that it feels good to be on the other side of table (or phone).\nComments Comment by amritpal singh on 2008-08-04 00:52:18 +0530 sahi hai Sidhu saab\nComment by Manikya on 2008-08-04 13:25:00 +0530 Plz tell me if that guy was finally selected by Higher recruitment management level …\nHere i want to see what is the fate of Technically Recommended person at the hands of Non Technical Managers …!!!\nComment by Sidhu on 2008-08-04 21:55:43 +0530 @Manikya\nI don’t really know what happened later on. Will try to find out whether he was selected or not.\nComment by Puja on 2008-08-19 23:35:33 +0530 Conducting interviews is such a stressful job!! Most of the times you meet people who don’t even know what CHECKPOINTS are! And fake resumes show up every now and then.. The tragedy is that there are just not enough of good DBAs around and you tend to compromise on the quality of people that are hired!!\nComment by Sidhu on 2008-08-20 22:42:49 +0530 @Puja\nSo, you too familiar with this beast called Oracle 🙂\nYea, it is, indeed. Quality resources are just so rare and if there are a few, companies are not ready to pay what they ask for. Moreover the way Indian companies are going for quantity of resources (Yea, number of employees is a factor 😉 ) the quality is bound to go down.\nFake resumes are also a headache but i guess if the interviewer is OK, one carrying fake CV can’t go far. Will get caught soon in CHECKPOINTS and SCNs 😉\nComment by Puja on 2008-08-20 23:50:39 +0530 Oh yes, am a big fan of oracle. was working as an Oracle DBA before quitting.\nIn one of the companies that I worked with, they took fresh graduates, trained them on SQL, and put them to work as DBAs!! I am sure within a year, they would project themselves as ‘experienced’ DBAs! And then if the interviewer is smart, they would get trapped in the vicious world of CHECKPOINTS and SCNs!!\nComment by Sidhu on 2008-08-21 07:08:42 +0530 You know what, i would say even thats good that they are at least training them on SQL. I have seen the scenarios where they were not taught anything and still asked to manage databases.\nSo as you said, after 1-2 years they would be so called “experienced” DBAs with 2 years of ex in this that blah blah. Sad indeed.\nThis,specially is the case with the Indian companies. I have never seen them doing any skill based recruitment. They keep on adding people and then throw randomly here and there…where few of them become DBAs, others Unix administrators and so on. So jinna gur panuge unna mitha houga anusaar, things get nowhere.\nBTW how do you feel after quitting ? Don’t you miss all this ? Any plans in future to join back ?\nComment by Puja on 2008-08-22 22:48:04 +0530 Well, I am not too sure about that! The way these people land the database (and eventually themselves) in trouble, I wish it would make the recruiters take a little more pain in identifying the right resource..\nOn a tangential note, my previous company recruited me (an Oracle DBA) while it was a database designer they were looking for!!!\nI had quit the job while I was into fourth month of my pregnancy. I had to do so for medical reasons. Ever since then I have enjoyed every single moment of my pregnancy and motherhood, so no regrets whatsoever 🙂 I do miss the technical stuff at times, and wish to resume at some point of time. As of now, I am busy trying to find out work from home options. 
Any ideas?\nComment by Sidhu on 2008-08-25 07:37:49 +0530 Fine, that is an integral part of life.\nIn India that would be an issue. I am not aware of any such thing as work from home. With database it becomes even more difficult. You must be looking for database stuff only ?\nFor some programming jobs, i think there could be something.\nWould let you know if i come across any information.\nComment by amritpal singh on 2008-11-23 02:28:27 +0530 man, even I took so many interviews here in my new job for pl-sql developer, Initially I was nervous, but now I am kind of getting hold of it. no one was selected though.\nand Puja I liked your blog page, and added your blog to my google reader list of blogs.\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/07/29/conducting-my-first-interview/","summary":"\u003cp\u003eLast week i had a chance to conduct my life\u0026rsquo;s first interview. The guy was a DBA with 2 years of experience and it was supposed to be a telephonic call. I checked out his CV and wrote down around 8 questions on a paper, just to make sure that i myself don\u0026rsquo;t get confused during the interview ;) . So i called him in the evening and started with introduction and his present job profile. Then i started with the questions from the projects he had done. He was confident about the stuff he was handling and replied all the questions honestly, saying NO at points where he didn\u0026rsquo;t know or was not involved in something. One of such thing was testing the backups. He said there is no testing done as such.\u003c/p\u003e","title":"Conducting my first interview ;)"},{"content":"I don\u0026rsquo;t know a bit about Apache HTTP server but faced one issue in office\u0026hellip;so thought about writing it here ;)\nWe are having a 3 tier setup where Oracle Application Server 10g was there on AIX 5.3. It runs Apache HTTP server and we needed to access the files outside DocumentRoot. A bit of googling revealed that we could use Alias for that. Basically we need to add the following small piece of text to httpd.conf file:\n[sourcecode language=\u0026lsquo;css\u0026rsquo;] Alias /test1/ \u0026ldquo;/home/sidhu/test1/\u0026rdquo;\nOptions Indexes FollowSymLinks AllowOverride None Order allow,deny Allow from all [/sourcecode]\nNow we should be able to access the files in /home/sidhu/test1 by hitting the following URL\nhttp://server:port/test1/filename.html\nPS: I couldn\u0026rsquo;t find the reason but it didn\u0026rsquo;t allow me to use word \u0026ldquo;reports\u0026rdquo; in the directory name.\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/07/22/accessing-outside-documentroot-files-in-apache-http-server/","summary":"\u003cp\u003eI don\u0026rsquo;t know a bit about Apache HTTP server but faced one issue in office\u0026hellip;so thought about writing it here ;)\u003c/p\u003e\n\u003cp\u003eWe are having a 3 tier setup where Oracle Application Server 10g was there on AIX 5.3. It runs Apache HTTP server and we needed to access the files outside DocumentRoot. A bit of googling revealed that we could use \u003ca href=\"http://httpd.apache.org/docs/2.2/mod/mod_alias.html#alias\"\u003eAlias\u003c/a\u003e for that. Basically we need to add the following small piece of text to httpd.conf file:\u003c/p\u003e","title":"Accessing outside DocumentRoot files in Apache HTTP server"},{"content":"Our application (3 tier, Front end Forms10g and back end 10gR2) provides user with a front end to refresh the mviews. That form has 2 columns showing mview name and the comment against it. 
Recently i saw that while opening this front end ORA-01403 NO DATA FOUND was being raised.\nI opened the fmb and found that it was populating comments from DBA_TAB_COMMENTS. In 10g the comments against mviews are stored in DBA_MVIEW_COMMENTS unlike till 9i where it was in ALL_TAB_COMMENTS. So there was a little modification required.\nBTW if you try to comment on the table (which is created with MVIEW) it won\u0026rsquo;t allow you to do so and instead raise ORA-12098: cannot comment on the materialized view.\nSo may be that little change needs to be done !\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/07/21/dba_mview_comments-view-in-10g/","summary":"\u003cp\u003eOur application (3 tier, Front end Forms10g and back end 10gR2) provides user with a front end to refresh the mviews. That form has 2 columns showing mview name and the comment against it. Recently i saw that while opening this front end ORA-01403 NO DATA FOUND was being raised.\u003c/p\u003e\n\u003cp\u003eI opened the fmb and found that it was populating comments from DBA_TAB_COMMENTS. In 10g the comments against mviews are stored in DBA_MVIEW_COMMENTS unlike till 9i where it was in ALL_TAB_COMMENTS. So there was a little modification required.\u003c/p\u003e","title":"DBA_MVIEW_COMMENTS view in 10g"},{"content":"Just upgraded my blog to Wordpress 2.6. There are few new things. This video from Wordpress summarizes the new stuff:\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/07/15/upgrade-to-wordpress-2-6/","summary":"\u003cp\u003eJust upgraded my blog to Wordpress 2.6. There are few new things. This video from Wordpress summarizes the new stuff:\u003c/p\u003e","title":"Upgrade to WordPress 2.6"},{"content":"Aman made me aware about not being able to post any comments on my blog. I checked and found that everything was fine with Firefox but in IE 6 it was not possible to post a comment. I was using Did You Pass Math plugin to stop comment spam and it had some problems in IE. I disabled it for the time being and installed WP-SpamFree. Everything seems to be fine except that it doesn\u0026rsquo;t allow very small comments and i haven\u0026rsquo;t been able to figure out where that setting is ? (yea\u0026hellip;i am really poor with this web stuff :( ).\nThe display of single post in IE is still broken. Some issues with the theme i am using, i guess. Will try to fix that.\nIf you have any difficulty in reading or commenting on the post, please let me know at amardeepsidhu at gmail dot com.\nThanks :)\nComments Comment by Aman\u0026hellip;. on 2008-07-09 23:02:53 +0530 This theme is more vibrant and I guess more easy to read!\nAman….\nComment by Sidhu on 2008-07-14 20:33:53 +0530 🙂\nThis is basically the same theme. I just managed to somehow edit a bit of CSS and change the font \u0026amp; size 😉\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/07/08/commenting-on-my-blog/","summary":"\u003cp\u003e\u003ca href=\"http://amansharma.wordpress.com/\"\u003eAman\u003c/a\u003e made me aware about not being able to post any comments on my blog. I checked and found that everything was fine with Firefox but in IE 6 it was not possible to post a comment. I was using \u003ca href=\"http://www.herod.net/dypm/\" title=\"Visit plugin homepage\"\u003eDid You Pass Math\u003c/a\u003e plugin to stop comment spam and it had some problems in IE. I disabled it for the time being and installed \u003ca href=\"http://www.hybrid6.com/webgeek/plugins/wp-spamfree\"\u003eWP-SpamFree\u003c/a\u003e. 
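As a tiny illustration of the dictionary change described in the DBA_MVIEW_COMMENTS post above (object names are made up, not from the actual application):

[sql]-- the comment goes on the materialized view itself, not on the underlying table
comment on materialized view app1.mv_orders is 'Nightly snapshot of orders';

-- from 10g onwards it is visible in DBA_MVIEW_COMMENTS
select owner, mview_name, comments
from dba_mview_comments
where owner = 'APP1';[/sql]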
Everything seems to be fine except that it doesn\u0026rsquo;t allow very small comments and i haven\u0026rsquo;t been able to figure out where that setting is ? (yea\u0026hellip;i am really poor with this web stuff :( ).\u003c/p\u003e","title":"Commenting on my blog"},{"content":"Just had a glimpse. OTN forums has got a new look. A bunch of new features has also been added. Some of the new things i noticed are:\nThat simple editor is now rich text editor. Also supports emoticons. You can add tags to a post and it displays the tag cloud (for all the posts) on right side. A post can be marked as a question and then there will be some points system based on answers. That list of Top users doesn\u0026rsquo;t show those users n ACEs. It display a new list of people instead. Report Abuse button has been added (with each reply). May be there are some other changes too. That is what i noticed in the very first look.\nNice change indeed ! I am not happy with the font though, it is very small :(\nComments Comment by Aman\u0026hellip; on 2008-07-06 21:12:36 +0530 Hmm I really didnt like two things,okay three about the new OTN forum,\n1)Font size,really needed to zoom to 140% to read without straining eyes.\n2) Extremely slow!\n3)Point system,I mean what’s this point system at all?Than there will be fights ,how come you got 10 when I got 5 and my answer is more better than you.Or better OP doesn’t even know which answer is better and ignores to give any points and than all the people who commented are fighting with him only to give them points,whatever they may be ;-)!\nI guess it was too much to add that’s why we are back to old forum ;).\nAman….\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/06/29/otn-forums-get-a-new-look/","summary":"\u003cp\u003eJust had a glimpse. OTN forums has got a new look. A bunch of new features has also been added. Some of the new things i noticed are:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003eThat simple editor is now rich text editor. Also supports emoticons.\u003c/li\u003e\n\u003cli\u003eYou can add tags to a post and it displays the tag cloud (for all the posts) on right side.\u003c/li\u003e\n\u003cli\u003eA post can be marked as a question and then there will be some points system based on answers.\u003c/li\u003e\n\u003cli\u003eThat list of Top users doesn\u0026rsquo;t show those users n ACEs. It display a new list of people instead.\u003c/li\u003e\n\u003cli\u003eReport Abuse button has been added (with each reply).\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eMay be there are some other changes too. That is what i noticed in the very first look.\u003c/p\u003e","title":"OTN forums get a new look"},{"content":"I just opened OTN forums and found:\nHigh availability live ;)\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/06/28/high-availability-live/","summary":"\u003cp\u003eI just opened OTN forums and found:\u003c/p\u003e\n\u003cp\u003e\u003cimg loading=\"lazy\" src=\"/blog/wp-content/uploads/2008/06/ha.jpg\"\u003e\u003c/p\u003e\n\u003cp\u003eHigh availability live ;)\u003c/p\u003e","title":"High Availability – Live ;)"},{"content":"author: Sidhu category:\noracle-general guid: http://amardeepsidhu.com/blog/?p=79 tag: database title: A database - one tablespace \u0026amp; one datafile url: /blog/a-database-one-tablespace-one-datafile/ Today I was checking OTN forums and came across a thread. 
OP\u0026rsquo;s concern was:\nOne of the db i am supporting has about 3.3 Terabytes capacity and the application is using only 1 huge tablespace with one big file. the system is linux 4 , 32 bit. oracle version is 10.2.0.4 Is there a limit of space for a tablespace when you consider insert/delete/query performance?\nSingle datafile of 3.3 TB :)\nAwesome !\nKudos to the designer ;)\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/06/10/a-database-one-tablespace-one-datafile/","summary":"\u003cp\u003eauthor: Sidhu\ncategory:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003eoracle-general\nguid: \u003ca href=\"http://amardeepsidhu.com/blog/?p=79\"\u003ehttp://amardeepsidhu.com/blog/?p=79\u003c/a\u003e\ntag:\u003c/li\u003e\n\u003cli\u003edatabase\ntitle: A database - one tablespace \u0026amp; one datafile\nurl: /blog/a-database-one-tablespace-one-datafile/\u003c/li\u003e\n\u003c/ul\u003e\n\u003chr\u003e\n\u003cp\u003eToday I was checking OTN forums and came across \u003ca href=\"http://forums.oracle.com/forums/message.jspa?messageID=2578116\"\u003ea thread\u003c/a\u003e. OP\u0026rsquo;s concern was:\u003c/p\u003e\n\u003cp\u003e\u003cem\u003eOne of the db i am supporting has about 3.3 Terabytes capacity and the application is using only 1 huge tablespace with one big file.\u003c/em\u003e \u003cem\u003ethe system is linux 4 , 32 bit.\u003c/em\u003e\n\u003cem\u003eoracle version is 10.2.0.4\u003c/em\u003e \u003cem\u003eIs there a limit of space for a tablespace when you consider insert/delete/query performance?\u003c/em\u003e\u003c/p\u003e","title":"A database – one tablespace \u0026 one datafile"},{"content":"I had got an export of a database and had to import it to a new database. The only difference was that in new database few of the large tables were partitioned. So instead of partitioning it after the import, i thought about pre-creating the tables (with partitioning) and then run import with ignore=Y . Everything went fine. But later on the front end application gave some error and we came to know that default values for columns in some tables were not set. I did some googling and didn\u0026rsquo;t find much. Then i posted the same to OTN forums and came to know that if the table pre-exists, import doesn\u0026rsquo;t take care of default values of columns. Metalink note 224727.1 discusses this. So if you are pre-creating the tables and there are some default values for any columns, set it manually, don\u0026rsquo;t rely on import for this. Same is true for impdp as well.\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/05/10/import-and-default-values-for-columns/","summary":"\u003cp\u003eI had got an export of a database and had to import it to a new database. The only difference was that in new database few of the large tables were partitioned. So instead of partitioning it after the import, i thought about pre-creating the tables (with partitioning) and then run import with ignore=Y . Everything went fine. But later on the front end application gave some error and we came to know that default values for columns in some tables were not set. I did some googling and didn\u0026rsquo;t find much. Then i \u003ca href=\"http://forums.oracle.com/forums/message.jspa?messageID=25113\"\u003eposted\u003c/a\u003e the same to OTN forums and came to know that if the table pre-exists, import doesn\u0026rsquo;t take care of default values of columns. 
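As a sketch of the manual fix described above (schema, table and column names are hypothetical), the defaults can be checked and re-applied after the import:
[sourcecode language='sql']
-- After importing into a pre-created table, confirm the column defaults survived
SELECT column_name, data_default
FROM   dba_tab_columns
WHERE  owner = 'APP'
AND    table_name = 'ORDERS';

-- If DATA_DEFAULT is empty, set it by hand; neither imp nor impdp will restore it
-- when the table already existed
ALTER TABLE app.orders MODIFY (status DEFAULT 'NEW');
[/sourcecode]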
Metalink note \u003ca href=\"https://metalink.oracle.com/metalink/plsql/f?p=130:14:4379750739032245945::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,224727.1,1,1,1,helvetica\"\u003e224727.1\u003c/a\u003e discusses this. So if you are pre-creating the tables and there are some default values for any columns, set it manually, don\u0026rsquo;t rely on import for this. Same is true for impdp as well.\u003c/p\u003e","title":"Import and default values for columns"},{"content":"I just opened the Oracle Community page and :\nComments Comment by Eddie Awad on 2008-04-22 23:44:33 +0530 Hi Amardeep,\nAs you may have noticed already, OracleCommunity.net is back online now 🙂\nCheers!\nComment by Sidhu on 2008-04-23 06:55:02 +0530 Eddie\nYou know what…i was expecting your comment…\nCheers !\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/04/22/maintenance-is-everywhere/","summary":"\u003cp\u003eI just opened the \u003ca href=\"http://www.oraclecommunity.net/\"\u003eOracle Community\u003c/a\u003e page and :\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"/blog/wp-content/uploads/2008/04/ocm.jpg\"\u003e\u003cimg loading=\"lazy\" src=\"/blog/wp-content/uploads/2008/04/ocm.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003ch2 id=\"comments\"\u003eComments\u003c/h2\u003e\n\u003ch3 id=\"comment-by-eddie-awad-on-2008-04-22-234433-0530\"\u003eComment by Eddie Awad on 2008-04-22 23:44:33 +0530\u003c/h3\u003e\n\u003cp\u003eHi Amardeep,\u003c/p\u003e\n\u003cp\u003eAs you may have noticed already, OracleCommunity.net is back online now 🙂\u003c/p\u003e\n\u003cp\u003eCheers!\u003c/p\u003e\n\u003ch3 id=\"comment-by-sidhu-on-2008-04-23-065502-0530\"\u003eComment by Sidhu on 2008-04-23 06:55:02 +0530\u003c/h3\u003e\n\u003cp\u003eEddie\u003c/p\u003e\n\u003cp\u003eYou know what…i was expecting your comment…\u003c/p\u003e\n\u003cp\u003eCheers !\u003c/p\u003e","title":"Maintenance…is everywhere ;)"},{"content":"Today I was searching for an error message in metalink and it was giving strange messages. Then i came to know that the error message contained \u0026ldquo;%\u0026rdquo; character and metalink was not really happy searching for it. Rather it showed a confusing message:\nPretty strange\u0026hellip;\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/04/18/have-you-searched-in-metalink/","summary":"\u003cp\u003eToday I was searching for an error message in metalink and it was giving strange messages. Then i came to know that the error message contained \u0026ldquo;%\u0026rdquo; character and metalink was not really happy searching for it. Rather it showed a confusing message:\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"/blog/wp-content/uploads/2008/04/noname.gif\"\u003e\u003cimg loading=\"lazy\" src=\"/blog/wp-content/uploads/2008/04/noname.gif\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003ePretty strange\u0026hellip;\u003c/p\u003e","title":"Have you searched “%” in metalink ?"},{"content":"If you are used to work in Unix enviroment and then sometime, in between have to sit on Windows and tail -f alert_DB.log, its a real pain. There is a small bundle of utilities, called Unxutils which can make you feel at home in Windows too. These are the exe\u0026rsquo;s of all major commands in Unix like more, less, ls, grep etc\u0026hellip;\nTo use it just download the zip file from the link above, extract it to some folder and add the path of exe\u0026rsquo;s to your Windows PATH. 
Restart your machine and you are done.\nHappy more\u0026rsquo;ing\u0026hellip;less\u0026rsquo;ing\u0026hellip;\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/04/10/unxutils-for-windows/","summary":"\u003cp\u003eIf you are used to work in Unix enviroment and then sometime, in between have to sit on Windows and tail -f alert_DB.log, its a real pain. There is a small bundle of utilities, called \u003ca href=\"http://unxutils.sourceforge.net/\"\u003eUnxutils\u003c/a\u003e which can make you feel at home in Windows too. These are the exe\u0026rsquo;s of all major commands in Unix like more, less, ls, grep etc\u0026hellip;\u003c/p\u003e\n\u003cp\u003eTo use it just download the zip file from the link above, extract it to some folder and add the path of exe\u0026rsquo;s to your Windows PATH. Restart your machine and you are done.\u003c/p\u003e","title":"Unxutils for Windows"},{"content":"I recently upgraded my blog to Wordpress 2.5. The manual process is real cumbersome. Today I came across few plugins which help in almost automating the upgrade stuff.\nWordpress Automatic upgrade: As per description this plugin first takes backup and then upgrades. I didn\u0026rsquo;t try this one, though.\nInstant Upgrade: I upgraded one of my blog using this. It doesn\u0026rsquo;t take care of backup but otherwise the upgrade was super smooth. Just few clicks and you are done.\nHappy upgrading !\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/04/06/automating-wordpress-2-5-upgrade/","summary":"\u003cp\u003eI recently \u003ca href=\"/blog/2008/03/30/upgrading-to-wordpress-25/\"\u003eupgraded\u003c/a\u003e my blog to Wordpress 2.5. The manual process is real cumbersome. Today I came across few plugins which help in almost automating the upgrade stuff.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"http://wordpress.org/extend/plugins/wordpress-automatic-upgrade/\"\u003eWordpress Automatic upgrade:\u003c/a\u003e As per description this plugin first takes backup and then upgrades. I didn\u0026rsquo;t try this one, though.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"http://www.zirona.com/software/wordpress-instant-upgrade/\"\u003eInstant Upgrade:\u003c/a\u003e I upgraded one of my blog using this. It doesn\u0026rsquo;t take care of backup but otherwise the upgrade was super smooth. Just few clicks and you are done.\u003c/p\u003e","title":"Automating WordPress 2.5 upgrade"},{"content":"Finally finally finally a small sticky has been posted on OTN Database forums homepage. But its there only on Database forums page not on PL/SQL. It reads like:\nPosters, please mind these common-sense rules when participating here: - When asking a question, provide all the details that someone would need to answer it. Consulting documentation first is highly recommended. (See http://blogs.oracle.com/shay/2007/03/02 for more hints.) - When answering a question, please be courteous; there are different levels of experience represented here. A poorly worded question is better ignored than flamed - or better yet, help the poster ask a better question. Thanks for doing your part to make this community as valuable as possible for everyone! - OTN\nThat means Database forum enjoys most number of flames ;)\nComments Comment by APC on 2008-04-04 13:39:50 +0530 I think the reason why Justin has posted the sticky on the DB General site is that has been where the most vicious threads have been, at least recently. 
This is mainly because certain regulars – I don’t need to to name them, we know who I’m talking about – tend to restrict themselves to that forum.\nI don’t think it will make much difference. The sort of person who posts a question titled URGENT SQL PROBLEM!!!! is not the sort of person who will bother reading a post on Forum Etiquette. Likewise persistent flamers will continue to flame on. But at least it’s a start.\nCheers, APC\nComment by Sidhu on 2008-04-04 16:03:29 +0530 Yup Andrew\nAgree ! Database forum has given birth to many flaming threads over past 2-3 months.\nI don’t think it will make much difference. The sort of person who posts a question titled URGENT SQL PROBLEM!!!! is not the sort of person who will bother reading a post on Forum Etiquette. Likewise persistent flamers will continue to flame on. But at least it’s a start.\nExactly. The newbies don’t read any rules (its almost on every forum, not only OTN) and the people who want to post “flaming” stuff will not look at it. So again the same thing is rendered useless.\nBut definitely its the first step in this direction…\nSidhu\nComment by Aman\u0026hellip;. on 2008-04-05 13:44:10 +0530 I agree that its a good step.And for the people who start the flame wars, well there is nothing I guess ever we can do about it.Its really a personal perception if we say to some one clearly whatto do or just mention RTFM.Its much difficult in my view to clearly show the way rather than just mentioning RTFM.\nAman….\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/04/04/otn-forums-etiquettes/","summary":"\u003cp\u003eFinally finally finally a \u003ca href=\"http://forums.oracle.com/forums/ann.jspa?annID=718\"\u003esmall sticky\u003c/a\u003e has been posted on OTN Database forums homepage. But its there only on Database forums page not on PL/SQL. It reads like:\u003c/p\u003e\n\u003cp\u003e\u003cem\u003ePosters, please mind these common-sense rules when participating here:\u003c/em\u003e \u003cem\u003e- When asking a question, provide all the details that someone would need to answer it. Consulting documentation first is highly recommended. (See \u003ca href=\"http://blogs.oracle.com/shay/2007/03/02\"\u003ehttp://blogs.oracle.com/shay/2007/03/02\u003c/a\u003e for more hints.)\u003c/em\u003e\n\u003cem\u003e- When answering a question, please be courteous; there are different levels of experience represented here. A poorly worded question is better ignored than flamed - or better yet, help the poster ask a better question.\u003c/em\u003e \u003cem\u003eThanks for doing your part to make this community as valuable as possible for everyone!\u003c/em\u003e \u003cem\u003e- OTN\u003c/em\u003e\u003c/p\u003e","title":"OTN forums etiquettes"},{"content":"Few days ago i posted about DBAzine being down. I was just checking to see if its back.\nIts rocking now.\nWelcome back DBAzine.com :)\nComments Comment by arshad on 2011-11-01 01:46:03 +0530 I saw your article — interesting\nI need to know many things\n1st of all please let me know how to\nmove (parameter, text) to word document from oracle form 6i. 
data is queried from table\ni can open existing document\ni can create new document\nnow I wanyt to move text to my document like it would had been through mail-merge\nBest Regards\nComment by arshad on 2011-11-01 02:14:50 +0530 please reply on my email\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/03/31/dbazine-com-is-back/","summary":"\u003cp\u003eFew days ago i \u003ca href=\"/blog/2008/03/22/dbazinecom-has-expired/\"\u003eposted\u003c/a\u003e about \u003ca href=\"http://www.dbazine.com/\"\u003eDBAzine\u003c/a\u003e being down. I was just checking to see if its back.\u003c/p\u003e\n\u003cp\u003eIts rocking now.\u003c/p\u003e\n\u003cp\u003eWelcome back DBAzine.com :)\u003c/p\u003e\n\u003ch2 id=\"comments\"\u003eComments\u003c/h2\u003e\n\u003ch3 id=\"comment-by-arshad-on-2011-11-01-014603-0530\"\u003eComment by arshad on 2011-11-01 01:46:03 +0530\u003c/h3\u003e\n\u003cp\u003eI saw your article — interesting\u003cbr\u003e\nI need to know many things\u003cbr\u003e\n1st of all please let me know how to\u003cbr\u003e\nmove (parameter, text) to word document from oracle form 6i. data is queried from table\u003cbr\u003e\ni can open existing document\u003cbr\u003e\ni can create new document\u003cbr\u003e\nnow I wanyt to move text to my document like it would had been through mail-merge\u003c/p\u003e","title":"DBAzine.com is back :)"},{"content":"Just finished upgrading to Wordpress 2.5. Everything seems to be working fine.\nAnother issue I just came to know that wherever i am using syntaxhighlighter to format the code, it doesn\u0026rsquo;t display properly in Internet Explorer. May be something to do with the plugin. Will try to fix it. It works fine in Mozilla.\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/03/30/upgrading-to-wordpress-2-5/","summary":"\u003cp\u003eJust finished upgrading to Wordpress 2.5. Everything seems to be working fine.\u003c/p\u003e\n\u003cp\u003eAnother issue I just came to know that wherever i am using \u003ca href=\"/blog/2008/03/03/syntax-highlighting-for-code-in-wordpress/\"\u003esyntaxhighlighter\u003c/a\u003e to format the code, it doesn\u0026rsquo;t display properly in Internet Explorer. May be something to do with the plugin. Will try to fix it. It works fine in Mozilla.\u003c/p\u003e","title":"Upgrading to WordPress 2.5"},{"content":"Today I was upgrading 10gR1 to 10gR2 (10.2.0.1) on Linux x86. The upgrade went almost fine (except that I had to install one package and change few kernel parameters) but while running DBUA to upgrade databases, it gave an error:\n[sourcecode language=\u0026lsquo;css\u0026rsquo;]Could not get database version from the Oracle Server component. The CEP file rdbmsup.sql does not provide the version directive\nand\nStart of root element expected. Upgrade Configuration file \u0026lsquo;C:\\Oracle10g2\\cfgtoollogs\\dbua\\test\\upgrade5\\upgrade.xml\u0026rsquo; is not a valid XML file.[/sourcecode]\nI searched in the metalink and found that this all happens due to customized glogin.sql file which was there in my case also. And removing that customization made DBUA rock :)\nYou might want to check here, here and here.\nComments Comment by Manoj Abraham on 2008-10-10 17:29:16 +0530 Hello Amardeep,\nCan you help with the resolution of the error while upgrading Oracle 9i to 10g. 
The Oracle and the tnslistener are up and running.\nAlso, kindly let me know how to make sure that there is an issue with glogin.sql.\nThe error from the silent.log file is given below.\n—————————\nFor input string: “”\nUpgrade Configuration file `/home/oracle/oracle10g/OraHome10g/cfgtoollogs/dbua/arcsight/upgrade5/upgrade.xml` is not a valid XML file.\nThe Upgrade Assistant failed in executing any query on the database arcsight. Oracle Home /home/oracle/OraHome1 obtained from file /etc/oratab was used to connect to the database. Either the database is not running from Oracle Home /home/oracle/OraHome1 or its not in OPEN status. Correct the error and run the Upgrade Assistant again.\nCould not proceed with Upgrade due to errors.\nFix the errors and restart again!\n——-\nRegards,\nManoj\nComment by Amardeep Sidhu on 2008-10-11 18:18:48 +0530 Hi Manoj\nAlso, kindly let me know how to make sure that there is an issue with glogin.sql.\nSimply look for any customization in the glogin.sql like setting custom SQL prompt or anything else.\nAbout the error, are you running the DBUA in silent mode ?\nAlso are you able to connect to the database manually from the shell prompt ?\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/03/27/upgrade-10gr1-to-10gr2-dbua-error/","summary":"\u003cp\u003eToday I was upgrading 10gR1 to 10gR2 (10.2.0.1) on Linux x86. The upgrade went almost fine (except that I had to install one package and change few kernel parameters) but while running DBUA to upgrade databases, it gave an error:\u003c/p\u003e\n\u003cp\u003e[sourcecode language=\u0026lsquo;css\u0026rsquo;]Could not get database version from the Oracle Server component. The CEP file rdbmsup.sql does not provide the version directive\u003c/p\u003e\n\u003cp\u003eand\u003c/p\u003e\n\u003cp\u003eStart of root element expected. Upgrade Configuration file\n\u0026lsquo;C:\\Oracle10g2\\cfgtoollogs\\dbua\\test\\upgrade5\\upgrade.xml\u0026rsquo; is not a valid XML file.[/sourcecode]\u003c/p\u003e","title":"Upgrade 10gR1 to 10gR2 – DBUA error"},{"content":"Today I was running export of an Oracle 9.2.0.1 database. The export completed but with an ORA-600 error:\n[sourcecode language=\u0026lsquo;css\u0026rsquo;]\nEXP-00008: ORACLE error 600 encountered ORA-00600: internal error code, arguments: [xsoptloc2], [4], [4], [0], [], [], [], [] ORA-06512: in \u0026ldquo;SYS.DBMS_AW\u0026rdquo;, line 347 ORA-06512: in \u0026ldquo;SYS.DBMS_AW\u0026rdquo;, line 470 ORA-06512: in \u0026ldquo;SYS.DBMS_AW_EXP\u0026rdquo;, line 270 ORA-06512: in line 1 EXP-00083: The previous problem occurred when calling SYS.DBMS_AW_EXP.schema_info_exp[/sourcecode]\nI googled a bit and found that the problem is with applying some patchset. Then metalink confirmed the same. Somebody tried applying a patch to upgrade it to 9.2.0.5 but didn\u0026rsquo;t perform all the steps (missed post installation steps, to be precise). Metalink Note 300849.1 covers the issue and also gives the solution. In nutshell startup the database with startup migrate and run catpatch.sql.\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/03/24/exp-00008-oracle-error-600-encountered/","summary":"\u003cp\u003eToday I was running export of an Oracle 9.2.0.1 database. 
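For reference, a hedged sketch of the fix that post ends with, i.e. completing the missed 9.2.0.5 post-installation steps as SYSDBA (the authoritative step list is in the patchset README and Note 300849.1):
[sourcecode language='sql']
SHUTDOWN IMMEDIATE
STARTUP MIGRATE
SPOOL catpatch.log
@?/rdbms/admin/catpatch.sql
SPOOL OFF
SHUTDOWN IMMEDIATE
STARTUP
[/sourcecode]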
The export completed but with an ORA-600 error:\u003c/p\u003e\n\u003cp\u003e[sourcecode language=\u0026lsquo;css\u0026rsquo;]\u003c/p\u003e\n\u003cp\u003eEXP-00008: ORACLE error 600 encountered\nORA-00600: internal error code, arguments: [xsoptloc2], [4], [4], [0], [], [], [], []\nORA-06512: in \u0026ldquo;SYS.DBMS_AW\u0026rdquo;, line 347\nORA-06512: in \u0026ldquo;SYS.DBMS_AW\u0026rdquo;, line 470\nORA-06512: in \u0026ldquo;SYS.DBMS_AW_EXP\u0026rdquo;, line 270\nORA-06512: in line 1\nEXP-00083: The previous problem occurred when calling SYS.DBMS_AW_EXP.schema_info_exp[/sourcecode]\u003c/p\u003e\n\u003cp\u003eI googled a bit and found that the problem is with applying some patchset. Then metalink confirmed the same. Somebody tried applying a patch to upgrade it to 9.2.0.5 but didn\u0026rsquo;t perform all the steps (missed post installation steps, to be precise). Metalink \u003ca href=\"https://metalink.oracle.com/metalink/plsql/f?p=130:14:3653505393918947609::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,300849.1,1,1,1,helvetica\"\u003eNote 300849.1\u003c/a\u003e covers the issue and also gives the solution. In nutshell startup the database with \u003cstrong\u003estartup migrate\u003c/strong\u003e and \u003cstrong\u003erun catpatch.sql\u003c/strong\u003e.\u003c/p\u003e","title":"EXP-00008: ORACLE error 600 encountered"},{"content":"I was copying some data from a DVD and it stopped in between due to some error with some file. Then I was reminded of Total Copy (A small alternate utility to Windows copy) but even thats is not much stable and you have to right click \u0026amp; drag for copy/paste.\nI googled a bit and came across another small utility called Tera Copy. Its just 1 MB download and replaces Windows copy/paste. It provides many options like skipping a file, resuming copy and some other stuff. A must have for all Windows users. There is a pro version too.\nComments Comment by Gary on 2008-03-24 02:44:19 +0530 Thanks for that. I’ve been dropping down to DOS and XCOPY up until now.\nComment by Sidhu on 2008-03-24 07:30:30 +0530 🙂\nBTW refreshed my memories of Windows 98 doing dir, fdisk and fdisk /mbr in DOS 😀\nComment by Dvd Xcopy on 2008-04-07 09:37:38 +0530 Good Day, I fell lucky that I located this post while browsing for dvd xcopy. I am with you on the topic of of your copy paste woes in Windows. Ironically, I was just putting a lot of thought into this last Sunday.\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/03/23/get-rid-of-your-copy-paste-woes-in-windows/","summary":"\u003cp\u003eI was copying some data from a DVD and it stopped in between due to some error with some file. Then I was reminded of \u003ca href=\"http://www.ranvik.net/totalcopy/\"\u003eTotal Copy\u003c/a\u003e (A small alternate utility to Windows copy) but even thats is not much stable and you have to right click \u0026amp; drag for copy/paste.\u003c/p\u003e\n\u003cp\u003eI googled a bit and came across another small utility called \u003ca href=\"http://www.codesector.com/teracopy.php\"\u003eTera Copy\u003c/a\u003e. Its just 1 MB download and replaces Windows copy/paste. It provides many options like skipping a file, resuming copy and some other stuff. A must have for all Windows users. There is a pro version too.\u003c/p\u003e","title":"Get rid of your copy paste woes in Windows"},{"content":"Engineers Develop Solid-state Fan That Puts Traditional Coolers to Shame\nCool ! 
isn\u0026rsquo;t it ?\nSoon we may be using 5 Ghz super cool processors in our desktop box :)\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/03/23/solid-state-cooling-for-processors/","summary":"\u003cp\u003e\u003ca href=\"http://www.dailytech.com/Engineers+Develop+Solidstate+Fan+That+Puts+Traditional+Coolers+to+Shame/article11158.htm\"\u003eEngineers Develop Solid-state Fan That Puts Traditional Coolers to Shame\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eCool ! isn\u0026rsquo;t it ?\u003c/p\u003e\n\u003cp\u003eSoon we may be using 5 Ghz super cool processors in our desktop box :)\u003c/p\u003e","title":"Solid state cooling for processors"},{"content":"Today I was googling about Oracle Data Guard and came across an article on dbazine.com. I opened the link and it showed some stupid page. I checked again and again. Finally I noticed a message on top right:\ndbazine.com expired on 03/13/2008 and is pending renewal or deletion.\n:(\nIt was a nice website.\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/03/22/dbazine-com-has-expired/","summary":"\u003cp\u003eToday I was googling about Oracle Data Guard and came across an article on dbazine.com. I opened the link and it showed some stupid page. I checked again and again. Finally I noticed a message on top right:\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003edbazine.com\u003c/strong\u003e expired on 03/13/2008 and is pending renewal or deletion.\u003c/p\u003e\n\u003cp\u003e:(\u003c/p\u003e\n\u003cp\u003eIt was a nice website.\u003c/p\u003e","title":"dbazine.com has expired"},{"content":"I came across a very nice post about Autonomous Transactions in Oracle written by Kevin Meade on orafaq. Thought about sharing the link.\nHis blog also has some very nice stuff.\nComments Comment by Aman\u0026hellip;. on 2008-03-09 12:28:30 +0530 Nice one Sidhu.learned some thing new about AT.\n🙂\nCheers,\nAman….\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/03/04/autonomous-transactions-in-oracle/","summary":"\u003cp\u003eI came across a very nice post about Autonomous Transactions in Oracle written by \u003ca href=\"http://www.orafaq.com/blog/kevin_meade\"\u003eKevin Meade\u003c/a\u003e on \u003ca href=\"http://www.orafaq.com\"\u003eorafaq\u003c/a\u003e. Thought about sharing \u003ca href=\"http://www.orafaq.com/node/1915\"\u003ethe link\u003c/a\u003e.\u003c/p\u003e\n\u003cp\u003eHis \u003ca href=\"http://www.orafaq.com/blog/kevin_meade\"\u003eblog\u003c/a\u003e also has some very nice stuff.\u003c/p\u003e\n\u003ch2 id=\"comments\"\u003eComments\u003c/h2\u003e\n\u003ch3 id=\"comment-by-aman-on-2008-03-09-122830-0530\"\u003eComment by Aman\u0026hellip;. on 2008-03-09 12:28:30 +0530\u003c/h3\u003e\n\u003cp\u003eNice one Sidhu.learned some thing new about AT.\u003cbr\u003e\n🙂\u003cbr\u003e\nCheers,\u003cbr\u003e\nAman….\u003c/p\u003e","title":"Autonomous Transactions in Oracle"},{"content":"I came across a post about syntax highlighting for code in Wordpress on Thomas Roach\u0026rsquo;s blog who in turn saw it on Tyler Muth\u0026rsquo;s blog. There is a plugin called syntax highlighter which does all the beauty. 
Use is very simple:\nDownload it from here.\nUpload to your blog\u0026rsquo;s plugins folder and extract it over there.\nNow (on Wordpress) go to Plugins page and activate the plugin.\nTo use it wrap your code like\nand you are done.\nWanna see, how does it look like ;)\n[sourcecode language=\u0026lsquo;css\u0026rsquo;]SQL\u0026gt;drop database;[/sourcecode]\nIsn\u0026rsquo;t beautiful ;)\nComments Comment by NeilC on 2008-03-07 05:30:27 +0530 Hi Amardeep\ni’m new to blogging (starting today), and am trying to use WordPress, I have two questions\nWhere do I find my blogs plugins folder ? Where (on WordPress) do I find Plugins page to activate the plugin? many thanks\nComment by Sidhu on 2008-03-07 06:55:55 +0530 Hi Neil\nNot sure if you have signed up with wordpress.com for a blog or registered your own domain name and trying to setup blog using wordpress software.\nIf its latter then:\n1. Where do I find my blogs plugins folder ?\nIt will be in /blog/wp-content folder of your blog.\n2. Where (on WordPress) do I find Plugins page to activate the plugin?\nOn your Dashboard go to plugins page and from there you can activate the installed plugins.\nHope that helps…\nSidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/03/03/syntax-highlighting-for-code-in-wordpress/","summary":"\u003cp\u003eI came across \u003ca href=\"http://www.oraclerant.com/?p=26\"\u003ea post\u003c/a\u003e about syntax highlighting for code in Wordpress on \u003ca href=\"http://www.oraclerant.com\"\u003eThomas Roach\u0026rsquo;s blog\u003c/a\u003e who in turn saw it on \u003ca href=\"http://tylermuth.wordpress.com/\"\u003eTyler Muth\u0026rsquo;s blog.\u003c/a\u003e There is a plugin called \u003ca href=\"http://wordpress.org/extend/plugins/syntaxhighlighter/\"\u003esyntax highlighter\u003c/a\u003e which does all the beauty. Use is very simple:\u003c/p\u003e\n\u003cp\u003eDownload it from \u003ca href=\"http://downloads.wordpress.org/plugin/syntaxhighlighter.zip\"\u003ehere\u003c/a\u003e.\u003c/p\u003e\n\u003cp\u003eUpload to your blog\u0026rsquo;s plugins folder and extract it over there.\u003c/p\u003e\n\u003cp\u003eNow (on Wordpress) go to Plugins page and activate the plugin.\u003c/p\u003e\n\u003cp\u003eTo use it wrap your code like\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"/blog/wp-content/uploads/2008/03/code.JPG\" title=\"code.JPG\"\u003e\u003cimg alt=\"code.JPG\" loading=\"lazy\" src=\"/blog/wp-content/uploads/2008/03/code.JPG\"\u003e\u003c/a\u003e\u003c/p\u003e","title":"Syntax highlighting for code in WordPress"},{"content":"Today one of my colleague was working on a simple PL/SQL procedure. Based on some logic it was returning count(*) from all_tab_columns for few tables. It gave count incorrectly for one table out of around fifty in total. He just hard coded the table name and ran it but again it showed count as zero.\nThen he took the code out of procedure and wrote it in DECLARE, BEGIN, END and after running it showed the correct count. But ran as database procedure it always shows incorrectly.\nFinally just as hit and trial, he gave SELECT on the TABLE to database user [Table was in different schema], used to run the procedure and everything was ok. Isn\u0026rsquo;t it bit stupid :)\nUpdate: Well, it happens for a reason. Nigel Thomas pointed out in the comment. The reason is that privileges granted to a role are not seen from PL/SQL stored procedures. 
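A minimal sketch, with made-up object names, of how this shows up inside a definer-rights procedure and the two ways around it:
[sourcecode language='sql']
-- Inside a definer-rights procedure, privileges received via a role are not visible,
-- so this count can come back 0 even though the same query works in an anonymous block.

-- Workaround 1: grant the privilege directly to the procedure owner
GRANT SELECT ON other_schema.some_table TO proc_owner;

-- Workaround 2: compile the procedure with invoker rights
CREATE OR REPLACE PROCEDURE col_count (p_table IN VARCHAR2, p_cnt OUT NUMBER)
AUTHID CURRENT_USER
IS
BEGIN
  SELECT COUNT(*)
  INTO   p_cnt
  FROM   all_tab_columns
  WHERE  table_name = UPPER(p_table);
END col_count;
/
[/sourcecode]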
You need to give direct grant to the user for this or another method is to define the procedure or package with invoker rights.\nThanks Nigel :)\nComments Comment by Nigel Thomas on 2008-03-04 14:57:32 +0530 Well known feature: privileges granted to a role are not normally seen from PL/SQL stored procs, which execute as if you had SET ROLE NONE – see http://www.jlcomp.demon.co.uk/faq/plsql_privs.html. Workarounds are:\n– grant privileges directly to the user (what a pain)\n– define the procedure/package with invoker’s rights – see eg http://www.unix.org.ua/orelly/oracle/guide8i/ch03_01.htm\nRegards Nigel\nComment by Sidhu on 2008-03-04 20:13:55 +0530 Thanks Nigel\nInteresting !!!\nI have read about invoker rights but never thought in this direction…something new 🙂\nAlso adding your blog to my list 🙂\nSidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/03/03/missing-grants/","summary":"\u003cp\u003eToday one of my colleague was working on a simple PL/SQL procedure. Based on some logic it was returning count(*) from all_tab_columns for few tables. It gave count incorrectly for one table out of around fifty in total. He just hard coded the table name and ran it but again it showed count as zero.\u003c/p\u003e\n\u003cp\u003eThen he took the code out of procedure and wrote it in DECLARE, BEGIN, END and after running it showed the correct count. But ran as database procedure it always shows incorrectly.\u003c/p\u003e","title":"Missing grants"},{"content":"From Eddie\u0026rsquo;s blog I got a link to 3 posts on Regular Expressions on OTN written by CD. Wonderful stuff. Check out.\nPart 1 Part 2 Part 3\n\u0026amp; Thanks CD\u0026hellip;wonderful work buddy !\nComments Comment by Tyler on 2008-02-28 07:51:45 +0530 I’ve been working with regular expressions in PL/SQL a lot lately and the one tool that’s helped me more than any other is Regex Buddy (http://www.regexbuddy.com/). Yeah, it’s not free, but it was the best $40 I’ve spent. There are free options, like regex coach and a few free online options, but the problem is you have to find one that matches the Oracle specific Posix ERE format. Regex buddy even has a drop-down for Oracle syntax (http://www.regexbuddy.com/oracle.html). I have no affiliation with this tool whatsoever, just found it to be exceptionally useful.\nTyler\nComment by Sidhu on 2008-03-03 21:02:10 +0530 Thanks Tyler\nSurely, I will have a look at Regex Buddy 🙂\nSidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/02/27/learning-regular-expressions/","summary":"\u003cp\u003eFrom \u003ca href=\"http://awads.net/wp/2008/02/25/5-useful-links-for-2008-02-25/\" title=\"Eddie's blog\"\u003eEddie\u0026rsquo;s blog\u003c/a\u003e I got a link to 3 posts on Regular Expressions on OTN written by \u003ca href=\"http://www.l2is.com/apex/f?p=999:3:3754894570320873::NO::P3_NAME:ARTICLE46\" title=\"CD's blog\"\u003eCD\u003c/a\u003e. Wonderful stuff. 
Check out.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"http://forums.oracle.com/forums/thread.jspa?threadID=427716\" title=\"Part 1\"\u003ePart 1\u003c/a\u003e \u003ca href=\"http://forums.oracle.com/forums/thread.jspa?threadID=430647\" title=\"Part 2\"\u003ePart 2\u003c/a\u003e \u003ca href=\"http://forums.oracle.com/forums/thread.jspa?threadID=435109\" title=\"Part 3\"\u003ePart 3\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003e\u0026amp; Thanks CD\u0026hellip;wonderful work buddy !\u003c/p\u003e\n\u003ch2 id=\"comments\"\u003eComments\u003c/h2\u003e\n\u003ch3 id=\"comment-by-tyler-on-2008-02-28-075145-0530\"\u003eComment by Tyler on 2008-02-28 07:51:45 +0530\u003c/h3\u003e\n\u003cp\u003eI’ve been working with regular expressions in PL/SQL a lot lately and the one tool that’s helped me more than any other is Regex Buddy (\u003c!-- raw HTML omitted --\u003e\u003ca href=\"http://www.regexbuddy.com/\"\u003ehttp://www.regexbuddy.com/\u003c/a\u003e\u003c!-- raw HTML omitted --\u003e). Yeah, it’s not free, but it was the best $40 I’ve spent. There are free options, like regex coach and a few free online options, but the problem is you have to find one that matches the Oracle specific Posix ERE format. Regex buddy even has a drop-down for Oracle syntax (\u003c!-- raw HTML omitted --\u003e\u003ca href=\"http://www.regexbuddy.com/oracle.html\"\u003ehttp://www.regexbuddy.com/oracle.html\u003c/a\u003e\u003c!-- raw HTML omitted --\u003e). I have no affiliation with this tool whatsoever, just found it to be exceptionally useful.\u003c/p\u003e","title":"Learning Regular Expressions"},{"content":"Well, it was in my mind for some time, to register my domain name and finally its here :) . I have imported all the posts from my blog on blogger. I will be checking and editing all the posts for any kind of issues with wordpress. If you see anything out of the way do let me know :)\nComments Comment by Amritpal Singh on 2008-02-06 20:57:22 +0530 good work Sidhu saab, this proves JOSH haje hai baaki, so no tension, and Guru di kirpa naal chakki jao kam\nbest wishes on your new work.\ntake care\nComment by Sidhu on 2008-02-06 21:43:46 +0530 Thanks 🙂\nYea,hoping to keep it up 🙂\nComment by Aman\u0026hellip;. on 2008-02-06 21:50:34 +0530 cool 22g.keep up the good work and make sure to have notes ready for me 🙂\nAman….\nComment by Vaibhav Garg on 2008-02-08 19:51:08 +0530 Great stuff man. now remember content is more important not design 😉\nComment by Raphael on 2008-02-13 05:46:04 +0530 Hi Sidhu,\nI find your site very helpful. i have a question, is it possible to backup a database i.e cold backup on Linux server and to restore the database (datafiles) on a windows servers? in other words are the datafiles platform independent? how can i just copy all the files from Linux and restore it on windows?\nThank you\nRaphael\nComment by Yurtdisi Egitim on 2008-03-13 06:07:18 +0530 it seems like e very good web site but my English is not good. It would be great if it might be availible in other languages too. Thanks.\nComment by Sidhu on 2008-03-14 07:16:47 +0530 Hi Yurtdisi\nI just added translator function to my blog. See if its useful.\nThanks 🙂\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/02/06/welcome-to-my-new-website-blog/","summary":"\u003cp\u003eWell, it was in my mind for some time, to register my domain name and finally its here :) . I have imported all the posts from \u003ca href=\"http://amardeepsidhu.blogspot.com\" title=\"My blogger blog\"\u003emy blog\u003c/a\u003e on blogger. 
I will be checking and editing all the posts for any kind of issues with wordpress. If you see anything out of the way do let me know :)\u003c/p\u003e\n\u003ch2 id=\"comments\"\u003eComments\u003c/h2\u003e\n\u003ch3 id=\"comment-by-amritpal-singh-on-2008-02-06-205722-0530\"\u003eComment by Amritpal Singh on 2008-02-06 20:57:22 +0530\u003c/h3\u003e\n\u003cp\u003egood work Sidhu saab, this proves JOSH haje hai baaki, so no tension, and Guru di kirpa naal chakki jao kam\u003c/p\u003e","title":"Welcome to my new website \u0026 blog…"},{"content":"Howard has posted a pdf on Oracle Administration on his new website. Do check out. Its awesome.\nUpdate: Howard has shutdown his website, so unfortunately this pdf is not available.\nSidhu\nComments Comment by Amritpal Singh on 2008-01-22 11:03:00 +0530 paaji no pdf, all the content is password protected.\nComment by Sidhu on 2008-01-22 21:54:00 +0530 Due to some reasons Howard has password protected all the content 🙁\nRead more at\nhttp://tinyurl.com/yvypmj\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/01/03/oracle-from-dizwell/","summary":"\u003cp\u003eHoward has posted \u003ca href=\"http://www.dizwell.net/prod/concepts\"\u003ea pdf\u003c/a\u003e on Oracle Administration on his \u003ca href=\"http://www.dizwell.net/prod/\"\u003enew website\u003c/a\u003e. Do check out. Its awesome.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eUpdate:\u003c/strong\u003e Howard has \u003ca href=\"http://dizwell.net/\"\u003eshutdown\u003c/a\u003e his website, so unfortunately this pdf is not available.\u003c/p\u003e\n\u003cp\u003eSidhu\u003c/p\u003e\n\u003ch2 id=\"comments\"\u003eComments\u003c/h2\u003e\n\u003ch3 id=\"comment-by-amritpal-singh-on-2008-01-22-110300-0530\"\u003eComment by Amritpal Singh on 2008-01-22 11:03:00 +0530\u003c/h3\u003e\n\u003cp\u003epaaji no pdf, all the content is password protected.\u003c/p\u003e\n\u003ch3 id=\"comment-by-sidhu-on-2008-01-22-215400-0530\"\u003eComment by Sidhu on 2008-01-22 21:54:00 +0530\u003c/h3\u003e\n\u003cp\u003eDue to some reasons Howard has password protected all the content 🙁\u003c/p\u003e\n\u003cp\u003eRead more at\u003c/p\u003e\n\u003cp\u003e\u003c!-- raw HTML omitted --\u003e\u003ca href=\"http://tinyurl.com/yvypmj\"\u003ehttp://tinyurl.com/yvypmj\u003c/a\u003e\u003c!-- raw HTML omitted --\u003e\u003c/p\u003e","title":"Oracle from Dizwell…"},{"content":"At my workplace we were facing a problem with refresh of a mview. Say it was created in schema of user1 but when I tried to refresh it from user2 it would give ORA-03113: end-of-file on communication channel. Then we raised a SR and have been following up with Oracle support for long but it was not getting anywhere. Yesterday that guy seemed to have reached some point. The mviews that we have created and are having problem with refresh are created on top of both local \u0026amp; remote objects and he said that up to 11gr2 there is no possibility of creating mviews on both local and remote objects. I did validate this thing. All the mviews failing to refresh are created on top of both local \u0026amp; remote objects. But again from the owner the refresh is fine but from another user it gives problem. By the way that guy hinted at bug 4084125 and also suggested a work around. I haven\u0026rsquo;t tried that yet. Will try and update about the results.\nSidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2008/01/03/ora-03113-refresh-of-a-mview-in-oracle-10g/","summary":"\u003cp\u003eAt my workplace we were facing a problem with refresh of a mview. 
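The failing pattern described above, sketched with hypothetical object and database-link names:
[sourcecode language='sql']
-- A materialized view built over a local table joined to a remote one through a db link
CREATE MATERIALIZED VIEW user1.sales_mv AS
  SELECT s.item_id, r.region_name
  FROM   user1.sales       s,
         regions@remote_db r
  WHERE  s.region_id = r.region_id;

-- Refreshing as the owner worked; the same call from another user raised ORA-03113
EXEC DBMS_MVIEW.REFRESH('USER1.SALES_MV')
[/sourcecode]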
Say it was created in schema of user1 but when I tried to refresh it from user2 it would give ORA-03113: end-of-file on communication channel. Then we raised a SR and have been following up with Oracle support for long but it was not getting anywhere. Yesterday that guy seemed to have reached some point. The mviews that we have created and are having problem with refresh are created on top of both local \u0026amp; remote objects and he said that up to 11gr2 there is no possibility of creating mviews on both local and remote objects. I did validate this thing. All the mviews failing to refresh are created on top of both local \u0026amp; remote objects. But again from the owner the refresh is fine but from another user it gives problem. By the way that guy hinted at \u003ca href=\"https://metalink.oracle.com/metalink/plsql/f?p=130:15:3284197721413823598::::p15_database_id,p15_docid,p15_show_header,p15_show_help,p15_black_frame,p15_font:BUG,4084125,1,1,1,helvetica\"\u003ebug 4084125\u003c/a\u003e and also suggested a work around. I haven\u0026rsquo;t tried that yet. Will try and update about the results.\u003c/p\u003e","title":"ORA-03113 Refresh of a mview in Oracle 10g"},{"content":"Last week I attended a training on AIX system administration from IBM (organized by company, obviously ;) It was a 7 days course covering all of the system administration stuff. There was a lot of new stuff to learn, LVM being the most number of times uttered word, once we did the chapter on LVM. It was a nice experience as a whole as for the first time I attended any training on Unix.\nThe sessions (specially after lunch) were sleepy also. I find this ppt method of training pretty boring. The trainer (most of) strictly, stupidly follows the slides and slide, I feel is a dumb sort of thing, makes you feel sleepy except at the moments when there are few eye opening bullet points.\nThere should be bare minimum number of slides and for rest of the things, trainer should use white board so that everybody follows that and doesn\u0026rsquo;t sleep :)\nAnyways it was really enjoying to be familiar with so many things in Unix.\nSidhu\nComments Comment by Aman Sharma on 2007-11-29 16:39:00 +0530 Hi there,\nWell it actually depends.If you ever come to us in Oracle Univ , we very STRICTLY follow the same method but yes the flow of the slides or the knowledge shared is solely dependant on the trainer.Slides are just ment for references for the trainer and audience.I dont follow them that only what they contain will be told but yeah what they contain will be the main focus area rest will be just ‘touched’.There is always a schedule management that we have to do.Well I agree some what that training that too a full time is little boring.\nGuaranteed that you will not sleep in my class as my voice will not let you ;-).\nCheers,\nAman….\nComment by Sidhu on 2007-11-29 23:34:00 +0530 Yup true…depends on the trainer…but this typical ppt method is dumb…there has to be very high level of interaction and smartness…the trainer has to be very active…funny….as bearing this technical stuff (out of which not all you love) for whole of the day sitting in a classroom like kids is just impossible…\nyou have to crack jokes at times…make people feel lighter…\nyou have to make the flow very smooth…\n\u0026amp; most importantly you have to be an expert in your domain…that generates a lot of confidence in you and “feel good” factor in audience…\nIn whole of ma life….in those typical dumb type sessions…mostly I have been sleeping except where there 
was a chance for some leg pulling 😉\nSidhu\nComment by Aman Sharma on 2007-12-02 00:41:00 +0530 Agreed,as I said it depends again on the trainer itself.\nWell everyone is a different person in the training.I shallnot reveal my style here as its a live experience that one has to come and get ( just kidding ) but still I agree that ears of the audience starts bleeding when they hear so much tech jargons for almost 9 hours.\nPpts as I mentioned are just a way to organize time and material, they shouldnt be the sole method of training.Hmmm I guess time is there to make my sales team call your training dept and arrange some trainings for you guys.May be OU experience will change some things ;-).\nLastly, you know how it feels on the other side too(trainer side).You have been there done that right.Its not so easy believe me.Even if you are teaching the simplest subject to the dumbest audience too , its not so easy.But again,he the trainer is the caption of the cruise so he has to be for sure the best to keep the spirits alive of all aboard.\nCheers,\nAman….\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/11/27/trained-in-aix/","summary":"\u003cp\u003eLast week I attended a training on AIX system administration from IBM (organized by company, obviously ;) It was a 7 days course covering all of the system administration stuff. There was a lot of new stuff to learn, LVM being the most number of times uttered word, once we did the chapter on LVM. It was a nice experience as a whole as for the first time I attended any training on Unix.\u003c/p\u003e","title":"Trained in AIX…"},{"content":"Eddie Awad started a new series Oracle in 3 minutes on his blog. In the first post he has discussed about multi-versioning. Its a must watch for everyone who is working on Oracle. Hoping to get more of such stuff from Eddie.\nSidhu\nComments Comment by Amritpal Singh on 2008-01-22 11:05:00 +0530 nice man, keep posting all the good work.\nComment by Sidhu on 2008-01-22 21:55:00 +0530 Thanks man 🙂\nSidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/11/27/oracle-from-eddie-awad/","summary":"\u003cp\u003e\u003ca href=\"http://awads.net/wp/\"\u003eEddie Awad\u003c/a\u003e started a new series \u003ca href=\"http://awads.net/wp/2007/11/26/oracle-in-3-minutes-multi-versioning/\"\u003eOracle in 3 minutes\u003c/a\u003e on his \u003ca href=\"http://awads.net/wp/\"\u003eblog\u003c/a\u003e. In the first post he has discussed about multi-versioning. Its a must watch for everyone who is working on Oracle. Hoping to get more of such stuff from Eddie.\u003c/p\u003e\n\u003cp\u003eSidhu\u003c/p\u003e\n\u003ch2 id=\"comments\"\u003eComments\u003c/h2\u003e\n\u003ch3 id=\"comment-by-amritpal-singh-on-2008-01-22-110500-0530\"\u003eComment by Amritpal Singh on 2008-01-22 11:05:00 +0530\u003c/h3\u003e\n\u003cp\u003enice man, keep posting all the good work.\u003c/p\u003e\n\u003ch3 id=\"comment-by-sidhu-on-2008-01-22-215500-0530\"\u003eComment by Sidhu on 2008-01-22 21:55:00 +0530\u003c/h3\u003e\n\u003cp\u003eThanks man 🙂\u003c/p\u003e\n\u003cp\u003eSidhu\u003c/p\u003e","title":"Oracle from Eddie Awad…"},{"content":"Howard Rogers has done a post on his blog that is the site Dizwell worth it ? Do post your opinion. 
If he plans to close it down, we will miss a great resource for Oracle.\nSidhu\nComments Comment by Aman Sharma on 2007-10-23 09:20:00 +0530 Fingers crossed that he wont do that and thats the voice of all(almost) too!Hope he takes the right decision.\nCheers,\nAman….\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/10/23/howards-decision/","summary":"\u003cp\u003e\u003ca href=\"http://www.dizwell.com/\"\u003eHoward Rogers\u003c/a\u003e has done \u003ca href=\"http://www.dizwell.com/prod/node/1058\"\u003ea post\u003c/a\u003e on his \u003ca href=\"http://www.dizwell.com/prod/blog\"\u003eblog\u003c/a\u003e that is the site \u003ca href=\"http://www.dizwell.com/\"\u003eDizwell\u003c/a\u003e worth it ? Do post your opinion. If he plans to close it down, we will miss a great resource for Oracle.\u003c/p\u003e\n\u003cp\u003eSidhu\u003c/p\u003e\n\u003ch2 id=\"comments\"\u003eComments\u003c/h2\u003e\n\u003ch3 id=\"comment-by-aman-sharma-on-2007-10-23-092000-0530\"\u003eComment by Aman Sharma on 2007-10-23 09:20:00 +0530\u003c/h3\u003e\n\u003cp\u003eFingers crossed that he wont do that and thats the voice of all(almost) too!Hope he takes the right decision.\u003cbr\u003e\nCheers,\u003cbr\u003e\nAman….\u003c/p\u003e","title":"Howard’s decision…"},{"content":"Tom Kyte did a post on his blog about posting of reviews of questions on Asktom. In a nutshell the reviews not related to the original question will be ignored/deleted (not decided yet, as Tom said).\nAs other people said in the comments, personally I too like this idea very much. Earlier, many times, there were questions which started with someone asking about appropriate SGA size, then there were some other twists and discussions and then the thread ended in discussion about good or bad authors or something similar light years away from the original topic.\nNow this action will make the discussions flow in a very controlled and neat \u0026amp; clean manner, all about the original topic. Hoping to see all the \u0026ldquo;great content\u0026rdquo; in a very orderly manner :)\nSidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/10/16/asking-tom-read-this/","summary":"\u003cp\u003e\u003ca href=\"http://tkyte.blogspot.com/\"\u003eTom Kyte\u003c/a\u003e did \u003ca href=\"http://tkyte.blogspot.com/2007/10/new-policy-of-sorts.html\"\u003ea post\u003c/a\u003e on \u003ca href=\"http://tkyte.blogspot.com/\"\u003ehis blog\u003c/a\u003e about posting of reviews of questions on \u003ca href=\"http://asktom.oracle.com/\"\u003eAsktom\u003c/a\u003e. In a nutshell the reviews not related to the original question will be ignored/deleted (not decided yet, as Tom said).\u003c/p\u003e\n\u003cp\u003eAs other people said in the comments, personally I too like this idea very much. Earlier, many times, there were questions which started with someone asking about appropriate SGA size, then there were some other twists and discussions and then the thread ended in discussion about good or bad authors or something similar light years away from the original topic.\u003c/p\u003e","title":"Asking Tom ? Read this !"},{"content":"Today I was checking Eddie Awad\u0026rsquo;s blog. From there I came to know that Oracle has started Official Oracle Wiki hosted by Wetpaint. Check out. I just made my login. 
Hoping to contribute whatever little I can\nSidhu\nComments Comment by Frank on 2007-11-09 03:10:00 +0530 Another GREAT wiki to contribute to is http://www.orafaq.com/wiki\nComment by Sidhu on 2007-11-15 07:48:00 +0530 Yea Frank\nIndeed this website orafaq.com is a good one. Pretty good articles and content.\nSidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/10/15/official-oracle-wiki/","summary":"\u003cp\u003eToday I was checking \u003ca href=\"http://awads.net/wp\"\u003eEddie Awad\u0026rsquo;s blog\u003c/a\u003e. From there I came to know that \u003ca href=\"http://www.oracle.com/\"\u003eOracle\u003c/a\u003e has started \u003ca href=\"http://wiki.oracle.com/\"\u003eOfficial Oracle Wiki\u003c/a\u003e hosted by \u003ca href=\"http://www.wetpaint.com/\"\u003eWetpaint\u003c/a\u003e. Check out. I just made my login. Hoping to contribute whatever little I can\u003c/p\u003e\n\u003cp\u003eSidhu\u003c/p\u003e\n\u003ch2 id=\"comments\"\u003eComments\u003c/h2\u003e\n\u003ch3 id=\"comment-by-frank-on-2007-11-09-031000-0530\"\u003eComment by Frank on 2007-11-09 03:10:00 +0530\u003c/h3\u003e\n\u003cp\u003eAnother GREAT wiki to contribute to is \u003c!-- raw HTML omitted --\u003e\u003ca href=\"http://www.orafaq.com/wiki\"\u003ehttp://www.orafaq.com/wiki\u003c/a\u003e\u003c!-- raw HTML omitted --\u003e\u003c/p\u003e\n\u003ch3 id=\"comment-by-sidhu-on-2007-11-15-074800-0530\"\u003eComment by Sidhu on 2007-11-15 07:48:00 +0530\u003c/h3\u003e\n\u003cp\u003eYea Frank\u003c/p\u003e\n\u003cp\u003eIndeed this website orafaq.com is a good one. Pretty good articles and content.\u003c/p\u003e","title":"Official Oracle wiki"},{"content":"Today, I was following a thread on Oracle Forums. Someone asked a question about UNDO tablespace wrt to a scenario. The question was:\nThere is a database and its hot backup is taken on Friday. Now for Saturday, Sunday and Monday there are archive logs but no backups. Suppose the machine crashes on Monday. After we restore the database to Friday (from backup), recovery will happen. As UNDO tablespace is of Friday so it has no information related to transactions that happened on Saturday, Sunday and Monday. So in the end of recovery process when we need to rollback some transactions from where that required information will come ?\nHoward did 3 beautiful follow ups of this post, explaining how UNDO works. Just saving it here for quick reference. Hope its no copyright mess :)\nFollow up 1:\nYes, it can certainly be confusing, especially when you get told completely incorrect information! As has already (and thankfully!) been pointed out, redo logs contain redo change records from both committed and uncommitted transactions.\nThe answer to your question is that as we re-perform transactions by applying redo in a recovery session, we redo exactly what would have been done when the transactions were first performed. That is, we\u0026rsquo;d see a redo change record that says (in effect) update EMP set sal=900 where ename=\u0026lsquo;Bob\u0026rsquo;, so we\u0026rsquo;d find the Bob record in the restored Friday copy of the data file, and we\u0026rsquo;d lock that record. Then we\u0026rsquo;d store details of Bob\u0026rsquo;s existing salary in an undo block. Then we\u0026rsquo;d store the new and old salaries in redo (yup, recovery generates redo!). Then we\u0026rsquo;d change the Bob record itself.\nIf that\u0026rsquo;s all that\u0026rsquo;s in the archives and online redo logs, that\u0026rsquo;s all that happens: Bob\u0026rsquo;s record is left locked and changed. 
At the end of the recovery process, we realise that a lot of re-performed transactions need rolling back, so SMON does that\u0026hellip;. and it knows what to roll the stuff back to because in re-performing the transactions, we generated fresh undo.\nSo your question says, \u0026ldquo;how does the uncommited transactions rolled back with the Friday undo tablespace since they do [not] have latest uncommited trnsactions\u0026rdquo;, but that\u0026rsquo;s not right. The undo tablespace certainly STARTS at the state it was in on Friday. But as recovery proceeds, it gets \u0026lsquo;freshened up\u0026rsquo;, because the new transactions generate fresh undo.\n(A slightly more accurate description would be to point out that when you generated undo on Saturday and Sunday, those changes to the undo blocks would themselves have generated redo. Therefore, your redo stream, archives and online alike, have the necessary information to recover the undo tablespace. Personally, I don\u0026rsquo;t find that any more informative than thinking that applying redo generates fresh undo, but it\u0026rsquo;s up to you which mental model you prefer to work with).\nFollow up 2:\n\u0026gt;does \u0026ldquo;recovery generates redo\u0026rdquo; mean that during recovery we regenerate the same amount \u0026gt;of redo that was generated since the last backup\nNo, it\u0026rsquo;s not the same. If you do a simple test, you\u0026rsquo;ll see that. Update EMP, commit, check with Log Miner that the redo is in the logs in analyzable form. The blow up your database, restore it and recover it. Your log sequence number will have moved on, redo will have been generated\u0026hellip; but you can mine the logs till Christmas and you won\u0026rsquo;t find a second \u0026lsquo;update EMP\u0026rsquo; set of redo records.\nThat\u0026rsquo;s why I mentioned the \u0026lsquo;more accurately\u0026rsquo; bit further on in my original reply. Recovery is a bit more subtle than just sitting there issuing a lot of insert/update/delete statements as if from the keyboard of an incredibly fast typist! Metaphorically you can say, \u0026ldquo;We repeat transactions during recovery\u0026rdquo;. But actually, it\u0026rsquo;s \u0026ldquo;we apply redo change vectors\u0026rdquo;\u0026hellip; and that doesn\u0026rsquo;t generate a one-for-one amount of redo as the original transactions did.\n\u0026gt;Will there be duplicate archive logs then?\nNo. Do the Log Mining test to see this for yourself. The updates you did before the blow-up will not be visible in the logs from after (or during) the recovery.\n\u0026gt;Can we say that the undo tablespace starts from scratch with no undo at all\nI\u0026rsquo;m not sure what you\u0026rsquo;re getting at, but an undo segment is just a special sort of table, and it\u0026rsquo;s got data stored in it, even if the transactions that placed them there are long-since finished. So when you restore that file on Monday, it comes back in the state it was in when it was backed up -with data in it. Leaving aside undo_retention for a moment, it follows from the fact that you\u0026rsquo;re doing a database recovery that none of the stuff inside the rollback segments is related to \u0026ldquo;live\u0026rdquo; transactions, therefore all of it is over-writeable. So in that sense, yes, you could say the undo tablespace, just after restore and just before recovery, is \u0026lsquo;clean\u0026rsquo;.\nFollow up 3:\nI didn\u0026rsquo;t give \u0026ldquo;2 ideas\u0026rdquo;. I gave just one. 
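(For anyone who wants to actually try the Log Miner check mentioned in the follow up above, a rough outline is below. It is only a sketch: the redo log path, schema and credentials are made up, and it assumes the online catalog can be used as the LogMiner dictionary.)
[sourcecode language='css']
# Sketch only: log file path, SID and object names are invented.
export ORACLE_SID=MYDB
sqlplus -s "/ as sysdba" <<EOF
begin
  dbms_logmnr.add_logfile(logfilename => '/u01/oradata/MYDB/redo01.log',
                          options     => dbms_logmnr.new);
  dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);
end;
/
-- look for the original "update EMP" change records in the mined redo
select operation, sql_redo from v\$logmnr_contents where seg_name = 'EMP';
exec dbms_logmnr.end_logmnr
EOF
[/sourcecode]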
I just happened to give you the option of thinking of it in two different ways.\nApplying redo causes transactions to be re-performed. That is the only, solo, number 1, all on its own, lonely idea being conveyed here.\nNow you can describe the re-performance of transactions either as \u0026ldquo;re-doing the transactions\u0026rdquo; as if some virtual user were sitting there typing insert, update and delete statements maniacally fast. That\u0026rsquo;s the way I usually think of it, largely because that\u0026rsquo;s what it looks like when you use Log Miner to peer inside the logs. Or you can think of it in a slightly more technical and accurate way, of taking the description of changes to byte-data which the redo change vectors represent and re-applying those changes.\nThey\u0026rsquo;re both descriptions of exactly the same process with the same outcome. They just happen to use different words to describe them because some people\u0026rsquo;s mental models of what is happening work better with one than the other. It doesn\u0026rsquo;t help, however, to start thinking that somehow I\u0026rsquo;ve described two different processes.\nBy way of a rather stretched analogy. Suppose you are a photographer. You take a lovely colour photograph of a landscape. You make a couple of copies of this photograph. The copy you framed and took especial care of one day gets damaged. Now, you can either repair that photo by going back to the scene, setting up the camera in the exact spot as before, waiting until the light conditions are exactly as they were, and then re-taking the shot. Or you can take the existing photo, compare it with another copy of the photo, and where you are lacking red/green/blue information in the damaged picture, you \u0026lsquo;paint in\u0026rsquo; the corresponding red/blue/green values determined from the other print. Either way, you end up with a new photograph that looks like the old one did, and that\u0026rsquo;s the important thing: you end up with a restored print to hang back on your wall.\nTechnically, Oracle recoveries apply deltas to existing data, rather than re-performing transactions as if done by a human being with very fast fingers. So that\u0026rsquo;s the \u0026ldquo;truth\u0026rdquo; if you want to think in those terms. But the core fact is, recovery restores data and it doesn\u0026rsquo;t really matter precisely HOW that\u0026rsquo;s done: whatever description we give for the process will be \u0026ldquo;wrong\u0026rdquo;, in either case, because only people who have seen the source code know *actually* how it\u0026rsquo;s done. So, I just as happily thing of \u0026ldquo;very fast fingers\u0026rdquo; as \u0026ldquo;applying deltas\u0026rdquo;, and in fact, I prefer that mental model. I just gave you the choice of models, in other words. But don\u0026rsquo;t, please, confuse that with their being two mechanisms.\nI have no idea why on Earth anyone participating in this thread is so hung up about the undo tablespace. The central question appears to be, \u0026lsquo;How can an undo tablespace from Friday help in the recovery of a database the following Monday\u0026rsquo;. But I explained that the first time and 20 posts ago! Very simply, during a recovery, the undo tablespace gets rolled forward like any other part of the database. It gets redo applied to it just as much as the USERS tablespace does. At the end of the rolled forward process, your undo tablespace is effectively a Monday tablespace. It\u0026rsquo;s fresh. 
It\u0026rsquo;s got all the undo generated by the weekend\u0026rsquo;s transactions in it. And there\u0026rsquo;s a bunch of transactions which were uncomitted at the time the database blew up, so although recovery blindly replayed those transactions and therefore re-performed them, they are left there in an uncommitted state and SMON goes ahead and rolls them back (users also help, but SMON does the bulk of the work).\nAnd SMON knows what to roll the uncommitted transactions back to because all the undo needed to do that was FRESHLY created by the recovery\u0026rsquo;s roll forward phase.\nYou don\u0026rsquo;t need to read anything into the sentence in bold that isn\u0026rsquo;t there in plain English. You have to stop looking for deep and secret meanings and just read the words that are there: Your redo, whether it comes from the online logs or the archives, contains all the information necessary to recover the undo tablespace. Just as it has all the information necessary to recover ANY tablespace, in fact.\nThink of an undo segment just as if it were the EMP segment, or the DEPT segment. You don\u0026rsquo;t seem to be bothered about EMP or DEPT being restored from Friday and yet managing to be recovered to the state they were in on Monday. Neither should anyone be surprised that a Friday undo segment can be transformed into a Monday undo segment. That is, after all, what recovery does.\nRecovery is the application of redo to a datafile to make it and the segments it contains -be they \u0026ldquo;ordinary\u0026rdquo; segments like tables and indexes, or more \u0026ldquo;exotic\u0026rdquo; segments like undo segments- more up\u0026ndash;to-date. That means recovery is the \u0026ldquo;rolling forward in time of a datafile\u0026rdquo;.\nBut when we start rolling forward, we can\u0026rsquo;t predict the future. So when I see \u0026ldquo;update EMP set sal=900\u0026rdquo; in the redo stream, I do not yet know whether you managed to commit that or not. I can\u0026rsquo;t see ahead. So I just blindly re-play that update and keep my fingers crossed. And I do that for every transaction recorded in the redo stream. And in the process of replaying that transaction, I also re-generate the fact that the original salaries in the EMP table were 800\u0026hellip; which means I\u0026rsquo;ve just re-generated the undo for the EMP transaction.\nOnly at the end of the roll forward phase can I look around and see, \u0026lsquo;Ah, that one wasn\u0026rsquo;t committed; neither was that one; and this one wasn\u0026rsquo;t even finished when the database blew up\u0026rsquo;. I therefore set SMON to work rolling those uncommitted transactions back\u0026hellip; and it is at THAT point that the undo tablespace, storing that freshly-recreated undo, becomes vital for completing the recovery process.\nIn words of few syllables: every recovery requires a roll forward and a roll back phase. Redo lets us roll things forward. Undo allows us to do the roll back. 
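Pulling the Friday-to-Monday scenario from the original question together, the recovery itself boils down to something roughly like this. It is only a sketch: it assumes the Friday hot-backup copies of the datafiles have already been restored over the lost ones, that every archive log since Friday is available, and the SID is made up.
[sourcecode language='css']
# Sketch only: assumes the Friday hot-backup datafile copies are already
# restored in place and all archives since Friday are available.
export ORACLE_SID=MYDB
sqlplus -s "/ as sysdba" <<EOF
startup mount
set autorecovery on
-- roll forward: applies the Saturday-to-Monday redo, regenerating undo as it goes
recover database
-- on open, SMON rolls back whatever was replayed but never committed,
-- using the undo that the roll forward has just rebuilt
alter database open;
EOF
[/sourcecode]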
The undo tablespace is vital to recovering a database, therefore, because without it, half the job couldn\u0026rsquo;t be done.\nSidhu\nComments Comment by Aman Sharma on 2007-10-14 17:41:00 +0530 Hi sidhu,\nHoward is a GREAT GREAT teacher and extremely knowledgeable person.I welcome you to read these two topics.I am sure you will be delighted with his wisdom.And the same goes for Tim also.\nHave a read:\nAbout SCNs internals\nhttp://www.dizwell.com/prod/node/1003\nAbout FAST_START_MTTR_TARGET internals\nhttp://www.dizwell.com/prod/node/1040\nI am sure you will laugh at my little knowledge when will read the questions :)!\nThanks for the thread!It was amazing to read it!\nCheers,\nAman….\nComment by Sidhu on 2007-10-14 17:48:00 +0530 True Aman,\nI am a big fan of Howard’s writings. He has got a unique way of writing things. And the thing I love the most is that he never writes a thing without doing full R \u0026amp; D on it. And when he writes, blows everything away 🙂\nWill check those posts.\nSidhu\nComment by Aman Sharma on 2007-10-14 17:59:00 +0530 Sure do give a check!I am sure you will like them!Do let me know your response after reading both.\nCheers,\nAman….\nPS:Enable word verification on your blog!I guess its required looking at the 1st spam comment.\nComment by Aman Sharma on 2007-10-15 20:17:00 +0530 hi sidhu,\ncan you pass me the translation code?I some how not able to make it work?\nCheers,\nAman….\nComment by Sidhu on 2007-10-17 08:37:00 +0530 Aman\nThis is the first spam comment, so probably time to wake up 🙂\nThat translation thing…lol…I don’t even know the ABC of this “web” thing, so this was a copy paste from some link I cant find right now. Check out this http://labnol.blogspot.com/2006/11/add-google-translation-flags-to-your.html\nProbably its the same one. If it doesn’t work, let me know, will find out some way 🙂\nSidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/10/09/undo-tablespace-in-oracle/","summary":"\u003cp\u003eToday, I was following \u003ca href=\"http://forums.oracle.com/forums/thread.jspa?threadID=571079\"\u003ea thread\u003c/a\u003e on \u003ca href=\"http://forums.oracle.com\"\u003eOracle Forums\u003c/a\u003e. Someone asked a question about UNDO tablespace wrt to a scenario. The question was:\u003cbr\u003e\nThere is a database and its hot backup is taken on Friday. Now for Saturday, Sunday and Monday there are archive logs but no backups. Suppose the machine crashes on Monday. After we restore the database to Friday (from backup), recovery will happen. As UNDO tablespace is of Friday so it has no information related to transactions that happened on Saturday, Sunday and Monday. So in the end of recovery process when we need to rollback some transactions from where that required information will come ?\u003c/p\u003e","title":"UNDO tablespace in Oracle…"},{"content":"Being a new kid on the block, I think, my post will not fire any serious and \u0026ldquo;scary\u0026rdquo; discussion as it has happened many times in the past. Just writing my experience. Whenever I search something related to Oracle in Google there are few sites that are bound to come up in the very first results ( No surety about the relevance and completeness of the content, though). Today I was searching about scheduling new jobs in oracle and I was, sort of, surprised to see the results [Though, I got from there, what I was looking for \u0026amp; I would like to say Thanks for that]. 
Many other times also, I have seen these websites pop up like anything.\nWhen I had started my job and after that put the CV on few job sites. Everybody used to say: put more number of keywords in the CV as their search bots select the resumes on the basis of keywords only. Perhaps same thing applies here also. They have included each and every possible keyword in Oracle on their websites [\u0026amp; I think in a better way than it\u0026rsquo;s been done in Oracle documentation ;)]. If its not one of their own websites then some books website (that also their own, obviously) will come up [\u0026amp; the keywords matched here are from table of contents or some portion from some chapter Ctrl+C\u0026rsquo;ed and Ctrl+V\u0026rsquo;ed there] with advertisement all around imitating the big bang universe theory. Some special experience, with making websites Google friendly, they have got ;)\nSidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/09/19/google-friendliness-of/","summary":"\u003cp\u003eBeing a new kid on the block, I think, my post will not fire any serious and \u0026ldquo;scary\u0026rdquo; discussion as it has happened many times in the past. Just writing my experience. Whenever I search something related to Oracle in Google there are few sites that are bound to come up in the very first results ( No surety about the relevance and completeness of the content, though). Today I was searching about scheduling new jobs in oracle and I was, sort of, surprised to see \u003ca href=\"http://www.google.co.in/search?hl=en\u0026amp;client=firefox-a\u0026amp;rls=org.mozilla%3Aen-US%3Aofficial\u0026amp;hs=JQ1\u0026amp;q=oracle+new+job\u0026amp;btnG=Search\u0026amp;meta=\"\u003ethe results\u003c/a\u003e [Though, I got from there, what I was looking for \u0026amp; I would like to say Thanks for that]. Many other times also, I have seen these websites pop up like anything.\u003c/p\u003e","title":"Google friendliness of …"},{"content":"I was searching for some good tutorials on awk. Found a very nice (brilliant indeed) article on Oracle website by Emmett Dulaney. A very good introduction for beginners. I searched for some other links as well. Have a read:\n1. AWK: The Linux Administrators\u0026rsquo; Wisdom Kit\n2. A Guided Tour Of Awk\n3. AWK Programming\n4. UNIX Utilities - awk\nHappy awk\u0026rsquo;ing :)\nSidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/09/17/learning-awk/","summary":"\u003cp\u003eI was searching for some good tutorials on awk. Found a very nice (brilliant indeed) article on Oracle website by Emmett Dulaney. A very good introduction for beginners. I searched for some other links as well. Have a read:\u003c/p\u003e\n\u003cp\u003e1. \u003ca href=\"http://www.oracle.com/technology/pub/articles/dulaney_awk.html\"\u003eAWK: The Linux Administrators\u0026rsquo; Wisdom Kit\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003e2. \u003ca href=\"http://www.vectorsite.net/tsawk_1.html\"\u003eA Guided Tour Of Awk\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003e3. \u003ca href=\"http://www.softpanorama.org/Tools/awk.shtml\"\u003eAWK Programming\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003e4. \u003ca href=\"http://www.uga.edu/%7Eucns/wsg/unix/awk/#ee\"\u003eUNIX Utilities - awk\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eHappy awk\u0026rsquo;ing :)\u003c/p\u003e\n\u003cp\u003eSidhu\u003c/p\u003e","title":"Learning AWK…"},{"content":"Today Stumble gave me a link to an article on a website. 
That article talks about a guy Virgil Griffith who wrote a piece of software to track IP addresses of the people, editing the Wikipedia content. Some really interesting things came up: Apple attacking Microsoft and then Microsoft taking revenge. Have a look at the full article. It makes an interesting read. Read more about this guy on Wikipedia here.\nSidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/09/16/tracing-wikipedia/","summary":"\u003cp\u003eToday \u003ca href=\"http://www.stumbleupon.com/\"\u003eStumble\u003c/a\u003e gave me a link to \u003ca href=\"http://www.maltastar.com/pages/msFullArt.asp?an=14323\"\u003ean article\u003c/a\u003e on a website. That article talks about a guy \u003ca href=\"http://virgil.gr/\"\u003eVirgil Griffith\u003c/a\u003e who wrote a piece of software to track IP addresses of the people, editing the \u003ca href=\"http://en.wikipedia.org/wiki/Virgil_Griffith\"\u003eWikipedia\u003c/a\u003e content. Some really interesting things came up: Apple attacking Microsoft and then Microsoft taking revenge. Have a look at the full article. It makes an interesting read. Read more about this guy on Wikipedia \u003ca href=\"http://en.wikipedia.org/wiki/Virgil_Griffith\"\u003ehere\u003c/a\u003e.\u003c/p\u003e\n\u003cp\u003eSidhu\u003c/p\u003e","title":"Tracing Wikipedia…"},{"content":"Few days back, I had to give Oracle DBA training to a group of about 20-25 semi-technical people (Semi-technical, because most of them were not really DBA kinda folks doing all the techie stuff with Oracle, but in-fact having learned some bits \u0026amp; bytes of Oracle sometime back and these days looking into application functionality from technical perspective). I, having just about 10 months of experience with DBA profile, had to cover, everything about Oracle starting from creating database and up to performance tuning :) Its really an interesting job if the audience is good. But not an easy game. You have to know everything and have to be ready to answer people\u0026rsquo;s queries (some stupid \u0026amp; dumb questions also :) I had to cover a total of about 350 slides in a day or less. So at times, really went fast and skipping some of things. It was a nice experience as a whole :)\n\u0026amp; in the end when I finished it, was dead tired :(\nHats off to the trainers, who stand for the full day and that too for many days continuously :)\nSidhu\nComments Comment by RollerCoaster on 2007-08-19 19:58:00 +0530 this is great man…\ngood achievement.\nteaching something does mean that u r very good at it. wow.\nComment by Aman Sharma on 2007-09-15 18:09:00 +0530 Hi sidhu,\nGreat to hear that you tasted the fun of training.Being a trainer myself for Oracle Database Technologies for more than 4 years, I completely agree that training kills you after the session ends but its a nice thing to see that light bulb on over some body when they understand something so complex as like Oracle Database Technologies.Isnt it?\nCheers,\nAman….\nComment by Sidhu on 2007-09-16 09:07:00 +0530 Yeap Aman\nLiterally it kills. At the end of the day my head was screwed up like anything. But you do enjoy it and increase your clarity of concepts many folds after answering peoples’ questions. 
But being a sort of new-bie [it has been almost 10 months ,since I became a DBA \u0026amp; I havn’t really tasted the recovery thing yet] SO it was bit difficult for me 🙁 But I enjoyed it 🙂\nSidhu\nComment by Aman Sharma on 2007-09-19 10:06:00 +0530 it has been almost 10 months ,since I became a DBA \u0026amp; I havn’t really tasted the recovery thing yet\nI wish you wont taste that in near time too.Its always good to recoved in a test db than in prod, trust me ;-).\nCheers,\nAman….\nComment by Sidhu on 2007-09-21 07:51:00 +0530 Yeap True Aman\nI have seen people going through and its next to “horrible”. So better to do it on Test database 🙂\nBTW training coming again…next week…Lets see what comes out this time 🙂\nSidhu\nComment by Virag Sharma on 2008-02-08 16:15:05 +0530 I have seen people going through and its next to “horrible”.\nIt is not horrible , it is part of game/job , better gain confidence early.\nFirst time you will feel “horrible” , but latter you feel it is day in day out job.\nComment by Sidhu on 2008-02-09 23:06:41 +0530 it is part of game/job , better gain confidence early.\nFirst time you will feel “horrible” , but latter you feel it is day in day out job.\nAgreed !!!\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/08/19/being-an-oracle-trainer/","summary":"\u003cp\u003eFew days back, I had to give Oracle DBA training to a group of about 20-25 semi-technical people (Semi-technical, because most of them were not really DBA kinda folks doing all the techie stuff with Oracle, but in-fact having learned some bits \u0026amp; bytes of Oracle sometime back and these days looking into application functionality from technical perspective). I, having just about 10 months of experience with DBA profile, had to cover, everything about Oracle starting from creating database and up to performance tuning :) Its really an interesting job if the audience is good. But not an easy game. You have to know everything and have to be ready to answer people\u0026rsquo;s queries (some stupid \u0026amp; dumb questions also :) I had to cover a total of about 350 slides in a day or less. So at times, really went fast and skipping some of things. It was a nice experience as a whole :)\u003c/p\u003e","title":"Being an Oracle trainer…"},{"content":"Many times, we are required to restore a database from an export dmp file. Its a simple task but sometimes there are some issues left like invalid objects or some objects missing, in the newly created database. Following steps, followed in order can help in creating an error free database:\nCreate a blank database:The very first step is to create a blank database which is to be used as the target database. That can be done using Database Configuration Assistant. (In last step of the DBCA, change redo log file sizes to 500 MB each (or some appropriate values depdening upon the size of the databaes), as during import, lot of redo will be generated, so large redo size helps in that scenario)\nExtract DDLs and create tablespaces: Now run the import with show=Y option and create a log of all DDL statements. The main things to be looked for in the log are DDLs to create tablespaces and DB links. You may need to change the create tablespace statements according to the version of the Oracle you are using. If you have the export taken in an older version, where dictionary tablespaces were being used, you will need to change the statements accordingly, to create locally managed tablespaces. 
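As a rough illustration of the show=Y pass and the final import described in these steps (dump file name, credentials and log names are all placeholders):
[sourcecode language='css']
# Sketch only: file names and credentials are placeholders.
export ORACLE_SID=MYDB
# first pass just lists the DDL in the log, nothing gets imported
imp system/manager file=olddb.dmp full=y show=y log=ddl_extract.log
# after creating tablespaces, users and db links from that log, run the real import
imp system/manager file=olddb.dmp full=y ignore=y log=full_import.log
[/sourcecode]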
(If you have the dmp file in compressed (.Z) format check here, to run the import directly from compressed file)\nAdjust the size of SYSTEM, TEMP, USERS and UNDO: As SYSTEM, TEMP, USERS and UNDO tablespaces will get created with the database itself, so you can alter the sizes as per the sizes in the old database.\nEdit tnsnames.ora and create dblinks: Now edit tnsnames.ora to include all the databases used in the db links and create db links using the statements from DDL log.\nRun the import: Finally, run the import with FULL=Y and IGNORE=Y options and after the import finishes, look for any errors in the log. At last, compile all the invalid objects in the database ( Here is the link to a script to compile all the invalid objects). (If the import terminates with ORA-01435, then have a look at this post.)\nTo read about all the options with imp have a look at Original Import \u0026amp; Export Utilities chapter of Oracle Utilities guide.\nSidhu\nComments Comment by krish on 2011-02-02 10:44:21 +0530 useful material thanks alot\nComment by krish on 2011-02-02 10:45:31 +0530 hi,\nvery nice material very useful for starters\nthanks\nComment by Sidhu on 2011-02-04 00:29:59 +0530 Great to know that it was useful for you.\nCheers !!!\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/08/18/importing-a-full-database/","summary":"\u003cp\u003eMany times, we are required to restore a database from an export dmp file. Its a simple task but sometimes there are some issues left like invalid objects or some objects missing, in the newly created database. Following steps, followed in order can help in creating an error free database:\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003e\n\u003cp\u003eCreate a blank database:The very first step is to create a blank database which is to be used as the target database. That can be done using Database Configuration Assistant. (In last step of the DBCA, change redo log file sizes to 500 MB each (or some appropriate values depdening upon the size of the databaes), as during import, lot of redo will be generated, so large redo size helps in that scenario)\u003c/p\u003e","title":"Importing a full database…"},{"content":"The day when Oracle 11g was made available for download on OTN, there was sort of, flood of posts in the Oracle blogsphere. Here is a quick recap of few of the posts (Whatsoever I could find through OraNA and my Netvibes)\nEddie Awad about 11g, I think he was the first one to post\nThen Doug Burns here\nHoward on installing 11g\nAnother interesting article from Howard\nTim Hall about 11g\nTim Hall on installing 11g\nThen Laurent\nAn article on ADR by Virag Sharma on his blog\nJaffar about Active Standby database\nA 11g PL/SQL article on AMIS blog\nWell, if you don\u0026rsquo;t want to get into any hassles and just want to download Oracle 11g, you can get it here. (As of now its available for Linux x86 only).\nSidhu\nComments Comment by RollerCoaster on 2007-08-15 12:43:00 +0530 i hope u were a bigger fan of MSSQL..\nmay be in future 🙂\nnice info on oracle..\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/08/13/oracle-11g/","summary":"\u003cp\u003eThe day when Oracle 11g was made available for download on OTN, there was sort of, flood of posts in the Oracle blogsphere. 
Here is a quick recap of few of the posts (Whatsoever I could find through \u003ca href=\"http://orana.info/\"\u003eOraNA\u003c/a\u003e and my \u003ca href=\"http://www.netvibes.com/\"\u003eNetvibes\u003c/a\u003e)\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"http://awads.net/wp/2007/08/09/download-oracle-database-11g-release-1-now/\"\u003eEddie Awad about 11g\u003c/a\u003e, I think he was the first one to post\u003c/p\u003e\n\u003cp\u003eThen Doug Burns \u003ca href=\"http://oracledoug.com/serendipity/index.php?/archives/1308-Does-Anyone-Know-When-11g-Will-Be-Released.html\"\u003ehere\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eHoward on \u003ca href=\"http://www.dizwell.com/prod/node/930\"\u003einstalling 11g\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"http://www.dizwell.com/prod/node/933\"\u003eAnother interesting article\u003c/a\u003e from Howard\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"http://www.oracle-base.com/blog/2007/08/10/database-11g/\"\u003eTim Hall about 11g\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eTim Hall on \u003ca href=\"http://www.oracle-base.com/articles/11g/OracleDB11gR1InstallationOnEnterpriseLinux4and5.php\"\u003einstalling 11g\u003c/a\u003e\u003c/p\u003e","title":"Oracle 11g…"},{"content":"I have been installing Linux for last 6 years and for more than half the number of times, came across a message something like \u0026ldquo;This partitions is beyond the 1024 cylinder boundary and may not be bootable\u0026rdquo;. But never cared for it much and understood what exactly it meant to say ?\nYesterday I was reading System Admin guide to Linux by Lars Wirzenius (Thanks Howard for the link :) From there I came to know what exactly that message meant. Quoting from the guide itself:\nUnfortunately, the BIOS has a design limitation, which makes it impossible to specify a track number that is larger than 1024 in the CMOS RAM, which is too little for a large hard disk. To overcome this, the hard disk controller lies about the geometry, and translates the addresses given by the computer into something that fits reality. For example, a hard disk might have 8 heads, 2048 tracks, and 35 sectors per track. Its controller could lie to the computer and claim that it has 16 heads, 1024 tracks, and 35 sectors per track, thus not exceeding the limit on tracks, and translates the address that the computer gives it by halving the head number, and doubling the track number. The mathematics can be more complicated in reality, because the numbers are not as nice as here (but again, the details are not relevant for understanding the principle). This translation distorts the operating system\u0026rsquo;s view of how the disk is organized, thus making it impractical to use the all-data-on-one-cylinder trick to boost performance\u0026hellip;.When using IDE disks, the boot partition (the partition with the bootable kernel image files) must be completely within the first 1024 cylinders. This is because the disk is used via the BIOS during boot (before the system goes into protected mode), and BIOS can\u0026rsquo;t handle more than 1024 cylinders. It is sometimes possible to use a boot partition that is only partly within the first 1024 cylinders. This works as long as all the files that are read with the BIOS are within the first 1024 cylinders. Since this is difficult to arrange, it is a very bad idea to do it; you never know when a kernel update or disk defragmentation will result in an unbootable system. 
Therefore, make sure your boot partition is completely within the first 1024 cylinders.\nHope it clears the logic why Linux cries about 1024 cylinder issue at the time of installation.\nYou can read the guide online from the link above and download the pdf here. Its simple and concise and just too good. Small thing covering much :)\nSidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/08/09/why-linux-cries-about-1024-cylinders-thing-at-the-time-of-installation/","summary":"\u003cp\u003eI have been installing Linux for last 6 years and for more than half the number of times, came across a message something like \u0026ldquo;This partitions is beyond the 1024 cylinder boundary and may not be bootable\u0026rdquo;. But never cared for it much and understood what exactly it meant to say ?\u003c/p\u003e\n\u003cp\u003eYesterday I was reading \u003ca href=\"http://tldp.org/LDP/sag/html/index.html\"\u003eSystem Admin guide to Linux\u003c/a\u003e by Lars Wirzenius (Thanks \u003ca href=\"http://www.dizwell.com/prod/node/675\"\u003eHoward\u003c/a\u003e for the link :) From there I came to know what exactly that message meant. Quoting from the guide itself:\u003c/p\u003e","title":"Why Linux cries about \"1024 cylinders thing\" at the time of installation…"},{"content":"Last to last week, we shifted to a new house. As there was no internet connection, so we were without any internet access for last 2 weeks. Today we got the new internet connection. Its a DSL one. And the guy who came to do the installation threw a little bit of technical jargon like rebooting the router and so on.\nIt feels so good to be back in the world of www :)\nSidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/08/08/plugged-in-back-to-www/","summary":"\u003cp\u003eLast to last week, we shifted to a new house. As there was no internet connection, so we were without any internet access for last 2 weeks. Today we got the new internet connection. Its a DSL one. And the guy who came to do the installation threw a little bit of technical jargon like rebooting the router and so on.\u003c/p\u003e\n\u003cp\u003eIt feels so good to be back in the world of www :)\u003c/p\u003e","title":"Plugged-in back to www…"},{"content":"Many times you need to move datafiles from one location to another. 
The simplest approach for this is to take the tablespace offline, copy the datafiles to new location, rename the files with alter database rename file (Except that you dont have to move the SYSTEM and UNDO tablespace, as you can\u0026rsquo;t take SYSTEM tablespace offline)\n[sourcecode language=\u0026lsquo;css\u0026rsquo;]\nSYS@orcl AS SYSDBA\u0026gt; alter tablespace system offline; alter tablespace system offline * ERROR at line 1: ORA-01541: system tablespace cannot be brought offline; shut down if necessary\nSYS@orcl AS SYSDBA\u0026gt;\n[/sourcecode]\nWell lets try moving USERS tablespace.\n[sourcecode language=\u0026lsquo;css\u0026rsquo;]\nSYS@orcl AS SYSDBA\u0026gt; column file_name format a50\nSYS@orcl AS SYSDBA\u0026gt; set lines 100 SYS@orcl AS SYSDBA\u0026gt; select file_name,tablespace_name from dba_data_files where tablespace_name=\u0026lsquo;USERS\u0026rsquo;;\nFILE_NAME TABLESPACE_NAME ---\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026ndash; \u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash; C:\\ORACLE\\ORCL\\USERS01.DBF USERS\nSYS@orcl AS SYSDBA\u0026gt;[/sourcecode]\nThe current location of the datafile is C:\\ORACLE\\ORCL\\. Suppose I have to move it to c:\\oracle\\oradata. So first lets take the tablespace offline\n[sourcecode language=\u0026lsquo;css\u0026rsquo;]\nSYS@orcl AS SYSDBA\u0026gt; alter tablespace users offline;\nTablespace altered.\nSYS@orcl AS SYSDBA\u0026gt;[/sourcecode]\nNow copy the datafile to new location [Note the new directory should be already created]\n[sourcecode language=\u0026lsquo;css\u0026rsquo;]\nC:\\oracle\\ORCL\u0026gt;copy USERS01.DBF c:\\oracle\\oradata 1 file(s) copied.\nC:\\oracle\\ORCL\u0026gt;[/sourcecode]\nNow make database aware of the new location of the datafile using alter database rename file:\n[sourcecode language=\u0026lsquo;css\u0026rsquo;]\nSYS@orcl AS SYSDBA\u0026gt; alter database rename file \u0026lsquo;c:\\oracle\\orcl\\users01.dbf\u0026rsquo; to \u0026lsquo;c:\\oracle\\oradata\\users01.dbf\u0026rsquo;;\nDatabase altered.\nSYS@orcl AS SYSDBA\u0026gt;[/sourcecode]\nThe last thing is to bring the tablespace online. If everything has gone rightly the message like this will appear and you can view the new location of the datafile.\n[sourcecode language=\u0026lsquo;css\u0026rsquo;]\nSYS@orcl AS SYSDBA\u0026gt; alter tablespace users online;\nTablespace altered.\nSYS@orcl AS SYSDBA\u0026gt; select file_name,tablespace_name from dba_data_files where tablespace_name=\u0026lsquo;USERS\u0026rsquo;;\nFILE_NAME TABLESPACE_NAME ---\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026ndash; \u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash; C:\\ORACLE\\ORADATA\\USERS01.DBF USERS\nSYS@orcl AS SYSDBA\u0026gt;[/sourcecode]\nIn next post we will discuss moving datafiles, controlfiles and logfiles.\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/07/21/moving-datafiles-control-files-and-log-files-part1/","summary":"\u003cp\u003eMany times you need to move datafiles from one location to another. 
The simplest approach for this is to take the tablespace offline, copy the datafiles to new location, rename the files with \u003cem\u003ealter database rename file\u003c/em\u003e (Except that you dont have to move the SYSTEM and UNDO tablespace, as you can\u0026rsquo;t take SYSTEM tablespace offline)\u003c/p\u003e\n\u003cp\u003e[sourcecode language=\u0026lsquo;css\u0026rsquo;]\u003c/p\u003e\n\u003cp\u003eSYS@orcl AS SYSDBA\u0026gt; alter tablespace system offline;\nalter tablespace system offline\n*\nERROR at line 1:\nORA-01541: system tablespace cannot be brought offline; shut down if necessary\u003c/p\u003e","title":"Moving datafiles,control files and log files – Part 1"},{"content":"Today I came across a requirement where users needed to ftp files time and again. So ftp\u0026rsquo;ing again and again is not a very good option. I wrote a small batch file for the same. Just sharing the same over here. I created a folder ftp in C drive and a file get_file.bat Contents of get_file.bat are:\n[sourcecode language=\u0026lsquo;css\u0026rsquo;]\nset /p file_name=Enter the name of the file you want to ftp: echo oracle\u0026gt;c:\\ftp\\param.cfg echo oracle123\u0026raquo;c:\\ftp\\param.cfg echo cd /home/oracle\u0026raquo;c:\\ftp\\param.cfg echo lcd c:\\ftp\u0026raquo;c:\\ftp\\param.cfg bin get %file_name% ftp -s:param.cfg 127.0.0.1\n[/sourcecode]\nIt will create a file param.cfg having all the things like username, password and command to get the file in the same folder (c:\\ftp). Then we invoke ftp with -s option with specifying the file param.cfg. It will ask the user to enter the file name and ftp the file from server to c:\\ftp\nSidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/07/16/batch-file-for-ftping-files/","summary":"\u003cp\u003eToday I came across a requirement where users needed to ftp files time and again. So ftp\u0026rsquo;ing again and again is not a very good option. I wrote a small batch file for the same. Just sharing the same over here. I created a folder ftp in C drive and a file get_file.bat\nContents of get_file.bat are:\u003c/p\u003e\n\u003cp\u003e[sourcecode language=\u0026lsquo;css\u0026rsquo;]\u003c/p\u003e\n\u003cp\u003eset /p file_name=Enter the name of the file you want to ftp:\necho oracle\u0026gt;c:\\ftp\\param.cfg\necho oracle123\u0026raquo;c:\\ftp\\param.cfg\necho cd /home/oracle\u0026raquo;c:\\ftp\\param.cfg\necho lcd c:\\ftp\u0026raquo;c:\\ftp\\param.cfg\nbin\nget %file_name%\nftp -s:param.cfg 127.0.0.1\u003c/p\u003e","title":"Batch file for ftp’ing files…"},{"content":"One may encounter this error while importing from a dmp file from older versions of Oracle. Genereally this error is caused by some statement like alter session set current_schema=scott; And the simple reason is that the user scott doesn\u0026rsquo;t exist. Yesterday I came across this error. And the reason was that user was not created. As in case of import we generally create tablespaces first (by creating the DDL using option show=Y) but creation of users is done by import itself. In older versions of Oracle, the temp tablespaces were no different from other tablespaces. But in newer versions temp tablespaces are different. So in dmp files from thoese older versions create user statements are written like create user t1 identified by t1 default tablespace temp temporary tablespace temp. This thing worked fine in older versions but in newer versions we cannot specify the TEMP tablespace as the default tablespace for a user. 
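To make that concrete, the failing statement and the manual workaround look roughly like this (user and tablespace names follow the example above, the dump file name and credentials are placeholders):
[sourcecode language='css']
# Sketch only: names, credentials and file names are placeholders.
export ORACLE_SID=MYDB
sqlplus -s "/ as sysdba" <<EOF
-- what the old dump effectively tries (fails with ORA-12910 on newer versions):
-- create user t1 identified by t1 default tablespace temp temporary tablespace temp;
-- workaround: create the user manually with a proper default tablespace
create user t1 identified by t1 default tablespace users temporary tablespace temp;
grant connect, resource to t1;
EOF
# then re-run the import, ignoring "already exists" errors
imp system/manager file=olddb.dmp full=y ignore=y log=reimport.log
[/sourcecode]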
So the statement create user t2 identified by t2 default tablespace temp temporary tablespace temp throws ORA-12910: cannot specify temporary tablespace as default tablespace. In such cases the users (for which default statement is specified as TEMP) have to be created manually by specifying the appropriate tablespace as default tablespace and then the import should be run with ignore=Y.\nSidhu\nComments Comment by Arnab Ghosh on 2017-08-13 19:57:26 +0530 Good one!\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/07/14/import-ora-01435-user-does-not-exist/","summary":"\u003cp\u003eOne may encounter this error while importing from a dmp file from older versions of Oracle. Genereally this error is caused by some statement like alter session set current_schema=scott; And the simple reason is that the user scott doesn\u0026rsquo;t exist. Yesterday I came across this error. And the reason was that user was not created. As in case of import we generally create tablespaces first (by creating the DDL using option show=Y) but creation of users is done by import itself. In older versions of Oracle, the temp tablespaces were no different from other tablespaces. But in newer versions temp tablespaces are different. So in dmp files from thoese older versions create user statements are written like create user t1 identified by t1 default tablespace temp temporary tablespace temp. This thing worked fine in older versions but in newer versions we cannot specify the TEMP tablespace as the default tablespace for a user. So the statement create user t2 identified by t2 default tablespace temp temporary tablespace temp throws ORA-12910: cannot specify temporary tablespace as default tablespace. In such cases the users (for which default statement is specified as TEMP) have to be created manually by specifying the appropriate tablespace as default tablespace and then the import should be run with ignore=Y.\u003c/p\u003e","title":"Import ORA-01435: user does not exist…"},{"content":"Since I switched to this new job, my profile has changed. Here I work as a DBA. So my interaction with Oracle \u0026amp; anything related to Oracle has also increased. Started exploring Oracle related forums and websites specially OTN forums (http://forums.oracle.com) This is the only page thats almost always open on my Desktop in office \u0026amp; Laptop at home and I refresh more than my office Lotus Notes. Today I was reading APCs blog , that Jonathan Lewis has also started posting on OTN forums and being so busy man, from where he finds the time ? I too think the same. There are so many people having many years of experience in industry, answering the questions on OTN, uesnet groups and various other forums and everything is for free. They are not paid anything for the same thing. Its like taking time out of your time, understand somebody\u0026rsquo;s problem, create same scenario your PC, try out and then post the answer ! I am, sort of new to the forums and sometimes for whole of week, I am unable to post any answers, even knowing something about the issue someone has posted. Just the \u0026ldquo;time\u0026rdquo; thing. These days I am very close to OTN forums, visit for whole of the day and also post answers to the questions I know something about. There are many people who are regular visitors and are answering questions on the daily basis. Hats off ! 
to all these \u0026ldquo;big bosses\u0026rdquo; of the technology !\nSidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/07/04/hats-off/","summary":"\u003cp\u003eSince I switched to this new job, my profile has changed. Here I work as a DBA. So my interaction with Oracle \u0026amp; anything related to Oracle has also increased. Started exploring Oracle related forums and websites specially OTN forums (\u003ca href=\"http://forums.oracle.com\"\u003ehttp://forums.oracle.com\u003c/a\u003e) This is the only page thats almost always open on my Desktop in office \u0026amp; Laptop at home and I refresh more than my office Lotus Notes. Today I was reading \u003ca href=\"http://radiofreetooting.blogspot.com/2007/07/where-does-he-find-time.html\"\u003eAPCs blog\u003c/a\u003e , that Jonathan Lewis has also started posting on OTN forums and being so busy man, from where he finds the time ? I too think the same. There are so many people having many years of experience in industry, answering the questions on OTN, uesnet groups and various other forums and everything is for free. They are not paid anything for the same thing. Its like taking time out of your time, understand somebody\u0026rsquo;s problem, create same scenario your PC, try out and then post the answer ! I am, sort of new to the forums and sometimes for whole of week, I am unable to post any answers, even knowing something about the issue someone has posted. Just the \u0026ldquo;time\u0026rdquo; thing. These days I am very close to OTN forums, visit for whole of the day and also post answers to the questions I know something about. There are many people who are regular visitors and are answering questions on the daily basis. Hats off ! to all these \u0026ldquo;big bosses\u0026rdquo; of the technology !\u003c/p\u003e","title":"Hats off…"},{"content":"On OTN someone asked a question that how to spool data from a table into a xls file. Spooling a single table I discussed in one of the previous posts. We can use the same approach to spool data from more than 1 table also. Well here I will do it through a shell script and assume that you have a text file having list of tables to be spooled (Even if you don\u0026rsquo;t have one, it can be easily made by spooling the names of tables into a simple text file) Here is the shell script that you can use to spool data to various xls files, table wise.\n[sourcecode language=\u0026lsquo;css\u0026rsquo;]cat list.txt | while read a do echo \u0026ldquo;spooling $a\u0026rdquo; sqlplus username/password@string \u0026laquo;EOF set feed off markup html on spool on spool /home/oracle/$a.xls select * from $a; spool off set markup html off spool off EOF done [/sourcecode] I didn\u0026rsquo;t see any work around for Windoze as SQLPLUS \u0026laquo; EOF thing doesn\u0026rsquo;t seem to work in Windows. Will try to find some alternative. If you come across something, do let me know.\nSidhu\nComments Comment by Sandep on 2011-07-07 16:31:24 +0530 Very very useful..thanks a lot mate .\nComment by Sidhu on 2011-07-20 10:26:28 +0530 Cheers ! 🙂\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/06/26/shell-script-to-spool-a-no-of-tables-into-xls-files/","summary":"\u003cp\u003eOn OTN someone asked \u003ca href=\"http://forums.oracle.com/forums/thread.jspa?threadID=523835\u0026amp;tstart=50\"\u003ea question\u003c/a\u003e that how to spool data from a table into a xls file. 
Spooling a single table I discussed in one of the \u003ca href=\"/blog/2007/06/16/spool-to-a-xls-excel-file\"\u003eprevious posts\u003c/a\u003e. We can use the same approach to spool data from more than 1 table also. Well here I will do it through a shell script and assume that you have a text file having list of tables to be spooled (Even if you don\u0026rsquo;t have one, it can be easily made by spooling the names of tables into a simple text file) Here is the shell script that you can use to spool data to various xls files, table wise.\u003c/p\u003e","title":"Shell script to spool a no of tables into .xls files…"},{"content":"An interesing post by Laurent. Check out http://laurentschneider.com/wordpress/2007/06/to-divide-or-to-multiply.html\nMy findings on 10gR2 on Windoze XP\nSQL\u0026gt; var z number\rSQL\u0026gt; var y number\rSQL\u0026gt; exec :z := power(2,102)*2e-31;PL/SQL procedure successfully completed.SQL\u0026gt; exec :y := 1e125;PL/SQL procedure successfully completed.SQL\u0026gt; set timi on\rSQL\u0026gt; exec while (:y\u0026gt;1e-125) loop :y:=:y/:z; end loopPL/SQL procedure successfully completed.Elapsed: 00:00:00.10\rSQL\u0026gt; set timi off\rSQL\u0026gt; print yY\r----------\r9.988E-126SQL\u0026gt; exec :z := power(2,-104)*2e31;PL/SQL procedure successfully completed.SQL\u0026gt; exec :y := 1e125;PL/SQL procedure successfully completed.SQL\u0026gt; set timi on\rSQL\u0026gt; exec while (:y\u0026gt;1e-125) loop :y:=:y*:z; end loopPL/SQL procedure successfully completed.Elapsed: 00:00:00.04\rSQL\u0026gt; set timi off\rSQL\u0026gt; print yY\r----------\r9.988E-126SQL\u0026gt; Sidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/06/18/is-multiplication-faster-than-division/","summary":"\u003cp\u003eAn interesing post by Laurent. 
Check out \u003ca href=\"http://laurentschneider.com/wordpress/2007/06/to-divide-or-to-multiply.html\"\u003ehttp://laurentschneider.com/wordpress/2007/06/to-divide-or-to-multiply.html\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eMy findings on 10gR2 on Windoze XP\u003c/p\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003e\r\nSQL\u0026gt; var z number\r\nSQL\u0026gt; var y number\r\nSQL\u0026gt; exec :z := power(2,102)*2e-31;PL/SQL procedure successfully completed.SQL\u0026gt; exec :y := 1e125;PL/SQL procedure successfully completed.SQL\u0026gt; set timi on\r\nSQL\u0026gt; exec while (:y\u0026gt;1e-125) loop :y:=:y/:z; end loopPL/SQL procedure successfully completed.Elapsed: 00:00:00.10\r\nSQL\u0026gt; set timi off\r\nSQL\u0026gt; print yY\r\n----------\r\n9.988E-126SQL\u0026gt; exec :z := power(2,-104)*2e31;PL/SQL procedure successfully completed.SQL\u0026gt; exec :y := 1e125;PL/SQL procedure successfully completed.SQL\u0026gt; set timi on\r\nSQL\u0026gt; exec while (:y\u0026gt;1e-125) loop :y:=:y*:z; end loopPL/SQL procedure successfully completed.Elapsed: 00:00:00.04\r\nSQL\u0026gt; set timi off\r\nSQL\u0026gt; print yY\r\n----------\r\n9.988E-126SQL\u0026gt;\n\u003c/code\u003e\u003c/pre\u003e\u003cp\u003eSidhu\u003c/p\u003e","title":"Is multiplication faster than division ?"},{"content":"A small tip, I read on OTN about spooling to a .xls (excel) file:\nIt goes like this\n[sourcecode language=\u0026lsquo;css\u0026rsquo;]set feed off markup html on spool onspool c:\\salgrade.xls select * from salgrade; spool offset markup html off spool off[/sourcecode]\nAnd the xls it makes shows up like:\nSidhu\nComments Comment by hemant on 2007-06-26 16:44:00 +0530 hi\ni am working for a bank and we are using 10g.\ni am very raw at the oracle and have just started teaching myself through a book i have.\nwe have many reports devised by our vendor still we need some that r not available.\nso we wanted the data to be exported in xl wherein i could manipulate to data to our need.\nabt this spooling thing. i have copied down ur script and want to test it… but how do i access the shell prompt(do not know unix either)\nthanks for the help\nextend it bit further for me please\nhemu\nComment by Sidhu on 2007-06-26 20:21:00 +0530 Hemant\nHere what I wrote for spooling to xls file, needs to be run on SQL Prompt not shell. Any other help you need leave a message here.\nCheers\nSidhu\nComment by Ajith on 2007-07-09 12:20:00 +0530 Hi Amardeep,\nThis is Ajith,working in kuwait as Oracle Consultant.I would like to get some basic learning materials on shell scripting.if can refer me some good sites or materials i would be very greatful.\nthanks and regards,\nAjith\nComment by Prachi on 2007-08-14 17:22:00 +0530 Hi,\nI am facing some issues with the spooling method. I m generating an xls using the spool . However in some lines , the lines get CUT into 2 lines in the output spool file , leading to 2 rows in xls instead of one. PLease let me know if you have come across this and have a solution.\nAn example of what the problem is\nspool $MONTH_DATA_FILE;\nselect data from ap_tc_reports_temp;\nspool off;\nOutput\n=================\nC1 C2 C3 C4 C5 C6 C\n7 C8 C9 C10 The “C” is actually C7 , all in one line, but output is in 2 lines.\nComment by Sidhu on 2007-08-15 00:02:00 +0530 Prachi\nI doubt about the length of some field in table. May be excel doesn’t support that much length of a column and it takes it to next row. I didn’t face this issue. 
Will try and update you.\nSidhu\nComment by Prachi on 2007-08-16 11:32:00 +0530 Hi ,\nI found a solution. It needs the lines to be set to some value . The value i was setting was less.\nComment by Sidhu on 2007-08-17 07:41:00 +0530 So simple 🙂 I got into too technical things 😀\nSidhu\nComment by sami on 2009-03-21 12:30:06 +0530 this scripting is working fine.. but i need some enhancement. my problem is when i am running the script its generating xls file with the some text like “SQL\u0026gt;select * from salgrade ” and header are reprinted for several times after some rows.. so how can i overcome this problem.\nComment by Amardeep Sidhu on 2009-03-21 19:38:11 +0530 To stop the header from printing again and again you can set pages to some value more than the total number of rows your query is returning.\nComment by Kostas Hairopoulos on 2009-08-29 20:22:31 +0530 Is any way to import from “dbms_output” into Excel file?\nNice tip and thank you for sharing with us\nBest Regards,\nkhair\nComment by Amardeep Sidhu on 2009-09-18 22:51:17 +0530 @Kostas\nWhat exactly are you trying to print using dbms_output ?\nComment by Kostas Hairopoulos on 2009-09-21 23:03:46 +0530 I am running the snapper utility from Tanel Poder and the only option is to output the file and then import as CSV in ms-excel.\nMy question is more generic, if there is any option to flush the dbms_output to excel or csv format\nThank you in advance,\nkhair\nComment by Ayush on 2011-06-14 19:11:24 +0530 Hi Sidhu,\nI have created a shell script which needs to get some customer data from a .dat file. However, when i’m trying to export the data to xls file, its not reflecting in the file. Although, the timestamp of last modification of the file is getting updated.\nCan you please help me in this ?\nComment by Sidhu on 2011-07-20 10:31:05 +0530 Ayush,\nI am not sure if i exactly got what you are trying to achieve ?\nCould you please come with some more details.\nSidhu\nComment by sai on 2019-02-19 16:47:12 +0530 hi\nthe script which you shared on your blog is not fine its not working in linux\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/06/16/spool-to-a-xls-excel-file/","summary":"\u003cp\u003eA small tip, I read on \u003ca href=\"http://forums.oracle.com/forums/thread.jspa?messageID=1849526\"\u003eOTN\u003c/a\u003e about spooling to a .xls (excel) file:\u003c/p\u003e\n\u003cp\u003eIt goes like this\u003c/p\u003e\n\u003cp\u003e[sourcecode language=\u0026lsquo;css\u0026rsquo;]set feed off markup html on\nspool onspool c:\\salgrade.xls\nselect * from salgrade;\nspool offset markup html off\nspool off[/sourcecode]\u003c/p\u003e\n\u003cp\u003eAnd the xls it makes shows up like:\u003c/p\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003e\u003c/code\u003e\u003c/pre\u003e\u003cp\u003e\u003cimg loading=\"lazy\" src=\"file:///C:/DOCUME%7E1/Amardeep/LOCALS%7E1/Temp/moz-screenshot.jpg\"\u003eSidhu\u003c/p\u003e\n\u003ch2 id=\"comments\"\u003eComments\u003c/h2\u003e\n\u003ch3 id=\"comment-by-hemant-on-2007-06-26-164400-0530\"\u003eComment by hemant on 2007-06-26 16:44:00 +0530\u003c/h3\u003e\n\u003cp\u003ehi\u003cbr\u003e\ni am working for a bank and we are using 10g.\u003cbr\u003e\ni am very raw at the oracle and have just started teaching myself through a book i have.\u003cbr\u003e\nwe have many reports devised by our vendor still we need some that r not available.\u003cbr\u003e\nso we wanted the data to be exported in xl wherein i could manipulate to data to our need.\u003cbr\u003e\nabt this spooling thing. 
i have copied down ur script and want to test it… but how do i access the shell prompt(do not know unix either)\u003cbr\u003e\nthanks for the help\u003cbr\u003e\nextend it bit further for me please\u003cbr\u003e\nhemu\u003c/p\u003e","title":"Spool to a .xls (excel) file…"},{"content":"A very nice series of articles by Howard Rogers about Oracle Concepts. It includes all the basics like what a database, instance is ? Various types of files and all basic stuff. Read it here\nSidhu\nComments Comment by Anu on 2011-06-01 12:16:50 +0530 Hi Amardeep,\nAm a newbie for Oracle and was searching for Howard Rogers articles. I found your post but It doesn’t have his articles anymore, When i click the link. It will be great if you can send me if the correct link or where can i find them or mail me if you have any.\nThank You,\nANu.\nComment by Sidhu on 2011-07-20 10:28:30 +0530 Anu,\nSorry for the late reply. Was stuck in some other stuff and not having a look at the blogs.\nWell, Howard closed his website long back. To know the reason you may want to Google and you will find many discussions about this.\nSidhu\nComment by Anu on 2011-07-22 16:15:09 +0530 Hi Sidhu,\nThanks for the response. No problem. I found couple of his docs and now i follow him @ http://diznix.com/\nThanks,\nAnu.\nComment by Sidhu on 2011-07-22 21:09:58 +0530 Great !\nCheers !\nSidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/06/13/oracle-concepts/","summary":"\u003cp\u003eA very nice series of articles by Howard Rogers about Oracle Concepts. It includes all the basics like what a database, instance is ? Various types of files and all basic stuff. Read it \u003ca href=\"http://www.dizwell.com/prod/node/271\"\u003ehere\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eSidhu\u003c/p\u003e\n\u003ch2 id=\"comments\"\u003eComments\u003c/h2\u003e\n\u003ch3 id=\"comment-by-anu-on-2011-06-01-121650-0530\"\u003eComment by Anu on 2011-06-01 12:16:50 +0530\u003c/h3\u003e\n\u003cp\u003eHi Amardeep,\u003c/p\u003e\n\u003cp\u003eAm a newbie for Oracle and was searching for Howard Rogers articles. I found your post but It doesn’t have his articles anymore, When i click the link. It will be great if you can send me if the correct link or where can i find them or mail me if you have any.\u003c/p\u003e","title":"Oracle concepts…"},{"content":"Well, a simple method to import(export) directly to(from) compressed files using pipes. Its for Unix based systems only, as I am not aware of any pipe type functionality in Windows. The biggest advantage is that you can save lots of space as uncompressing a file makes it almost 5 times or more. (Suppose you are uncompressing a file of 20 GB, it will make 100 GB) As a newbie I faced this problem, so thought about writing a post.\nLets talk about export first. The method used is that create a pipe, write to a pipe(ie the file in exp command is the pipe we created), side by side read the contents of pipe, compress (in the background) and redirect to a file. 
Here is the script that achieves this:\n[sourcecode language=\u0026lsquo;css\u0026rsquo;]export ORACLE_SID=MYDB\nrm -f ?/myexport.pipe\nmkfifo ?/myexport.pipe\ncat ?/myexport.pipe | compress \u0026gt; ?/myexport.dmp.Z \u0026amp;\nsleep 5\nexp file=?/myexport.pipe full=Y log=myexport.log\n[/sourcecode]\nIn the same way for import: we create a pipe, zcat the dmp.Z file into the pipe in the background, and then let imp read from the pipe:\n[sourcecode language=\u0026lsquo;css\u0026rsquo;]export ORACLE_SID=MYDB\nrm -f ?/myimport.pipe\nmkfifo ?/myimport.pipe\nzcat ?/myexport.dmp.Z \u0026gt; ?/myimport.pipe \u0026amp;\nsleep 5\nimp file=?/myimport.pipe full=Y log=?/myimport.log[/sourcecode]\nIn case there is any issue with the script, do let me know :)\nUpdate: If you are on Wintel, you can directly use a compressed folder as an export target. No need to create a pipe as the file system will automatically do it for you. ( Thanks Noons for the tip)\nIf you are using gzip/gunzip instead of compress:\nFor export:\n[sourcecode language=\u0026lsquo;css\u0026rsquo;]export ORACLE_SID=MYDB\nrm -f exp.pipe\nmknod exp.pipe p\ngzip \u0026lt; exp.pipe \u0026gt; T1.dmp.gz \u0026amp;\nexp file=exp.pipe full=Y log=myexport.log\n[/sourcecode]\nFor import:\n[sourcecode language=\u0026lsquo;css\u0026rsquo;]export ORACLE_SID=MYDB\nrm -f imp.pipe\nmknod imp.pipe p\ngunzip \u0026lt; T1.dmp.gz \u0026gt; imp.pipe \u0026amp;\nimp file=imp.pipe full=Y log=myimport.log\n[/sourcecode]\nUpdate: I came across an article that discusses a few ways to achieve the same on Windows. Check it here.\nComments Comment by Noons on 2007-06-27 17:34:00 +0530 Suggestion: in Wintel you could use a compressed folder as the target for the export file.\nNo need for the pipe as the file system does it for you automagically.\nComment by Gabriela on 2008-08-01 20:06:11 +0530 I want to do compressed export on the fly, but my database is big and in my export I need to put filesize=5G and I had 12 files until now, then I want to do compressed export but using more than 1 file.\nCan you help me?\nComment by Sidhu on 2008-08-03 21:53:40 +0530 Hi\nWhen you pass filesize=5g to exp, it is handled by Oracle and it knows that it is creating multiple files. When we import it back and pass all the file names, again Oracle is aware that “I created these files and i know how to read them in sequence”.\nBut here we do a trick: we create a pipe, feed it from one end and redirect the other end to a file and ask the OS to compress it. Now this is entirely an OS thing. Oracle is writing to only one file, that is the export pipe !\nRight now nothing is hitting my mind, but if something is possible, i think that has again to be done at OS level. exp doesn’t understand anything like compression of files.\nIf you come to know something, do let me know 🙂\nComment by Aman\u0026hellip;. on 2008-08-04 11:32:08 +0530 Gabriela,\nAs Sidhu already mentioned, there is no way by which export files can understand compression. There is a limit that oracle puts over the export file and that’s 2g. That’s what they maintain. For the compression either you have to use the filesize option or use a pipe to compress it. Here are two links that you can use as a reference for using a pipe.\nhttp://www.tc.umn.edu/~hause011/code/exp-imp-db.ksh\nhttp://www.jlcomp.demon.co.uk/faq/bigexp.html\nHTH\nAman….\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/06/06/import-export-from-to-compressed-files-directly/","summary":"\u003cp\u003eWell, a simple method to import(export) directly to(from) compressed files using pipes. 
Its for Unix based systems only, as I am not aware of any pipe type functionality in Windows. The biggest advantage is that you can save lots of space as uncompressing a file makes it almost 5 times or more. (Suppose you are uncompressing a file of 20 GB, it will make 100 GB) As a newbie I faced this problem, so thought about writing a post.\u003c/p\u003e","title":"IMPORT \u0026 EXPORT from/to compressed files directly…"},{"content":"The very first question that may come to your mind is: what the heck this \u0026ldquo;OC\u0026rdquo; is ? Well if PC stands for Personal Computer then OC stands for Office Computer ;) The story behind all this is:\nOne of my friend (He is working in a non-IT company) called me and asked how to burn a PC so that it no more works :) Why ? because the PC they are given is too slow \u0026amp; old and thats the only way they can get a new one. I thought for a while and 1 or 2 ideas came to my mind. But to dig into little more details I called my friend Vaibhav ( The guy I call to \u0026ldquo;discuss\u0026rdquo; and \u0026ldquo;know more about\u0026rdquo; all the techie things) He gave some really nice ideas. Combining all the thoughts, here is the summary of all the methods that you can use ;)\nWater the motherboard: Add a bit of salt to water, fill a syringe and use it to spread water on the motherboard before booting. It will blow the motherboard. Stop processor\u0026rsquo;s fan: One more thing put something in the processer\u0026rsquo;s fan to stop it so that processor gets heated and it stops :) but here chances are more that the system will shutdown or restart. Break pin(s) of processor: Another interesting and simple thing that you can do is, break one or more pins of processor and put it back in the slot. Remove one or more ICs: Another thing that you can do is remove one or more ICs from the board. As the board is already so clumsy, nobody can see that anything has been removed from there. Use Pencil: Use a carbon pencil and rub it on the golden plating of RAM slots, any ICs near CPU or BIOS. Its gonna spoil that ;) Cut some wire: A very simple method you can use is that cut some wires or data cables in such a way that nobody can see it ;) One update from comments posted by Vaibhav: Most of u might not know abt the voltage selector switch at the back of the SMPS in the CPU tower. It is there to select between 110V(american) and 230V(indian). The trick is to just switch it to 110V :) The SMPS has no protection of over voltage at this high level and will burn instantly.\nAdvantages:\n-No need to open PC\n-No Smoke or sparks\n-Nothing can be found on u if checked(like syringe, screw driver or wire cutter)\n-Instant\n-One Second Job\n-Can be done again and again if system repaired\nDisadvantages:\n-Replacing SMPS makes the system operational\nWell, few things you can try if you too have an old computer in the office. Best of OC Burning :P\nSidhu\nComments Comment by Amritpal Singh on 2007-05-30 23:45:00 +0530 janaab, cool stuff ….\npar je pata lag gaya te, babean ne fire ho jaana 🙂\nComment by Neeraj Bhatia on 2007-05-31 12:30:00 +0530 another way to get a new PC is as follows:\nWrite a batch program or script(Win/Unix) that forcefully shutdown the system and schedule it after every n number of minutes …. 
Open each and every file, about which the antivirus alerts you that its a virus ……\nHell of ideas are coming out for my “destructive” mind…… Be aware that try to implement all these things just after an important project will be assigned to you(obviously it will have a deadline) or just before the project’s deadline…\nDon’t forget to share your experience, if someone try this ….:)\n— Neeraj\nComment by RollerCoaster on 2007-06-01 10:51:00 +0530 People People People!\nBow down. The god of PC distruction is here…\n🙂\nJust messin with u guys…\nNow, that post is what i call entertainment.\nThere is a very simple and smart way to make a PC useless. Most of u might not know abt the voltage selector switch at the back of the SMPS in the CPU tower. It is there to select between 110V(american) and 230V(indian). The trick is to just switch it to 110V 🙂 The SMPS has no protection of over voltage at this high level and will burn instantly.\nAdvantages:\n-No need to open PC\n-No Smoke or sparks\n-Nothing can be found on u if checked(like syringe, screw driver or wire cutter)\n-Instant\n-One Second Job\n-Can be done again and again if system repaired\nDisadvantages:\n-Replacing SMPS makes the system operational\nregards,\nVaibhav\nComment by Sidhu on 2007-06-03 00:16:00 +0530 but in India,when you take your PC to hardware people, the first thing they do is, spread all the parts on the table \u0026amp; attach a new POWER SUPPLY 😀\nSidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/05/30/burning-your-oc-whyz-and-howz/","summary":"\u003cp\u003eThe very first question that may come to your mind is: what the heck this \u0026ldquo;OC\u0026rdquo; is ? Well if PC stands for Personal Computer then OC stands for Office Computer ;) The story behind all this is:\u003c/p\u003e\n\u003cp\u003eOne of my friend (He is working in a non-IT company) called me and asked how to burn a PC so that it no more works :) Why ? because the PC they are given is too slow \u0026amp; old and thats the only way they can get a new one. I thought for a while and 1 or 2 ideas came to my mind. But to dig into little more details I called my friend \u003ca href=\"http://rollercoasters-bunker.blogspot.com/\"\u003eVaibhav\u003c/a\u003e ( The guy I call to \u0026ldquo;discuss\u0026rdquo; and \u0026ldquo;know more about\u0026rdquo; all the techie things) He gave some really nice ideas. Combining all the thoughts, here is the summary of all the methods that you can use ;)\u003c/p\u003e","title":"Burning your \"OC\", whyz and howz …"},{"content":"Well, a really old post on Google groups (uesnet:comp.databases.oracle.server). Someone posted a thread about Tom Kyte and then people responded with their thoughts (perfectly as expected). Read it here.\nSidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/05/22/tom-kyte/","summary":"\u003cp\u003eWell, a really old post on Google groups (uesnet:comp.databases.oracle.server). Someone posted a thread about Tom Kyte and then people responded with their thoughts (perfectly as expected). Read it \u003ca href=\"http://groups.google.com/group/comp.databases.oracle.server/browse_thread/thread/e417151ae51e46e3/d342b8b1adfac87d?lnk=gst\u0026amp;q=tom+kyte\u0026amp;rnum=1#d342b8b1adfac87d\"\u003ehere\u003c/a\u003e.\u003c/p\u003e\n\u003cp\u003eSidhu\u003c/p\u003e","title":"Tom Kyte…"},{"content":"If you are crazy@*nix, 2 really interesting posts about run levels in *nix\nNo runlevels?? What is the use of runlevels? 
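Before (or after) reading those two posts, it only takes a second to see what your own box is currently running at. A rough sketch, assuming a Linux system with a SysV-style init (or the usual systemd compatibility shims):\n[sourcecode language=\u0026lsquo;css\u0026rsquo;]# prints the previous and the current runlevel, e.g. "N 3"\nrunlevel\n# the same information reported by who\nwho -r\n[/sourcecode]\n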
Sidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/05/14/run-levels-in-nix/","summary":"\u003cp\u003eIf you are crazy@*nix, 2 really interesting posts about run levels in *nix\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"http://blogs.ittoolbox.com/unix/bsd/archives/no-runlevels-16195\"\u003eNo runlevels??\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"http://blogs.ittoolbox.com/linux/locutus/archives/what-is-the-use-of-runlevels-16231\"\u003eWhat is the use of runlevels?\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eSidhu\u003c/p\u003e","title":"Run levels in *nix…"},{"content":"Sometimes, while installing Linux, installing LILO/GRUB to MBR makes you run into loads of issues, one of the most popular being that after reboot you are not able to boot into either of the OS ;) There is a way to use NTLDR to boot Linux, this way the MBR remains untouched and if you don\u0026rsquo;t want to see the Linux option, what you need to do is, edit your boot.ini and your are done.\nFor this while installing GRUB, install it to the partition where you are installing Linux, instead of MBR. Now after rebooting, Linux will not boot as the partition on which you installed GRUB is not active. What actually we will do is, copy first 512 bytes of the partition where GRUB is installed, make a bin file, copy it to C drive and add the path of the same to boot.ini. So now, when you select Linux from the list of options displayed, NTLDR will call GRUB and then GRUB will boot Linux, just like normal.\nThere is a small utility called bootpart that does this all for us. Here is the direct link. Extract it and go to command prompt.\n[sourcecode language=\u0026lsquo;css\u0026rsquo;]C:\\bootpart\u0026gt;dir\nVolume in drive C has no label. Volume Serial Number is 08E8-E412\nDirectory of C:\\bootpart\n05/05/2007 04:19 PM . 05/05/2007 04:19 PM .. 08/01/2005 02:26 AM 32bits 08/01/2005 02:06 AM 44,544 bootpart.exe 08/01/2005 02:06 AM 12,055 bootpart.txt 08/01/2005 02:06 AM 119 bootpart.url 08/01/2005 02:06 AM 383 file_id.diz 4 File(s) 57,101 bytes 3 Dir(s) 4,188,106,752 bytes free\nC:\\bootpart\u0026gt;[/sourcecode]\nRun bootpart, it will show all the partitions on the disk like\n[sourcecode language=\u0026lsquo;css\u0026rsquo;] C:\\bootpart\u0026gt;bootpart Boot Partition 2.60 for WinNT/2K/XP (c)1995-2005 G. Vollant ([email protected]) WEB : http://www.winimage.com and http://www.winimage.com/bootpart.htm Add partition in the Windows NT/2000/XP Multi-boot loader Run \u0026ldquo;bootpart /?\u0026rdquo; for more information\nPhysical number of disk 0 : 282d282d 0 : C:* type=7 (HPFS/NTFS), size= 25599546 KB, Lba Pos=63 1 : C: type=c (Win95 Fat32 LBA), size= 11751547 KB, Lba Pos=208828935 2 : C: type=d7 , size= 1052257 KB, Lba Pos=232332030 3 : C: type=f (Win95 XInt 13 extended), size= 78814890 KB, Lba Pos=51199155 4 : C: type=7 (HPFS/NTFS), size= 25607578 KB, Lba Pos=51199218 5 : C: type=5 (Extended), size= 35736592 KB, Lba Pos=102414375 6 : C: type=7 (HPFS/NTFS), size= 35736561 KB, Lba Pos=102414438 7 : C: type=5 (Extended), size= 15366172 KB, Lba Pos=173887560 8 : C: type=83 (Linux native), size= 15366141 KB, Lba Pos=173887623 9 : C: type=5 (Extended), size= 2104515 KB, Lba Pos=204619905 10 : C: type=82 (Linux swap), size= 2104483 KB, Lba Pos=204619968 [/sourcecode]\nNow the one at 8th number is my native Linux partition. 
Now run bootpart 8 c:\\linux.bin (Here 8 is my Linux partition number and linux.bin is the name of the file which it will create in C drive) It automatically adds the entry to boot.ini So now you are ready to go. Just reboot and you will see 2 options there. Windows \u0026amp; Linux :)\nHappy NTLDR\u0026rsquo;ing\u0026hellip;\nSidhu\nComments Comment by RollerCoaster on 2007-05-06 21:15:00 +0530 cool utility man\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/05/06/using-ntldr-to-boot-linux/","summary":"\u003cp\u003eSometimes, while installing Linux, installing LILO/GRUB to MBR makes you run into loads of issues, one of the most popular being that after reboot you are not able to boot into either of the OS ;) There is a way to use NTLDR to boot Linux, this way the MBR remains untouched and if you don\u0026rsquo;t want to see the Linux option, what you need to do is, edit your boot.ini and your are done.\u003c/p\u003e","title":"Using NTLDR to boot Linux…"},{"content":"Found a very interesting article on Dizwell\u0026rsquo;s blog. It was about keeping history of the SQL commands in SQL Plus on Linux. It is almost very simple. Just need to download a small utility called rlwrap from here. Its a tar.gz file. Download it, un-tar using\n[sourcecode language=\u0026lsquo;css\u0026rsquo;] tar -xvf rlwrap-0.28.tar.gz [/sourcecode]\nIt will create a directory with the same name. cd to the directory and run\n[sourcecode language=\u0026lsquo;css\u0026rsquo;] ./configure [/sourcecode]\nNow do\n[sourcecode language=\u0026lsquo;css\u0026rsquo;] make install [/sourcecode]\n(I was logged in as oracle user, then did su, but it gave some errors, finally I logged in as root and it worked fine)\nNow what is left to be done is make an alias for sqlplus as\n[sourcecode language=\u0026lsquo;css\u0026rsquo;] alias sqlplus=\u0026lsquo;rlwrap sqlplus\u0026rsquo; [/sourcecode]\nUsing up/down arrows, commands can be scrolled up and down just like windows. Have a look at full article here.\nCheers\nSidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/05/04/command-line-history-in-sql-for-linux/","summary":"\u003cp\u003eFound a very interesting article on Dizwell\u0026rsquo;s blog. It was about keeping history of the SQL commands in SQL Plus on Linux. It is almost very simple. Just need to download a small utility called rlwrap from \u003ca href=\"http://utopia.knoware.nl/%7Ehlub/rlwrap/\"\u003ehere\u003c/a\u003e. Its a tar.gz file. Download it, un-tar using\u003c/p\u003e\n\u003cp\u003e[sourcecode language=\u0026lsquo;css\u0026rsquo;]\ntar -xvf rlwrap-0.28.tar.gz\n[/sourcecode]\u003c/p\u003e\n\u003cp\u003eIt will create a directory with the same name. cd to the directory and run\u003c/p\u003e\n\u003cp\u003e[sourcecode language=\u0026lsquo;css\u0026rsquo;]\n./configure\n[/sourcecode]\u003c/p\u003e","title":"Command line history in SQL (for Linux)…"},{"content":"Well, here I am listing down the important tools and utilities I seem to use on the daily basis and without them life on internet \u0026amp; laptop really seems to be stuck, sort of :(\n1. Winamp: No need to say anything. Winamp is possibly the best and lightest mp3 (these days videos also) player. Just run Winamp in the system tray and enjoy\n2. Winrar: Utility to uncompress compressed files. Earlier winzip was my favorite but recently I swichted to Winrar as Winzip doesn\u0026rsquo;t support rar format. I am pretty happy using it.\n3. 
Mozilla: I have almost stopped using Internet Explorer (except with some of the websites which say flat NO, sort of, to Mozilla). It is really good. With this I have stopped using any standalone download manager (DAP and Flashget have been among my favorites) also as Mozilla has one inbuilt and there are many addons also available.\n4. Messengers: The latest communication channel: messengers. Latest versions of Yahoo, Gtalk and Skype.\n5. uTorrent: Recently I started using torrents also. uTorrent, I am using as torrent client. It is light weight and pretty heavy to use. The only issue I have with it is that there is no feature like automatic bandwidth management. I have manually set a speed for download and upload as assiging the full badwidth to utorrent screws up normal surfing. And with manual settings when I am not surfing some of the bandwidth goes wasted ;)\n6. Realplayer Alternate: I was a typical user who would stick with Real Player though its pretty heavy and eats a lot of resources. Sometime back my friend Vaibhav suggested me to go for Real Player alternate. I am happy using it now. very simple, small and precise :)\n7. Replay Music: Just a new thing in my life too. A tool to record streaming audio, infact any audio coming out of sound card. Good one, saves as mp3 and has a very simple interface, just start, stop buttons and you are done.\n8. RSS reader: Again a new item I am experimenting with. Till now I havn\u0026rsquo;t been able to find a good one. Generally the interface is clumsy and you don\u0026rsquo;t enjoy reading in that small window. I used RSS Reader first and using Omea Reader these days.\nFSL Super finder: A replacement of windows search. I never liked (I seem to hate, indeed) windows search after Windows 2000. In XP it is totally screwed up. This one is a free utility with good interface and speed. Use of all these tools makes me a happy user \u0026amp; surfer :)\nCheers Sidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/04/16/utilities-i-cant-survive-without/","summary":"\u003cp\u003eWell, here I am listing down the important tools and utilities I seem to use on the daily basis and without them life on internet \u0026amp; laptop really seems to be stuck, sort of :(\u003c/p\u003e\n\u003cp\u003e1. Winamp: No need to say anything. Winamp is possibly the best and lightest mp3 (these days videos also) player. Just run Winamp in the system tray and enjoy\u003c/p\u003e\n\u003cp\u003e2. Winrar: Utility to uncompress compressed files. Earlier winzip was my favorite but recently I swichted to Winrar as Winzip doesn\u0026rsquo;t support rar format. I am pretty happy using it.\u003c/p\u003e","title":"Utilities I cant survive without…"},{"content":"I was going through Eddie Awad\u0026rsquo;s blog. There was one post about an article about NO_DATA_FOUND exception in Oracle. Both the posts you can find here and here. Do read. It is really interesting.\nSidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/04/13/no_data_found/","summary":"\u003cp\u003eI was going through \u003ca href=\"http://awads.net/wp/\"\u003eEddie Awad\u0026rsquo;s blog\u003c/a\u003e. There was one post about an article about NO_DATA_FOUND exception in Oracle. Both the posts you can find \u003ca href=\"http://awads.net/wp/2007/04/10/no_data_found-gotcha/\"\u003ehere\u003c/a\u003e and \u003ca href=\"http://blogs.ittoolbox.com/oracle/guide/archives/minitip-9-no-data-found-bug-or-feature-15602\"\u003ehere\u003c/a\u003e. Do read. 
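The gotcha the linked posts talk about is easy to reproduce yourself. A minimal sketch (the table T and the function GET_VAL below are made up for illustration): a lookup that raises NO_DATA_FOUND when called from PL/SQL can quietly turn into a NULL when the same function is called from SQL.\n[sourcecode language=\u0026lsquo;css\u0026rsquo;]-- hypothetical lookup over a hypothetical table t(id, val)\ncreate or replace function get_val (p_id number) return varchar2 is\n  l_val t.val%type;\nbegin\n  -- raises NO_DATA_FOUND if no row matches\n  select val into l_val from t where id = p_id;\n  return l_val;\nend;\n/\n\n-- from PL/SQL this fails with ORA-01403: no data found\nbegin\n  dbms_output.put_line(get_val(-1));\nend;\n/\n\n-- from SQL the exception is swallowed and the query simply returns NULL\nselect get_val(-1) from dual;\n[/sourcecode]\nWhether that behaviour is a bug or a feature is exactly what the linked posts get into. 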
It is really interesting.\u003c/p\u003e\n\u003cp\u003eSidhu\u003c/p\u003e","title":"NO_DATA_FOUND…"},{"content":"There are times when you just can\u0026rsquo;t do anything, I repeat, you just can\u0026rsquo;t do anything. In the evening I was working on my laptop and everything was fine. Next day in the morning when I got up, switched on the laptop and clicked on IE icon, it happily gave a beautiful error message \u0026ldquo;This application failed to start because msvcrl.dll was not found. Re-installing the application may fix the problem.\u0026rdquo; So it was the time for Googling about the error. Thank God I had Opera installed. After 2-3 hits I came to know that it was a trojan. Finally, there was one dll which needed to be deleted to get rid of this error message :) But after that there was some problem with IE in opening some of the sites. One option was to upgrade to IE 7 but don\u0026rsquo;t know why, I havn\u0026rsquo;t started liking IE 7 yet. So no other option left than restoring windows (I have Windows XP Media Center Edition installed).\nOn all the laptops HP creates a partition where they store all the crap to restore windows back to original condition. They call it recovery partition (It eats upto 11-12 Gigs of space and recovers nothing really :( In the recovery tool provided by HP, there are 2 options one is called normal recovery which will restore the OS and won\u0026rsquo;t touch your data (They say so but I read many posts where people talked about having lost everything to this so called normal recovery :( another is called destructive recovery (I just love the name :-* ) which erases everything on the hard drive and restore back the system to factory shipped condition (Term courtesy HP). Obviously I didn\u0026rsquo;t want to lose any data. So I tried with normal recovery and it didn\u0026rsquo;t work actually. So I contacted HP for the same and asked about any other option. They said NO in a very stylish manner as technical people are taught to do.\nNow the only option left was to go for destructive recovery. I got some blanks DVDs and burned the data so as to free my hard drive. Finally after a few hours of work my hard drive was ready to face destructive recovery. I ran the tool and it worked out something for around 40 mins showing me a progress bar for which I waited to complete as a lover would wait for his watch to strike the moment when his sweet heart has to come :) Finally it was 100% complete and system rebooted. You know to find everything intact as if not even touched. So started looking for why the hell this 12 GB recovery partition if it has to do nothing ?\nI also had Fedora Core installed so one thought struck that might be possible it is not able to do because of those Linux partitions, so deleted those partitions also and then tried but of no use. Finally only one thing was left that delete all the partitions and give the whole space to one partition and then try. Thank God it worked and my laptop was back with a fresh installation of windows. (Later on I came to know that existing of more than one partition was the reason why normal recovery also didn\u0026rsquo;t work and few more interesting things - in all the cases it didn\u0026rsquo;t give any error that was not able to restore or something and in case of normal recovery they delete all the softwares installed later but their shortcuts will still be there ? ridiclous, foolish, stupid ). 
I already had made my mind to make Norton Ghost image of the C drive to avoid this operation again in future. Before making the image, many things had to be done like configuring internet connection, updating windows, installation of some utilities \u0026amp; uninstalling Norton Internet Security that I got pre-installed along with the OS.\nI use Sify broadband for internet. So installed the sify dialer and when I tried to connect it gave one strange bloody,out of hell message that \u0026ldquo;You don\u0026rsquo;t have any anti-virus installed, click here to download updated antivirus from Sify\u0026rdquo;. I just can\u0026rsquo;t understand why the hell Sify is worried about anti-virus on my system. As I said at sometimes you just can\u0026rsquo;t do anything, had to install Norton Internet Security again to be able to connect to internet. Later on I got another command based dialer Supersify for Sify from internet developed by an individual (Many thanks to him (using it I can connect to internet with AVG Free anti-virus installed which is not recognised by Sify\u0026rsquo;s dialer as a good anti-virus) \u0026amp; one of my friend Vaibhav who googled this thing for me and was with me on voice during whole of this story which lasted for a day, a good short story it could make on a TV channel broadcasting peoples\u0026rsquo; frustrations with the technology) So after sweating out for one full day my laptop was back to normal :)\nPS: If you just got a laptop and Windows is in fine condition, please make an image of your C drive using some tool. Other wise probably you also have to write a blog ;)\nSidhu\nComments Comment by RollerCoaster on 2007-04-02 11:24:00 +0530 That is windows for you.\ntho i am angry u did not include my name in the article, after all i was the one providing you with all the support! Grrr\nComment by Sushubh on 2007-04-02 18:17:00 +0530 the link to supersify is dead mate. 🙂\nComment by Sidhu on 2007-04-02 21:12:00 +0530 thanks Sushubh\nI corrected it.\nSidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/04/01/height-of-helplessness/","summary":"\u003cp\u003eThere are times when you just can\u0026rsquo;t do anything, I repeat, you just can\u0026rsquo;t do anything. In the evening I was working on my laptop and everything was fine. Next day in the morning when I got up, switched on the laptop and clicked on IE icon, it happily gave a beautiful error message \u0026ldquo;This application failed to start because msvcrl.dll was not found. Re-installing the application may fix the problem.\u0026rdquo; So it was the time for Googling about the error. Thank God I had Opera installed. After 2-3 hits I came to know that it was a trojan. Finally, there was one dll which needed to be deleted to get rid of this error message :) But after that there was some problem with IE in opening some of the sites. One option was to upgrade to IE 7 but don\u0026rsquo;t know why, I havn\u0026rsquo;t started liking IE 7 yet. So no other option left than restoring windows (I have Windows XP Media Center Edition installed).\u003c/p\u003e","title":"{ Height of } Helplessness…"},{"content":"When I got my Laptop, along with it also got Norton Internet Security and 2 months subscription for updates \u0026amp; all. Whenever I would boot up the laptop Internet Security would open the window for configuration. I kept on cancelling it for 2 weeks or so. But one fine (read not fine :( day I configured it. It was very happy. Said Thank You at the end. But the best was yet to come. 
After 10-15 days Windows (XP media center is installed on my Laptop) started showing signs of strain ;) It won\u0026rsquo;t start properly, after saying loading your settings it will stuck. Then I had to kill explorer.exe and start it again. With this also one strange problem that no startup items will be loaded :( I was discussing the same with one of my friend. He said that Norton must be the culprit. So I thought about uninstalling it. When I tried to uninstall it gave some strange error. Anyways it also told about one URL from where one could check details of the error. Finally when the uninstall started it gave one beautiful message \u0026ldquo;You still have 39 days of subscription left, if you uninstall the product you will not be able to reuse the subscription\u0026rdquo; :D\nAfter uninstalling when I rebooted, Windows was perfectly fine. Then I enabled windows firewall and installed AVG Free Antivirus (One of my favorite when it comes to using resources and loading at startup). Everything in place now :)\nCheers !\nSidhu\nComments Comment by RollerCoaster on 2007-03-26 10:41:00 +0530 looks like u followed my advise. after all i have one XP that is abt to complete 500 days and is still standing tall 🙂\nthat was only possible because i did not install crap stuff like norton and keep auto updates on.\ncheers\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/03/25/viruses-intrusion-attempts-and-norton/","summary":"\u003cp\u003eWhen I got my Laptop, along with it also got Norton Internet Security and 2 months subscription for updates \u0026amp; all. Whenever I would boot up the laptop Internet Security would open the window for configuration. I kept on cancelling it for 2 weeks or so. But one fine (read not fine :( day I configured it. It was very happy. Said \u003cstrong\u003eThank You\u003c/strong\u003e at the end. But the best was yet to come. After 10-15 days Windows (XP media center is installed on my Laptop) started showing signs of strain ;) It won\u0026rsquo;t start properly, after saying loading your settings it will stuck. Then I had to kill explorer.exe and start it again. With this also one strange problem that no startup items will be loaded :( I was discussing the same with one of my friend. He said that Norton must be the culprit. So I thought about uninstalling it. When I tried to uninstall it gave some strange error. Anyways it also told about one URL from where one could check details of the error. Finally when the uninstall started it gave one beautiful message \u0026ldquo;You still have 39 days of subscription left, if you uninstall the product you will not be able to reuse the subscription\u0026rdquo; :D\u003c/p\u003e","title":"Viruses, Intrusion attempts and Norton…"},{"content":"I was checking Tom Kyte\u0026rsquo;s blog. It was about an article titled A Note To Employers: 8 Things Intelligent People, Geeks and Nerds Need To Work Happily. Well first of all meaning of all three words from \u0026ldquo;define: * in Google\u0026rdquo;\n1. Intelligent: having the capacity for thought and reason especially to a high degree\n2. Geek: In computers and the Internet, a geekis a person who is inordinately dedicated to and involved with the technologyto the point of sometimes not appearing to be like the rest of us (non-geeks).Being a geek also implies a capability with the technology.\n3. Nerd: A computer expert by aptitude and not mere training. 
Usually male, under the age of 35 and socially inept; a person whose tremendous skill with operating or designing computer hardware or software is exceeded only by his, rarely her, passionate love of the technology.\nSo all the three words refer to some exceptional class of people, not everybody ;) Here are my views about all the points:\n1. Yes flexible timings is a big thing. Different classes of people are there. One that can work in morning, another late night so and so and one that can\u0026rsquo;t anytime (but this article is not for those ;)\n2. Yes everybody likes different kind of environment around. I like greenry and natural things, if possible :) \u0026amp; the nap thing i strongly agree to ;)\n3. Except while sleeping and listening to some classical genre of music, I need light :)\n4. Nice idea. \u0026amp; I ALMOST HATE those people talking on the phone everytime, specially sales people \u0026amp; managers.\n5. Yes. No suits and formals. It is upto you whatever you like. Reid \u0026amp; Taylor Suits are ok for James Bond in 007 series but are not going to help in bringing up a server from crash or catching an exception that has propagated to 5th-6th calling program because somebody didn\u0026rsquo;t handle it at proper place earlier :)\n6. Not a big issue. Just need the company of like minded people and it is fine :)\n7. Yes. No meetings just for the sake of meetings. We need work not meetings.\n8. It hurts the soul.\nSo those are all my views about this. Do post what you think ?\nCheers !\nSidhu\nComments Comment by Neeraj Bhatia on 2007-04-04 15:07:00 +0530 Whats this??\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/03/19/intelligent-people-geeks-and-nerds/","summary":"\u003cp\u003eI was checking \u003ca href=\"http://tkyte.blogspot.com/\"\u003eTom Kyte\u0026rsquo;s blog\u003c/a\u003e. It was about an \u003ca href=\"http://nomadishere.com/2007/03/12/a-note-to-employers-8-things-intelligent-people-geeks-and-nerds-need-to-work-happy\"\u003earticle\u003c/a\u003e titled A Note To Employers: 8 Things Intelligent People, Geeks and Nerds Need To Work Happily. Well first of all meaning of all three words from \u0026ldquo;define: * in Google\u0026rdquo;\u003c/p\u003e\n\u003cp\u003e1. Intelligent: having the capacity for thought and reason especially to a high degree\u003c/p\u003e\n\u003cp\u003e2. Geek: In computers and the Internet, a geekis a person who is inordinately dedicated to and involved with the technologyto the point of sometimes not appearing to be like the rest of us (non-geeks).Being a geek also implies a capability with the technology.\u003c/p\u003e","title":"Intelligent people, geeks and nerds…"},{"content":"Running Linux from right inside Windows, just like a normal application !!! Idea looks cool. I was also searching for the same thing. After little bit of Googling got one link http://www.lifehack.org/articles/technology/beginners-guide-run-linux-like-any-other-program-in-windows.html This article explains everything about making it possible. (Thanks Kyle Pott :)\nI had downloaded Fedore Core 7 Live CD. Tried same with this method. It is working perfectly fine. Rocking Infact !!!I have 1 Gigs of RAM in my laptop. 
It is giving good performance but will definitly rock with 2 Gigs.This is how it looks like on my laptop .\nCheers !Sidhu\nComments Comment by Aman Sharma on 2007-03-17 09:17:00 +0530 Hi Amar,\nWell why you didnt try vmare to check the same functionality.\nYou can also have a “feel” by running Cygwin.\nCheers\nAman….\nComment by Sidhu on 2007-03-18 10:30:00 +0530 Yup I did it through VMWare only. Once I tried Wipro’s UWIN. Will try Cygwin also…\nCheerz\nSidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/03/16/running-linux-from-inside-windows/","summary":"\u003cp\u003eRunning Linux from right inside Windows, just like a normal application !!! Idea looks cool. I was also searching for the same thing. After little bit of Googling got one link \u003ca href=\"http://www.lifehack.org/articles/technology/beginners-guide-run-linux-like-any-other-program-in-windows.html\"\u003ehttp://www.lifehack.org/articles/technology/beginners-guide-run-linux-like-any-other-program-in-windows.html\u003c/a\u003e This article explains everything about making it possible. (Thanks Kyle Pott :)\u003c/p\u003e\n\u003cp\u003eI had downloaded Fedore Core 7 Live CD. Tried same with this method. It is working perfectly fine. Rocking Infact !!!I have 1 Gigs of RAM in my laptop. It is giving good performance but will definitly rock with 2 Gigs.This is how it looks like on my laptop .\u003c/p\u003e","title":"Running Linux from inside Windows…"},{"content":"For the last 4-5 months I was almost without a PC and internet. This problem was solved when I got my HP laptop last month and then internet connection this month. One thing I had heard much about was RSS feeds and also wanted to try the same because there were many blogs which I used to check regularly. I had an idea about the concept used by RSS feeds that you will need to install a piece of software on your PC, give link of the page you want it to monitor and it will inform you when there is something new since your last visit.(period) Nothing more I knew.\nSo as I generally do started looking for some good RSS reader(possibly the best ;). From the very first search in Google I opened the first link and downloaded RSS Reader and installed it. Then after installing, had to give the link of RSS feed for the page that you wanted it to monitor. It was very simple copy-paste kind of job. Just took around 2 mins. Now I have added many websites and blogs to it. Every time when there is something new it rings the bell :) Some of my favourite links: http://www.gurdasmaan.com/sforum/main.php Official Forum of Gurdas Maan http://tkyte.blogspot.com/ Blog of Tom Kyte http://asktom.oracle.com/ Hompage of Asktom http://jonathanlewis.wordpress.com/ Homepage of Jonathan Lewis http://www.jlcomp.demon.co.uk/ Website of Jonathan Lewis\nCheers !!! Sidhu\nComments Comment by Greg on 2008-04-11 11:48:09 +0530 I like your blog, this post is really good, but please vary your topics, it will broad your readership.\n","permalink":"https://v2.amardeepsidhu.com/blog/2007/03/14/rss-feeds/","summary":"\u003cp\u003eFor the last 4-5 months I was almost without a PC and internet. This problem was solved when I got my HP laptop last month and then internet connection this month. One thing I had heard much about was RSS feeds and also wanted to try the same because there were many blogs which I used to check regularly. 
I had an idea about the concept used by RSS feeds that you will need to install a piece of software on your PC, give link of the page you want it to monitor and it will inform you when there is something new since your last visit.(period) Nothing more I knew.\u003c/p\u003e","title":"RSS Feeds…"},{"content":"Sometimes even very small things mess up everything in your head, and something similar happened with me. I had two lists of numbers, some numbers were missing in one of them, and I wanted to find out which ones. Instead of inserting the values into tables and using SQL, or using diff on Unix, I thought about using Excel for the job: the much talked about function VLOOKUP is there, which can be used for exactly such scenarios, so I tried the same. I had heard a lot about VLOOKUP, but only in a general way; whenever people wrote \u0026ldquo;=vlookup(bla bla bla bla\u0026hellip;)\u0026rdquo; I just used to count my heartbeat and look around. So this was the time when I really needed to learn VLOOKUP, and as a normal computer geek would do, I opened google.com and started searching. What I got was really disappointing: almost everywhere I found the same language (I also checked Microsoft help, but that techie lingo does not get into your head when things are already messed up). Some of that text I am pasting here, with a hope that it won\u0026rsquo;t create copyright issues; I really got nothing out of it.\n\u0026ldquo;Searches for a value in the leftmost column of a table, and then returns a value in the same row from a column you specify in the table. Use VLOOKUP instead of HLOOKUP when your comparison values are located in a column to the left of the data you want to find.\nThe V in VLOOKUP stands for \u0026ldquo;Vertical.\u0026rdquo;\nSyntax\nVLOOKUP( lookup_value, table_array, col_index_num, range_lookup)\nLookup_value is the value to be found in the first column of the array. Lookup_value can be a value, a reference, or a text string.\nTable_array is the table of information in which data is looked up. Use a reference to a range or a range name, such as Database or List.\u0026rdquo;\nSo when everything said no, I started looking around and finally figured out what the heck VLOOKUP is and how to handle it, and I would like to write it down here for people like me. If there is anything you don\u0026rsquo;t understand, do leave a comment; it would make me happy in two ways: one, that my post helped you learn VLOOKUP, and two, that people are reading my post. So here goes the story about VLOOKUP.\nBasically VLOOKUP is a function which we use to compare two lists of values. Say I have the first list as \u0026ldquo;1 2 3 4 5\u0026rdquo; and the second list as \u0026ldquo;3 4 5\u0026rdquo;. I want to pick up the values from the 2nd list one by one, search for each of them in the first list, and be told whether it is there in the first list or not. Note that it is one way only: the values in the first list will not be searched in the second list; you need to write another VLOOKUP in the reverse direction to accomplish that. So let\u0026rsquo;s come to the point.\nThe syntax of VLOOKUP is\nVLOOKUP(lookup_value,table_array,col_index_num,range_lookup)\nlookup_value is the value we want to look for; as per the above example, a value from the 2nd list. table_array is the range of values in which we will look for lookup_value, i.e. the first list. (Note that the column we search in always has to be the first column of table_array.)\nIt is written just like any other Excel function, for example:\n=VLOOKUP(B2,$A$2:$A$6,1,0)\nB2 is the value we want to search for.\n$A$2:$A$6 is the list in which we will look for that value.\n1 is col_index_num: it tells VLOOKUP which column of table_array to return when a match is found. Our table_array has only one column, so 1 simply returns the matched value itself.\n0 is range_lookup, a logical value that specifies whether you want VLOOKUP to find an exact match or an approximate match. If TRUE or omitted, an approximate match is returned; in other words, if an exact match is not found, the next largest value that is less than lookup_value is returned. If FALSE, VLOOKUP will only accept an exact match, and if one is not found, the error value #N/A is returned (here 0 means FALSE).\nNow, why is the range written as $A$2:$A$6 and not as A2:A6? There is a logic behind this too. The formula is entered once and then filled down for every value in the second list. If I write the range as A2:A6 it is a relative reference, so when the formula is copied to the next row it silently becomes A3:A7, then A4:A8, and so on, and values near the top of the first list quietly drop out of the range being searched. That is not what I want. With $A$2:$A$6 the range stays fixed, so every value from the second list is looked up in the same range: first row to last row of the first column.
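To make it concrete, here is a tiny worked example with made-up values: list 1 sits in A2:A6, list 2 in B2:B4, and the formula is filled down in column C.\n[sourcecode language=\u0026lsquo;css\u0026rsquo;]     A (list 1)   B (list 2)   C (formula, filled down)      result\n2    1            3            =VLOOKUP(B2,$A$2:$A$6,1,0)    3\n3    2            4            =VLOOKUP(B3,$A$2:$A$6,1,0)    4\n4    3            7            =VLOOKUP(B4,$A$2:$A$6,1,0)    #N/A\n5    4\n6    5\n[/sourcecode]\n3 and 4 are present in the first list so they come back as themselves; 7 is not there, so it comes back as #N/A, and that #N/A is exactly the \u0026ldquo;missing\u0026rdquo; marker I was after.\n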
Hope it makes using VLOOKUP beautiful.\nThanks for your time.\nSidhu\n","permalink":"https://v2.amardeepsidhu.com/blog/2006/09/27/small-things/","summary":"\u003cp\u003eSometimes even very small things mess up everything in your head, and something similar happened with me. I had two lists of numbers, some numbers were missing in one of them, and I wanted to find out which ones. Instead of inserting the values into tables and using SQL, or using diff on Unix, I thought about using Excel for the job: the much talked about function VLOOKUP is there, which can be used for exactly such scenarios, so I tried the same. I had heard a lot about VLOOKUP, but only in a general way; whenever people wrote \u0026ldquo;=vlookup(bla bla bla bla\u0026hellip;)\u0026rdquo; I just used to count my heartbeat and look around. So this was the time when I really needed to learn VLOOKUP, and as a normal computer geek would do, I opened google.com and started searching. What I got was really disappointing: almost everywhere I found the same language (I also checked Microsoft help, but that techie lingo does not get into your head when things are already messed up). Some of that text I am pasting here, with a hope that it won\u0026rsquo;t create copyright issues; I really got nothing out of it.\u003cbr\u003e\n\u0026ldquo;Searches for a value in the leftmost column of a table, and then returns a value in the same row from a column you specify in the table. Use VLOOKUP instead of HLOOKUP when your comparison values are located in a column to the left of the data you want to find.\u003c/p\u003e","title":"Small things…"},{"content":"","permalink":"https://v2.amardeepsidhu.com/archive/","summary":"archive","title":"Archive"}]