Thursday, June 21, 2007

/etc/oratab or /etc/rc.d/rc3.d/S99gcstartup?


A few days ago I updated Oracle Grid Control and its repository database, located on a RedHat AS4 server. Everything worked fine until the server was rebooted, when I had the “nice” surprise of seeing the repository database restarted using the old ORACLE_HOME. At this point it is worth reviewing the steps I had followed as part of the upgrade (10ag, 10db and 10gr below are local shell aliases that set the environment for the agent, database and OMS homes, respectively):


1. stop agent

10ag
emctl stop agent

2. stop opmn stack

10gr
$ORACLE_HOME/opmn/bin/opmnctl stopall

3. stop listener

10db
lsnrctl stop

4. stop database

sqlplus / as sysdba
shutdown immediate

5. backup agent, database, oms homes

cp -r /opt/oracle/product/10.2.0/agent /opt/oracle/backup/
cp -r /opt/oracle/product/10.2.0/db10g /opt/oracle/backup/
cp -r /opt/oracle/product/10.2.0/oms10g /opt/oracle/backup/

6. apply patch for 4329444

oracle@aut-vie-racman:kits$ unzip p4329444_10104_LINUX.zip
Archive: p4329444_10104_LINUX.zip
creating: 4329444/
creating: 4329444/files/
creating: 4329444/files/lib/
creating: 4329444/files/lib/libserver10.a/
inflating: 4329444/files/lib/libserver10.a/qerix.o
creating: 4329444/etc/
creating: 4329444/etc/config/
inflating: 4329444/etc/config/inventory
inflating: 4329444/etc/config/actions
creating: 4329444/etc/xml/
inflating: 4329444/etc/xml/GenericActions.xml
inflating: 4329444/etc/xml/ShiphomeDirectoryStructure.xml
inflating: 4329444/README.txt
oracle@aut-vie-racman:kits$ cd 4329444/
oracle@aut-vie-racman:4329444$ /opt/oracle/product/10.2.0/db10g/OPatch/opatch apply
Invoking OPatch 10.2.0.1.0
Oracle interim Patch Installer version 10.2.0.1.0
Copyright (c) 2005, Oracle Corporation. All rights reserved..
Oracle Home : /opt/oracle/product/10.2.0/db10g
Central Inventory : /opt/oracle/oraInventory
from : /opt/oracle/product/10.2.0/db10g/oraInst.loc
OPatch version : 10.2.0.1.0
OUI version : 10.2.0.1.0
OUI location : /opt/oracle/product/10.2.0/db10g/oui
Log file location : /opt/oracle/product/10.2.0/db10g/cfgtoollogs/opatch/opatch-2007_Jun_19_16-16-19-CEST_Tue.log
ApplySession applying interim patch '4329444' to OH '/opt/oracle/product/10.2.0/db10g'
Invoking fuser to check for active processes.
Invoking fuser on "/opt/oracle/product/10.2.0/db10g/bin/oracle"
OPatch detected non-cluster Oracle Home from the inventory and will patch the local
system only.
Please shutdown Oracle instances running out of this ORACLE_HOME on the local
system.
(Oracle Home = '/opt/oracle/product/10.2.0/db10g')
Is the local system ready for patching?
Do you want to proceed? [y|n]
y
User Responded with: Y
Backing up files and inventory (not for auto-rollback) for the Oracle Home
Backing up files affected by the patch '4329444' for restore. This might take a
while...
Backing up files affected by the patch '4329444' for rollback. This might take a
while...
Patching component oracle.rdbms, 10.1.0.4.0...
Updating archive file "/opt/oracle/product/10.2.0/db10g/lib/libserver10.a" with
"lib/libserver10.a/qerix.o"
Running make for target ioracle
ApplySession adding interim patch '4329444' to inventory
The local system has been patched and can be restarted.
OPatch succeeded.

7. Start db listener

oracle@aut-vie-racman:4329444$ 10db
oracle@aut-vie-racman:4329444$ lsnrctl start
LSNRCTL for Linux: Version 10.1.0.4.0 - Production on 19-JUN-2007 16:18:42
Copyright (c) 1991, 2004, Oracle. All rights reserved.
Starting /opt/oracle/product/10.2.0/db10g/bin/tnslsnr: please wait...
TNSLSNR for Linux: Version 10.1.0.4.0 - Production
System parameter file is /opt/oracle/product/10.2.0/db10g/network/admin/listener.ora
Log messages written to /opt/oracle/product/10.2.0/db10g/network/log/listener.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=aut-vie-racman.bankgutmann.co.at)(PORT=1521)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 10.1.0.4.0 - Production
Start Date 19-JUN-2007 16:18:43
Uptime 0 days 0 hr. 0 min. 0 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /opt/oracle/product/10.2.0/db10g/network/admin/listener.ora
Listener Log File /opt/oracle/product/10.2.0/db10g/network/log/listener.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=aut-vie-racman.bankgutmann.co.at)(PORT=1521)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

8. start emrep database

oracle@aut-vie-racman:4329444$ sqlplus / as sysdba
SQL*Plus: Release 10.1.0.4.0 - Production on Tue Jun 19 16:20:27 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup
ORACLE instance started.
Total System Global Area 536870912 bytes
Fixed Size 780056 bytes
Variable Size 275781864 bytes
Database Buffers 260046848 bytes
Redo Buffers 262144 bytes
Database mounted.
Database opened.

9. Patch OMS.
10. Patch agent
11. Start agent

10ag
emctl start agent

12. Start oms

10gr
$ORACLE_HOME/opmn/bin/opmnctl startall


Grid Control was patched and worked just fine. The only problem was that the repository database remained at version 10.1.0.4.0. So, I decided to upgrade the repository database too, following the steps below:


1. stop agent
2. stop oms
3. stop database
4. stop listener
5. install 10.2.0.1 Oracle software into a new home
6. upgrade the Oracle software to 10.2.0.3
7. upgrade the database using “dbua”
8. start oms
9. start agent
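
In command form these steps map onto the same commands used earlier, with the upgrade itself driven by the installer and DBUA; a rough sketch (the non-interactive shutdown and the comments are illustrative):

# 1-4: stop the stack (agent home, OMS home, then the old database home)
emctl stop agent
$ORACLE_HOME/opmn/bin/opmnctl stopall
echo "shutdown immediate" | sqlplus -s "/ as sysdba"
lsnrctl stop

# 5-6: install 10.2.0.1 into a new home, then apply the 10.2.0.3 patch set
# (runInstaller from the base release kit, then from the patch set kit)

# 7: upgrade the repository database, running dbua from the new home
dbua

# 8-9: restart the OMS and the agent
$ORACLE_HOME/opmn/bin/opmnctl startall
emctl start agent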


Everything was normal until today, when we had to shut down the server for maintenance. Unfortunately, on reboot the repository database was again started from the previous 10.1.0.4 Oracle home and, of course, because of this OEM couldn't be used. I had a look into the /etc/oratab file and the entries were fine: the database was supposed to be started from the new, upgraded ORACLE_HOME! Furthermore, changing the /etc/oratab file seemed to have no effect. In the end, I found the reason. Under the /etc/rc.d/rc3.d/ directory there is a script called S99gcstartup. This script calls the startup scripts for the repository database, the OMS and the agent. It is enough to copy the database startup script under the new ORACLE_HOME directory and to change the ORACLE_HOME variable declared in that script. After these changes the repository database is started with the correct ORACLE_HOME on every reboot.
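
For illustration, here is a minimal sketch of the kind of change involved, assuming the database startup script lives under the Oracle home and declares ORACLE_HOME near its top (the script name and the new home path are hypothetical; check what S99gcstartup actually calls on your system):

# hypothetical paths - adjust to your installation
OLD_HOME=/opt/oracle/product/10.2.0/db10g
NEW_HOME=/opt/oracle/product/10.2.0/db10gr2

# 1. see which startup scripts S99gcstartup calls and which homes they use
grep -n "$OLD_HOME" /etc/rc.d/rc3.d/S99gcstartup

# 2. copy the database startup script under the new home (illustrative name)
cp "$OLD_HOME/scripts/db_start.sh" "$NEW_HOME/scripts/db_start.sh"

# 3. fix the ORACLE_HOME variable declared inside the copied script
sed -i "s|^ORACLE_HOME=.*|ORACLE_HOME=$NEW_HOME|" "$NEW_HOME/scripts/db_start.sh"

# 4. make S99gcstartup itself reference the new home
sed -i "s|$OLD_HOME|$NEW_HOME|g" /etc/rc.d/rc3.d/S99gcstartup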

Friday, June 08, 2007

Losing One Voting Disk

Voting disks are used in a RAC configuration for maintaining node membership. They are critical pieces of a cluster configuration. Starting with Oracle 10gR2 it is possible to mirror the OCR and the voting disks. Using the default mirroring template, the minimum number of voting disks required for normal operation is two.

Scenario Setup

This scenario simulates the crash of one voting disk, using the following steps:

  1. identify the voting disks:

crsctl query css votedisk

0. 0 /dev/raw/raw1

1. 0 /dev/raw/raw2

2. 0 /dev/raw/raw3

  2. corrupt one of the voting disks (as root):

    dd if=/dev/zero of=/dev/raw/raw3 bs=1M

Recoverability Steps

  1. check the “$CRS_HOME/log/[hostname]/alert[hostname].log” file. A message like the following should be written there, allowing us to determine which voting disk became corrupted:

    [cssd(9120)]CRS-1604:CSSD voting file is offline: /opt/oracle/product/10.2.0/crs_1/Voting1. Details in /opt/oracle/product/10.2.0/crs_1/log/aut-arz-ractest1/cssd/ocssd.log.

  2. According to the above listing, Voting1 is the corrupted disk. Shut down the CRS stack:

    srvctl stop database -d fitstest -o immediate

    srvctl stop asm -n aut-vie-ractest1

    srvctl stop asm -n aut-arz-ractest1

    srvctl stop nodeapps -n aut-vie-ractest1

    srvctl stop nodeapps -n aut-arz-ractest1

    crs_stat -t

    On every node as root:

    crsctl stop crs

  3. Pick a good voting disk from the remaining ones and copy it over the corrupted one:

    dd if=/dev/raw/raw2 of=/dev/raw/raw3 bs=1M

  4. Start CRS (on every node as root):

      crsctl start crs

  5. Check the “$CRS_HOME/log/[hostname]/alert[hostname].log” file again. It should look like the listing below:

    [cssd(14463)]CRS-1601:CSSD Reconfiguration complete. Active nodes are aut-vie-ractest1 aut-arz-ractest1 .

    2007-05-31 15:19:53.954

    [crsd(14268)]CRS-1012:The OCR service started on node aut-vie-ractest1.

    2007-05-31 15:19:53.987

    [evmd(14228)]CRS-1401:EVMD started on node aut-vie-ractest1.

    2007-05-31 15:19:55.861 [crsd(14268)]CRS-1201:CRSD started on node aut-vie-ractest1.

  6. After a couple of minutes check the status of the whole CRS stack:

    [oracle@aut-vie-ractest1 ~]$ crs_stat -t
    Name           Type        Target    State     Host
    ------------------------------------------------------------
    ora....SM2.asm application ONLINE    ONLINE    aut-...est1
    ora....T1.lsnr application ONLINE    ONLINE    aut-...est1
    ora....st1.gsd application ONLINE    ONLINE    aut-...est1
    ora....st1.ons application ONLINE    ONLINE    aut-...est1
    ora....st1.vip application ONLINE    ONLINE    aut-...est1
    ora....SM1.asm application ONLINE    ONLINE    aut-...est1
    ora....T1.lsnr application ONLINE    ONLINE    aut-...est1
    ora....st1.gsd application ONLINE    ONLINE    aut-...est1
    ora....st1.ons application ONLINE    ONLINE    aut-...est1
    ora....st1.vip application ONLINE    ONLINE    aut-...est1
    ora....test.db application ONLINE    ONLINE    aut-...est1
    ora....t1.inst application ONLINE    ONLINE    aut-...est1
    ora....t2.inst application ONLINE    ONLINE    aut-...est1


Note: It is also possible to recover a lost voting disk from an old voting disk backup and to perform the “dd” command without shutting down the CRS stack.
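
As a sketch of that alternative, assuming a dd image of the voting disk was taken earlier while the cluster was healthy (the backup path is illustrative):

# taken earlier, with the cluster healthy:
dd if=/dev/raw/raw3 of=/backup/votedisk3.bak bs=1M

# after the corruption, restore the voting disk in place:
dd if=/backup/votedisk3.bak of=/dev/raw/raw3 bs=1M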

Monday, February 05, 2007

select * from XML

A few days ago, one of our programmers asked me how he could call a database procedure, passing an array of records as a parameter. Oracle provides this feature on the server side by using, for example, collections of objects, but building such collections on the client side and passing them as parameters is not a trivial task at all (I don't even know if it is possible with complex types, especially taking into consideration that we use IBATIS to handle database connections, statements and so on, which of course makes the attempt more difficult). However, the first thing which came to my mind was to use XML. With this approach, the client side builds up an XML document and passes it to the invoked procedure as a replacement for the array of records. This is not difficult as far as the client csharp code is concerned and, of course, it is not a big deal to write a PL/SQL procedure which expects a CLOB parameter that actually holds the XML content. However, being a SQL guy, I must admit I don't feel very comfortable manually parsing the XML and doing a lot of DBMS_XML stuff just to get the data out of it. Instead, I would rather see the XML content as a regular table. Below is how you can do the trick:

SELECT id, value
FROM XMLTABLE('/ROWSET/ROW'
PASSING xmltype('
<ROWSET>
<ROW>
<ID>1</ID>
<VALUE>aaa</VALUE>
</ROW>
<ROW>
<ID>2</ID>
<VALUE>bbb</VALUE>
</ROW>
</ROWSET>')
COLUMNS
id INTEGER PATH 'ID',
value VARCHAR2(30) PATH 'VALUE') src;

If you want to embed the above piece of code within a procedure which takes the XML as a parameter, then it's just a matter of replacing the plain XML content with the corresponding CLOB parameter.

SELECT id, value
FROM XMLTABLE('/ROWSET/ROW'
PASSING xmltype(p_clob_xml)
COLUMNS
id INTEGER PATH 'ID',
value VARCHAR2(30) PATH 'VALUE') src;
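
For instance, a minimal sketch of such a procedure; the procedure name and the per-record action are made up for illustration:

create or replace procedure process_records(p_clob_xml in clob)
is
begin
  for r in (select id, value
              from xmltable('/ROWSET/ROW'
                     passing xmltype(p_clob_xml)
                     columns
                       id    integer      path 'ID',
                       value varchar2(30) path 'VALUE'))
  loop
    -- replace with the real per-record processing
    dbms_output.put_line(r.id || ' -> ' || r.value);
  end loop;
end process_records;
/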

I have tested this on an Oracle 10.2.0.2 server. It might work on previous releases but I cannot offer any guarantees. Happy XMLing! :P

Tuesday, January 30, 2007

Get the “tnsnames.ora” from OPENLDAP

One thing which really bothers me is having to go to every client machine to edit the local “tnsnames.ora” file. If you have to deal with just a few clients it is not a big issue, but as soon as the number of clients exceeds five or ten this approach becomes ugly. The most convenient way is to keep the connection descriptors in a central place and have every client query that repository in order to get the requested descriptor. We have at least two possibilities here: to use “oranames”, which is obsolete, or to use an LDAP server, which is the recommended way. Oracle provides its own LDAP server, known as OID (Oracle Internet Directory), but it is part of the Advanced Security option and you have to pay for it. When the whole goal is to use an LDAP server just as a replacement for the local “tnsnames.ora”, it's not affordable to buy such a wide option for such a tiny functionality. We'll use instead a free LDAP server which runs smoothly on almost every UNIX platform, called OPENLDAP (there are OPENLDAP distributions for Windows as well).

So, the first thing is to install the required packages. You'll need openldap, openldap-servers and openldap-clients. On my Fedora server I can query the RPM database for the installed packages as shown below:

root@ssafe:~# rpm -qa | grep openldap
openldap-2.3.19-4
openldap-devel-2.3.19-4
openldap-clients-2.3.19-4

As you can see, the openldap-servers package is missing, therefore we have to install it. We'll use YUM to do this:

root@ssafe:~# yum install openldap-servers
Loading "installonlyn" plugin
Setting up Install Process
Setting up repositories
core [1/3]
extras [2/3]
updates [3/3]
Reading repository metadata in from local files
Parsing package install arguments
Resolving Dependencies
--> Populating transaction set with selected packages. Please wait.
---> Package openldap-servers.i386 0:2.3.19-4 set to be updated
--> Running transaction check

Dependencies Resolved

=============================================================================
Package Arch Version Repository Size
=============================================================================
Installing:
openldap-servers i386 2.3.19-4 core 2.2 M

Transaction Summary
=============================================================================
Install 1 Package(s)
Update 0 Package(s)
Remove 0 Package(s)
Total download size: 2.2 M
Is this ok [y/N]: y
Downloading Packages:
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing: openldap-servers ######################### [1/1]

Installed: openldap-servers.i386 0:2.3.19-4
Complete!

Now we have a nice LDAP server installed. The next step is to configure it. Oracle needs some special LDAP schemas which must be loaded into our server. The schema files are oidbase.schema, oidrdbms.schema and oidnet.schema. Save them into a newly created directory called, let's say, “oracle-ldap”. Below is the content of the needed files:
1. oidbase.schema:


attributetype ( 2.16.840.1.113894.7.1.1 NAME 'orclVersion' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.7.1.2 NAME 'orclOracleHome' SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.7.1.3 NAME 'orclSystemName' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.7.1.4 NAME 'orclServiceType' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.7.1.5 NAME 'orclSid' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.7.1.6 NAME 'orclProductVersion' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' )


objectClass ( 2.16.840.1.113894.7.2.2 NAME 'orclContainer' SUP 'top' STRUCTURAL MUST ( cn ) )


objectClass ( 2.16.840.1.113894.7.2.3 NAME 'orclContext' SUP 'top' STRUCTURAL MUST ( cn ) )


objectClass ( 2.16.840.1.113894.7.2.6 NAME 'orclSchemaVersion' SUP 'top' STRUCTURAL MUST ( cn $ orclProductVersion ) )


attributetype ( 2.16.840.1.113894.3.1.12 NAME 'orclNetDescName' EQUALITY distinguishedNameMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.12' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.13 NAME 'orclNetDescString' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


objectClass ( 2.16.840.1.113894.7.2.1001 NAME 'orclService' SUP 'top' STRUCTURAL MUST ( cn ) MAY ( orclServiceType $ orclOracleHome $ orclSystemName $ orclSid $ orclNetDescName $ orclNetDescString $ orclVersion $ Description ) )


2. oidrdbms.schema file:


attributetype ( 2.16.840.1.113894.2.1.1 NAME 'orclDBtrustedUser' EQUALITY distinguishedNameMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.12' )


attributetype ( 2.16.840.1.113894.2.1.2 NAME 'orclDBServerMember' EQUALITY distinguishedNameMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.12' )


attributetype ( 2.16.840.1.113894.2.1.3 NAME 'orclDBEntUser' EQUALITY distinguishedNameMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.12' )


attributetype ( 2.16.840.1.113894.2.1.4 NAME 'orclDBEntRoleAssigned' EQUALITY distinguishedNameMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.12' )


attributetype ( 2.16.840.1.113894.2.1.5 NAME 'orclDBServerRole' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' )


attributetype ( 2.16.840.1.113894.2.1.6 NAME 'orclDBTrustedDomain' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.2.1.7 NAME 'orclDBRoleOccupant' EQUALITY distinguishedNameMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.12' )


attributetype ( 2.16.840.1.113894.2.1.8 NAME 'orclDBDistinguishedName' EQUALITY distinguishedNameMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.12' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.2.1.9 NAME 'orclDBNativeUser' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.2.1.10 NAME 'orclDBGlobalName' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


objectClass ( 2.16.840.1.113894.2.2.1 NAME 'orclDBServer' SUP 'orclService' STRUCTURAL MAY ( userCertificate $ orclDBtrustedUser $ orclDBGlobalName ) )


objectClass ( 2.16.840.1.113894.2.2.2 NAME 'orclDBEnterpriseDomain' SUP top STRUCTURAL MUST cn MAY ( orclDBServerMember $ orclDBEntUser $ orclDBTrustedDomain ) )


objectClass ( 2.16.840.1.113894.2.2.3 NAME 'orclDBEnterpriseRole' SUP top STRUCTURAL MUST cn MAY ( orclDBServerRole $ orclDBEntRoleAssigned $ description $ seeAlso $ o $ ou $ orclDBRoleOccupant ) )


objectClass ( 2.16.840.1.113894.2.2.4 NAME 'orclDBEntryLevelMapping' SUP top STRUCTURAL MUST cn MAY ( orclDBDistinguishedName $ orclDBNativeUser ) )


objectClass ( 2.16.840.1.113894.2.2.5 NAME 'orclDBSubtreeLevelMapping' SUP top STRUCTURAL MUST cn MAY ( orclDBDistinguishedName $ orclDBNativeUser ) )


3. oidnet.schema file:


attributetype ( 2.16.840.1.113894.3.1.1 NAME 'orclNetSourceRoute' SYNTAX '1.3.6.1.4.1.1466.115.121.1.5' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.2 NAME 'orclNetLoadBalance' SYNTAX '1.3.6.1.4.1.1466.115.121.1.5' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.3 NAME 'orclNetFailover' SYNTAX '1.3.6.1.4.1.1466.115.121.1.5' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.4 NAME 'orclNetSdu' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.5 NAME 'orclNetServer' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.6 NAME 'orclNetServiceName' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.7 NAME 'orclNetInstanceName' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.8 NAME 'orclNetHandlerName' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.9 NAME 'orclNetParamList' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' )


attributetype ( 2.16.840.1.113894.3.1.10 NAME 'orclNetAuthenticationType' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.11 NAME 'orclNetAuthParams' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.14 NAME 'orclNetAddressString' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.15 NAME 'orclNetProtocol' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.16 NAME 'orclNetShared' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.17 NAME 'orclNetAddrList' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' )


attributetype ( 2.16.840.1.113894.3.1.18 NAME 'orclNetProtocolStack' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


attributetype ( 2.16.840.1.113894.3.1.19 NAME 'orclNetDescList' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' )


attributetype ( 2.16.840.1.113894.3.1.20 NAME 'orclNetConnParamList' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' )


attributetype ( 2.16.840.1.113894.3.1.21 NAME 'orclNetAuthenticationService' EQUALITY caseIgnoreMatch SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )


objectClass ( 2.16.840.1.113894.3.2.5 NAME 'orclNetService' SUP 'top' STRUCTURAL MUST ( cn ) MAY ( orclNetDescName $ orclNetDescString $ orclVersion $ Description ) )


objectClass ( 2.16.840.1.113894.3.2.4 NAME 'orclNetDescriptionList' SUP 'top' STRUCTURAL MUST ( cn ) MAY ( orclNetDescList $ orclNetSourceRoute $ orclNetLoadBalance $ orclNetFailover $ orclNetShared $ orclVersion $ Description ) )


objectClass ( 2.16.840.1.113894.3.2.3 NAME 'orclNetDescription' SUP 'top' STRUCTURAL MUST ( cn ) MAY ( orclNetAddrList $ orclNetProtocolStack $ orclNetSdu $ orclSid $ orclNetServer $ orclNetServiceName $ orclNetInstanceName $ orclNetHandlerName $ orclOracleHome $ orclNetAuthenticationType $ orclNetAuthenticationService $ orclNetAuthParams $ orclNetParamList $ orclNetConnParamList $ orclNetSourceRoute $ orclNetLoadBalance $ orclNetFailover $ orclNetShared $ orclVersion $ Description ) )


objectClass ( 2.16.840.1.113894.3.2.2 NAME 'orclNetAddressList' SUP 'top' STRUCTURAL MUST ( cn ) MAY ( orclNetAddrList $ orclNetSourceRoute $ orclNetLoadBalance $ orclNetFailover $ orclNetShared $ orclVersion $ Description ) )


objectClass ( 2.16.840.1.113894.3.2.1 NAME 'orclNetAddress' SUP 'top' STRUCTURAL MUST ( cn ) MAY ( orclNetAddressString $ orclNetProtocol $ orclNetShared $ orclVersion $ Description ) )


Then, edit the slapd.conf file. On my installation this file can be found under the /etc/openldap directory. Before changing anything it is wise to make a copy of the original slapd.conf file:
cp /etc/openldap/slapd.conf /etc/openldap/slapd.conf.bak

Now, open the /etc/openldap/slapd.conf file and change it as follows (the lines to add or adapt are the three Oracle schema includes, the “access to * ... by anonymous auth” rule, and the suffix/rootdn/rootpw settings):

#
# See slapd.conf(5) for details on configuration options.
# This file should NOT be world readable.
#
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/nis.schema


include /oracle-ldap/oidbase.schema
include /oracle-ldap/oidrdbms.schema
include /oracle-ldap/oidnet.schema


# Allow LDAPv2 client connections. This is NOT the default.
allow bind_v2

# Do not enable referrals until AFTER you have a working directory
# service AND an understanding of referrals.
#referral ldap://root.openldap.org

pidfile /var/run/openldap/slapd.pid
argsfile /var/run/openldap/slapd.args

# Load dynamic backend modules:
# modulepath /usr/lib/openldap
# moduleload back_bdb.la
# moduleload back_ldap.la
# moduleload back_ldbm.la
# moduleload back_passwd.la
# moduleload back_shell.la

# The next three lines allow use of TLS for encrypting connections using a
# dummy test certificate which you can generate by changing to
# /etc/pki/tls/certs, running "make slapd.pem", and fixing permissions on
# slapd.pem so that the ldap user or group can read it. Your client software
# may balk at self-signed certificates, however.
# TLSCACertificateFile /etc/pki/tls/certs/ca-bundle.crt
# TLSCertificateFile /etc/pki/tls/certs/slapd.pem
# TLSCertificateKeyFile /etc/pki/tls/certs/slapd.pem

# Sample security restrictions
# Require integrity protection (prevent hijacking)
# Require 112-bit (3DES or better) encryption for updates
# Require 63-bit encryption for simple bind
# security ssf=1 update_ssf=112 simple_bind=64

# Sample access control policy:
# Root DSE: allow anyone to read it
# Subschema (sub)entry DSE: allow anyone to read it
# Other DSEs:
# Allow self write access
# Allow authenticated users read access
# Allow anonymous users to authenticate
# Directives needed to implement policy:
# access to dn.base="" by * read
# access to dn.base="cn=Subschema" by * read

access to *

# by self write
# by users read

by anonymous auth

#
# if no access controls are present, the default policy
# allows anyone and everyone to read anything but restricts
# updates to rootdn. (e.g., "access to * by * read")
#
# rootdn can always read and write EVERYTHING!

#######################################################################
# ldbm and/or bdb database definitions
#######################################################################

database bdb


suffix "dc=itfits,dc=biz"
rootdn "cn=Manager,dc=itfits,dc=biz"


# Cleartext passwords, especially for the rootdn, should
# be avoided. See slappasswd(8) and slapd.conf(5) for details.
# Use of strong authentication encouraged.


rootpw secret


# rootpw {crypt}ijFYNcSNctBYg

# The database directory MUST exist prior to running slapd AND
# should only be accessible by the slapd and slap tools.
# Mode 700 recommended.
directory /var/lib/ldap

# Indices to maintain for this database
index objectClass eq,pres
index ou,cn,mail,surname,givenname eq,pres,sub
index uidNumber,gidNumber,loginShell eq,pres
index uid,memberUid eq,pres,sub
index nisMapName,nisMapEntry eq,pres,sub

# Replicas of this database
#replogfile /var/lib/ldap/openldap-master-replog
#replica host=ldap-1.example.com:389 starttls=critical
# bindmethod=sasl saslmech=GSSAPI
# authcId=host/ldap-master.example.com@EXAMPLE.COM

The “suffix” and “rootdn” should be changed according to your specific domain. It's time to start the “slapd” daemon, by simply typing slapd. Now our LDAP server is ready for incoming requests, but we still have to create the initial structure of the LDAP tree. In order to do this we'll need some so-called “ldif” files, which are used to specify LDAP entries along with their attributes.

Let’s create the following ldif files:
1. base.ldif, with the following content:
dn: dc=itfits,dc=biz
objectclass: dcObject
objectclass: organization
o: ITFITS Network
dc: itfits
2. manager.ldif with the following content:
dn: cn=Manager,dc=itfits,dc=biz
objectClass: organizationalRole
cn: Manager
3. OracleContext.ldif with the following content:
dn: cn=OracleContext,dc=itfits,dc=biz
objectclass: orclContext
cn: OracleContext
4. pdmdb.ldif with the following content:
dn: cn=pdmdb,cn=OracleContext,dc=itfits,dc=biz
objectclass: top
objectclass: orclNetService
cn: pdmdb
orclNetDescString: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ops)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=PDMDB)))

After we have created the above files we must load them into the directory:

root@ssafe:oracle-ldap# ldapadd -x -D "cn=Manager,dc=itfits,dc=biz" -w secret -f base.ldif
adding new entry "dc=itfits,dc=biz"

root@ssafe:oracle-ldap# ldapadd -x -D "cn=Manager,dc=itfits,dc=biz" -w secret -f manager.ldif
adding new entry "cn=Manager,dc=itfits,dc=biz"

root@ssafe:oracle-ldap# ldapadd -x -D "cn=Manager,dc=itfits,dc=biz" -w secret -f OracleContext.ldif
adding new entry "cn=OracleContext,dc=itfits,dc=biz"

root@ssafe:oracle-ldap# ldapadd -x -D "cn=Manager,dc=itfits,dc=biz" -w secret -f pdmdb.ldif
adding new entry "cn=pdmdb,cn=OracleContext,dc=itfits,dc=biz"

Oookey, the LDAP server is ready to serve our Oracle clients. However, the clients might not be prepared to talk to an LDAP server, so we should spend some time configuring them. It is always recommended to use the provided graphical tools, therefore we'll use “netca”. Choose “Naming Methods configuration” and add “Directory Naming” from the available methods to the selected ones. “Directory Naming” should be the only selected method, or at least the first one.
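
Behind the scenes netca records this choice in “sqlnet.ora”; the resulting line should look something like this (LDAP listed first):

NAMES.DIRECTORY_PATH= (LDAP, TNSNAMES, EZCONNECT)
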
Perfect! Now, we must create an “$ORACLE_HOME/network/admin/ldap.ora” file, with the following content:

DIRECTORY_SERVERS= (ssafe:389:636)
DEFAULT_ADMIN_CONTEXT = "dc=itfits,dc=biz"
DIRECTORY_SERVER_TYPE = OID

“ssafe” is the DNS name of the machine where the LDAP server is installed; 389 and 636 are the standard LDAP and LDAPS ports.


Now, let’s test the configuration:
C:\Documents and Settings\alec>tnsping pdmdb

TNS Ping Utility for 32-bit Windows: Version 10.2.0.1.0 - Production on 30-JAN-2007 12:07:27

Copyright (c) 1997, 2005, Oracle. All rights reserved.

Used parameter files:
C:\oracle\product\10.2.0\client_1\network\admin\sqlnet.ora

Used LDAP adapter to resolve the alias
Attempting to contact (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ops)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=PDMDB)))
OK (0 msec)



Lovely! From now on our Oracle clients will get their network descriptors from the LDAP repository. No local “tnsnames.ora” is required anymore.

Monday, January 22, 2007

Recreating the Database Control Repository

I don’t know why, but after I upgraded my database to 10.2.0.2 the DB Console started to behave quite unusually. For example, I was not able to submit backup jobs because it complained that the host credentials were not set up. Trying to set those credentials manually within “Preferences/Host Preferred Credentials” was unsuccessful because the displayed page was simply empty. So, I decided to recreate the repository, and this fixed my problem. Below are the steps I followed:


  1. drop the repository:

     emca -deconfig dbcontrol db -repos drop

  2. recreate the repository using dbca.
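
For completeness, emca itself can recreate the repository as well; for a local single-instance database the usual counterpart of the drop command is:

emca -config dbcontrol db -repos create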

Friday, January 12, 2007

Writing PLSQL Developer Plugins in CSHARP

A few months ago I decided to write a plugin for the well known PLSQL Developer environment, using csharp. From the very beginning I must admit that I don't have much experience with this programming language as, “deep inside”, I'm an Oracle guy, but sometimes it is nice to play around with it, especially if you have prior knowledge of Java. At that time my plan was to develop a Microsoft Source Safe plugin using InteropSSafeTypeLib. The main problem was the fact that my plugin was not seen by the PLSQL Developer IDE, despite the fact that from the outside it looked like a regular DLL (it turned out that I was wrong). So, below is the class from which I started my research:

using System;
using System.Collections.Generic;
using System.Text;

namespace test_plugin
{
public class VSSPlugin
{
public static string IdentifyPlugIn(int ID)
{
string result = "Hello from c#";
return result;
}
}

}

This compiles nicely into a self-packaged DLL but, as I previously said, it is not suitable to be used as a PL/SQL Developer plugin. The problem actually resides in the way the DLL is built up: its structure is designed to hold managed code, which cannot be directly called by an external application written in C or Delphi. The workaround is to decompile the DLL into its intermediate code, called MSIL, make some changes, and recompile again, providing the modified MSIL code for our DLL.

So, to decompile you have to use the following command:

ildasm.exe /OUT:test_plugin.il test_plugin.dll

This will create two files in the current directory: “test_plugin.il” and “test_plugin.res”. The file of interest for us is “test_plugin.il”, which contains the MSIL code. You'll obtain something like this:

// Microsoft (R) .NET Framework IL Disassembler. Version 2.0.50727.42
// Copyright (c) Microsoft Corporation. All rights reserved.



// Metadata version: v2.0.50727
.assembly extern mscorlib
{
.publickeytoken = (B7 7A 5C 56 19 34 E0 89 )
.ver 2:0:0:0
}
.assembly test_plugin
{
...
...bla bla bla...
...
}
.module test_plugin.dll
// MVID: {CA976282-5C4E-46F5-A770-39DF57596ECE}
.imagebase 0x00400000
.file alignment 0x00001000
.stackreserve 0x00100000
.subsystem 0x0003 // WINDOWS_CUI
.corflags 0x00000001 // ILONLY
// Image base: 0x00EB0000


// =============== CLASS MEMBERS DECLARATION ===================

.class public auto ansi beforefieldinit test_plugin.VSSPlugin
extends [mscorlib]System.Object
{
.method public hidebysig static string
IdentifyPlugIn(int32 ID) cil managed
{
// Code size 8 (0x8)
.maxstack 1
.locals init ([0] string result)
IL_0000: ldstr "Hello from c#"
IL_0005: stloc.0
IL_0006: ldloc.0
IL_0007: ret
} // end of method VSSPlugin::IdentifyPlugIn

.method public hidebysig specialname rtspecialname
instance void .ctor() cil managed
{
// Code size 7 (0x7)
.maxstack 8
IL_0000: ldarg.0
IL_0001: call instance void [mscorlib]System.Object::.ctor()
IL_0006: ret
} // end of method VSSPlugin::.ctor

} // end of class test_plugin.VSSPlugin


// =============================================================

// *********** DISASSEMBLY COMPLETE ***********************
// WARNING: Created Win32 resource file test_plugin.res

Now it's time to make our changes. Below is the IL file, modified to support the export of our “IdentifyPlugIn” function. The changes are the new .corflags value, the added .vtfixup and .data directives, and the .vtentry/.export directives inside the IdentifyPlugIn method.


// Microsoft (R) .NET Framework IL Disassembler. Version 2.0.50727.42
// Copyright (c) Microsoft Corporation. All rights reserved.



// Metadata version: v2.0.50727
.assembly extern mscorlib
{
.publickeytoken = (B7 7A 5C 56 19 34 E0 89 )
.ver 2:0:0:0
}
.assembly test_plugin
{
...
...bla bla bla...
...
}
.module test_plugin.dll
// MVID: {CA976282-5C4E-46F5-A770-39DF57596ECE}
.imagebase 0x00400000
.file alignment 0x00001000
.stackreserve 0x00100000
.subsystem 0x0003 // WINDOWS_CUI



.corflags 0x00000002
.vtfixup [1] int32 fromunmanaged at VT_01
.data VT_01 = int32(0)



// Image base: 0x00EB0000


// =============== CLASS MEMBERS DECLARATION ===================

.class public auto ansi beforefieldinit test_plugin.VSSPlugin
extends [mscorlib]System.Object
{
.method public hidebysig static string
IdentifyPlugIn(int32 ID) cil managed
{


.vtentry 1 : 1
.export [1] as IdentifyPlugIn



// Code size 8 (0x8)
.maxstack 1
.locals init ([0] string result)
IL_0000: ldstr "Hello from c#"
IL_0005: stloc.0
IL_0006: ldloc.0
IL_0007: ret
} // end of method VSSPlugin::IdentifyPlugIn

.method public hidebysig specialname rtspecialname
instance void .ctor() cil managed
{
// Code size 7 (0x7)
.maxstack 8
IL_0000: ldarg.0
IL_0001: call instance void [mscorlib]System.Object::.ctor()
IL_0006: ret
} // end of method VSSPlugin::.ctor

} // end of class test_plugin.VSSPlugin


// =============================================================

// *********** DISASSEMBLY COMPLETE ***********************
// WARNING: Created Win32 resource file test_plugin.res

It’s time to recompile the IL code into the target DLL file. You can use the following command:

ilasm /OUT:test_plugin.dll test_plugin.il /DLL

Copy the generated test_plugin.dll file into the PLSQL Developer “PlugIns” directory and start the IDE. Take a look at Tools/Configure Plugins. Bingo!!! You'll see there “Hello from c#”, which is the name of our first plugin written in csharp.

This approach should be taken as a proof of concept that, basically, you can write a PLSQL Developer plugin using csharp, but I cannot tell you for sure what problems to expect or whether you will be able to use forms or other advanced csharp programming techniques.

Thursday, January 11, 2007

The Infamous Update on a Primary Key

Today I was put in front of a new challenging situation. We have a reporting application which is fed from another system using materialized views. In short, on the remote source system, materialized view logs were defined for several “interesting” tables with the “WITH PRIMARY KEY” option. These logs are used to support a fast refresh mechanism for the corresponding materialized views created on the reporting system. Each materialized view was created on a pre-built table. Of course, there's a refresh job which runs at regular intervals in order to keep both systems in sync. As soon as a new record arrives or an existing one is updated, this is marked accordingly in a flag column within the pre-built table, through a trigger. Likewise, the operation type (INSERT or UPDATE) is written into the pre-built table as well. Now, the big problem is the fact that some modules of the source system update (yes, very ugly!!!) values of columns which are part of the primary key. From the materialized view log perspective such an update is converted into a delete plus an insert, therefore as soon as the DBMS_MVIEW.REFRESH procedure is called on the reporting system the BEFORE/AFTER DELETE and BEFORE/AFTER INSERT triggers are executed instead of a BEFORE/AFTER UPDATE trigger. Now, one of the MV interface requirements was to convert this sequence of one DELETE and one INSERT so that, on the reporting system, the initial UPDATE on the primary key looks like a regular update. Even if at first sight this might appear to be a trivial task, it is in fact quite cumbersome. The solution we found so far relies on using the ROWIDs of the source table records.

Okey, because the so called “too much sound but little sense” is not going to be the subject of this post, let’s delve into a concrete test scenario.

First of all, create the source table:

create table muci (col1 integer, col2 integer, col3 integer);
alter table muci add primary key (col1, col2);

insert into muci values (1, 1, 1);
insert into muci values (2, 2, 2);
insert into muci values (3, 3, 3);

commit;
create materialized view log on muci with primary key;

Now, let’s create the pre-built table and the materialized view.

create table muci_mv as
select t.rowid rid, t.*, '1' processed, 'INSERT' operation from muci t;

create materialized view muci_mv
on prebuilt table
refresh fast on demand as
select t.rowid rid, t.* from muci t;

It is important to notice that we've created the pre-built table with additional columns: the “processed” and the “operation” fields. These fields are going to be filled in by a trigger on the MUCI_MV table. Also, pay attention to the “rid” field, which is supposed to store the ROWID addresses of the records refreshed from the MUCI table (not the ROWIDs of MUCI_MV records!). I'll show you in a minute what the reason for this column is.

Let's go on by creating a very simple package:

create or replace package MUCI_TOOLKIT
is

g_old_rid rowid;

end MUCI_TOOLKIT;

This package has a global variable which is intended to store the old value of the “rid” column of a record which is about to be deleted.

Our test scenario is almost finished! The final step is to create a trigger on the MUCI_MV table which actually takes care of updating the additional flags:

create or replace trigger trg_biudr_muci_mv
before insert or update or delete on muci_mv
for each row
declare
begin
if inserting then
dbms_output.put_line('inserting...');
:new.processed := '0';
if muci_toolkit.g_old_rid is not null and :new.rid = muci_toolkit.g_old_rid then
:new.operation := 'UPDATE';
muci_toolkit.g_old_rid := null; -- cleanup
else
:new.operation := 'INSERT';
end if;
end if;
if updating then
dbms_output.put_line('updating...');
:new.processed := '0';
:new.operation := 'UPDATE';
end if;
if deleting then
dbms_output.put_line('deleting...');
muci_toolkit.g_old_rid := :old.rid;
end if;
end trg_biudr_muci_mv;

Now, it’s time to effectively test!

external_interface@pdmdb> set serveroutput on
external_interface@pdmdb> select * from muci;

COL1 COL2 COL3
----- ----- -----
1 1 1
2 2 2
3 3 3

external_interface@pdmdb> select * from muci_mv;

RID COL1 COL2 COL3 PROCESSED OPERATION
------------------ ----- ----- ----- --------- ---------
AACdH+AAFAADheGAAA 1 1 1 1 INSERT
AACdH+AAFAADheGAAB 2 2 2 1 INSERT
AACdH+AAFAADheGAAC 3 3 3 1 INSERT

external_interface@pdmdb> update muci set col1='123' where col1=1 and col2=1;

1 row updated

external_interface@pdmdb> commit;

Commit complete

external_interface@pdmdb> exec dbms_mview.refresh('MUCI_MV', 'f');

deleting...
inserting...

PL/SQL procedure successfully completed

external_interface@pdmdb> select * from muci_mv;

RID COL1 COL2 COL3 PROCESSED OPERATION
------------------ ----- ----- ----- --------- ---------
AACdH+AAFAADheGAAB 2 2 2 1 INSERT
AACdH+AAFAADheGAAC 3 3 3 1 INSERT
AACdH+AAFAADheGAAA 123 1 1 0 UPDATE

As you can see, it works as expected… smooth! The whole trick consists in checking whether the DELETE and INSERT sequence refers to the same source ROWID. If this is the case then we know that an update on the primary key was involved.

However, there are some open points regarding this approach:
1. additional storage is required for the added RID column;
2. the DBMS_MVIEW.REFRESH mechanism must follow the sequence order of the DML commands performed on the source system which, as far as the Oracle documentation states, is not guaranteed in all Oracle versions;
3. some MetaLink notes (see Note:67424.1) state that triggers are not supported on read-only materialized views, yet a HOWTO on the same MetaLink (Note:176213.1) advises you to create a trigger on a prebuilt table in order to fill in an additional column (as shown above) with the current SYSDATE value;
4. because we rely on ROWID values, we'll end up in big trouble if, on the source site, some source tables are moved to another tablespace or are re-imported.

Tuesday, January 09, 2007

To DESC or @DESC...

There is one SQLPLUS command which seems to be out of my control as far as its display layout is concerned. As you have already guessed from the post title, it's the well known DESC command. It seems to have a “self-tuning” layout which takes the current LINESIZE setting into consideration.

For example, using a high value for the LINESIZE setting we obtain:

scott@DDB> set linesize 300
scott@DDB> desc emp
Name
Null? Type
------------------------------------------------------------------------------
-------------------------------------------------------------------------------
---------------- -------- -----------------------------------------------------
---------------------------------------------------------------
EMPNO
NOT NULL NUMBER(4)
ENAME
VARCHAR2(10)
JOB
VARCHAR2(9)
MGR
NUMBER(4)
HIREDATE
DATE
SAL
NUMBER(7,2)
COMM
NUMBER(7,2)
DEPTNO
NUMBER(2)

But using a lower value for the LINESIZE setting the output looks fine:

scott@DDB> set linesize 80
scott@DDB> desc emp
Name Null? Type
----------------------------------------- -------- ----------------------------
EMPNO NOT NULL NUMBER(4)
ENAME VARCHAR2(10)
JOB VARCHAR2(9)
MGR NUMBER(4)
HIREDATE DATE
SAL NUMBER(7,2)
COMM NUMBER(7,2)
DEPTNO NUMBER(2)

From my point of view this behavior is quite ugly, especially when I am working in a console with horizontal scrolling capabilities. Using a high value for LINESIZE allows me to execute various 'SELECT * FROM' queries without worrying about output wrapping, but when the DESC command comes into play I always end up scrolling to the right till I reach the display of the column types, which is really, really ugly! So, in the end I've decided to create a script which provides the expected nice output. The main idea is to decrease the LINESIZE setting, execute DESC and then restore the old LINESIZE setting. Not a big deal! Below is my script:

store set crr_settings.cfg replace
set linesize 80
desc &1
@crr_settings.cfg

Now, all I have to do is to take care to prefix the DESC command with a @:

scott@DDB> set linesize 300
scott@DDB> @desc emp
Wrote file crr_settings.cfg
Name Null? Type
----------------------------------------- -------- ----------------------------
EMPNO NOT NULL NUMBER(4)
ENAME VARCHAR2(10)
JOB VARCHAR2(9)
MGR NUMBER(4)
HIREDATE DATE
SAL NUMBER(7,2)
COMM NUMBER(7,2)
DEPTNO NUMBER(2)

Have fun! :)

Sunday, January 07, 2007

Do Direct Load Inserts Generate Redo?

As with so many Oracle related questions, the answer is: IT DEPENDS...

If you are in the middle of such a tricky situation where your bulk direct load process performs well in the development environment but behaves poorly in production because it generates a lot of redo, then you should start by checking the following (the queries after this list cover all three checks):
  1. is your development database running in NOARCHIVELOG mode? In NOARCHIVELOG mode direct load operations produce minimal redo information, which is not necessarily the case in a database running in ARCHIVELOG mode. It's not unlikely to have the development environment in NOARCHIVELOG and production in ARCHIVELOG mode, therefore this is the first thing to check.
  2. is the destination table on the production site configured with the NOLOGGING attribute? Without this setting the direct load inserts into the table will continue to generate redo information as usual. So, check the LOGGING column from the (USER/ALL/DBA)_TABLES view for the corresponding destination table.
  3. does the production database have the FORCE LOGGING option enabled? This may be required if there are standby databases in place, in which case all direct load operations will produce redo information no matter what. To check this, query the FORCE_LOGGING column from the V$DATABASE view.
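
A quick way to run these checks from SQL*Plus (the owner and table name in the second query are placeholders):

-- archivelog mode and force logging status of the database
select log_mode, force_logging from v$database;

-- logging attribute of the destination table
select logging from dba_tables
 where owner = 'SCOTT' and table_name = 'TARGET_TAB';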

Thursday, January 04, 2007

Impersonating Of Any Oracle User?

Yes, it is possible! It's like the sudo command in UNIX or “Run As” in Windows (but without prompting for a password). Starting with the 8i version, Oracle has a special undocumented package called DBMS_SYS_SQL which provides the functionality needed for impersonating users. It is used internally for database replication and within the XDB platform. Of course, the execute right on this package should be granted with extreme care, as it can easily be exploited by hackers. Furthermore, as an additional precaution, it is advisable to check from time to time to whom the execute privilege on this package has been granted.
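
A simple periodic check might look like this (run with DBA privileges):

select grantee, privilege, grantable
  from dba_tab_privs
 where table_name = 'DBMS_SYS_SQL';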

So, let's try it!

SQL> connect sys as sysdba
Enter password:
Connected.

SQL> grant execute on dbms_sys_sql to alek;

Grant succeeded.

SQL> connect alek
Enter password:
Connected.
SQL> declare
2 l_uid number;
3 l_sqltext varchar2(100) := 'grant dba to super_user identified by xxx';
4 l_myint integer;
5
6 begin
7 select user_id into l_uid from all_users where username like 'SYS';
8
9 l_myint:=sys.dbms_sys_sql.open_cursor();
10 sys.dbms_sys_sql.parse_as_user(l_myint, l_sqltext, dbms_sql.native, l_uid);

11
12 sys.dbms_sys_sql.close_cursor(l_myint);
13 end ;
14 /

PL/SQL procedure successfully completed.

SQL> connect super_user/xxx
Connected.


Brrr!!! Nice but scary!

Saturday, December 23, 2006

Spelling Numbers

Wouldn't it be nice to be able to display numbers in their spelled-out textual form? There are a lot of applications which require such a feature: billing, bookkeeping systems etc. Well, if we remember right, there is a special suffix, SP, which can be used in the format mask when converting a date with the TO_CHAR function. However, being designed to work with date types, it's not very obvious how it can be used for plain numbers. Even though it's rarely used this way, we can convert a number into a date using the Julian format and then spell it using the SP suffix. For example:

SQL> select to_char(to_date(34, 'j'), 'jspth') spell from dual;

SPELL
-------------
thirty-fourth

SQL> select to_char(to_date(1023, 'j'), 'jspth') spell from dual;

SPELL
-------------------------
one thousand twenty-third

Lovely! Nevertheless, there are some limitations here, and some of them are quite ugly, therefore they should be carefully analyzed before deciding to use this approach in a production system. The following are some of the problems you might encounter:
  1. it's not suitable for multi-language applications. The SP suffix always speaks English, regardless of the current NLS settings.
  2. it cannot be used to display large numbers. In fact, the maximum number which can be spelled using this approach is exactly 5373484 (demonstrated below). That's quite small for a bookkeeping system, right?
  3. how about negative numbers or zero? Forget it! It will not work!
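
The limit mentioned at point 2 is simply the largest valid Julian date; one above it and the conversion fails. A sketch of what to expect:

SQL> select to_char(to_date(5373484, 'j'), 'jsp') spell from dual;

SPELL
---------------------------------------------------------------------------
five million three hundred seventy-three thousand four hundred eighty-four

SQL> select to_char(to_date(5373485, 'j'), 'jsp') from dual;
select to_char(to_date(5373485, 'j'), 'jsp') from dual
               *
ERROR at line 1:
ORA-01854: julian date must be between 1 and 5373484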

SQLPLUS and Custom Languages

Even though I'm from Romania, I don't like how the translations of various messages sound in my own language. However, sometimes it's useful to enable such a feature if, for example, the error messages coming from the Oracle server must be displayed in the client application in the native language. At the very beginning you might think it's enough just to issue an “ALTER SESSION SET NLS_LANGUAGE=ROMANIAN;”, but most of the time you'll find that nothing changes. This is because Oracle needs some additional translation files which, of course, must be installed. To install them you'll need the well known Oracle Universal Installer, invoked with the -addLangs parameter. You'll be led to a special wizard page like the one shown below:


After choosing and installing the preferred language we can go on and test whether SQLPLUS now speaks our lingo.


SQL> ALTER SESSION SET NLS_LANGUAGE=romanian;

Sesiune modificata.

SQL> select * from no_table;
select * from no_table
*
EROARE la linia 1:
ORA-00942: tabelul sau vizualizarea nu exist?

SQL> select eroare bre;
select eroare bre
*
EROARE la linia 1:
ORA-00923: cuvantul cheie FROM nu este g?sit in locul a?teptat


So far so good! Nevertheless, as I previously said, the translated messages are quite funny. By the way, I have already found three translation mistakes in the above output, but I will let Romanian readers find them. Aaaa, I almost forgot to say: those strange question marks are there because my database character set is not appropriate for displaying Romanian characters, so please don't count this as a translation error. :)

Friday, December 22, 2006

CANCEL if you want to work!

Ohoo, this is quite nice! Usually you wouldn't expect to have to say CANCEL in order to make something work, right? One interesting case is when you want to recreate the control file with the RESETLOGS option. The scenario is simple: you have an old backup which contains all your datafiles and archivelogs and, in addition, you were smart enough to back up the control file to trace. Now suppose you lose all current datafiles and the current redolog files. You have to restore and recover from your backup. The first thing is to restore your datafiles to the known location, to start the instance in the nomount state and to issue the CREATE CONTROLFILE command from the trace file. After the command is successfully executed your instance will be brought into the mount state using the freshly created control file. The next step is to use the RECOVER DATABASE command. Below is a sample output:

SQL> startup nomount
ORACLE instance started.

Total System Global Area 264241152 bytes
Fixed Size 1218868 bytes
Variable Size 75499212 bytes
Database Buffers 184549376 bytes
Redo Buffers 2973696 bytes
CREATE CONTROLFILE REUSE DATABASE "DDB" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 (
'/opt/oracle/oradata/DDB/onlinelog/o1_mf_1_2pdco23c_.log',
'/opt/oracle/flash_recovery_area/DDB/onlinelog/o1_mf_1_2pdco49x_.log'
) SIZE 50M,
GROUP 2 (
'/opt/oracle/oradata/DDB/onlinelog/o1_mf_2_2pdco6j0_.log',
'/opt/oracle/flash_recovery_area/DDB/onlinelog/o1_mf_2_2pdco8ns_.log'
) SIZE 50M,
GROUP 3 (
'/opt/oracle/oradata/DDB/onlinelog/o1_mf_3_2pdcoby5_.log',
'/opt/oracle/flash_recovery_area/DDB/onlinelog/o1_mf_3_2pdcof0o_.log'
) SIZE 50M
-- STANDBY LOGFILE
DATAFILE
'/opt/oracle/oradata/DDB/datafile/o1_mf_system_2rlg719d_.dbf',
'/opt/oracle/oradata/DDB/datafile/o1_mf_undotbs1_2rlg71l3_.dbf',
'/opt/oracle/oradata/DDB/datafile/o1_mf_sysaux_2rlg714v_.dbf',
'/opt/oracle/oradata/DDB/datafile/o1_mf_users_2rlg71kk_.dbf'
CHARACTER SET WE8ISO8859P1
;

SQL> select open_mode from v$database;

OPEN_MODE
----------
MOUNTED

SQL> recover database using backup controlfile;
ORA-00279: change 793731 generated at 12/22/2006 12:00:32 needed for thread 1
ORA-00289: suggestion :
/opt/oracle/flash_recovery_area/DDB/archivelog/1_1_609804799.dbf
ORA-00280: change 793731 for thread 1 is in sequence #1


Specify log: {=suggested | filename | AUTO | CANCEL}

ORA-00279: change 794266 generated at 12/22/2006 12:09:00 needed for thread 1
ORA-00289: suggestion :
/opt/oracle/flash_recovery_area/DDB/archivelog/1_2_609804799.dbf
ORA-00280: change 794266 for thread 1 is in sequence #2
ORA-00278: log file
'/opt/oracle/flash_recovery_area/DDB/archivelog/1_1_609804799.dbf' no longer
needed for this recovery


Specify log: {=suggested | filename | AUTO | CANCEL}

ORA-00279: change 794268 generated at 12/22/2006 12:09:03 needed for thread 1
ORA-00289: suggestion :
/opt/oracle/flash_recovery_area/DDB/archivelog/1_3_609804799.dbf
ORA-00280: change 794268 for thread 1 is in sequence #3
ORA-00278: log file
'/opt/oracle/flash_recovery_area/DDB/archivelog/1_2_609804799.dbf' no longer
needed for this recovery


Specify log: {=suggested | filename | AUTO | CANCEL}

ORA-00279: change 794271 generated at 12/22/2006 12:09:07 needed for thread 1
ORA-00289: suggestion :
/opt/oracle/flash_recovery_area/DDB/archivelog/1_4_609804799.dbf
ORA-00280: change 794271 for thread 1 is in sequence #4
ORA-00278: log file
'/opt/oracle/flash_recovery_area/DDB/archivelog/1_3_609804799.dbf' no longer
needed for this recovery


Specify log: {=suggested | filename | AUTO | CANCEL}

ORA-00308: cannot open archived log
'/opt/oracle/flash_recovery_area/DDB/archivelog/1_4_609804799.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3


SQL> ALTER DATABASE OPEN RESETLOGS;
ALTER DATABASE OPEN RESETLOGS
*
ERROR at line 1:
ORA-01113: file 1 needs media recovery
ORA-01110: data file 1:
'/opt/oracle/oradata/DDB/datafile/o1_mf_system_2rlg719d_.dbf'

Uppsy! File 1 needs media recovery? Why?
Because, when I was asked for the last archived log, I didn't specify CANCEL. From the Oracle server's point of view the recovery process was therefore unsuccessful. So, the correct approach is:

SQL> recover database until cancel using backup controlfile;
ORA-00279: change 794271 generated at 12/22/2006 12:09:07 needed for thread 1
ORA-00289: suggestion :
/opt/oracle/flash_recovery_area/DDB/archivelog/1_4_609804799.dbf
ORA-00280: change 794271 for thread 1 is in sequence #4


Specify log: {=suggested | filename | AUTO | CANCEL}
CANCEL
Media recovery cancelled.
SQL> ALTER DATABASE OPEN RESETLOGS;

Database altered.

So, an explicit CANCEL does matter in this case!

Thursday, December 21, 2006

Should I Delete All Archives In One Shot?

I was just wondering what the best way is to delete backed-up archivelogs using RMAN. I have to choose between the DELETE INPUT and DELETE ALL INPUT options.
In my environment all archives are written to two different locations. If I use the DELETE ALL INPUT option then, after the archives from one of the two locations are backed up, all the corresponding archive files are deleted from all available locations. If I use the DELETE INPUT option then only the backed-up archives are deleted, and just from the one location that was backed up.
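
For reference, the two variants as typed at the RMAN prompt, the first deleting the backed-up archives from every destination, the second only from the destination that was actually backed up:

RMAN> backup archivelog all delete all input;
RMAN> backup archivelog all delete input;
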
Now, I must admit that using the BACKUP ARCHIVELOG ALL command with DELETE ALL INPUT sounds a little bit scary to me, as after its successful completion I will end up with just one backup set holding all the backed-up archives, but with no archive redundancy. This backup set becomes a single point of failure if a database recovery must take place and those archives are required.
So, the other approach is to use the DELETE INPUT option only. The figure below shows what happens in this case:


At moment T0 there are archives not yet backed up in both locations. The BACKUP command creates “Backup Set1” and deletes the corresponding archives from LOCATION 1. So, now I have those archives in LOCATION 2 and in the backup set as well (redundancy still two). Suppose that after new archives are generated a new backup is taken. The following figure depicts what happens in this case:


As can be noticed, the next BACKUP command creates “Backup Set2”, which contains all the archives from LOCATION 2 not deleted by the previous BACKUP command and, in addition, the newly generated archivelogs from LOCATION 1. These backed-up archives are then deleted, but the redundancy level for them is still two. Much safer, right?

Wednesday, December 20, 2006

Don't Forget to Restore Your Read-only Tablespaces

I know, this is a basic one! But I personally don't restore/recover databases every day, so it's quite easy to forget some basics. Supposing you have lost all datafiles and some of them belonged to read-only tablespaces, it is important to remember that, by default, RMAN will not restore the datafiles of those read-only tablespaces. Fortunately, RMAN displays a warning, something like "datafile X not processed because file is read-only", but the restore operation goes on without complaint. So, in order to restore all datafiles, including the ones from read-only tablespaces, the correct command is: "RESTORE DATABASE CHECK READONLY;".
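
A minimal sketch of the whole sequence, assuming the restore is driven from RMAN with the database mounted:

# with the database mounted:
RESTORE DATABASE CHECK READONLY;
RECOVER DATABASE;
ALTER DATABASE OPEN;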

Does Any Data Block Change Generate Flashback Logs?

In the “Database Backup and Recovery Basics” book from the 10g Release 2 (10.2) documentation, in the sub-chapter “5.3.7 Performance Tuning for Flashback Database” there is a remark which says:

“Queries do not change data and thus do not contribute to logging activity for Flashback Database.”

Hmm... indeed, queries do not change actual business data, but they can change database blocks through the so-called "block cleanout" process and, in such cases, they actually generate redo log information.

My first assumption was that the "Flashback Database" engine detects, at a very low level, any change to data blocks, whether it is the consequence of a DML statement or of an internal data block maintenance process such as the "block cleanout" operation.

So, I've decided to test whether the "block cleanout" process does or does not generate flashback logs. The scenario is simple: my test database has a block size of 8K, so I will create a dummy table such that each of its rows fits into one Oracle data block. Then I will insert enough records into this table that, in the end, my transaction simply gives up cleaning out all the modified blocks. Finally, I will force a full table scan against the sample table in order to trigger the cleanout, watching some of my session statistics closely throughout.

Here we go:

1. First of all, some checks are needed and some rights must be granted:

SQL> connect / as sysdba
Connected.
SQL> show parameter db_block_size

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_block_size integer 8192
SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
YES

SQL> grant select on v_$mystat to alek;

Grant succeeded.

SQL> grant select on v_$statname to alek;

Grant succeeded.

2. Now, let's create our test environment:

10:57:46 SQL> connect alek
Enter password:
Connected.
10:57:55 SQL> create table muci (col1 char(2000), col2 char(2000), col3 char(2000));

Table created.

10:58:12 SQL> select name, value from v$mystat natural join v$statname
where regexp_like(name, '(.*flash.*)|(redo size)');

NAME VALUE
---------------------------------------------------------------- ----------
physical reads for flashback new 6
redo size 9564
flashback log writes 0

10:58:22 SQL> insert into muci select 'a', 'b', 'c' from all_objects;

40710 rows created.

11:01:21 SQL> commit;

Commit complete.

11:01:37 SQL> select name, value from v$mystat natural join v$statname
where regexp_like(name, '(.*flash.*)|(redo size)');

NAME VALUE
---------------------------------------------------------------- ----------
physical reads for flashback new 41526
redo size 263629980
flashback log writes 0

Once again after a minute:

11:01:45 SQL> select name, value from v$mystat natural join v$statname
where regexp_like(name, '(.*flash.*)|(redo size)');

NAME VALUE
---------------------------------------------------------------- ----------
physical reads for flashback new 41526
redo size 263629980
flashback log writes 0

There's no change! Let's go on with the cleanout process:

11:05:29 SQL> select count(1) from muci;

COUNT(1)
----------
40710

11:06:11 SQL> select name, value from v$mystat natural join v$statname
where regexp_like(name, '(.*flash.*)|(redo size)');

NAME VALUE
---------------------------------------------------------------- ----------
physical reads for flashback new 41526
redo size 266561132
flashback log writes 0

Trying once again after a few minutes within the same session reveals no change either, so we can conclude that flashback logs were not generated even asynchronously, after some delay:

11:06:21 SQL> select name, value from v$mystat natural join v$statname
where regexp_like(name, '(.*flash.*)|(redo size)');

NAME VALUE
---------------------------------------------------------------- ----------
physical reads for flashback new 41526
redo size 266561132
flashback log writes 0


That's interesting! As can be seen, the cleanout process did indeed generate redo log information, which means it changed some header information in the cleaned-out data blocks, yet no flashback logs were produced.

So, as far as this test is concerned, the statement quoted above from the official Oracle documentation seems to be correct!

Tuesday, December 19, 2006

Playing Around with “Flashback Database”

“Flashback Database” is a new feature available in 10g databases and it can be used to rewind the content of the database to a previous moment in time. The key elements of this feature are:
  • flashback logs
  • restore points
By default the database runs with the "Flashback Database" feature disabled. I can always find out whether this feature is enabled by querying the V$DATABASE view.

SQL> SELECT FLASHBACK_ON FROM V$DATABASE;

FLASHBACK_ON
------------------
NO

There are some important requirements which must be fulfilled in order to start using this feature:
  • a "Flash Recovery Area" must be set up;
  • the database must run in ARCHIVELOG mode.
To enable flashback database, the following steps should be followed:

1. Cleanly shut down the database:

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

2. Start up the database in mount state:

SQL> startup mount
ORACLE instance started.

Total System Global Area 264241152 bytes
Fixed Size 1218868 bytes
Variable Size 79693516 bytes
Database Buffers 180355072 bytes
Redo Buffers 2973696 bytes
Database mounted.

3. Enable the flashback database feature:

SQL> alter database flashback on;

Database altered.

4. Open the database:

SQL> alter database open;

Database altered.

Now that "Flashback Database" is enabled, I can monitor and estimate the size of the flashback logs using the V$FLASHBACK_DATABASE_LOG and V$FLASH_RECOVERY_AREA_USAGE views. I can perform some massive updates/inserts and query those two views afterwards to see how the actual and estimated flashback log sizes are affected.

SQL> select * from v$flashback_database_log;

OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
-------------------- --------- ---------------- -------------- ------------------------
624609 19-DEC-06 1440 8192000 283582464

SQL> select * from v$flash_recovery_area_usage;

FILE_TYPE PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
------------ ------------------ ------------------------- ---------------
CONTROLFILE .33 0 1
ONLINELOG 7.32 0 3
ARCHIVELOG .61 0 1
BACKUPPIECE 77.72 25.84 8
IMAGECOPY 0 0 0
FLASHBACKLOG .38 0 1

6 rows selected.

The above listing reveals that I currently have ~8M of flashback logs, while the estimated flashback log size is ~270M.
OK, let's see this feature in action. I am going to create a restore point, perform some inserts, and then flash the database back to the restore point.

SQL> create restore point before_update;

Restore point created.

SQL> create table alek.fback_test as select * from all_objects;

Table created.

SQL> select count(1) from alek.fback_test;

COUNT(1)
----------
49337

Now I want to rewind the database back to the restore point. This implies a shutdown, a startup in mount state, the corresponding FLASHBACK command and, finally, opening the database with the RESETLOGS clause.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 264241152 bytes
Fixed Size 1218868 bytes
Variable Size 88082124 bytes
Database Buffers 171966464 bytes
Redo Buffers 2973696 bytes
Database mounted.
SQL> flashback database to restore point before_update;

Flashback complete.

SQL> alter database open resetlogs;

Database altered.

SQL> select count(1) from alek.fback_test;
select count(1) from alek.fback_test
*
ERROR at line 1:
ORA-00942: table or view does not exist

Everything went just perfectly! The database is now as it was when the restore point was defined. However, if my business requirements only ever call for restoring the database to the defined restore point itself, and not to an arbitrary time between that moment and now, then a better approach is to use guaranteed restore points with flashback database disabled. In this case fewer flashback logs are generated and the performance of an application which frequently modifies data improves.
So, if I am going to use this approach, the first thing to do is to disable the flashback database feature. This can be done with the database open.

SQL> alter database flashback off;

Database altered.

However, switching back is not possible when the database is in the OPEN state:

SQL> alter database flashback on;
alter database flashback on
*
ERROR at line 1:
ORA-38759: Database must be mounted by only one instance and not open.

Another important thing to notice is that I cannot create the first guaranteed restore point with the database open. If I try to do so, I get the error message shown below:

SQL> create restore point before_delivery guarantee flashback database;
create restore point before_delivery guarantee flashback database
*
ERROR at line 1:
ORA-38784: Cannot create restore point 'BEFORE_DELIVERY'.
ORA-38787: Creating the first guaranteed restore point requires mount mode
when flashback database is off.

The error message is quite self-explanatory, so the next step is to cleanly shut down the database and start it in mount state.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 264241152 bytes
Fixed Size 1218868 bytes
Variable Size 88082124 bytes
Database Buffers 171966464 bytes
Redo Buffers 2973696 bytes
Database mounted.
SQL> create restore point before_delivery guarantee flashback database;

Restore point created.

SQL> alter database open;

Database altered.

Now, let's simulate the same scenario as we did with the flashback database feature enabled:

SQL> create table alek.fback_test as select * from all_objects;

Table created.

SQL> select count(1) from alek.fback_test;

COUNT(1)
----------
49337

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 264241152 bytes
Fixed Size 1218868 bytes
Variable Size 88082124 bytes
Database Buffers 171966464 bytes
Redo Buffers 2973696 bytes
Database mounted.
SQL> flashback database to restore point before_delivery;

Flashback complete.

SQL> alter database open resetlogs;

Database altered.

SQL> select count(1) from alek.fback_test;
select count(1) from alek.fback_test
*
ERROR at line 1:
ORA-00942: table or view does not exist

Lovely, it works as expected! Now I'm wondering what happened to my guaranteed restore point.

SQL> select name, time, guarantee_flashback_database from v$restore_point;

NAME TIME GUA
------------------------------ ---------------------------------------- ---
BEFORE_DELIVERY 19-DEC-06 05.01.55.000000000 PM YES
BEFORE_UPDATE 19-DEC-06 03.54.19.000000000 PM NO

It is still there. Guaranteed restore points never age out automatically, so the only way to remove it is the DROP RESTORE POINT command.

SQL> drop restore point before_delivery;

Restore point dropped.

SQL> select name, time, guarantee_flashback_database from v$restore_point;

NAME TIME GUA
------------------------------ ---------------------------------------- ---
BEFORE_UPDATE 19-DEC-06 03.54.19.000000000 PM NO

Now, the ugly part is that I've ended up back in the initial situation: no guaranteed restore point is defined, and the first one must always be created with the database started in mount state.

SQL> create restore point before_delivery guarantee flashback database;
create restore point before_delivery guarantee flashback database
*
ERROR at line 1:
ORA-38784: Cannot create restore point 'BEFORE_DELIVERY'.
ORA-38787: Creating the first guaranteed restore point requires mount mode
when flashback database is off.

The workaround is to create a second guaranteed restore point and only then drop the previous one, so that at least one guaranteed restore point remains defined at all times. However, the drawback is that I have to keep maintaining such a guaranteed restore point if my goal is to maximize the availability of the database.
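
A sketch of that rotation, with hypothetical restore point names (the new one can be created with the database open, since a guaranteed restore point already exists):

SQL> -- with the database open, because grp_old still exists:
SQL> create restore point grp_new guarantee flashback database;
SQL> drop restore point grp_old;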

After the evaluation of this feature I can say what I like and what I don't like about it:

I like:
  • the straightforward way of configuring and using it;
  • the possibility to estimate the size of flashback logs through the V$FLASHBACK_DATABASE_LOG view;
  • much easier to use than the classical database incomplete recovery;
I don't like:
  • the potential performance issues on a productive system which performs a lot of changes in the underlying data.
  • the fact that I cannot use it to rewind the content of just one or a few tablespaces. If the database hosts more than one application then all of them will be affected, so the remaining option is still TSPITR (tablespace point-in-time recovery). The "Flashback Database" feature is closer to a classical incomplete database recovery than to TSPITR.
From my point of view this feature fits best in test and/or development environments, where the likelihood of making mistakes is greater than in a production environment, which is supposed to be used in a restrictive and controlled way. The main benefit of using it on production systems is from the application deployment perspective: before running an update script as part of an application delivery, the DBA can define a guaranteed restore point, and if something nasty happens within that script he/she can easily rewind the database to the state it was in when the restore point was defined. However, the fact that the database has to be shut down doesn't sound very appealing to me, as precious cached data in the SGA will be lost.
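
Putting that deployment idea together, a minimal sketch (the restore point name is hypothetical, and it assumes the first-guaranteed-restore-point hurdle discussed above has already been dealt with):

SQL> create restore point before_release guarantee flashback database;
-- run the application delivery scripts; if something nasty happens:
SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to restore point before_release;
SQL> alter database open resetlogs;
-- otherwise, once the release is validated:
SQL> drop restore point before_release;

Of course, dropping the last guaranteed restore point brings back the mount-mode requirement for creating the next one.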