tag:blogger.com,1999:blog-363353102024-02-20T11:08:31.218-08:00ORACLE CookiesThis blog is mainly about ORACLE, a great database server. I am going to put here my thoughts about various ORACLE topics, tips & tricks and other cool stuff as far as these technologies are concerned.Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.comBlogger58125tag:blogger.com,1999:blog-36335310.post-4424389339787762452012-04-24T07:24:00.000-07:002012-04-28T01:25:58.755-07:00A simple way to configure a DNS for your Oracle playground RACBecause we are talking about a test/playground environment, the first idea which comes to mind is virtualization. VirtualBox, Oracle VM, VMWare and, of course, others, all may be used to create a RAC playground, provided that you have a computer powerful enough to support this kind of setup.<br />
<br />
But, despite the advantages of virtualization, you also have to think about how the new virtualized RAC will talk to the external world and how you're going to support the new SCAN feature available in Oracle 11g. Yeap, you guessed right! I'm talking about the integration with your DNS server. I will not cover the GNS feature here, just the SCAN address, which must resolve in your DNS to three different IPs, in a round-robin fashion.<br />
<br />
Hmmm, damn it! My virtualized RAC is on my home desktop, I have a stupid wireless router without any possibility to define DNS entries on it, and installing a new machine just for a DNS server seems an awfully heavyweight solution. In addition, using <a href="http://www.isc.org/software/bind" target="_blank">BIND</a>, the well-known DNS server, is not very appealing to me because I'm not a network administrator and I simply don't get it. But, if you plan to stick with BIND, you may find an example <a href="http://www.oracle-base.com/articles/linux/dns-configuration-for-scan.php" target="_blank">here</a>. Of course, this being a playground, you may rely on the /etc/hosts file, but you'll receive warnings on install and it doesn't feel right, does it?<br />
<br />
Ok, introducing <a href="http://www.maradns.org/" target="_blank">MaraDNS</a>. They claim that: "<i>People like MaraDNS because it's small, lightweight, easy to set up, and remarkably secure. It's also cross platform -- the program runs both in Windows and in UNIX clones</i>". Hey, this is what I'm looking for. Great! So, coming back to our environment.<br />
<br />
In my case, VirtualBox is installed on a Windows 7 box and the RAC nodes are virtualized with OL6. The idea is that I will have MaraDNS installed on my Windows 7 host and it will serve my virtualized RAC nodes. The following picture should shed some light.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLKCE14fhDsrP1wcFDo7FAd5gdbKA0AZcqa4BHFz6toJk_sShkL7v96_5nazwOsp-IzKxnoaq0XYwJuMVQJO8FEX06NDNSIolenPWZGAqv0wv8hCZxu34ds4X30xxEI_d2Qssc/s1600/mara.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLKCE14fhDsrP1wcFDo7FAd5gdbKA0AZcqa4BHFz6toJk_sShkL7v96_5nazwOsp-IzKxnoaq0XYwJuMVQJO8FEX06NDNSIolenPWZGAqv0wv8hCZxu34ds4X30xxEI_d2Qssc/s320/mara.png" width="300" /></a></div>
<br />
Now let's configure MaraDNS to act like we want. The Windows version doesn't have any installer. You simply extract the archive and you have everything you need. Then, we need to edit the "mararc" config file. I ended up with:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">ipv4_bind_addresses = "127.0.0.1"</span><br />
<span style="font-family: "Courier New",Courier,monospace;">timestamp_type = 2</span><br />
<span style="font-family: "Courier New",Courier,monospace;">hide_disclaimer = "YES"</span><br />
<span style="font-family: "Courier New",Courier,monospace;">random_seed_file = "secret.txt"</span><br />
<span style="font-family: "Courier New",Courier,monospace;">csv2 = {}</span><br />
<span style="font-family: "Courier New",Courier,monospace;">csv2["altik.ro."] = "db.lan.txt"</span><br />
<br />
I basically say:<br />
<ol>
<li>listen on localhost for requests (strange, isn't it? Be patient!)</li>
<li>set the timestamp_type, but don't bother</li>
<li>hide the disclaimer when launching the server. I like it quiet</li>
<li>in secret.txt file I have some random text to secure my server, but don't bother too much</li>
<li>every name ending in "altik.ro" (the domain) is resolved from the "db.lan.txt" file. Of course, you have to pick your own domain. It doesn't need to be a real one, but if it is real, it becomes unreachable from your setup.</li>
</ol>
Okay, great! Now let me see the "db.lan.txt" file:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">ping.altik.ro. FQDN4 192.168.1.100</span><br />
<span style="font-family: "Courier New",Courier,monospace;">pong.altik.ro. FQDN4 192.168.1.101</span><br />
<span style="font-family: "Courier New",Courier,monospace;">ping-vip.altik.ro. FQDN4 192.168.1.200</span><br />
<span style="font-family: "Courier New",Courier,monospace;">pong-vip.altik.ro. FQDN4 192.168.1.201</span><br />
<span style="font-family: "Courier New",Courier,monospace;">poc-scan.altik.ro. FQDN4 192.168.1.150</span><br />
<span style="font-family: "Courier New",Courier,monospace;">poc-scan.altik.ro. FQDN4 192.168.1.151</span><br />
<span style="font-family: "Courier New",Courier,monospace;">poc-scan.altik.ro. FQDN4 192.168.1.152</span><br />
<br />
Mmm, that's all? Pretty much! Go ahead, start the server using the provided "bat" file and execute some nslookup commands. For example, "<span style="font-family: "Courier New",Courier,monospace;">nslookup ping.altik.ro 127.0.0.1</span>". Does it work? Test the SCAN as well. Each nslookup should return the IPs in a different order (round-robin).<br />
<br />
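If you want to see the rotation at a glance, you can strip each nslookup run down to just the answer IPs. This is only a sketch: the awk filter assumes the usual "Name:/Address:" layout of nslookup output, and the canned transcript below stands in for a real run against 127.0.0.1.

```shell
# Canned nslookup transcript (in real life: nslookup poc-scan.altik.ro 127.0.0.1)
nslookup_output='Server:  127.0.0.1
Address: 127.0.0.1#53

Name: poc-scan.altik.ro
Address: 192.168.1.151
Name: poc-scan.altik.ro
Address: 192.168.1.152
Name: poc-scan.altik.ro
Address: 192.168.1.150'

# Keep only the answer-section addresses: an Address line counts only after
# a Name line has been seen, which skips the server's own address up top.
printf '%s\n' "$nslookup_output" |
  awk '/^Name:/ {n = $2} /^Address:/ && n {print $2}'
```

Run the real query a few times and the three SCAN IPs should come back in a rotating order.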
Now, we have two problems:<br />
<ol>
<li>our DNS server is listening on localhost and there's no way to make requests from a remote node </li>
<li>I'd also like to have Internet access on my RAC nodes, in order to use YUM and to perform other tasks which need the Internet</li>
</ol>
No problem! In the MaraDNS folder there is a sub-folder called "Deadwood...", which contains a so-called recursive DNS server. It doesn't let you define your own DNS entries, but you can tell it to ask other DNS servers and to cache their answers. Hey, that's exactly what we want: ask my local MaraDNS for my own domain and my ISP's DNS server for everything else. Let's have a look at the config file (dwood3rc.txt):<br />
<br />
<div style="font-family: "Courier New",Courier,monospace;">
root_servers = {}</div>
<div style="font-family: "Courier New",Courier,monospace;">
<br /></div>
<div style="font-family: "Courier New",Courier,monospace;">
# My MaraDNS </div>
<div style="font-family: "Courier New",Courier,monospace;">
root_servers["192.in-addr.arpa."] = "127.0.0.1"<br />
root_servers["altik.ro."] = "127.0.0.1"</div>
<div style="font-family: "Courier New",Courier,monospace;">
<br /></div>
<div style="font-family: "Courier New",Courier,monospace;">
# DNS server from my ISP<br />
root_servers["."] = "213.154.124.1,"<br />
root_servers["."] += "193.231.252.1"</div>
<div style="font-family: "Courier New",Courier,monospace;">
# The IP this program has <br />
bind_address="192.168.1.10"<br />
<br />
# The IPs allowed to connect and use the cache<br />
recursive_acl = "192.168.1.0/24"<br />
<br />
# The file containing a hard-to-guess secret<br />
random_seed_file = "secret.txt" <br />
<br />
# This is the file Deadwood uses to read the cache to and from disk<br />
cache_file = "dw_cache_bin"</div>
<span style="font-family: "Courier New",Courier,monospace;"># By default, for security reasons, Deadwood does not allow IPs in the</span><br />
<span style="font-family: "Courier New",Courier,monospace;"># 192.168.x.x, 172.[16-31].x.x, 10.x.x.x, 127.x.x.x, 169.254.x.x,</span><br />
<span style="font-family: "Courier New",Courier,monospace;"># 224.x.x.x, or 0.0.x.x range. If using Deadwood to resolve names</span><br />
<span style="font-family: "Courier New",Courier,monospace;"># on an internal network, uncomment the following line:</span><br />
<span style="font-family: "Courier New",Courier,monospace;">filter_rfc1918 = 0</span><br />
<br />
The above configuration is quite self-explanatory. Everything which refers to my local domain will be served by MaraDNS and everything else will be fetched from a real DNS server provided by my ISP. The "in-addr.arpa" entry is needed in order to support reverse DNS lookups. Note that this DNS server will listen on a real address, the one of my Windows 7 host.<br />
<br />
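Once Deadwood is up, pointing each virtualized node at it is a one-line change per node. As a sketch, /etc/resolv.conf on each RAC node could look like this (the search domain and the server IP are the ones from this setup; adjust to yours):

```
# /etc/resolv.conf on each virtualized RAC node
search altik.ro
nameserver 192.168.1.10
```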
Okay, great! Install Deadwood (as a service), and then test it with nslookup, but instead of 127.0.0.1 use 192.168.1.10. If everything is working then it's just a matter of configuring your virtualized nodes to use 192.168.1.10 as a DNS server.Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.com1tag:blogger.com,1999:blog-36335310.post-28660510424549597082012-03-30T08:31:00.001-07:002012-04-03T11:53:46.670-07:00Escape HTML Entities Except for Some Well Known TagsWe are currently developing an APEX application and one of the requirements is to let users
enter formatted text data into the system via a <a href="http://ckeditor.com/">CKEDITOR</a> control.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMKT-oZ49v6PvuqUuoslBHzIzr6QJIulkznBGMxBSMEvm2aByJzLBkG-zqBnUnovdgmU3fC-xCowBzSR2VvZpQCdFCoKoxq4Yo6PHcfuXWhoO-7k424x5oWL3Qq9WrFWKQ27JJ/s1600/ckedit.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="318" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMKT-oZ49v6PvuqUuoslBHzIzr6QJIulkznBGMxBSMEvm2aByJzLBkG-zqBnUnovdgmU3fC-xCowBzSR2VvZpQCdFCoKoxq4Yo6PHcfuXWhoO-7k424x5oWL3Qq9WrFWKQ27JJ/s320/ckedit.png" width="320" /></a></div>
<br />
As you may notice above, not all the formatting options are enabled, just the possibility
to apply bold, italic and underlined styling and, in addition, to create ordered and unordered lists.
This means that the data we store in the database has HTML tags within, and it must be echoed back as-is
to the browser when the page which renders that data is about to be displayed.<br />
<br />
Of course, the first thing you must take into consideration is the security issue: we don't want to render our
pages with insecure tags, therefore we must sanitize the input. What we want is to remove all tags which are not in
our whitelist. It seems like a very simple problem, but the solution is not so obvious.<br />
<br />
Our first attempt was to use a smart regular expression to do the job. It turned out that the expression was simply too
complicated and very difficult to maintain. Add to the mix the fact that the Oracle regexp engine doesn't have all
the goodies you'd expect if you are a Perl or Ruby developer, and you quickly end up in disappointment.<br />
<br />
The actual solution we implemented (which seems to work pretty well) is to parse the HTML content and to remove the
offending tags. For the parsing we used just an ordinary XMLType. Let's see an example:
<br />
<pre class="brush: plain">select regexp_replace(
deletexml(
xmltype('<post>' ||
'You have <b>been</b> <u>hacked</u> ' ||
'by <br/><a href="http://www.youtube.com/watch?v=CjPfBxJboJI">Gurin Remus Adrian</a>.' ||
'</post>'),
'/post/*[(name() != "b") and (name() != "u")]'),
'(^<post>)|(</post>$)',
'')
from dual;
CLEAN_HTML
--------------------------------------
You have <b>been</b><u>hacked</u> by .
</pre>
Great! Having that it's quite easy to write a PL/SQL function to embed the above logic:
<br />
<pre> function strip_html_tags(html_in in clob, except_tags_in in varchar2 := null) return clob as
l_clean_html clob;
l_filter varchar2(32767) := '/post';
l_predicates varchar2(32767);
l_array apex_application_global.vc_arr2;
begin
if except_tags_in is null then
l_filter := l_filter || '/*[(name() != "post")]';
else
l_array := apex_util.string_to_table(except_tags_in, ',');
for i in 1..l_array.count loop
if l_array(i) is not null then
        l_predicates := l_predicates || case when l_predicates is not null then ' and ' else '' end || '(name() != "' || l_array(i) || '")';
end if;
end loop;
if l_predicates is not null then
l_filter := l_filter || '/*[' || l_predicates || ']';
end if;
end if;
select regexp_replace(
deletexml(
xmltype('<post>' || html_in || '</post>'),
l_filter),
'(^<post>)|(</post>$)',
'') into l_clean_html
from dual;
return l_clean_html;
end;
</pre>
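Assuming the function above is compiled in the schema, a quick smoke test might look like this (the input string is just a made-up example):

```sql
select strip_html_tags(
         'keep <b>this</b> and <u>this</u>, drop <script>alert(1)</script>',
         'b,u')
  from dual;
```

Everything except the whitelisted b and u elements (here, the script element together with its content) should be removed.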
The only problem we had using this approach was with German characters and other HTML character entities. For example, by default CKEDITOR sends the German umlaut character as &uuml; and, apparently, the Oracle XML parser is not smart enough to recognize this notation. It simply fails with an LPX-00118 error. In fact, if you think about it, it's not so unusual, given that we are talking about an XML parser, not an HTML one. However, if you encode the umlaut with its numerical notation, which is &#252;, then everything works like a charm.
This boils down to properly configuring CKEDITOR in order to force it to encode these characters with their numerical notation. The key property is:
<br />
<pre>entities_processNumerical: 'force'</pre>
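For context, that property lives in the CKEDITOR configuration; a sketch (the 'post_body' instance name is made up):

```javascript
CKEDITOR.replace('post_body', {
    // send &uuml; and friends as numeric references (&#252;) instead
    entities_processNumerical: 'force'
});
```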
Mission accomplished!Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.com0tag:blogger.com,1999:blog-36335310.post-47501339473914100072011-01-26T01:32:00.000-08:002011-01-26T01:32:08.859-08:00Light HTML Oracle Support SiteThe standard Flash GUI of the actual <a href="http://support.oracle.com">Oracle Support</a> site is driving me crazy. For a guy like me, used to working mostly with console applications, addicted to <a href="http://www.vim.org/">VIM</a> and browsing using <a href="http://vimperator.org/vimperator">Vimperator</a>, having this Flash site in front of me really hurts. It hurts when I have to move my hand to click something with the mouse, it hurts when I cannot go back to the previous page, it hurts... it hurts... it hurts! Oracle Support is not a game, it doesn't need animation, video or other such things Flash is good at. <br />
<br />
Okay, enough with all these complaints! If you don't like Flash, or your browser simply doesn't support it, you may switch to the plain HTML interface: <a href="https://supporthtml.oracle.com">https://supporthtml.oracle.com</a>. Isn't it cool? Just have a look: <br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHgLGgKb7LoxLOhAa9IkyKvrZ1vxqcYtY459sUozGVJM-mGzZnxUfYl5da4J3GDoltTP-lRI-dU0yGThFW_J78iwdzrIa3PuV-aY4Wd7pmrXgFM6QZC1FBDLA1U-4droJsz4is/s1600/html-mos.png" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="254" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHgLGgKb7LoxLOhAa9IkyKvrZ1vxqcYtY459sUozGVJM-mGzZnxUfYl5da4J3GDoltTP-lRI-dU0yGThFW_J78iwdzrIa3PuV-aY4Wd7pmrXgFM6QZC1FBDLA1U-4droJsz4is/s400/html-mos.png" /></a></div><br />
Now I can use my beloved Vimperator and, of course, <a href="https://addons.mozilla.org/en-US/firefox/addon/flashblock/">Flashblock</a>.<br />
<br />
However, I have to point out the limitations. The HTML option does not include the following functionality, which is only available in the Flash version of My Oracle Support:<br />
<br />
* Systems<br />
* Projects<br />
* Healthchecks<br />
* Patch Advice & Recommendations<br />
* Inventory Reporting<br />
* OnDemand Portal, Service Request and RFC Functionality <br />
* CRM OnDemand Service Requests & Knowledge<br />
<br />
As far as I'm concerned, I didn't use those features anyway, therefore I'm good!Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.com2tag:blogger.com,1999:blog-36335310.post-79735561602442132011-01-24T02:27:00.000-08:002011-01-25T00:26:51.377-08:00crs_stat pretty print on RAC 11.2I really don't like the way <code>crs_stat -t</code> displays RAC status information on my 11.2.0.2 database. For example, one of my instances is down. Can you figure this out by looking at the output below?<br />
<br />
<pre>[grid@owl ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.DATA.dg ora....up.type ONLINE ONLINE hen
ora....ER.lsnr ora....er.type ONLINE ONLINE hen
ora....N1.lsnr ora....er.type ONLINE ONLINE hen
ora....N2.lsnr ora....er.type ONLINE ONLINE hen
ora....N3.lsnr ora....er.type ONLINE ONLINE owl
ora.asm ora.asm.type ONLINE ONLINE hen
ora.cvu ora.cvu.type ONLINE ONLINE hen
ora.gns ora.gns.type ONLINE ONLINE hen
ora.gns.vip ora....ip.type ONLINE ONLINE hen
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....SM2.asm application ONLINE ONLINE hen
ora....EN.lsnr application ONLINE ONLINE hen
ora.hen.gsd application OFFLINE OFFLINE
ora.hen.ons application ONLINE ONLINE hen
ora.hen.vip ora....t1.type ONLINE ONLINE hen
ora....network ora....rk.type ONLINE ONLINE hen
ora.oc4j ora.oc4j.type ONLINE ONLINE hen
ora.ons ora.ons.type ONLINE ONLINE hen
ora....SM1.asm application ONLINE ONLINE owl
ora....WL.lsnr application ONLINE ONLINE owl
ora.owl.gsd application OFFLINE OFFLINE
ora.owl.ons application ONLINE ONLINE owl
ora.owl.vip ora....t1.type ONLINE ONLINE owl
ora.poc.db ora....se.type ONLINE ONLINE hen
ora....uci.svc ora....ce.type ONLINE ONLINE hen
ora....ry.acfs ora....fs.type ONLINE ONLINE hen
ora.scan1.vip ora....ip.type ONLINE ONLINE hen
ora.scan2.vip ora....ip.type ONLINE ONLINE hen
ora.scan3.vip ora....ip.type ONLINE ONLINE owl </pre>The resource "ora.poc.db" is ONLINE, therefore no red flags, you might say. Well, bad luck: the database is up & running but with only one instance. The other one is dead. The database is policy-managed, but I want to know whether all instances from the pool are running. How can we figure this out? Not a very big deal, just issue "crsctl status resource". You'll get something like this:<br />
<br />
<pre>NAME=ora.poc.db
TYPE=ora.database.type
TARGET=OFFLINE, ONLINE
STATE=OFFLINE, ONLINE on hen</pre><br />
Looking at TARGET and STATE it is clear that something is OFFLINE and the red flag pops up right away. Of course, the "crsctl status resource" command has the same problem as crs_stat when it comes to pretty-printing the result. It also has a tabular format (see the -t switch) but it's a little bit verbose, as it also displays the local resources status. But hey, do you remember Note 259301.1? It was about an awk script used for parsing the crs_stat output and displaying it in a nicer way. Okay, let's take that script and change it to take the output of the "crsctl status resource" command. I'm not an awk expert, but the following script is working pretty well:<br />
<br />
<pre>#!/usr/bin/ksh
#
# CRS resource status query script for 11.2 (adapted from the 10g sample in Note 259301.1)
#
# Description:
# - Returns formatted version of 'crsctl status resource', in tabular
# format, with the complete resource names and filtering keywords
# - The argument, $RSC_KEY, is optional and if passed to the script, will
# limit the output to HA resources whose names match $RSC_KEY.
# Requirements:
# - $ORACLE_HOME should be set in your environment
RSC_KEY=$1
QSTAT=-u
AWK=/bin/awk # if not available use /usr/bin/awk
# Table header:
echo ""
$AWK \
'BEGIN {printf "%-45s %-10s %-18s\n", "HA Resource", "Target", "State";
printf "%-45s %-10s %-18s\n", "-----------", "------", "-----";}'
# Table body:
$ORACLE_HOME/bin/crsctl status resource | $AWK \
'
function ltrim(s) { sub(/^[ \t]+/, "", s); return s }
function rtrim(s) { sub(/[ \t]+$/, "", s); return s }
function trim(s) { return rtrim(ltrim(s)); }
BEGIN { FS="="; state = 0; }
$1~/NAME/ && $2~/'$RSC_KEY'/ {appname = $2; state=1};
state == 0 {next;}
$1~/TARGET/ && state == 1 {apptarget = $2; natarget = split(apptarget, atarget, ","); state=2;}
$1~/STATE/ && state == 2 {appstate = $2; split(appstate, astate, ","); state=3;}
state == 3 {
for (i = 1; i <= natarget; i++) {
printf "%-45s %-10s %-18s\n", appname, trim(atarget[i]), trim(astate[i])
};
state=0;}'</pre>
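A note on invocation: the "cs" command in the listing that follows is not a standard tool, just a short alias for this script. A sketch, assuming the script was saved as crsstat.sh in the grid user's home directory (both the name and the path are hypothetical):

```shell
# Hypothetical wiring for the 'cs' shortcut; adjust the path as needed.
alias cs='$HOME/crsstat.sh'
```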
And the output is:
<pre>[grid@owl ~]$ cs
HA Resource Target State
----------- ------ -----
ora.DATA.dg ONLINE ONLINE on hen
ora.DATA.dg ONLINE ONLINE on owl
ora.LISTENER.lsnr ONLINE ONLINE on hen
ora.LISTENER.lsnr ONLINE ONLINE on owl
ora.LISTENER_SCAN1.lsnr ONLINE ONLINE on hen
ora.LISTENER_SCAN2.lsnr ONLINE ONLINE on hen
ora.LISTENER_SCAN3.lsnr ONLINE ONLINE on owl
ora.asm ONLINE ONLINE on hen
ora.asm ONLINE ONLINE on owl
ora.cvu ONLINE ONLINE on hen
ora.gns ONLINE ONLINE on hen
ora.gns.vip ONLINE ONLINE on hen
ora.gsd OFFLINE OFFLINE
ora.gsd OFFLINE OFFLINE
ora.hen.vip ONLINE ONLINE on hen
ora.net1.network ONLINE ONLINE on hen
ora.net1.network ONLINE ONLINE on owl
ora.oc4j ONLINE ONLINE on hen
ora.ons ONLINE ONLINE on hen
ora.ons ONLINE ONLINE on owl
ora.owl.vip ONLINE ONLINE on owl
ora.poc.db OFFLINE OFFLINE
ora.poc.db ONLINE ONLINE on hen
ora.poc.muci.svc ONLINE ONLINE on hen
ora.poc.muci.svc ONLINE OFFLINE
ora.registry.acfs ONLINE ONLINE on hen
ora.registry.acfs ONLINE ONLINE on owl
ora.scan1.vip ONLINE ONLINE on hen
ora.scan2.vip ONLINE ONLINE on hen
ora.scan3.vip ONLINE ONLINE on owl </pre><br />
Looking at the above output I can clearly see the partial OFFLINE status of my database. From my point of view, this is much better.Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.com3tag:blogger.com,1999:blog-36335310.post-77859430638520108492010-12-14T06:31:00.000-08:002010-12-14T06:34:50.123-08:00Kill a Session From Any NodeI really like this new 11g feature which allows the DBA to kill a session even when his session is on a different instance than the one where the session to be killed resides. The ALTER SYSTEM KILL SESSION statement has been improved and allows specifying the instance number where the session you want to kill is located:<br />
<br />
<code>ALTER SYSTEM KILL SESSION 'sid, serial#, @inst_no';</code><br />
<br />
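The sid and serial# still have to be hunted down first, of course, and gv$session gives you all three coordinates in one shot. A sketch (the username filter is just an example):

```sql
select sid, serial#, inst_id
  from gv$session
 where username = 'SCOTT';

-- then, from whatever node you happen to be on:
-- alter system kill session '<sid>, <serial#>, @<inst_id>';
```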
Great!Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.com0tag:blogger.com,1999:blog-36335310.post-36133763687121068532010-12-11T01:47:00.000-08:002010-12-11T01:47:30.652-08:00Extending my RAC with a new nodeI have an 11.2.0.2 database consisting of one node. I created it with one node on purpose, just to have the chance to add another node later. Why? Because I wanted to play with the new GPnP feature. So, even though my RAC consisted of one node, it was actually a fully functional environment, with GNS, IPMI, CTSS and a policy-managed database. Okay, the process should be straightforward: run some CVU checks to see if the node to be added is ready, and then run the addNode.sh script from the GI home of the existing RAC node. In my case, the existing node was named "owl" and the node to be added was "hen".<br />
<br />
First of all, I ran:<br />
<pre>[grid@owl bin]$ cluvfy stage -pre nodeadd -n hen
Performing pre-checks for node addition
Checking node reachability...
Node reachability check passed from node "owl"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
Node connectivity check passed
Checking CRS integrity...
CRS integrity check passed
Checking shared resources...
Checking CRS home location...
The location "/u01/app/11.2.0.2/grid" is not shared but is present/creatable on all nodes
Shared resources check for node addition passed
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
Node connectivity check passed
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "owl:/tmp"
Free disk space check passed for "hen:/tmp"
Check for multiple users with UID value 1100 passed
User existence check passed for "grid"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81( x86_64)"
Package existence check passed for "binutils-2.17.50.0.6( x86_64)"
Package existence check passed for "gcc-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libaio-0.3.106 (x86_64)( x86_64)"
Package existence check passed for "glibc-2.5-24 (x86_64)( x86_64)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)( x86_64)"
Package existence check passed for "elfutils-libelf-0.125 (x86_64)( x86_64)"
Package existence check passed for "elfutils-libelf-devel-0.125( x86_64)"
Package existence check passed for "glibc-common-2.5( x86_64)"
Package existence check passed for "glibc-devel-2.5 (x86_64)( x86_64)"
Package existence check passed for "glibc-headers-2.5( x86_64)"
Package existence check passed for "gcc-c++-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libaio-devel-0.3.106 (x86_64)( x86_64)"
Package existence check passed for "libgcc-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libstdc++-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "sysstat-7.0.2( x86_64)"
Package existence check passed for "ksh-20060214( x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed
Checking OCR integrity...
OCR integrity check passed
Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
No NTP Daemons or Services were found to be running
Clock synchronization check using Network Time Protocol(NTP) passed
User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes
File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
The DNS response time for an unreachable node is within acceptable limit on all nodes
File "/etc/resolv.conf" is consistent across nodes
Checking GNS integrity...
The GNS subdomain name "vmrac.fits.ro" is a valid domain name
GNS VIP "poc-gns-vip.vmrac.fits.ro" resolves to a valid IP address
<b><span style="color: #990000;">PRVF-5229 : GNS VIP is active before Clusterware installation</span>
<span style="color: #990000;">PRVF-5232 : The GNS subdomain qualified host name "hen.vmrac.fits.ro" was resolved into an IP address</span>
<span style="color: #990000;">GNS integrity check failed</span></b>
Pre-check for node addition was unsuccessful on all the nodes.</pre>PRVF-5229 is really a strange error: of course the GNS VIP is active, because I already have my RAC installed. The check makes sense when installing a new RAC, when the GNS VIP should still be unallocated, but otherwise I don't get it. So, I decided to go on even though the CVU was complaining.<br />
<br />
The next step would be to run the addNode.sh script from the [GI_HOME]/oui/bin location. I ran the script and found that it does nothing if the CVU checks do not pass. You can figure this out if you run the script with debugging:<br />
<br />
<pre>[grid@owl bin]$ sh -x ./addNode.sh -silent "CLUSTER_NEW_NODES={hen}"
+ OHOME=/u01/app/11.2.0.2/grid
+ INVPTRLOC=/u01/app/11.2.0.2/grid/oraInst.loc
+ ADDNODE='/u01/app/11.2.0.2/grid/oui/bin/runInstaller -addNode -invPtrLoc /u01/app/11.2.0.2/grid/oraInst.loc ORACLE_HOME=/u01/app/11.2.0.2/grid -silent CLUSTER_NEW_NODES={hen}'
+ '[' '' = Y -o '!' -f /u01/app/11.2.0.2/grid/cv/cvutl/check_nodeadd.pl ']'
+ CHECK_NODEADD='/u01/app/11.2.0.2/grid/perl/bin/perl /u01/app/11.2.0.2/grid/cv/cvutl/check_nodeadd.pl -pre -silent CLUSTER_NEW_NODES={hen}'
+ /u01/app/11.2.0.2/grid/perl/bin/perl /u01/app/11.2.0.2/grid/cv/cvutl/check_nodeadd.pl -pre -silent 'CLUSTER_NEW_NODES={hen}'
+ '[' 1 -eq 0 ']'</pre><br />
As you can see, the check_nodeadd.pl script ends with a non-zero exit code, which means error (this Perl script actually runs the cluvfy utility, so it fails because of the GNS check). The only workaround I found was to skip this checking using: <code>export IGNORE_PREADDNODE_CHECKS=Y</code><br />
After that I was able to successfully run the addNode.sh script:<br />
<br />
<pre>[grid@owl bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={hen}"
Starting Oracle Universal Installer...
... output truncated ...
Saving inventory on nodes (Friday, December 10, 2010 8:49:27 PM EET)
. 100% Done.
Save inventory complete
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/u01/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'hen'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each cluster node.
/u01/app/oraInventory/orainstRoot.sh #On nodes hen
/u01/app/11.2.0.2/grid/root.sh #On nodes hen
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /u01/app/11.2.0.2/grid was successful.
Please check '/tmp/silentInstall.log' for more details.</pre><br />
Okay, GREAT! Let's run those scripts on the new node:<br />
<pre>[root@hen app]# /u01/app/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@hen app]# /u01/app/11.2.0.2/grid/root.sh
Running Oracle 11g root script...
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0.2/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.2/grid/crs/install/crsconfig_params
Creating trace directory
<b><span style="color: #990000;">PROTL-16: Internal Error</span>
<span style="color: #990000;">Failed to create or upgrade OLR</span></b></pre><pre><b><span style="color: #990000;"> </span></b>Failed to create or upgrade OLR at /u01/app/11.2.0.2/grid/crs/install/crsconfig_lib.pm line 6740.
/u01/app/11.2.0.2/grid/perl/bin/perl -I/u01/app/11.2.0.2/grid/perl/lib -I/u01/app/11.2.0.2/grid/crs/install /u01/app/11.2.0.2/grid/crs/install/rootcrs.pl execution failed</pre><br />
Oops! I did not see that coming! First of all, OLR?! Yeah, it's like the OCR, but local. The only note I found about this error was 1123453.1, and it advises double-checking that all install prerequisites pass using cluvfy. In my case, the only problem I had was with the GNS check. Does GNS have anything to do with my error? As it turned out, no, it doesn't! The big mistake I made (and cluvfy didn't notice it) was that the SSH setup between the nodes was wrong. Connecting from owl to hen was fine, but not vice-versa. After I fixed the SSH configuration, the root.sh script ran without any problems. Great!<br />
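A cheap way to catch this before running root.sh is to test user equivalence in both directions yourself. A minimal sketch (the node names owl and hen are the ones from this post; run it as the grid user):

```shell
# BatchMode makes ssh fail instead of prompting for a password, so a
# broken direction shows up immediately as FAIL instead of hanging.
check_equiv() {
  local src=$1 dst=$2
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "$src" \
       "ssh -o BatchMode=yes -o ConnectTimeout=5 $dst true" 2>/dev/null; then
    echo "OK:   $src -> $dst"
  else
    echo "FAIL: $src -> $dst"
  fi
}
check_equiv owl hen   # this direction worked in my case
check_equiv hen owl   # this was the broken one
```

cluvfy did not flag this for me, so a manual two-line check like this is cheap insurance.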
<br />
The next step was to clone the database Oracle home. That was really easy: just run addNode.sh the same way I did for the GI. So far so good... at this point I was expecting that little magic to happen. Look at what the documentation says:<br />
<br />
<i>If you store your policy-managed database on Oracle Automatic Storage Management (Oracle ASM), Oracle Managed Files (OMF) is enabled, and if there is space in a server pool for node2, then crsd adds the Oracle RAC instance to node2 and no further action is necessary. If OMF is not enabled, then you must manually add undo and redo logs.</i><br />
<br />
Hey, that's my case! Unfortunately, the new instance didn't show up. Furthermore, the pool configuration was asking for a new node:<br />
<pre>[oracle@hen oracle]$ srvctl config srvpool -g poc
Server pool name: poc
Importance: 10, Min: 2, Max: -1
Candidate server names: </pre>Look, I had increased the importance level and set the "Min" property to 2. Damn it! I don't know why the new server was not automatically picked up, but maybe it's also my lack of experience with this new server pools concept. In the end I launched "dbca" from the newly added node, hoping that some new magic options had been added. But, no... even the "Instance Management" option was disabled. However, if you choose "Configure database" and click next, next, next until the SYSDBA credentials are requested, then dbca will try to connect to the local instance and it will actually create this new instance. I'm sure this is not the way it was supposed to work but, at least, I could see some results. However, there was another interesting thing. Looking into the alert log of the newly created instance I found:<br />
<pre>Could not open audit file: /u01/app/oracle/admin/poc/adump/poc_2_ora_18197_1.aud
Retry Iteration No: 1 OS Error: 2
Retry Iteration No: 2 OS Error: 2
Retry Iteration No: 3 OS Error: 2
Retry Iteration No: 4 OS Error: 2
Retry Iteration No: 5 OS Error: 2
OS Audit file could not be created; failing after 5 retries</pre>I didn't create the /u01/app/oracle/admin/poc/adump folder on my new node and that was causing the error. So, this is another thing I should remember... as part of the addNode.sh cloning process the "adump" location is not automatically created.<br />
And, that's all! Now, my fancy RAC has a new baby node.Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.com0tag:blogger.com,1999:blog-36335310.post-71015420627345436432010-12-08T04:03:00.000-08:002010-12-21T10:45:02.368-08:00Upgrade GI to 11.2.0.2: Simply Surprising...I never thought I'd write a post about such a trivial task... Well, if you are going to upgrade from 11.2.0.1 to 11.2.0.2 be prepared for surprises. <br />
<br />
The <b>first surprise</b> is the download page on the Oracle support site (formerly known as Oracle Metalink). The 11.2.0.2 patch set is 4.8 GB! WTF?! Furthermore, it is split into 7 pieces... Despite this huge size, the good thing is that, unlike previous releases, this patch set may be used as a self-contained Oracle installer. This means you don't have to install a base 11.2.0.1 release and then apply the 11.2.0.2 patch set on top of it; you may simply install the 11.2.0.2 release directly. There's one more catch: if you want to upgrade just the Grid Infrastructure you don't need all 7 pieces of the patch set. This is not mentioned very clearly on the download page, but if you have the curiosity to open the README (and you should!) then you'll find the following:<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLt17UiIccN_YsAoallMxmQKnatAOGHUZbw2Q14NNwVJ9yM5J57lx7Td-pKyYrByGC3K4FpIfvpxPE_6ZxjmBJvQzKFHM0MSEHq4sgmZv06R5WKgpB6LZwpG-3RD-Q0N7Ew_r9/s1600/upgrade-files.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="108" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLt17UiIccN_YsAoallMxmQKnatAOGHUZbw2Q14NNwVJ9yM5J57lx7Td-pKyYrByGC3K4FpIfvpxPE_6ZxjmBJvQzKFHM0MSEHq4sgmZv06R5WKgpB6LZwpG-3RD-Q0N7Ew_r9/s400/upgrade-files.png" width="400" /></a></div>Great! So, for the beginning we'd need the 3rd piece in order to upgrade our Grid Infrastructure.<br />
<br />
The <b>second surprise</b> is the fact that the GI cannot be upgraded in place. In previous releases we used to patch by providing an existing home location. Starting with 11.2.0.2, in-place upgrades of the GI are not supported. According to the "Upgrade" guide:<br />
<br />
<i>As of Oracle Database 11g release 2 (11.2), the Oracle Clusterware software must be upgraded to a new home location in the Oracle grid infrastructure home. Additionally, Oracle ASM and Oracle Clusterware (and Oracle Restart for single-instance databases) must run in the same Oracle grid infrastructure home. When upgrading Oracle Clusterware to release 11.2, OUI automatically calls Oracle ASM Cluster Assistant (ASMCA) to perform the upgrade into the grid infrastructure home.</i><br />
<br />
Okey, good to know! Let's start the upgrade process of GI. The wizard provided by the OUI is quite intuitive therefore I will not bother you with screenshots and other obvious things. However, <b>the next surprise</b> comes when you are running the <code>rootupgrade.sh</code> script. The error is:<br />
<pre>Failed to add (property/value):('OLD_OCR_ID/'-1') for checkpoint:ROOTCRS_OLDHOMEINFO.Error code is 256
The fixes for bug 9413827 are not present in the 11.2.0.1 crs home
Apply the patches for these bugs in the 11.2.0.1 crs home and then run
rootupgrade.sh /oragi/perl/bin/perl -I/oragi/perl/lib -I/oragi/crs/install /oragi/crs/install/rootcrs.pl execution failed</pre>WTF? You cannot patch if you don't have another stupid patch already there. Okey, as an Oracle DBA you have to be a patient guy... take a deep breath and start looking for bug 9413827. First of all there is note 10036834.8, which basically says that you might still get this error even if you apply the patch for bug 9413827. As a workaround they suggest also applying the patch for bug 9655006. That's madness! In the end it turns out that patch 9655006 is actually the July 11.2.0.1.2 PSU. Okey, just download the appropriate version for your platform. Now, <b>another surprise</b>... you need an updated version of the OPatch utility. Damn it! Back to metalink, search for patch 6880880 and download the 11.2.0.0.0 version for your platform. (Take care not to download the wrong version. By the way, did you notice that you may download a wget script which can be used to fetch the patch without using a browser? Yea, finally something good on that shitty flash GUI.) The README suggests unzipping the upgraded OPatch utility directly into your CRS home, using something like:<br />
<pre>unzip [p6880880...zip] -d [your GI home]</pre>... which I did!<br />
Now, you have to unzip the PSU patch into an empty folder, let's say /u01/stage, and run the following command as root:<br />
<pre>[your GI home]/OPatch/opatch auto /u01/stage/ -och [your GI home]</pre>In my case, the output was:<br />
<pre>Executing /usr/bin/perl /u01/app/11.2.0.1/grid/OPatch/crs/patch112.pl -patchdir /u01 -patchn stage -och /u01/app/11.2.0.1/grid/ -paramfile /u01/app/11.2.0.1/grid/crs/install/crsconfig_params
2010-12-08 12:32:19: Parsing the host name
2010-12-08 12:32:19: Checking for super user privileges
2010-12-08 12:32:19: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0.1/grid/crs/install/crsconfig_params
The opatch Component check failed. This patch is not applicable for /u01/app/11.2.0.1/grid/
The opatch Component check failed. This patch is not applicable for /u01/app/11.2.0.1/grid/
Patch Component/Conflict check failed for /u01/app/11.2.0.1/grid/</pre>Oops! <b>Another surprise!</b> This patch is not applicable for blah blah blah? Are you serious? Let's check the logs. They should be in your $CRS_HOME/cfgtoollogs. Search for a log file named <code>opatchauto[timestamp].log</code>. The important part of the log:<br />
<pre>2010-12-08 12:32:19: The component check failed with following error
2010-12-08 12:32:19: bash: /u01/app/11.2.0.1/grid/OPatch/opatch: Permission denied</pre>Ha? I'm root! Aaaa... okey! Apparently it tries to run the OPatch tool under the grid user. Okey, let's fix the permissions.<br />
<pre>chown root:oinstall /u01/app/11.2.0.1/grid/OPatch -R
chmod g+r /u01/app/11.2.0.1/grid/OPatch/opatch</pre>Now, try again! Yeap... now it's working.<br />
After applying the patch we are ready for our <code>rootupgrade.sh</code>. It's interesting that the output still contains the <code>Failed to add (property/value):('OLD_OCR_ID/'-1')</code> message but the upgrade continues without any other complaints. Okey, let's perform a quick check:<br />
<pre>srvctl config asm
ASM home: /u01/app/11.2.0.2/grid
ASM listener: LISTENER
srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: [CRS home]
/u01/app/11.2.0.2/grid on node(s) owl
End points: TCP:1521</pre>Great, the ASM and listeners are relocated to the new GI home. The next logical thing to do is to uninstall the old GI home, right? It's as simple as:<br />
<pre>[your GI home]/deinstall/deinstall</pre>Oookey, meet <b>SURPRISE Number 6</b>:<br />
<pre>ERROR: You must delete or downgrade the Oracle RAC databases and de-install the Oracle RAC homes before attempting to remove the Oracle Clusterware homes.</pre>Isn't it great? On metalink I found Bug 10332736 and, in the WORKAROUND section, it says something about writing a note with a manual uninstall procedure. However, at the time of writing this, the note wasn't published yet. Yea... all I can say is that I'm tired of these stupid issues. What happened to the Oracle testing department? They encourage frequent patching but, as far as I'm concerned, I always have this creepy feeling before doing it.Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.com4tag:blogger.com,1999:blog-36335310.post-80900451320104798462010-11-28T15:05:00.000-08:002010-12-03T13:27:00.061-08:00My first 11gR2 RAC on VirtualBox - Learned LessonsOracle 11gR2 comes with many RAC goodies and, of course, I want to see them in action. But, the first step is to actually have an 11gR2 available. So, I decided to install one on my poor Dell PowerEdge 1850 server. As it turned out, even the installation process has changed with the introduction of SCAN, GNS, server pools etc. However, it makes this task more challenging, doesn't it?<br />
<br />
Because this RAC was not intended to be used for any productive purposes my first choice was for a virtualized environment. I tried Oracle VM in the first place and I was quite disappointed about the results:<br />
<br />
1. my virtual machines were rebooting unexpectedly, even when they were idle. I didn't manage to find the cause of this.<br />
<br />
2. during heavy loads on my virtual machines I was constantly getting: <br />
<br />
<pre>INFO: task kjournal:337 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_sec" disables this message.</pre><br />
I guess Oracle already fixed that but I don't have any ULN subscriptions, so no updates. <br />
<br />
My next option was VirtualBox, which is a nice choice and is also provided by Oracle. VirtualBox now supports shared disks, which makes it a very appealing solution for RAC testing. In addition, there's a well-written guide about how to install a RAC on VirtualBox <a href='http://www.oracle-base.com/articles/11g/OracleDB11gR2RACInstallationOnOEL5UsingVirtualBox.php'>here</a>.<br />
<br />
To summarize, below are the main lessons I learned out of this RAC installation process:<br />
<br />
1. <strong>High CPU load for my virtual hosts</strong>: after I created the hosts which were supposed to act as RAC nodes, I noticed that the CPUs on the host server were at 100% even when the guests were idle. My host server has 8 GB of RAM and 2 physical CPUs at 3.4 GHz, so this high CPU consumption didn't feel right at all. The solution was to boot my virtual hosts with the <strong>divider=10</strong> kernel option. Even with this tweak the whole installation process was slow, so be prepared to wait...<br />
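For reference, here is roughly what the tweak looks like in the guest's grub config (a sketch; the kernel version and root device below are hypothetical, keep the ones already present in your menu.lst):

```
# /boot/grub/menu.lst (fragment) -- note the trailing divider=10
title Enterprise Linux 5
    root (hd0,0)
    kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/LogVol00 divider=10
    initrd /initrd-2.6.18-194.el5.img
```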
<br />
2. <strong>pay attention to the group membership of the oracle/grid users</strong>: I made a stupid mistake and forgot to add the oracle user to the asmdba group. The prerequisite checks didn't complain and I successfully installed the Grid Infrastructure and the Oracle database software. However, when I reached the database installation phase using dbca, I noticed that no ASM diskgroups were available even though they were accessible to my "grid" user. So, in order to save precious time debugging such tricky issues, double-check these group membership requirements.<br />
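A cheap pre-flight check that would have caught my mistake (the user and group names below are the conventional 11gR2 ones; adjust them if your setup differs):

```shell
# Is the oracle user in the asmdba group? Run before launching dbca.
if id -nG oracle 2>/dev/null | grep -qw asmdba; then
  echo "OK: oracle is in asmdba"
else
  echo "MISSING: run (as root): usermod -a -G asmdba oracle"
fi
```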
<br />
3. <strong>the time synchronization issue</strong>: because I wanted to use new stuff for my RAC I decided to get rid of the ntpd synchronization and to use the Oracle CTSSD implementation. However, be careful here. Oracle is picky when it comes to detecting whether other synchronization software is installed. Even if your ntpd daemon is stopped, you also have to remove/rename the /etc/ntp.conf file. Otherwise, the time synchronization check will fail. And another thing: if you configure your NIC interfaces via DHCP you may end up having this /etc/ntp.conf re-created after every node reboot. In order to prevent this you may use static address configuration, or you may add PEERNTP=no to your ifcfg-ethX scripts.<br />
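For the DHCP case, the relevant fragment of an interface script would look something like this (the device name is illustrative; PEERNTP=no is what stops the DHCP client from rewriting the NTP config at every lease renewal):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 (fragment)
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
PEERNTP=no
```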
<br />
4. <strong>GNS preparations</strong>: this GNS (Grid Naming Service) component is new in 11gR2 and is not a very tasty concept for those DBAs (like me) who do not have a lot of knowledge in the network administration field. So, if you are going to use GNS, be sure you have an experienced system administrator around to support you in configuring it. However, you still need to know what to ask him to do. Basically, you have to agree on a new DNS zone. If your company domain is gigel.ro you may choose rac.gigel.ro for your RAC. Then, you need to ask him to delegate the requests from *.rac.gigel.ro to an unallocated IP address from the same IP class as your future RAC public interface. This IP is the VIP for your GNS and it will be available only when your RAC installation is successfully finished. Then, your system administrator will ask you under which name to "glue" the new rac.gigel.ro zone. He actually wants to know under which DNS name to register this GNS VIP address. <a href="http://en.wikipedia.org/wiki/Domain_Name_System#Circular_dependencies_and_glue_records">The glue</a> is really a well-known concept in DNS terminology. As far as I noticed, Oracle uses <cluster_name>-gns-vip.<gns_zone>. So, for our hypothetical example, assuming the RAC name is "muci", the GNS glue would be: muci-gns-vip.rac.gigel.ro.<br />
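In BIND terms, the delegation your administrator sets up boils down to two records in the parent zone file. A sketch using the hypothetical names from this post (the IP address is made up; use the unallocated one you reserved for the GNS VIP):

```
; gigel.ro zone file (fragment): delegate rac.gigel.ro to the GNS
rac.gigel.ro.               IN NS  muci-gns-vip.rac.gigel.ro.
; the glue record: an A record for the name server name itself
muci-gns-vip.rac.gigel.ro.  IN A   192.168.1.50
```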
<br />
5. <strong>ORA-15081</strong>: I think this has to do with the membership mistake. DBCA was reporting ORA-15081, complaining that it cannot create stuff into ASM diskgroups. The metalink note 1084186.1 provides the solution.<br />
<br />
Okey, that would be all. Happy (but slow) RAC on VirtualBox.Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.com3tag:blogger.com,1999:blog-36335310.post-35065319632491463472010-10-31T13:14:00.000-07:002010-10-31T13:14:39.764-07:00SHARED remote_login_passwordfileWhen talking about the SHARED option of the remote_login_passwordfile parameter, the official 11.2 documentation states:<br />
<br />
<i>One or more databases can use the password file. The password file can contain SYS as well as non-SYS users.</i><br />
<br />
While that's true, it is important to mention that, as soon as you set this parameter to SHARED, you are not allowed to add more SYSDBA users nor to change their passwords. A shared password file may contain non-SYS users only if they were granted the SYSDBA privilege while the password file was in EXCLUSIVE mode.Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.com0tag:blogger.com,1999:blog-36335310.post-88944385587546794832010-09-22T06:54:00.000-07:002010-09-23T00:28:47.955-07:00Statistics on Client Result CacheI've just noticed that the result cache client statistics are not very accurate on my 11.2.0.1 Oracle server. I have the following Java code:<br /><pre><br />package test;<br /><br />import java.io.IOException;<br />import java.sql.Connection;<br />import java.sql.PreparedStatement;<br />import java.sql.ResultSet;<br />import java.sql.SQLException;<br />import oracle.jdbc.pool.OracleDataSource;<br /><br />public class ClientResultCache {<br /><br />    public static void main(String[] args) throws SQLException, IOException, InterruptedException {<br />        OracleDataSource ods = new OracleDataSource();<br />        ods.setDriverType("oci");<br />        ods.setTNSEntryName("owldb");<br />        ods.setUser("talek");<br />        ods.setPassword("muci");<br />        Connection conn = ods.getConnection();<br />        String query = "select /*+ result_cache */ * from xxx";<br />        ((oracle.jdbc.OracleConnection)conn).setImplicitCachingEnabled(true);<br />        ((oracle.jdbc.OracleConnection)conn).setStatementCacheSize(10);<br />        PreparedStatement pstmt;<br />        ResultSet rs;<br />        for (int j = 0 ; j < 1000 ; j++ ) {<br />            System.out.println(j);<br />            pstmt = conn.prepareStatement (query);<br />            rs = pstmt.executeQuery();<br />            while (rs.next( ) ) {<br />            }<br />            rs.close();<br />            pstmt.close();<br />            Thread.sleep(100);<br />        } <br />        System.in.read();<br />    }<br /><br />}<br /></pre><br />While the above code is running I'm monitoring the CLIENT_RESULT_CACHE_STATS$. 
And this is what I've got:<br /><pre> <br />   STAT_ID NAME                           VALUE<br />---------- ------------------------------ ----------<br />         1 Block Size                            256<br />         2 Block Count Max                       128<br />         3 Block Count Current                   128<br />         4 Hash Bucket Count                    1024<br />         5 Create Count Success                    1<br />         6 Create Count Failure                    0<br />         7 Find Count                            812<br />         8 Invalidation Count                      0<br />         9 Delete Count Invalid                    0<br />        10 Delete Count Valid                      0<br /></pre><br /><br />The "Find Count" should be 999, right? My test program is still running (see the System.in.read at the end) therefore I expect my client result cache to still be there. My first guess was a delay in computing the statistics, but even after 15 minutes of waiting I didn't get the right figures. Hmm... am I missing something?Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.com0tag:blogger.com,1999:blog-36335310.post-57828926500239036352010-07-15T06:56:00.000-07:002010-07-15T06:56:49.534-07:00Oracle IDE for GeeksLet's be honest guys... how many times have you found yourself googling for “the best oracle IDE”? If you are like me then the answer is “too many times”... Why is that? Well, partly I guess because we are not satisfied with what the market offers us in this area. <br />
If we're going to take a look at what we have now, the most well-known Oracle IDEs are:<br />
<ol><li><a href="http://www.toadsoft.com/toad_oracle.htm">Toad</a> from Quest Software</li>
<li><a href="http://www.allroundautomations.com/plsqldev.html">PLSQL Developer</a> offered by Allround Automations</li>
<li><a href="http://www.oracle.com/technology/products/database/sql_developer/index.html">SQL Developer</a> from Oracle</li></ol>If you ask me, my favorite choice would be PLSQL Developer. It has a lot of cool features at a fair price. Toad is also nice but it's expensive. As far as SQL Developer is concerned, I simply can't get used to it... In other words, I don't like it: it's slow (maybe Java has something to do with this) and it uses Java Swing for its GUI, bypassing my OS look & feel. However, there are some pros which might count: it's free and it's cross-platform. In fact, if you are on a Unix-based OS you don't have many choices besides SQL Developer. Of course, there is/was <a href="http://torasql.com/">Tora</a>... but I would rather mention it in the past tense.<br />
So, what I don't like about these tools? Let's see:<br />
<ol><li>They are heavy... some of them take a lot of time just to start up.</li>
<li>Most of them are not cross platform.</li>
<li>They are closed-source software. You don't have access to the code.</li>
<li>Limited editing features. I know they offer templates, auto-complete and stuff, but they look so small in comparison with what <a href="http://www.vim.org/">VIM</a> provides.</li>
<li>They are not suitable for server environments. I mean... what if you have to connect to the database on a remote Unix server, connected via ssh within a "friendly" console? I guess sqlplus is all you have there and it's not a very pleasant experience.</li>
<li>Many of the oh-so-useful sqlplus commands don't work in these environments. PLSQL Developer does a good job emulating many of these commands but I still miss AUTOTRACE, sub-totals and all the other cool features sqlplus provides.</li></ol>So, taking into consideration the above limitations, I decided to create (why not?) my own Oracle IDE. It may sound stupid or too ambitious, but it is not (well, maybe ambitious it is). The new Oracle IDE I'm working on is called <a href="http://code.google.com/p/vorax/">VoraX</a> and is hosted on GoogleCode. VoraX stands for Vim ORAcle eXtension and yes... you are right: it's a VIM plugin. If you are a big fan of the VIM editor then you have to give VoraX a try. Of course there is also the <a href="http://www.vim.org/scripts/script.php?script_id=356">dbext</a> plugin which provides support for Oracle databases, but the main problem with it is that it doesn't maintain a persistent connection to the database. In dbext, when you are going to execute something, sqlplus is launched, a new connection is made, the statement is handed over, the results are fetched into VIM and then sqlplus is closed. This cycle is restarted on every statement execution, which is quite expensive, slow and does not preserve your transactional context. Anyway, dbext is a tool designed for many databases, not just Oracle and, personally, I don't like generic database tools.Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.com6tag:blogger.com,1999:blog-36335310.post-60652268833295666752010-05-26T03:29:00.000-07:002010-05-26T11:34:09.162-07:00SqlPlus InjectionEven though it might look stupid at first sight, you may be hacked by a colleague in a very rude way. Suppose one developer asks you to create a new user for an upcoming system. Because he's a nice guy, he also hands you a simple script which creates this user along with all the required grants. 
Of course, even if you like your colleague and appreciate his effort, you carefully inspect that script before running it. Let's see a preview of this script in a plain vim window:<br />
<br />
<div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNtRiIl_NfZyQ0J8OxtCQm71PHy7JuSUllMugudaJwh8UgmrAw6eqkIm49VlilnlT5_sLFv48ziL4t4LvYdzswm7KuY1WMYa0fDQfTD3aE3H72sJX7AQdbL8l8x_foa4lEXrPm/s1600/sqlplus_hack.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNtRiIl_NfZyQ0J8OxtCQm71PHy7JuSUllMugudaJwh8UgmrAw6eqkIm49VlilnlT5_sLFv48ziL4t4LvYdzswm7KuY1WMYa0fDQfTD3aE3H72sJX7AQdbL8l8x_foa4lEXrPm/s400/sqlplus_hack.png" width="380" /></a></div><br />
Oookey! The script has nice comments, nothing unusual... You run it in your sqlplus SYS session and... BANG! your SYSTEM user is compromised and you won't even know it. If you still have the WTF face, then look again.<br />
The catch is in the last comment. We used to think that, in sqlplus, a multiline comment starts with a /* (and because sqlplus is quite picky, it has to be followed by a space or CR) and then everything up to the closing */ is taken as a comment. This assumption is wrong because, in sqlplus, a # at the very beginning of a line means "execute the command on that line". In fact, it doesn't have to be #, but this is the symbol configured by default for the sqlprefix setting. Just check it out:<br />
<br />
<pre>SQL> show sqlprefix
sqlprefix "#" (hex 23)</pre>However, we are simply fooled by our editor which, with its nice code highlighting feature, just marked our comments accordingly. Of course, it doesn't know anything about the sqlplus "sqlprefix" setting. So, before running any third-party scripts you should carefully look at them, even at the comments.Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.com3tag:blogger.com,1999:blog-36335310.post-89005640790593729392010-05-02T23:22:00.000-07:002010-05-04T00:01:52.256-07:00Autobackup CF with Flash Recovery AreaIn our office we have a 10g RAC database. It has a flash recovery area enabled, which points to an ASM disk. Nothing special I would say... However, from time to time, our nightly backup script simply fails complaining that it can't find some obsolete backups which should be deleted:<br /><pre><br />RMAN-06207: WARNING: 4 objects could not be deleted for DISK channel(s) due<br />RMAN-06208: to mismatched status. Use CROSSCHECK command to fix status<br />RMAN-06210: List of Mismatched objects<br />RMAN-06211: ==========================<br />RMAN-06212: Object Type Filename/Handle<br />RMAN-06213: --------------- ---------------------------------------------------<br />RMAN-06214: Backup Piece /u01/app/oracle/product/10.2.0/db_1/dbs/c-24173594-20100427-00<br />RMAN-06214: Backup Piece /u01/app/oracle/product/10.2.0/db_1/dbs/c-24173594-20100427-01<br />RMAN-06214: Backup Piece /u01/app/oracle/product/10.2.0/db_1/dbs/c-24173594-20100428-00<br />RMAN-06214: Backup Piece /u01/app/oracle/product/10.2.0/db_1/dbs/c-24173594-20100428-01<br /></pre><br />That's weird! All those backup pieces are controlfile autobackups. RMAN looks for them in a local filesystem and, this being a RAC database, those files are, obviously, accessible from just one node. But how? They were supposed to be placed in our shared storage, in the FRA to be more precise. 
Well, let's look once again to our settings:<br /><pre><br />SQL> show parameter recov<br /><br />NAME TYPE VALUE<br />------------------------------------ ----------- ------------------------------<br />db_recovery_file_dest string +DG1<br />db_recovery_file_dest_size big integer 150000M<br />recovery_parallelism integer 0<br /></pre><br />Okey, it's clear we have a FRA! What about RMAN settings?<br /><pre><br />RMAN> show all;<br /><br />using target database control file instead of recovery catalog<br />RMAN configuration parameters are:<br />CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 2 DAYS;<br />CONFIGURE BACKUP OPTIMIZATION OFF; # default<br />CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default<br />CONFIGURE CONTROLFILE AUTOBACKUP ON;<br />CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';<br />CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;<br />CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default<br />CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default<br />CONFIGURE MAXSETSIZE TO UNLIMITED; # default<br />CONFIGURE ENCRYPTION FOR DATABASE OFF; # default<br />CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default<br />CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default<br />CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/10.2.0/db_1/dbs/snapcf_fd1.f'; # default<br /></pre><br />It looks good... the autobackup format for controlfile is '%F' which is the default one, right? The <a href="http://download.oracle.com/docs/cd/B19306_01/backup.102/b14194/rcmsynta015.htm#i1017491" target="_blank">documentation</a> proves that:<br /><br /><em>The default location for the autobackup on disk is the flash recovery area (if configured) or a platform-specific location (if not configured). RMAN automatically backs up the current control file using the default format of %F.</em><br /><br />Okey, we have a flash recovery area and a %F default autobackup format... WTF? 
Well, the answer is given by the 338483.1 metalink note. Apparently, there is a big difference between explicitly setting the autobackup format to its default value and leaving (or resetting) it at its default... Interesting, huh? It is... So, if you explicitly set the autobackup format to %F, the autobackup file will go to an OS-specific location, which on Linux is $?/dbs. But if the autobackup format is at its default (explicitly reset, or never set at all) and you have a FRA configured, then the autobackup file will actually go to the FRA.<br />So, in my case the solution was simple (please notice the "# default" marker): <br /><pre><br />RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK CLEAR; <br /><br />old RMAN configuration parameters:<br />CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';<br />RMAN configuration parameters are successfully reset to default value<br /><br />RMAN> show controlfile autobackup format;<br /><br />RMAN configuration parameters are:<br />CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default<br /></pre><br />Ooookey, really really unintuitive... I think the Oracle documentation should be more precise regarding this.Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.com2tag:blogger.com,1999:blog-36335310.post-49768315427565970392010-03-19T06:53:00.000-07:002010-03-19T09:16:05.094-07:00When having a rman retention policy based on REDUNDANCY is a bad idea...Suppose you have a RMAN retention policy of "REDUNDANCY 2". 
This means that as long as you have at least two backups of the same datafile, controlfile/spfile or archivelog, the older backups become obsolete and RMAN is allowed to safely remove them.<br /><br />Now, let's also suppose that every night you back up your database using the following script:<br /><pre>CONFIGURE CONTROLFILE AUTOBACKUP ON;<br />run {<br />  backup database plus archivelog;<br />  delete noprompt obsolete redundancy 2;<br />}</pre><br />The backup task is quite simple: first of all it ensures that we have the controlfile autobackup feature on, then it backs up the database and archivelogs and, at the end, it deletes all obsolete backups using the REDUNDANCY 2 retention policy.<br />Using the above approach you might think that you can restore your database as it was two days ago, right? For example, if you have a backup taken on Monday and another one taken on Tuesday you may restore your database as it was within the (Monday_last_backup - Today) time interval. Well, that's wrong!<br /><br />Consider the following scenario:<br />1. On Monday night you back up the database using the above script;<br />2. On Tuesday, during the day, you drop a tablespace. Because this is a structural database change, a controlfile autobackup will be triggered. Ieeei, you have a new controlfile backup.<br />3. On Tuesday night you back up the database again... nothing unusual, right? <br /><br />Well, the tricky part concerns the DELETE OBSOLETE command. When the backup script runs this command, RMAN finds three controlfile backups: one originating from the Monday backup, one from the structural change and the third from our just-finished Tuesday backup database command. Now, according to the retention policy of "REDUNDANCY 2", RMAN will assume that it is safe to delete the controlfile backup taken on Monday night because it's outside our retention policy and because it is the oldest one. Uuups... 
this means we are going to have a big problem restoring the database as it was before our structural change, because we don't have a controlfile backup from that time.<br /><br />So, if you intend to perform an incomplete recovery of your database to a previous point in time, it's really a good idea to switch to a retention policy based on a "RECOVERY WINDOW" instead. In our case a RECOVERY WINDOW OF 2 DAYS would be more appropriate.Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.com5tag:blogger.com,1999:blog-36335310.post-85600239777384141872010-02-28T11:54:00.000-08:002010-05-26T01:20:17.647-07:00PLSQL "All or Nothing" PitfallTransactions are such a common thing when working with databases. They act on an "all or nothing" basis, that is, they succeed or fail, but they should always leave the database in a consistent state. Of course, in Oracle databases the rules are the same, but the interesting part I want to discuss relates to PL/SQL modules (procedures, functions or packages). <br /><br />A PL/SQL module is some kind of "all or nothing" component. If a procedure fails, it rolls back the uncommitted work it has done. Suppose we have the following procedure:<br /><pre class="brush: sql">CREATE OR REPLACE PROCEDURE test AS <br />BEGIN<br /> insert into yyy values (1);<br /> raise_application_error(-20000, 'I am a cute error!');<br />END test;</pre><br />Let's see what happens:<br /><pre class="brush: sql">SQL> truncate table yyy;<br /><br />Table truncated.<br /><br />SQL> exec test;<br />BEGIN test; END;<br /><br />*<br />ERROR at line 1:<br />ORA-20000: I am a cute error!<br />ORA-06512: at "TALEK.TEST", line 4<br />ORA-06512: at line 1<br /><br /><br />SQL> select * from yyy;<br /><br />no rows selected</pre><br /><br />Nice... we didn't explicitly roll back, but Oracle was smart enough to do the cleanup job for us. 
This makes sense and proves that PL/SQL modules are, in a way, "all or nothing" components.<br /><br />Now, let's say we have an Oracle job which calls our "test" procedure and, if an error occurs, has to log it into another table. A possible implementation of the job's PL/SQL caller block might be:<br /><br /><pre class="brush: sql">begin<br /> test;<br />exception<br /> when others then<br /> insert into log values (dbms_utility.format_error_stack);<br /> commit;<br /> raise;<br />end;<br />/</pre><br /><br />The above code may seem harmless: the test procedure is called and, if it raises an error, the exception part of the PL/SQL caller block is executed, which inserts the error into our log table. Of course, we commit the log entry we just inserted and re-raise the original error. We know that if the test procedure fails it rolls back its uncommitted work, as we saw above. After all, it's an "all or nothing" piece, right? Well, here's the pitfall: if you catch the exception, the procedure which raised the error does not clean up anything as long as you are within the EXCEPTION section. Even though the whole anonymous block will fail because the original error is re-raised, the COMMIT statement from the EXCEPTION section will actually commit the incomplete work done by our "TEST" procedure. So, in most cases you should look twice at such EXCEPTION WHEN ... THEN ... COMMIT constructs... otherwise you may end up with nasty bugs. In the above example, in order to avoid this problem, a ROLLBACK should be performed before logging the error. 
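For instance, a minimal corrected sketch of the caller block (using the same LOG table as above) could be:<br /><pre class="brush: sql">begin<br /> test;<br />exception<br /> when others then<br /> rollback; -- discard the uncommitted work left behind by TEST<br /> insert into log values (dbms_utility.format_error_stack);<br /> commit; -- now commits only the log entry<br /> raise;<br />end;<br />/</pre><br />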
Of course, there are smarter logging solutions which use autonomous transactions, but the goal here was just to reveal the pitfall.Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.com0tag:blogger.com,1999:blog-36335310.post-85131302637322675062010-02-24T02:12:00.000-08:002010-02-25T04:15:45.686-08:00INS-32018 Warning for Standalone ServerWhen it comes to installing Oracle you should always follow the procedures written in the installation guides. As you already know, Oracle 11.2 packages ASM within a new, separate component called Oracle Grid Infrastructure. So, if you want to place the database files into ASM, you must install Grid Infrastructure. As a good practice, Oracle recommends installing it under a different user, typically named "grid".<br />As far as the OFA directory structure is concerned, the installation guide recommends:<br /><ul><br /><li>creating a "/u01/app/grid" directory to be used as the ORACLE_BASE for this "grid" user;</li><br /><li>creating a "/u01/app/11.2.0/grid" directory to be used as the ORACLE_HOME for this "grid" user.</li><br /></ul><br />If you're like me, the above configuration looks a little bit weird, because I used to think that the ORACLE_HOME should be somewhere under the ORACLE_BASE directory. Nevertheless, the documentation clearly states the following:<br /><br /><b>Caution:</b><br/><br /><i>For grid infrastructure for a cluster installations, the Grid home must not be placed under one of the Oracle base directories, or under Oracle home directories of Oracle Database installation owners, or in the home directory of an installation owner. During installation, ownership of the path to the Grid home is changed to root. This change causes permission errors for other installations.</i><br /><br />However, the above applies only to cluster installations. 
If you just want ASM installed for a single-instance database, then it's fine (and recommended) to place the ORACLE_HOME under the ORACLE_BASE. If you don't, you'll get the following warning:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgMTZM-l-9r6EX_bOzOX9OOQg11mgYLPtM8iVTO_qlI-qKnXLUZ3STvPC2RU2DHuIkIxCV8pj624Xk7irbQtde0o-n0IaNxvfIpeGxVFyAUi884pqBSNrozX6TqBT7LX7auQy-L/s1600-h/ins-32018.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 239px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgMTZM-l-9r6EX_bOzOX9OOQg11mgYLPtM8iVTO_qlI-qKnXLUZ3STvPC2RU2DHuIkIxCV8pj624Xk7irbQtde0o-n0IaNxvfIpeGxVFyAUi884pqBSNrozX6TqBT7LX7auQy-L/s320/ins-32018.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5441757320885301042" /></a><br /><br />So, to sum up the above ideas, remember that if you are going to install a RAC, you need to create the grid ORACLE_HOME outside the ORACLE_BASE of any Oracle software owner. If you choose to install the Oracle Grid Infrastructure for a standalone server, then the ORACLE_HOME of the grid user should be under its ORACLE_BASE.Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.com2tag:blogger.com,1999:blog-36335310.post-35045380457153329802010-02-17T00:47:00.000-08:002010-02-22T02:25:27.953-08:00ALL_TABLES versus ALL_ALL_TABLESIf you ever wondered what the difference is between ALL_TABLES and ALL_ALL_TABLES, then here's the answer: both views list the tables to which the current user has access but, in addition to the tables returned by ALL_TABLES, ALL_ALL_TABLES also returns all object tables (system generated or not) accessible by the current user.<br /><br />Note that this may come up as an interview question (e.g. how can you get all tables you have access to?) 
and you may leave a good impression if you respond with another question: "Do you also want object tables to be included?". :)Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.com1tag:blogger.com,1999:blog-36335310.post-58003208659906070332009-12-16T12:05:00.000-08:002010-05-26T10:58:58.821-07:00A DDL statement may fire a DML triggerMaybe you know this, maybe you don't. Because it's not quite obvious, it deserves a little attention. We all know about DML triggers. Remember? Yea, yea... the before/after insert/update/delete for each row triggers. We tend to think that the INSERT, UPDATE or DELETE statements fire the corresponding triggers (of course, if any are defined). That's true with one (as far as I know) important note: a DDL statement which adds a new column with a default value will also fire the UPDATE trigger.<br /><br />For example, let's create a dummy table:<br /><pre class="brush: sql">SQL> create table muc (col1 integer primary key, modify_date timestamp);<br /><br />Table created.</pre><br />Then, the corresponding trigger:<br /><pre class="brush: sql">SQL> create or replace trigger trg_muc_mod_dt before update on muc for each row<br />2 begin<br />3 :new.modify_date := systimestamp;<br />4 end;<br />5 /</pre><br />Add some records:<br /><pre class="brush: sql">SQL> insert into muc values (1, systimestamp);<br /><br />1 row created.<br /><br />SQL> insert into muc values (2, systimestamp);<br /><br />1 row created.<br /><br />SQL> commit;</pre><br />We end up having:<br /><pre class="brush: sql">SQL> select * from muc;<br /><br /> COL1 MODIFY_DATE<br />---------- ------------------------------<br /> 1 16-DEC-09 09.54.03.804223 PM<br /> 2 16-DEC-09 09.54.41.815575 PM</pre><br />Now, the moment of truth:<br /><pre class="brush: sql">SQL> alter table muc add (active integer default '0');<br /><br />Table altered.<br /><br />SQL> select * from muc;<br /><br /> COL1 MODIFY_DATE ACTIVE<br />---------- ------------------------------ 
----------<br /> 1 16-DEC-09 09.55.53.836113 PM 0<br /> 2 16-DEC-09 09.55.53.840896 PM 0</pre><br />Take a look at the MODIFY_DATE and see the new timestamp. The update trigger was invoked in response to our DDL statement. This is important to know. Think of a deposit table which has a column named LAST_UPDATED and a trigger which updates it whenever something within a deposit changes. Now, suppose the business logic dictates that a new column must be added with a default value. You run the DDL statement to add that column and... suddenly, all information regarding when a particular deposit was last updated is lost. Oops. So, I should write down one hundred times: "Think twice before adding new columns with default values on a table with UPDATE triggers".Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.com1tag:blogger.com,1999:blog-36335310.post-88677415451328545192009-11-29T11:19:00.000-08:002010-02-22T02:26:36.661-08:00Strange RMAN snapshot controlfile issueA strange thing happened today. I executed a DELETE OBSOLETE command at my RMAN prompt and it reported the snapshot controlfile as obsolete. I don't know under which circumstances this problem occurs and I couldn't find any relevant information on forums or Metalink (oh! sorry, "My Oracle Support") about this.<br /><br />Below is the output of the DELETE OBSOLETE command:<br /><pre>RMAN> delete obsolete;<br /><br />RMAN retention policy will be applied to the command<br />RMAN retention policy is set to redundancy 1<br />using channel ORA_DISK_1<br />using channel ORA_DISK_2<br />Deleting the following obsolete backups and copies:<br />Type Key Completion Time Filename/Handle<br />-------------------- ------ ------------------ --------------------<br />Control File Copy 36 29-11-2009 12:35:33 /u01/app/oracle/product/11.2.0/<br /> dbhome_1/dbs/snapcf_tetris.f<br /><br />Do you really want to delete the above objects (enter YES or NO)? 
y<br />RMAN-00571: ===========================================================<br />RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============<br />RMAN-00571: ===========================================================<br />RMAN-03009: failure of delete command on ORA_DISK_2 channel at 11/29/2009 21:11:16<br />ORA-19606: Cannot copy or restore to snapshot control file</pre><br /><br />Indeed, this is the default configured snapshot controlfile:<br /><pre>RMAN> show snapshot controlfile name; <br /><br />RMAN configuration parameters for database with db_unique_name TETRIS are:<br />CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/11.2.0/<br />dbhome_1/dbs/snapcf_tetris.f';</pre><br />It seems I'm in a kind of deadlock here. The snapshot controlfile is reported as obsolete, but it can't be deleted because it is used by RMAN. The only solution I found was to change the RMAN configuration to use another snapshot controlfile, then to remove the reported obsolete one, and to switch back to the default. However, the question remains: why is the snapshot controlfile reported as obsolete?<br /><br />PS: This happened on an 11gR2 database installed on a Linux x86 platform.<br /><br /><strong>Update:</strong> Apparently this is encountered after executing a DUPLICATE database from ACTIVE DATABASE. Furthermore, the snapshot controlfile is reported as a "datafile copy" when a CROSSCHECK is suggested. 
See below:<br /><pre>RMAN> delete obsolete; <br /><br />RMAN retention policy will be applied to the command<br />RMAN retention policy is set to redundancy 1<br />using channel ORA_DISK_1<br />using channel ORA_DISK_2<br />Deleting the following obsolete backups and copies:<br />Type Key Completion Time Filename/Handle<br />-------------------- ------ ------------------ --------------------<br />Control File Copy 40 30-11-2009 18:41:15 /u01/app/oracle/product/11.2.0/dbhome_1<br /> /dbs/snapcf_tetris.f<br /><br />Do you really want to delete the above objects (enter YES or NO)? y <br /><br />RMAN-06207: WARNING: 1 objects could not be deleted for DISK channel(s) due<br />RMAN-06208: to mismatched status. Use CROSSCHECK command to fix status<br />RMAN-06210: List of Mismatched objects<br />RMAN-06211: ==========================<br />RMAN-06212: Object Type Filename/Handle<br />RMAN-06213: --------------- ---------------------------------------------------<br />RMAN-06214: Datafile Copy /u01/app/oracle/product/11.2.0/dbhome_1/dbs/snapcf_tetris.f</pre><br />Obviously, that can't be a datafile copy. 
So, let's try a crosscheck as suggested:<br /><pre>RMAN> crosscheck datafilecopy '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/snapcf_tetris.f'; <br /><br />using target database control file instead of recovery catalog<br />allocated channel: ORA_DISK_1<br />channel ORA_DISK_1: SID=148 device type=DISK<br />allocated channel: ORA_DISK_2<br />channel ORA_DISK_2: SID=140 device type=DISK<br />RMAN-00571: ===========================================================<br />RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============<br />RMAN-00571: ===========================================================<br />RMAN-03002: failure of crosscheck command at 11/30/2009 19:09:43<br />RMAN-20230: datafile copy not found in the repository<br />RMAN-06015: error while looking up datafile copy name: /u01/app/oracle/product/11.2.0<br />/dbhome_1/dbs/snapcf_tetris.f</pre><br />Okay, this was expected, as I don't have any datafile copy with that name, despite what RMAN says. So, let's try a crosscheck for the controlfile copy:<br /><pre>RMAN> crosscheck controlfilecopy '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/snapcf_tetris.f'; <br /><br />released channel: ORA_DISK_1<br />released channel: ORA_DISK_2<br />allocated channel: ORA_DISK_1<br />channel ORA_DISK_1: SID=148 device type=DISK<br />allocated channel: ORA_DISK_2<br />channel ORA_DISK_2: SID=140 device type=DISK<br />validation failed for control file copy<br />control file copy file name=/u01/app/oracle/product/11.2.0/dbhome_1/dbs/snapcf_tetris.f <br />RECID=40 STAMP=704313675<br />Crosschecked 1 objects</pre><br />As can be seen, the validation fails, although the file exists at that location:<br /><pre>$ ls -al /u01/app/oracle/product/11.2.0/dbhome_1/dbs/snapcf_tetris.f<br />-rw-r----- 1 oracle oinstall 10436608 Nov 30 18:57 /u01/app/oracle/product/11.2.0/dbhome_1/dbs/snapcf_tetris.f</pre><br />I don't know if this is documented somewhere, but it looks to me like a bug. 
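For the record, the workaround described earlier (point RMAN at another snapshot controlfile, delete the obsolete one, then switch back) translates into something along these lines; the temporary file name is just an example:<br /><pre>RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/backup/snapcf_tmp.f';<br /><br />RMAN> delete noprompt obsolete;<br /><br />RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME CLEAR;</pre><br />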
No idea why the snapshot control file is messed up after a DUPLICATE TARGET DATABASE ... FROM ACTIVE DATABASE.Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.com9tag:blogger.com,1999:blog-36335310.post-74396828970624965362009-11-27T13:08:00.000-08:002010-02-22T02:27:13.014-08:00TSPITR to recover a dropped tablespaceA nice feature of Oracle 11gR2 is the ability to recover a dropped tablespace using TSPITR. Of course, in order to succeed, you need valid backups. Let's test this! First of all, just to be on the safe side, take a fresh backup of the database:<br /><pre>BACKUP DATABASE PLUS ARCHIVELOG;</pre><br />Then, supposing you have a "MUCI" tablespace, simply drop it:<br /><pre>drop tablespace MUCI including contents;</pre><br />Let's try to recover the "MUCI" tablespace. You'll need the nearest timestamp or SCN before the tablespace was dropped.<br /><br />If you are tempted to use fully automatic TSPITR, then be prepared for trouble. This is what happened to me when I tried it:<br /><pre>RMAN> recover tablespace muci until scn 2240386 auxiliary destination '/u01/app/backup';<br /><br />...<br /><br />RMAN-00571: ===========================================================<br />RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============<br />RMAN-00571: ===========================================================<br />RMAN-03002: failure of recover command at 11/27/2009 21:57:13<br />RMAN-06965: Datapump job has stopped<br />RMAN-06961: IMPDP> Job "SYS"."TSPITR_IMP_hilc" stopped due to fatal error at 21:57:09<br />RMAN-06961: IMPDP> ORA-39123: Data Pump transportable tablespace job aborted<br />ORA-01565: error in identifying file '/u01/app/oracle/oradata/TETRIS/datafile/o1_mf_muci_5k0bwdmb_.dbf'<br />ORA-27037: unable to obtain file status<br />Linux Error: 2: No such file or directory<br />Additional information: 3</pre><br /><br />I googled it and found <a 
href="http://ocpdba.wordpress.com/2009/09/17/recovering-a-dropped-tablespace-with-tspitr-in-11gr2/">this</a> post, which recommends dropping the tablespace without "AND DATAFILES" but, in my case, it didn't work.<br />Nevertheless, setting a new name for the datafile which belonged to the dropped tablespace did the job.<br /><pre>RMAN> run {<br />2> set newname for datafile 6 to new;<br />3> recover tablespace muci until scn 2240386 auxiliary destination '/u01/app/backup';<br />4> }</pre><br />A direct consequence of this in 11gR2 is that you can apply TSPITR multiple times to the same tablespace without using a recovery catalog. If you have chosen a wrong SCN and you have already brought the recovered tablespace ONLINE, then you can simply drop it and try again with another SCN.<br /><br />Awesome!Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.com0tag:blogger.com,1999:blog-36335310.post-24261156749613408372009-11-27T11:02:00.000-08:002010-05-26T11:06:49.731-07:00Annoying Tablespaces QuotasThere's one thing about tablespace quotas which I really don't like. If I allocate quota on a tablespace to a user and then I drop that tablespace, the quota is not automatically revoked. It can still be seen in the DBA_TS_QUOTAS view, but with the DROPPED column set to YES. However, if I afterwards create a tablespace with the same name as the one previously dropped, the old quota is auto-magically reactivated on this new tablespace, which might not be my intention. Let's see it in action:<br /><br />1. first of all, let's create a dummy tablespace:<br /><pre class="brush: sql">SQL> create tablespace test_tbs datafile size 20M; <br /><br />Tablespace created.</pre><br />2. let's also create a user and grant quota on the TEST_TBS tablespace:<br /><pre class="brush: sql">SQL> create user gogu identified by xxx quota unlimited on users; <br /><br />User created.<br /><br />SQL> alter user gogu quota unlimited on test_tbs; <br /><br />User altered.</pre><br />3. 
take a look at quotas:<br /><pre class="brush: sql">SQL> select * from dba_ts_quotas where username='GOGU'; <br /><br />TABLESPACE_NAME USERNAME BYTES MAX_BYTES BLOCKS MAX_BLOCKS DRO<br />--------------- --------------- ---------- ---------- ---------- ---------- ---<br />USERS GOGU 0 -1 0 -1 NO<br />TEST_TBS GOGU 0 -1 0 -1 NO</pre><br />4. now drop the TEST_TBS tablespace and look again at quotas:<br /><pre class="brush: sql">SQL> drop tablespace test_tbs including contents and datafiles; <br /><br />Tablespace dropped.<br /><br />SQL> select * from dba_ts_quotas where username='GOGU'; <br /><br />TABLESPACE_NAME USERNAME BYTES MAX_BYTES BLOCKS MAX_BLOCKS DRO<br />--------------- --------------- ---------- ---------- ---------- ---------- ---<br />USERS GOGU 0 -1 0 -1 NO<br />TEST_TBS GOGU 0 -1 0 -1 YES</pre><br />Just notice that the DROPPED column is now set to YES for the TEST_TBS tablespace. This I don't like, and if I try to revoke the quota, Oracle complains that it doesn't know anything about the TEST_TBS tablespace. <br /><pre class="brush: sql">SQL> alter user gogu quota 0 on test_tbs; <br />alter user gogu quota 0 on test_tbs<br />*<br />ERROR at line 1:<br />ORA-00959: tablespace 'TEST_TBS' does not exist</pre><br />Obvious, but then why preserve that quota in DBA_TS_QUOTAS anyway?<br /><br />5. Let's recreate the TEST_TBS tablespace and then look at quotas:<br /><pre class="brush: sql">SQL> create tablespace test_tbs datafile size 20M; <br /><br />Tablespace created.<br /><br />SQL> select * from dba_ts_quotas where username='GOGU'; <br /><br />TABLESPACE_NAME USERNAME BYTES MAX_BYTES BLOCKS MAX_BLOCKS DRO<br />--------------- --------------- ---------- ---------- ---------- ---------- ---<br />USERS GOGU 0 -1 0 -1 NO<br />TEST_TBS GOGU 0 -1 0 -1 NO</pre><br />See how the "DROPPED" column is now back to "NO". But wait... this TEST_TBS tablespace is a new tablespace which just happens to be named like an old, dropped tablespace. Bleah... 
ugly!<br /><br />So, this boils down to the conclusion that when you are about to drop a tablespace, it is a good idea to check the quotas allocated to users and revoke them before dropping it. Otherwise they will remain in DBA_TS_QUOTAS and they'll be reactivated when a tablespace with the same name is created. Furthermore, I don't know how you can get rid of them if the tablespace no longer exists. Of course, you can create a dummy tablespace with the same name, revoke the quotas and then drop the dummy tablespace, but this is an awful workaround.<br /><br /><strong>Update:</strong> Yet, I see an advantage of the above behaviour. In 11gR2 you can recover a dropped tablespace with TSPITR. After the TSPITR successfully completes and the dropped tablespace is recovered, the old quotas are also reactivated, which is a good thing for the users who had objects in that tablespace.Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.com0tag:blogger.com,1999:blog-36335310.post-63945999312089979612009-11-18T00:54:00.000-08:002010-02-22T02:28:18.609-08:00Do archivelogs become obsolete if they contain blocks from a BEGIN BACKUP operation?Of course, not every possible case is described in the docs, therefore some of them simply have to be tried. So, today I was wondering what would happen if I left a tablespace in BEGIN BACKUP mode and continued to back up the database using:<br /><pre>RUN {<br /> BACKUP DATABASE PLUS ARCHIVELOG; <br /> DELETE NOPROMPT OBSOLETE;<br />}</pre><br />As you already know, if a tablespace is put in BEGIN BACKUP mode, then subsequent changes will force entire blocks (not just the change vectors) to be written into the redo logs, which will eventually be archived. My main concern here was regarding the DELETE OBSOLETE command. Is RMAN smart enough to know that those archives are not going to become obsolete as long as the BEGIN BACKUP status is in place? 
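The test itself can be sketched as follows (USERS is just an example tablespace; the BEGIN BACKUP status stays in place while the backup script above runs):<br /><pre>SQL> alter tablespace users begin backup;<br /><br />Tablespace altered.<br /><br />-- make some changes to objects in USERS, then run the backup<br />-- script above; afterwards, check what RMAN considers obsolete:<br /><br />RMAN> report obsolete;</pre><br />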
After some tests I can conclude: RMAN knows this and will NOT consider those archives obsolete. This was kind of obvious but, you know... it's always good to try and see with your own eyes.Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.com0tag:blogger.com,1999:blog-36335310.post-84717931813120495772009-11-15T03:43:00.000-08:002010-05-26T11:08:45.774-07:00High Availability Guaranteed Restore PointsI like the Flashback Database feature introduced in Oracle 10g, and especially the guaranteed restore points. We intended to use it on our 10g production database to create guaranteed restore points before major upgrades, but without enabling flashback logging, because we wanted to affect overall database performance as little as possible. What really bothered me at the time was that I couldn't create my first guaranteed restore point while the database was open which, in my opinion, hurts the high availability goal. The solution was to already have at least one guaranteed restore point before creating the next ones with the database open, but this always felt like an ugly workaround. <br />So, the question is: what do you choose: the overhead of always having flashback logging enabled, or the downtime produced by the creation of the first guaranteed restore point? Hmmm... <br />Like many new Oracle features, they tend to be a little unpolished when first launched, but they become better and better. The same here. 
I've just had a nice surprise to see that in 11gR2 (I don't know if it's also in R1) you can create guaranteed restore points with the database open, without flashback logging enabled and without any previous guaranteed restore points:<br /><pre class="brush: sql"><br />SQL> select name from v$restore_point; <br /><br />no rows selected<br /><br />SQL> select flashback_on from v$database; <br /><br />FLASHBACK_ON<br />------------------<br />NO<br /><br />SQL> select status from v$instance; <br /><br />STATUS<br />------------<br />OPEN<br /><br />SQL> create restore point before_upgrade guarantee flashback database; <br /><br />Restore point created.<br /><br />SQL> select name from v$restore_point; <br /><br />NAME<br />---------------------------------------------------<br />BEFORE_UPGRADE</pre><br /><br />Nice and good to know.Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.com0tag:blogger.com,1999:blog-36335310.post-17750929539352960882009-11-01T11:07:00.000-08:002009-11-01T12:19:01.663-08:00RMAN Retention Policy with Corrupted BackupsI always assumed that RMAN is smart enough to take care of my database's obsolete backups. I give it the retention policy and that's it: whenever I invoke the DELETE OBSOLETE command, RMAN will identify the backups outside the scope of my retention policy and safely delete them. Nevertheless, there is at least one big exception: when the backup just taken is corrupted.<br /><br />The following is quite self-explanatory. 
Let's assume we have a retention policy of redundancy 1 and we take a new backup of the database.<br /><br /><pre>RMAN> backup database;<br /><br />Starting backup at 01-11-2009 11:20:53<br />using channel ORA_DISK_1<br />using channel ORA_DISK_2<br />channel ORA_DISK_1: starting compressed full datafile backup set<br />channel ORA_DISK_1: specifying datafile(s) in backup set<br /><br />...<br />channel ORA_DISK_1: backup set complete, elapsed time: 00:01:26<br />Finished backup at 01-11-2009 11:22:20</pre><br /><br />Now, we have two backups and, according to the configured retention policy, the previous one becomes obsolete. However, let's suppose that the backup we just took is corrupted. We can simulate this using dd (we're zeroing 1MB somewhere in between):<br /><br /><pre>dd if=/dev/zero of=o1_mf_nnndf_TAG20091101T232053_5gvyxpwt_.bkp bs=1M seek=10 count=1</pre><br /><br />Okay! As a good practice, it's nice to validate the backup using RESTORE VALIDATE, so let's do it:<br /><br /><pre>RMAN> restore validate database;<br /><br />Starting restore at 01-11-2009 11:30:10<br />using target database control file instead of recovery catalog<br />allocated channel: ORA_DISK_1<br />channel ORA_DISK_1: SID=42 device type=DISK<br />allocated channel: ORA_DISK_2<br />channel ORA_DISK_2: SID=37 device type=DISK<br /><br />channel ORA_DISK_1: starting validation of datafile backup set<br />channel ORA_DISK_2: starting validation of datafile backup set<br /><br />...<br /><br />ORA-19599: block number 1280 is corrupt in backup piece <br />/opt/oracle/app/oracle/flash_recovery_area<br />/VENUSDB/backupset/2009_11_01/o1_mf_nnndf_TAG20091101T232053_5gvyxpwt_.bkp<br /><br />channel ORA_DISK_2: piece handle=/opt/oracle/app/oracle/flash_recovery_area<br />/VENUSDB/backupset/2009_11_01/o1_mf_nnndf_TAG20091101T232053_5gvyxp3o_.bkp <br />tag=TAG20091101T232053<br />channel ORA_DISK_2: restored backup piece 1<br />channel ORA_DISK_2: validation complete, elapsed time: 
00:00:35<br />failover to previous backup<br /><br />...<br />Finished restore at 01-11-2009 11:31:13</pre><br /><br />As you can see, the RESTORE VALIDATE worked as expected. It identified the corrupted backup set and failed over to the previous valid one. However, what if at the end of the backup script there's a "delete noprompt obsolete" command?<br /><br /><pre>RMAN> delete noprompt obsolete;<br /><br />RMAN retention policy will be applied to the command<br />RMAN retention policy is set to redundancy 1<br />using channel ORA_DISK_1<br />using channel ORA_DISK_2<br />Deleting the following obsolete backups and copies:<br />Type Key Completion Time Filename/Handle<br />-------------------- ------ ------------------ --------------------<br />Archive Log 2 01-11-2009 10:40:27 /opt/oracle/app/oracle/flash_recovery_area<br />/VENUSDB/archivelog/2009_11_01/o1_mf_1_6_5gvwkv55_.arc<br />Backup Set 10 01-11-2009 11:19:57<br /> Backup Piece 10 01-11-2009 11:19:57 /opt/oracle/app/oracle/flash_recovery_area<br />/VENUSDB/backupset/2009_11_01/o1_mf_nnndf_TAG20091101T231814_5gvyrqdc_.bkp<br />Backup Set 9 01-11-2009 11:19:53<br /> Backup Piece 9 01-11-2009 11:19:53 /opt/oracle/app/oracle/flash_recovery_area<br />/VENUSDB/backupset/2009_11_01/o1_mf_nnndf_TAG20091101T231814_5gvyrqtm_.bkp<br />Backup Set 11 01-11-2009 11:20:04<br /> Backup Piece 11 01-11-2009 11:20:04 /opt/oracle/app/oracle/flash_recovery_area<br />/VENUSDB/autobackup/2009_11_01/o1_mf_s_701824802_5gvyw3h1_.bkp<br />deleted archived log<br />archived log file name=/opt/oracle/app/oracle/flash_recovery_area/VENUSDB/archivelog/<br />2009_11_01/o1_mf_1_6_5gvwkv55_.arc RECID=2 STAMP=701822427<br />deleted backup piece<br />backup piece handle=/opt/oracle/app/oracle/flash_recovery_area/VENUSDB/backupset/<br />2009_11_01/o1_mf_nnndf_TAG20091101T231814_5gvyrqdc_.bkp RECID=10 STAMP=701824695<br />deleted backup piece<br />backup piece handle=/opt/oracle/app/oracle/flash_recovery_area/VENUSDB/backupset/<br 
/>2009_11_01/o1_mf_nnndf_TAG20091101T231814_5gvyrqtm_.bkp RECID=9 STAMP=701824695<br />deleted backup piece<br />backup piece handle=/opt/oracle/app/oracle/flash_recovery_area/VENUSDB/autobackup/<br />2009_11_01/o1_mf_s_701824802_5gvyw3h1_.bkp RECID=11 STAMP=701824803<br />Deleted 4 objects</pre><br /><br />Oops! It just deleted our valid backup set. The proof:<br /><br /><pre>RMAN> restore validate database;<br /><br />Starting restore at 01-11-2009 11:35:03<br />using channel ORA_DISK_1<br />using channel ORA_DISK_2<br /><br />channel ORA_DISK_1: starting validation of datafile backup set<br />channel ORA_DISK_2: starting validation of datafile backup set<br /><br />...<br /><br />RMAN-00571: ===========================================================<br />RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============<br />RMAN-00571: ===========================================================<br />RMAN-03002: failure of restore command at 11/01/2009 23:35:40<br />RMAN-06026: some targets not found - aborting restore<br />RMAN-06023: no backup or copy of datafile 5 found to restore<br />RMAN-06023: no backup or copy of datafile 3 found to restore<br />RMAN-06023: no backup or copy of datafile 2 found to restore</pre><br /><br />I don't know if the above behavior is clearly mentioned in the Oracle backup and recovery documentation, but it should be taken into account when defining the backup and recovery strategy. Of course, a redundancy of 1 is not a setting to be used in production systems but, anyway, I would expect trouble even with a higher redundancy. 
In my opinion, it would be great if RMAN could somehow label the corrupted backups at the time RESTORE VALIDATE is invoked, and then take this into account when the retention policy is applied.<br /><br />Meanwhile, in order to avoid the above scenario within your backup scripts, it's advisable to group the RESTORE VALIDATE and DELETE NOPROMPT OBSOLETE commands within a RUN { ... } block. If the first command fails, the DELETE command will never be executed.Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.com0tag:blogger.com,1999:blog-36335310.post-18039062802251967842009-04-28T22:51:00.000-07:002010-02-22T02:30:26.271-08:00CREATE VIEW with FORCE does not workYesterday I loaded an Oracle dump into our 10.2.0.4 database... and guess what? Not all the views were created. I took a look into the impdp log and saw some errors complaining that "ORA-00980: synonym translation is no longer valid". So what? The CREATE VIEW statements were issued with the FORCE clause, therefore the views should have been created, right?<br /><br />Well, after some digging on Metalink I found <a href="https://metalink2.oracle.com/metalink/plsql/f?p=130:14:1113444823878836480::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_b">this</a>. It basically says that there is a(nother) bug and, according to its description: <code>create force view using a synonym for a table fails to create the view if the synonym is invalid.</code> The 10.2.0.3 and 10.2.0.4 databases are confirmed to be affected and this bug is supposed to be fixed in 10.2.0.5 and 11.2.<br /><br />In my case, the solution was to fix the synonym problem and then reimport just the views using the INCLUDE parameter of the impdp utility.Alexandru Ticăhttp://www.blogger.com/profile/12127407011884904816noreply@blogger.com1