
Tuesday, October 07, 2008

Remotely Connect to a RESTRICT Opened Database

Lately, I have had this subconscious mantra which basically says: don't believe everything the Oracle official docs say, but try to prove those facts! For example, one thing to try is starting an instance in restricted mode and then verifying what Oracle says in the Administration Guide 11g, "Starting Up a Database" chapter:

when the instance is in restricted mode, a database administrator cannot access the instance remotely through an Oracle Net listener, but can only access the instance locally from the machine that the instance is running on.

Let's try! On the server:

SQL> startup restrict
ORACLE instance started.

Total System Global Area 835104768 bytes
Fixed Size 2149000 bytes
Variable Size 595592568 bytes
Database Buffers 230686720 bytes
Redo Buffers 6676480 bytes
Database mounted.
Database opened.

On the client, using an admin user:

Enter user-name: admin@tbag
Enter password:
ORA-12526: TNS:listener: all appropriate instances are in restricted mode

What they forget to say here is that this behavior applies only with dynamic listener registration. If I explicitly specify the SID_LIST in my listener.ora file, then I can connect remotely without any problem.
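For reference, static registration means listing the instance explicitly in listener.ora. A minimal fragment might look like the one below (the SID name and ORACLE_HOME path are placeholders for my environment, not values taken from the listing above):

```
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = tbag)
      (ORACLE_HOME = /opt/oracle/product/11.1.0/db_1)
    )
  )
```

With such an entry in place, the listener considers the instance registered regardless of its restricted status, so the ORA-12526 check is bypassed.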

Thursday, June 12, 2008

Oracle Linux Date

If you ever need to get the current Linux time from Oracle, you might be interested in the following solution. First of all, the Linux epoch time is expressed as the number of seconds since 1970-01-01 00:00:00 UTC and can be obtained by using the date +'%s' command. For example:

oracle@oxg:~$ date +'%s'

From Oracle you can use the following custom function:

create or replace function current_linux_date return integer is
  l_crr_date timestamp(9) := sys_extract_utc(systimestamp);
  l_ref_date timestamp(9) := to_timestamp('01011970', 'ddmmyyyy');
  l_seconds  integer;
begin
  l_seconds := extract(day from (l_crr_date - l_ref_date)) * 24 * 3600 +
               extract(hour from (l_crr_date - l_ref_date)) * 3600 +
               extract(minute from (l_crr_date - l_ref_date)) * 60 +
               trunc(extract(second from (l_crr_date - l_ref_date)));
  return l_seconds;
end current_linux_date;
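As a sanity check of the interval arithmetic above, here is the same computation outside the database (a Python sketch, just for illustration, assuming a UTC wall-clock input):

```python
from datetime import datetime, timezone

def epoch_seconds(ts):
    """Seconds elapsed since 1970-01-01 00:00:00 UTC (mirrors the PL/SQL logic)."""
    ref = datetime(1970, 1, 1, tzinfo=timezone.utc)
    delta = ts - ref
    # days * 24 * 3600 plus the leftover seconds, exactly like the function above
    return delta.days * 24 * 3600 + delta.seconds

print(epoch_seconds(datetime(2008, 6, 12, tzinfo=timezone.utc)))  # 1213228800
```

The printed value is what date +'%s' would report at 2008-06-12 00:00:00 UTC.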

Now, you should get the same result from Oracle:

SQL> select current_linux_date from dual;


oracle@oxg:~$ date +'%s'

Have fun!

Monday, May 12, 2008

Profiling the new SIMPLE_INTEGER type

Oracle 11g comes with a new PL/SQL type called SIMPLE_INTEGER. The official documentation says that this type yields a significant performance gain compared to the PLS_INTEGER type. Because I wanted to see this with my own eyes, I decided to test it using another new 11g component, the hierarchical profiler, which I also wanted to see in action.

First of all, let’s setup the environment:

1. on the database server create a new directory to be used for creating profiler trace files:

oracle@obi:oracle$ mkdir profiler
oracle@obi:oracle$ chmod o-rx profiler/

2. create the DIRECTORY object in the database too, and grant read/write privileges to the testing user (in our case TALEK user):

SQL> create directory profiler_dir as '/opt/oracle/profiler';

Directory created.

SQL> grant read, write on directory profiler_dir to talek;

Grant succeeded.

3. grant execute privilege for DBMS_HPROF package to the TALEK user:

SQL> grant execute on dbms_hprof to talek;

Grant succeeded.

4. connect using TALEK user and create the following package (the only difference between the first and second approach is the type of the l_count variable):

create or replace package trash is

procedure approach_1;

procedure approach_2;

end trash;

create or replace package body trash is

  procedure approach_1 as
    l_count pls_integer := 0;
  begin
    for i in 1..10000 loop
      l_count := l_count + 1;
    end loop;
  end approach_1;

  procedure approach_2 as
    l_count simple_integer := 0;
  begin
    for i in 1..10000 loop
      l_count := l_count + 1;
    end loop;
  end approach_2;

end trash;

5. Profile the approaches:

SQL> exec dbms_hprof.start_profiling(location => 'PROFILER_DIR', filename => 'test.trc');

PL/SQL procedure successfully completed

SQL> exec trash.approach_1;

PL/SQL procedure successfully completed

SQL> exec trash.approach_2;

PL/SQL procedure successfully completed

SQL> exec dbms_hprof.stop_profiling;

PL/SQL procedure successfully completed

6. Analyze the generated trace file. For this we’ll use the "plshprof" command line utility.

oracle@obi:profiler$ plshprof -output report test.trc
PLSHPROF: Oracle Database 11g Enterprise Edition Release - Production
[8 symbols processed]
[Report written to 'report.html']

Aaaaand, the WINNER is:

TALEK.TRASH.APPROACH_1 -> 5713 (microseconds)
TALEK.TRASH.APPROACH_2 -> 100706 (microseconds)

Well… this is unexpected. According to the Oracle docs, SIMPLE_INTEGER should be faster. Ok, back to the official doc: "The new PL/SQL SIMPLE_INTEGER data type is a binary integer for use with native compilation which is neither null checked nor overflow checked". Ahaaa… native compilation! Let's check this:

SQL> show parameter plsql_code_type

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
plsql_code_type                      string      INTERPRETED

So, we have a first clue and a first conclusion: if the database doesn't use NATIVE compilation, the SIMPLE_INTEGER type is actually much slower.
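As a side note, "not overflow checked" means that SIMPLE_INTEGER arithmetic silently wraps around within the signed 32-bit range instead of raising a numeric overflow error, as PLS_INTEGER does. A quick sketch of that wraparound semantics (in Python, purely for illustration):

```python
def wrap32(n):
    """Reduce n to a signed 32-bit value, the way SIMPLE_INTEGER arithmetic wraps."""
    n &= 0xFFFFFFFF                       # keep the low 32 bits
    return n - 0x100000000 if n >= 0x80000000 else n

# Adding 1 to the maximum value wraps to the minimum instead of raising an error.
print(wrap32(2147483647 + 1))  # -2147483648
```

This is exactly the check the compiler gets to skip under native compilation, which is where the speed-up is supposed to come from.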

Let’s switch to native compilation. This can be easily done because the "plsql_code_type" parameter is dynamic:

SQL> alter system set plsql_code_type=native scope=both;

System altered.

It is important to recompile the package, because otherwise the old interpreted PL/SQL bytecode will be used (you can use "alter package trash compile plsql_code_type=native;"), and then repeat the profiler tests.

The new results are:

TALEK.TRASH.APPROACH_2 -> 3927 (microseconds)
TALEK.TRASH.APPROACH_1 -> 12556 (microseconds)

Now, the second approach with SIMPLE_INTEGER is much faster and, interestingly, the PLS_INTEGER approach is noticeably slower under native compilation compared with the same approach in the initial interpreted PL/SQL environment.

Okay, one more thing: I really enjoyed using the new 11g hierarchical profiler. From my point of view it is a big step forward compared with the old DBMS_PROFILER, and the HTML reports produced by "plshprof" are quite lovely.

Sunday, May 11, 2008

WTF is that? (ep. 1)

I've just decided to put here some posts about (...well, you know) Oracle WTF stories: those moments (...hours, and sometimes days) when, sitting at my desk, I just stare at that stupid SQLPLUS> prompt, unable to figure out what the hell is happening.

Today, episode 1:

The scenario is very simple. I have two nice users: TALEK and SIM. TALEK has a table and gives UPDATE rights to SIM.

SQL> connect talek
Enter password:

SQL> create table muci (col1 varchar2(10));

Table created.

SQL> insert into muci values ('abc')
2 /

1 row created.

SQL> commit;

Commit complete.

SQL> grant update on muci to sim;

Grant succeeded.

SQL> connect sim
Enter password:

SQL> update talek.muci set col1='xyz' where col1='abc';
update talek.muci set col1='xyz' where col1='abc'
ERROR at line 1:
ORA-01031: insufficient privileges

Of course, this is the WTF moment. Why did the UPDATE fail? The first thing to do is to check the DBA_TAB_PRIVS view for confirmation that the UPDATE privilege is still there. (I'm pretty sure that nobody was so fast as to revoke the granted privilege in the meantime but, just in case...)

SQL> select grantee, owner, table_name, privilege
  2  from dba_tab_privs where table_name='MUCI' and owner='TALEK';

GRANTEE OWNER TABLE_NAME PRIVILEGE
------- ----- ---------- ---------
SIM     TALEK MUCI       UPDATE

And yes, the privilege is there. Hmmm... what's next? Usually, the next thought is that another Oracle bug is making fun of me. But this sounds too scary to be true. Finally, the stupid answer comes to light.

SQL> show parameter sql92_security

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sql92_security                       boolean     TRUE

The database reference documentation says the following:
"The SQL92 standards specify that security administrators should be able to require that users have SELECT privilege on a table when executing an UPDATE or DELETE statement that references table column values in a WHERE or SET clause. SQL92_SECURITY specifies whether users must have been granted the SELECT object privilege in order to execute such UPDATE or DELETE statements."

With the above sql92_security parameter set, it is actually the "where col1='abc'" filter from the UPDATE statement that triggers the "insufficient privileges" error, not the UPDATE itself. Without a filter, the update executes as expected:

SQL> update talek.muci set col1='xyz';

1 row updated.

Ok, another lesson has been learned!

Friday, May 09, 2008

Obsolete Policy Violation

This is quite funny. Starting with 10g, the LOG_ARCHIVE_START parameter is obsolete. Nevertheless, in the "Database Control" of a newly installed 11g database I have a policy rule violation called "Use of Automatic Log Archival Features". The description of this policy rule says: "This policy ensures that archiving of redo logs is done automatically and prevents suspension of instance operations when redo logs fill. Only applicable if database is in archivelog mode".

Well, my database is already configured to run in archivelog mode:
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 15
Next log sequence to archive 17
Current log sequence 17

So, where is the problem? The action suggested by the "Database Control" console is "Set the value of the LOG_ARCHIVE_START initialization parameter to TRUE". Hmmm, that means setting an obsolete parameter... no, thanks!

Update: Actually, I discovered that there are a lot of irrelevant or obsolete policies for 11g, like "SGA_TARGET is not used" or "PGA_AGGREGATE_TARGET should be used", etc. It seems that many default settings of the "Database Control" do not take into consideration the actual version of the managed database; therefore you have to manually suppress these policy rule violations.

Thursday, March 27, 2008

TKPROF Everywhere

Lately, I was searching for a solution that lets developers "tkprof" their statements in the least intrusive way possible. A quite appealing solution I found was the one suggested by Tom Kyte in his "Effective Oracle by Design" book. It involves reading the user trace file from the server and returning the content of that trace through a pipelined table function. The result may be spooled into a file on the client side and, after that, "tkprof" may be executed against this local file. However, the problem is that I, personally, use the Oracle Instant Client and don't have tkprof on my local machine. Furthermore, I don't use sqlplus all the time, so I would really like to get the formatted tkprof output directly from a simple SQL SELECT statement.

The solution is quite simple: instead of returning the raw trace file, it's enough to run the tkprof utility on the server with the user dump trace file as a parameter and, finally, to return its output.
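Outside the database, the same run-the-tool-and-capture-its-output pattern can be sketched in a few lines of Python (the echo command below is just a stand-in for tkprof):

```python
import subprocess

def run_and_capture(cmd):
    """Run an external command and return its exit code and stdout lines
    (the same job Util.RunThis and the pipelined function do below)."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode, proc.stdout.splitlines()

rc, lines = run_and_capture(["echo", "hello"])
print(rc, lines)  # 0 ['hello']
```

Inside the database the "capture" part is done by reading the generated .prf file through a BFILE, as shown in the package body further down.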

In order to setup the needed environment we'll have to:

1. create the user which will own the "tkprof" framework:

grant create session, alter session, create procedure to tools identified by xxx;

2. grant additional privileges to this user:

grant select on sys.v_$process to tools;
grant select on sys.v_$session to tools;
exec dbms_java.grant_permission( 'TOOLS',
'SYS:java.lang.RuntimePermission', 'writeFileDescriptor', '' );
exec dbms_java.grant_permission( 'TOOLS',
'SYS:java.lang.RuntimePermission', 'readFileDescriptor', '' );

3. create the following Java source object. This is needed in order to execute an external program in the OS environment (thanks again, Tom; see the "Java procedure for host calls on Unix environment" question):

create or replace and compile java source named util as
import java.io.*;
import java.lang.*;

public class Util extends Object
{
    public static int RunThis(String args)
    {
        Runtime rt = Runtime.getRuntime();
        int rc = -1;

        try
        {
            Process p = rt.exec(args);

            int bufSize = 4096;
            BufferedInputStream bis =
                new BufferedInputStream(p.getInputStream(), bufSize);
            int len;
            byte buffer[] = new byte[bufSize];

            // Echo back what the program spit out
            while ((len = bis.read(buffer, 0, bufSize)) != -1)
                System.out.write(buffer, 0, len);

            rc = p.waitFor();
        }
        catch (Exception e)
        {
            rc = -1;
        }
        return rc;
    }
}
4. create the package which embeds the whole tracing logic:

create or replace package tkprof is

  /*
    Provides the tkprof tracing feature to developers. This package is
    supposed to be used in developing environments only.

    Required rights (the owner of this package is supposed to be TOOLS):

      grant alter session to tools;
      grant select on sys.v_$process to tools;
      grant select on sys.v_$session to tools;

      exec dbms_java.grant_permission( 'TOOLS', 'SYS:java.lang.RuntimePermission', 'writeFileDescriptor', '' );
      exec dbms_java.grant_permission( 'TOOLS', 'SYS:java.lang.RuntimePermission', 'readFileDescriptor', '' );
  */

  /*
    The complete path along with the name of the tkprof utility. Change this
    constant to fit your environment. After that, the following right is required:

      exec dbms_java.grant_permission( 'TOOLS', 'SYS:java.io.FilePermission', TKPROF.TKPROF_EXECUTABLE, 'execute' );

    If a RAC configuration is used then the path should be the same across all
    nodes. If the ORACLE_HOME env variable differs between the RAC nodes, then
    the value of ORACLE_HOME should be obtained dynamically, for example with
    the DBMS_SYSTEM.get_env procedure, or symbolic links may be created in the
    OS environment of every RAC node.
  */
  TKPROF_EXECUTABLE constant varchar2(300) := '/opt/oracle/product/10.2.0/db_1/bin/tkprof';

  /*
    The directory where the user trace files are stored. May be found out using:

      show parameter user_dump_dest

    If a RAC configuration is used then the path should be the same across all
    nodes. If not, then the value should be fetched dynamically from the
    v$parameter view. Another approach would be to create symbolic links in the
    OS environment of every RAC node.
  */
  UDUMP_PATH constant varchar2(300) := '/opt/oracle/admin/rfd/udump';

  /*
    The name of the Oracle directory object which points to the above path.
    The owner of this package must have read privileges on this directory:

      create directory UDUMP_DIR as '/opt/oracle/admin/rfd/udump';
      grant read on directory UDUMP_DIR to tools;
  */
  UDUMP_DIR constant varchar2(30) := 'UDUMP_DIR';

  -- A simple type used to return the tkprof output.
  type tkprof_output is table of varchar2(4000);

  -- Enable tracing for the current session.
  procedure enable;

  -- Disable tracing for the current session.
  procedure disable;

  /*
    Get the status of tracing for the current session.
    #return 'TRUE' if the trace is enabled, 'FALSE' otherwise.
  */
  function is_enabled return varchar2;

  /*
    Get the tkprof content through a pipelined table function.
    #pi_tkprof_params additional parameters to tkprof
  */
  function output(pi_tkprof_params varchar2 := '') return tkprof_output pipelined;

end tkprof;

create or replace package body tkprof is

  g_unique_session_id varchar2(100);
  g_trace_file_name   varchar2(4000);
  g_is_enabled        boolean := false;

  function run(pi_cmd in varchar2) return number as
    language java name 'Util.RunThis(java.lang.String) return integer';

  procedure enable as
  begin
    execute immediate 'alter session set tracefile_identifier=''' ||
                      g_unique_session_id || '''';
    execute immediate 'alter session set timed_statistics=true';
    execute immediate 'alter session set max_dump_file_size=unlimited';
    execute immediate 'alter session set events ''10046 trace name context forever, level 12''';
    g_is_enabled := true;
    dbms_output.put_line('session tracing enabled under ' ||
                         g_unique_session_id || ' identifier.');
  end enable;

  function is_enabled return varchar2 as
  begin
    if g_is_enabled then
      return 'TRUE';
    end if;
    return 'FALSE';
  end is_enabled;

  procedure disable as
  begin
    execute immediate 'alter session set events ''10046 trace name context off''';
    g_is_enabled := false;
    dbms_output.put_line('session tracing disabled');
  end disable;

  function output(pi_tkprof_params varchar2 := '') return tkprof_output
    pipelined as
    l_status  pls_integer;
    l_bfile   bfile := bfilename(UDUMP_DIR, g_trace_file_name || '.prf');
    l_last    pls_integer := 1;
    l_current pls_integer;
  begin
    l_status := run(TKPROF_EXECUTABLE || ' ' || UDUMP_PATH || '/' ||
                    g_trace_file_name || ' ' || UDUMP_PATH || '/' ||
                    g_trace_file_name || '.prf ' || pi_tkprof_params);
    if l_status != 0 then
      dbms_output.put_line('status: ' || l_status);
      dbms_output.put_line('ERROR: cannot produce the tkprof trace!');
    else
      dbms_lob.fileopen(l_bfile);
      loop
        l_current := dbms_lob.instr(l_bfile, '0A', l_last, 1);
        exit when(nvl(l_current, 0) = 0);
        pipe row(utl_raw.cast_to_varchar2(dbms_lob.substr(l_bfile,
                                                          l_current - l_last + 1,
                                                          l_last)));
        l_last := l_current + 1;
      end loop;
      dbms_lob.fileclose(l_bfile);
    end if;
    return;
  exception
    when others then
      if dbms_lob.isopen(l_bfile) != 0 then
        dbms_lob.fileclose(l_bfile);
      end if;
      raise;
  end output;

begin
  g_unique_session_id := sys_context('userenv', 'sessionid');
  select sys_context('userenv', 'instance_name') || '_ora_' ||
         ltrim(to_char(a.spid)) || '_' || g_unique_session_id || '.trc'
    into g_trace_file_name
    from v$process a, v$session b
   where a.addr = b.paddr
     and b.audsid = sys_context('userenv', 'sessionid');
end tkprof;

5. change the TKPROF_EXECUTABLE and UDUMP_PATH constants to fit your environment.
6. create the UDUMP_DIR directory:

create directory UDUMP_DIR as '/opt/oracle/admin/rfd/udump';
grant read on directory UDUMP_DIR to tools;

7. grant execute rights for the tkprof executable:

exec dbms_java.grant_permission( 'TOOLS', 'SYS:java.io.FilePermission',
'/opt/oracle/product/10.2.0/db_1/bin/tkprof', 'execute' );

8. make the TKPROF package available to everybody:

grant execute on tools.tkprof to public;
create public synonym tkprof for tools.tkprof;

Okey, that's it! Now, let's test it:

fits_cr@RFD> exec tkprof.enable

session tracing enabled under 7154134 identifier.

PL/SQL procedure successfully completed.

fits_cr@RFD> select count(1) from rolmda;


fits_cr@RFD> exec tkprof.disable;
session tracing disabled

PL/SQL procedure successfully completed.

fits_cr@RFD> select * from table(tkprof.output);

TKPROF: Release - Production on Thu Mar 27 15:13:06 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Trace file: /opt/oracle/admin/rfd/udump/rfd3_ora_26618_7154134.trc
Sort options: default

count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call

select /*+ index(idl_sb4$ i_idl_sb41) +*/ piece#,length,piece from
idl_sb4$ where obj#=:1 and part=:2 and version=:3 order by piece#

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 2 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.00 0.00 0 2 0 0

... and so on ...

I'm sure this package can be improved, but as a proof of concept I think it's more than enough. I should also mention that it was tested on a 10gR2 database, therefore I don't know if you can use it without any changes on a different Oracle database version.

Saturday, March 22, 2008

What's Wrong with Oracle TIMESTAMP WITH TIME ZONE Data-type?

First of all, let's start by recalling what a TIMESTAMP WITH TIME ZONE data-type is. According to the Oracle official documentation "a TIMESTAMP WITH TIME ZONE is a variant of TIMESTAMP that includes a time zone region name or time zone offset in its value. The time zone offset is the difference (in hours and minutes) between local time and UTC (Coordinated Universal Time, formerly Greenwich Mean Time)".

Well, very nice... this is a data-type which might be very useful in an environment distributed all over the world where, unlike with its counterpart, the TIMESTAMP WITH LOCAL TIME ZONE data-type, it is also important to record the time zone from which a transaction, or whatever piece of information, has been entered into the system. A potential use case for this time zone information might be producing a report with all the peak hours of activity in the context of a specific time zone area.

So far so good. What I really don't like about this data-type is that I cannot create a primary/unique constraint on such a column. The Oracle documentation clearly states that "none of the columns in the unique/primary key can be of LOB, LONG, LONG RAW, VARRAY, NESTED TABLE, OBJECT, REF, TIMESTAMP WITH TIME ZONE, or user-defined type. However, the unique/primary key can contain a column of TIMESTAMP WITH LOCAL TIME ZONE".

Furthermore, any index created on such a column is actually converted into a function-based index. Behind the scenes, Oracle automatically normalizes all the values of a TIMESTAMP WITH TIME ZONE column using the SYS_EXTRACT_UTC() function. In a way, this makes sense. Suppose we have to order the result-set of a SQL SELECT by a TIMESTAMP WITH TIME ZONE column. Having, for example:

TIMESTAMP '2003-01-01 2:00:00 -08:00'
TIMESTAMP '2003-01-01 2:00:00 -02:00'

which record should come first? According to what point of reference? The solution is to normalize all values and only after that compare them. Oracle does this transparently.
Now, what if I want the values in a TIMESTAMP WITH TIME ZONE column to be unique? Well, you can't create a unique constraint, but you can create a unique index.
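The normalization argument is easy to check outside the database. The following sketch (Python, purely for illustration) mirrors what SYS_EXTRACT_UTC does to the two literals above:

```python
from datetime import datetime, timezone, timedelta

# The two literals from the text, as offset-aware values.
a = datetime(2003, 1, 1, 2, 0, tzinfo=timezone(timedelta(hours=-8)))
b = datetime(2003, 1, 1, 2, 0, tzinfo=timezone(timedelta(hours=-2)))

# Normalizing to UTC (the SYS_EXTRACT_UTC step) makes the ordering well defined.
print(a.astimezone(timezone.utc).time())  # 10:00:00
print(b.astimezone(timezone.utc).time())  # 04:00:00
print(b < a)  # True: the -02:00 value is the earlier instant
```

Both values show 2:00 on the wall clock, yet after normalization they are six hours apart, which is exactly why Oracle compares and indexes the UTC form.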

SQL> create table muci_test (
2 col1 timestamp with time zone,
3 constraint uk_muci_test_col1 unique(col1)
4 );

create table muci_test (
col1 timestamp with time zone,
constraint uk_muci_test_col1 unique(col1)

ORA-02329: column of datatype TIME/TIMESTAMP WITH TIME ZONE
cannot be unique or a primary key
SQL> create table muci_test (
2 col1 timestamp with time zone
3 );

Table created

Executed in 0,204 seconds

SQL> create unique index ix_muci_text_col1 on muci_test(col1);

Index created

Executed in 0,25 seconds

SQL> insert into muci_test values (to_timestamp('22-MAR-08'));

1 row inserted

Executed in 0,031 seconds

SQL> insert into muci_test values (to_timestamp('22-MAR-08'));

insert into muci_test values (to_timestamp('22-MAR-08'))

ORA-00001: unique constraint (ADMIN.IX_MUCI_TEXT_COL1) violated

As I already said, Oracle has actually created a function-based index, and this can be easily checked:

SQL> select index_type from user_indexes where index_name = 'IX_MUCI_TEXT_COL1';

INDEX_TYPE
---------------------------
FUNCTION-BASED NORMAL

If you don't feel very comfortable with function-based indexes, or you really want a primary/unique key constraint, then the application design should take this limitation into consideration. Personally, I would choose a TIMESTAMP WITH LOCAL TIME ZONE data-type and an additional VARCHAR2 column to store the originating time zone offset. Using this approach, it is possible to enforce a multi-column primary/unique key.

Sunday, February 10, 2008

ORA_ROWSCN pitfall for optimistic locking

I can see lately that there's a big enthusiasm around this new 10g pseudo-column called ORA_ROWSCN, one of its great benefits being the implementation of an optimistic locking mechanism. From a simplistic point of view, this mechanism consists of:

1. on the application server or on the client side we have an entity class which is mapped onto a particular table. Instances of that class (objects) reflect rows of that table.
2. every entity object also has a version field which maps to the ORA_ROWSCN pseudo-column.
3. at the very beginning, we load our entities using SELECT T.*, ORA_ROWSCN FROM TABLE T so that every entity has the corresponding values and version.
4. now, suppose that the user changes some attributes of the entity through the application front-end. These changes have to be propagated to the database, most likely using an UPDATE statement, but taking care of the version field. That is, we need to be sure that, in the meantime, the version of the entity we are about to persist didn't change in the database. So, everything narrows down to performing something like: UPDATE TABLE SET COLUMN = [value from the entity] WHERE PK_COLUMN = [id from the entity] AND ORA_ROWSCN = [version from the entity]. If the above UPDATE returns "0 rows updated", we'll know for sure that somebody else has changed the record in the database, and the application will throw an "Optimistic Lock Exception".

So, everything seems to work nicely: no more timestamp version columns, no more OLD and NEW column value comparisons, no more row hashes. In 10g, it seems that Oracle has offered us a bullet-proof solution. BUUUT, there's a pitfall here. Every decent application works with transactions, therefore every DML statement is nicely encapsulated in an Oracle transaction. What happens when, at step 4, our update succeeds? Well, in that case the version of the entity has to be updated accordingly. But our update is not yet committed, therefore we don't have access to the new value of ORA_ROWSCN, which becomes available only after COMMIT. This means that, in order to refresh our entity, we have to issue a SQL SELECT after COMMIT, something like: SELECT T.*, ORA_ROWSCN FROM TABLE T WHERE PK_COLUMN=[id from the entity]. The main problem here is that between our COMMIT and the SQL SELECT there is a tiny time frame within which a lot of nasty things may happen: somebody else may delete our record or may update it once again. Furthermore, having to issue a SQL SELECT after the UPDATE doesn't sound very appealing, as it involves additional work for the database and might affect overall scalability. In addition, I would really like to use the RETURNING clause of the INSERT/UPDATE statements and refresh my entity in one step.

In conclusion, if you are designing a WEB application which doesn't reuse entities across WEB pages, then using ORA_ROWSCN for implementing optimistic locking is fine, as long as the working flow is the following:

1. the user selects the record he/she wants to change from a grid;
2. the application loads the entity and assigns the ORA_ROWSCN value as the version;
3. the user performs the required changes and presses the SAVE button;
4. the application performs the UPDATE in the database, taking care of the ORA_ROWSCN value (WHERE ... AND ORA_ROWSCN=[version from the entity]), and COMMITs the changes;
5. the application displays the initial grid once again, re-querying the database.

However, if you are designing an application which is required to take advantage of caching (most likely a stand-alone rich-client application), then ORA_ROWSCN is not the right solution. For example:

1. the application loads a grid, and every grid record has the corresponding entity attached;
2. the user selects the record he/she wants to change from the grid;
3. the application relies on the data from the corresponding entity, therefore it doesn't query the database;
4. the user performs the changes and presses the SAVE button;
5. the application performs the UPDATE in the database, taking care of the ORA_ROWSCN value (WHERE ... AND ORA_ROWSCN=[version from the entity]), and COMMITs the changes;
6. if the UPDATE returns "1 record updated", then our entity is stale, because the new version from the database doesn't reflect the old one from the entity. A new SELECT, outside of the UPDATE transaction, is required in order to refresh the entity, which, of course, is not convenient at all.

So, for this second kind of application, I would use a regular version column or a hashing technique, which allows using the RETURNING clause of the INSERT/UPDATE statements, or refreshing the entity within the same modifying transaction.
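For illustration only, here is a toy in-memory sketch (Python, with made-up class and column names) of the version-column flavor: the stale check and the refreshed version are both handled inside the same "transaction", which is exactly what ORA_ROWSCN cannot offer:

```python
class StaleRowError(Exception):
    """Raised when the caller's version no longer matches the stored row."""
    pass

class Table:
    def __init__(self):
        self.rows = {}  # pk -> {"col1": ..., "version": int}

    def insert(self, pk, col1):
        self.rows[pk] = {"col1": col1, "version": 0}

    def update(self, pk, col1, expected_version):
        row = self.rows[pk]
        # The WHERE ... AND VERSION = :expected_version part of the UPDATE.
        if row["version"] != expected_version:
            raise StaleRowError(f"row {pk} is now at version {row['version']}")
        row["col1"] = col1
        row["version"] += 1
        # The RETURNING part: the caller can refresh its entity right away.
        return row["version"]

t = Table()
t.insert(1, "abc")
print(t.update(1, "xyz", expected_version=0))  # 1
```

A second update with the stale expected_version=0 would raise StaleRowError, with no extra SELECT needed after COMMIT.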