
How to debug your dynamic SQL code

Got this plea for help via our AskTOM PL/SQL Office Hours program:

Dear Experts, I have below written below code:
----------------------------------------------
Declare
v_Table all_tables.TABLE_NAME%type;
v_Mnt varchar2(2):='08';
Type Cur_type is Ref Cursor;
C Cur_type;

Begin
v_Table:='ddi_ticket_10_1018';
Open C for 'SELECT * from bill.'||v_Table||v_Mnt||'Where called_nbr=123';

End;
-------------------------------------------------------------------
When executing this code, I face this Error message.
ORA-00933-SQL Command not properly ended
ORA-06512: At Line 9.
Please check the above code and modify for syntax correction

I could, at a glance, pretty well guess what the problem is.

Can you?

I am not trying to boast. I just encourage you to pause before reading further and examine the code yourself. What could be causing his problem?

Dynamic SQL can be tricky - not so much because OPEN FOR or EXECUTE IMMEDIATE are complicated parts of the PL/SQL language, but because it's just so darned easy to mess up the SQL or PL/SQL you are constructing dynamically. You could:
  • Leave out a ";" (from PL/SQL code).
  • Forget to leave white space between sections of your SQL.
  • Have unmatched parentheses.
  • On and on and on.
In this case, I wrote back to say: "I am pretty sure you will find the problem is that you don't have a space before the 'Where' keyword in:"

v_Mnt||'Where called_nbr=123';
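Here, for what it's worth, is a corrected version of his block (a minimal sketch, keeping his naming; note the added space before "Where", plus a CLOSE so the cursor isn't left open):

Declare
   v_Table all_tables.TABLE_NAME%type;
   v_Mnt varchar2(2) := '08';
   Type Cur_type is Ref Cursor;
   C Cur_type;
Begin
   v_Table := 'ddi_ticket_10_1018';
   Open C for 'SELECT * from bill.' || v_Table || v_Mnt || ' Where called_nbr=123';
   Close C;
End;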


This exchange then reminded me that I should write a blog post with some simple tips for making it so much easier to debug your dynamic SQL code - and ensure that it works as intended. Here goes.
  1. Define your subprogram with AUTHID CURRENT_USER (invokers rights).
  2. If you're executing dynamic DDL, make the subprogram an autonomous transaction.
  3. Always EXECUTE IMMEDIATE or OPEN FOR a variable.
  4. Always handle the exception that may arise from the dynamic SQL execution.
  5. Log the error information plus the variable that you tried to execute.
  6. Build a test mode into your subprogram.
I will demonstrate the value of these points by starting with a version of a super-duper useful+dangerous program that ignores all of them: the drop_whatever procedure.

PROCEDURE drop_whatever (nm  IN VARCHAR2 DEFAULT '%',
                         typ IN VARCHAR2 DEFAULT '%')
IS
   CURSOR object_cur
   IS
      SELECT object_name, object_type
        FROM user_objects
       WHERE object_name LIKE UPPER (nm)
         AND object_type LIKE UPPER (typ)
         AND object_name <> 'DROP_WHATEVER';
BEGIN
   FOR rec IN object_cur
   LOOP
      EXECUTE IMMEDIATE
            'DROP '
         || rec.object_type
         || ' '
         || rec.object_name
         || CASE
               WHEN rec.object_type IN ('TABLE', 'OBJECT')
               THEN
                  ' CASCADE CONSTRAINTS'
               ELSE
                  NULL
            END;
   END LOOP;
END;

In this procedure, I use a static cursor to find all matching objects, then for each object found, I execute a dynamic DDL DROP statement.

It's useful because I can drop all database objects in my schema by typing nothing more than

EXEC drop_whatever()

And it's dangerous for precisely the same reason.

Oh, but wait. Given how useful it is, maybe we should let everyone be able to use it. I know, I will run this command:

GRANT EXECUTE ON drop_whatever TO PUBLIC

Hey what could go wrong? :-)

So very, very much. Let's step through my recommendations and highlight the potential problems.

1. Define your subprogram with AUTHID CURRENT_USER (invokers rights).

The procedure does not have an AUTHID clause (I bet most of your stored program units do not). This means that it defaults to "definer rights" - it always executes with the privileges of the definer/owner of the procedure.

Which means that if, say, HR owns drop_whatever and then SCOTT executes it (thank you, GRANT to PUBLIC!) as in:

EXEC HR.drop_whatever()

Then SCOTT will have just dropped all of the database objects in HR's schema!

2. If you're executing dynamic DDL, make the subprogram an autonomous transaction.

The thing about DDL statements is that Oracle performs an implicit commit both before and after the DDL statement is executed. So if you have a stored procedure that executes dynamic DDL, either you have to warn everyone who might use it that any outstanding changes in their session will be committed (that's just rude) or you add this statement to your procedure:

PRAGMA AUTONOMOUS_TRANSACTION;

Now, any commits (or rollbacks) executed in the procedure will affect only those changes made within the procedure.

3. Always EXECUTE IMMEDIATE or OPEN FOR a variable.

It's such a simple thing, but it could save you lots of time when trying to figure out what's wrong with your program.

Here's the thing: it's not hard to figure out how to use EXECUTE IMMEDIATE. But it can be very tricky to properly construct your string at run-time. So many small mistakes can cause errors. And if you construct your string directly within the EXECUTE IMMEDIATE statement, how can you see what was executed and where you might have gone wrong?

Suppose, for example, that in the drop_whatever procedure, I constructed my DROP statement as follows:

EXECUTE IMMEDIATE
'DROP '
|| rec.object_type
|| rec.object_name ...

When I try to drop my table, I see:

ORA-00950: invalid DROP option

And what does that tell me? Not much. What option does it think I gave it that is invalid? What did I just try to do?

If, on the other hand, I assign the expression I wish to execute to a variable, and then call EXECUTE IMMEDIATE, I can trap the error, and log / display that variable (see second implementation of drop_whatever below). And then I might see something like:

DROP SYNONYMV$SQL - FAILURE

Oh! I see now. I did not include a space between the object type and the object name. Silly me. So always declare a variable, assign the dynamically-constructed SQL statement to that variable, and EXECUTE IMMEDIATE it.

4. Always handle the exception that may arise from the dynamic SQL execution.
5. Log the error information plus the variable that you tried to execute.

If you don't trap the exception, you can't log or display that variable. If you don't persist that variable value, it's awfully hard to make a useful report of the problem to your support team.

You can't do much except whimper at the crappy design of your code.

6. Build a test mode into your subprogram.

I have been writing code for so long - and screwing up that code for just as long - that I have learned it is very helpful, especially when that code makes changes to data in tables, to implement a test mode that doesn't "do" anything. It just shows me what it would have done if I'd let it.

You can see it in the code below, when I pass TRUE (the default) for the just_checking parameter.

A Much Better (?) Drop_Whatever

The "?" in that title is just to remind us that this procedure is inherently dangerous.

Here's the version of drop_whatever following my recommendations. Note that for real, production code, you should never "report" or "log" an error by calling DBMS_OUTPUT.PUT_LINE. Who's going to see that? Instead, call your standard error logging procedure and if you don't have one then get and use Logger.

PROCEDURE drop_whatever (
   nm              IN VARCHAR2 DEFAULT '%'
 , typ             IN VARCHAR2 DEFAULT '%'
 , just_checking   IN BOOLEAN DEFAULT TRUE
)
   AUTHID CURRENT_USER
IS
   PRAGMA AUTONOMOUS_TRANSACTION;

   dropstr   VARCHAR2 (32767);

   CURSOR object_cur
   IS
      SELECT object_name, object_type
        FROM user_objects
       WHERE object_name LIKE UPPER (nm)
         AND object_type LIKE UPPER (typ)
         AND object_name <> 'DROP_WHATEVER';
BEGIN
   FOR rec IN object_cur
   LOOP
      dropstr :=
            'DROP '
         || rec.object_type
         || ' '
         || rec.object_name
         || CASE
               WHEN rec.object_type IN ('TABLE', 'OBJECT')
               THEN ' CASCADE CONSTRAINTS'
               ELSE NULL
            END;

      BEGIN
         IF just_checking
         THEN
            DBMS_OUTPUT.put_line (dropstr || ' - just checking!');
         ELSE
            EXECUTE IMMEDIATE dropstr;

            DBMS_OUTPUT.put_line (dropstr || ' - SUCCESSFUL!');
         END IF;
      EXCEPTION
         WHEN OTHERS
         THEN
            DBMS_OUTPUT.put_line (dropstr || ' - FAILURE!');
            DBMS_OUTPUT.put_line (DBMS_UTILITY.format_error_stack);
      END;
   END LOOP;
END;

As you will see in the comments, Kevan Gelling pointed out the benefit of using a template for your dynamic SQL string, with calls to REPLACE to substitute actual values for placeholders. I agree, and offer yet a third implementation of drop_whatever below utilizing that approach (I won't repeat the BEGIN-END block encapsulating the EXECUTE IMMEDIATE; that doesn't change).

PROCEDURE drop_whatever (
   nm              IN VARCHAR2 DEFAULT '%'
 , typ             IN VARCHAR2 DEFAULT '%'
 , just_checking   IN BOOLEAN DEFAULT TRUE
)
   AUTHID CURRENT_USER
IS
   PRAGMA AUTONOMOUS_TRANSACTION;

   c_template   CONSTANT VARCHAR2 (100)
      := 'DROP [object_type] [object_name] [cascade]';

   dropstr      VARCHAR2 (32767);

   CURSOR object_cur ... same as above ;
BEGIN
   FOR rec IN object_cur
   LOOP
      /* Start from the template on each iteration, so the
         placeholders are available to be replaced again. */
      dropstr := c_template;
      dropstr := REPLACE (dropstr, '[object_type]', rec.object_type);
      dropstr := REPLACE (dropstr, '[object_name]', rec.object_name);
      dropstr :=
         REPLACE (
            dropstr,
            '[cascade]',
            CASE
               WHEN rec.object_type IN ('TABLE', 'OBJECT')
               THEN
                  'CASCADE CONSTRAINTS'
            END);

      BEGIN ... EXECUTE IMMEDIATE ... END;
   END LOOP;
END;

You could also do all three of those replaces in a single assignment, but you sacrifice some readability. Thanks, Kevan, for the reminder and the code!

Let's Recap

When you write a stored program unit that contains dynamic SQL:
  1. Define your subprogram with AUTHID CURRENT_USER (invokers rights).
  2. If you're executing dynamic DDL, make the subprogram an autonomous transaction.
  3. Always EXECUTE IMMEDIATE or OPEN FOR a variable.
  4. Always handle the exception that may arise from the dynamic SQL execution.
  5. Log the error information plus the variable that you tried to execute.
  6. Build a test mode into your subprogram.


All About PL/SQL Compilation Settings

A recent Twitter thread delved into the topic of the best way to enable PL/SQL warnings for program units, including this recommendation from Bryn Llewellyn, Distinguished Product Manager for PL/SQL:


which then led to Bryn volunteering me to delve into the details of PL/SQL compiler settings in an AskTOM PL/SQL Office Hours session. 


Which I will do. But I figured I could start right off by writing this post. So let's explore how to set and modify PL/SQL compiler settings.

First, you might wonder what those settings are or could be. The best way to check is by examining the USER_PLSQL_OBJECT_SETTINGS view (and of course the ALL* version to examine attributes of code you do not own but can execute):
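Here, for instance, is how you might query it for a single program unit (a sketch; MY_PROCEDURE is a hypothetical unit name, and the column list reflects the view as documented):

SELECT name,
       type,
       plsql_optimize_level,
       plsql_code_type,
       plsql_debug,
       plsql_warnings,
       nls_length_semantics,
       plsql_ccflags,
       plscope_settings
  FROM user_plsql_object_settings
 WHERE name = 'MY_PROCEDURE';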



The values that are stored for a PL/SQL unit are set every time it is compiled—in response to "create", "create or replace", "alter", invoking a utility like Utl_Recomp, or implicitly as a side effect of trying to execute an invalid PL/SQL unit.

A special case of "set" is to set the exact same values that were already stored for the unit. There's no way to ask for this outcome explicitly as part of the "create" or "create or replace" DDLs. It's the programmer's responsibility to ensure that the set of required values obtains in the session at the moment of compilation. There is a way to ask for this outcome with "alter". It's to say "reuse settings" (and not to mention any settings explicitly). More on this below. Notice that Utl_Recomp and its cousins, and implicit recompilation use the plain "reuse settings" mode of "alter".

I will explore how to set these attributes at both session and program unit level below, how to override, and how to preserve them. You can run all of this code yourself on LiveSQL.

When I first connect to a schema, and until I issue any ALTER statements, compilations of code will rely on the system defaults. You can see what they are by running the following script (thanks to Bryn Llewellyn for providing it):

declare
   Plsql_Debug constant varchar2(7) not null :=
      case
         when $$Plsql_Debug then 'TRUE'
         when not $$Plsql_Debug then 'FALSE'
         else 'illegal'
      end;
   Plsql_CCflags constant varchar2(4000) not null :=
      case
         when $$Plsql_CCflags is null then '[Not Set]'
         else $$Plsql_CCflags
      end;
begin
   if Plsql_Debug = 'illegal' then
      raise Program_Error;
   end if;

   Sys.DBMS_Output.Put_Line('Plsql_Optimize_Level: '||To_Char($$Plsql_Optimize_Level, '9'));
   Sys.DBMS_Output.Put_Line('Plsql_Code_Type: '||$$Plsql_Code_Type);
   Sys.DBMS_Output.Put_Line('Plsql_Debug: '||Plsql_Debug);
   Sys.DBMS_Output.Put_Line('Plsql_Warnings: '||$$Plsql_Warnings);
   Sys.DBMS_Output.Put_Line('NLS_Length_Semantics: '||$$NLS_Length_Semantics);
   Sys.DBMS_Output.Put_Line('Plsql_CCflags: '||Plsql_CCflags);
   Sys.DBMS_Output.Put_Line('Plscope_Settings: '||$$Plscope_Settings);
end;

You can also use this query, as offered up by Connor McDonald:

select name, value
from v$parameter
where name like 'plsql%'
or name like 'plscope%'
or name like 'nls_length_semantics%';

Here's the LiveSQL script that performs these two steps.

Note: I hope to update this post soon with a query that does not require you to create a database object first. :-)

Now let's take a look at how you can change the compilation settings, at the session level and also for specific program units.

There are three ways you can compile an individual program unit:
  1. CREATE OR REPLACE DDL statement
  2. ALTER-COMPILE statement
  3. DBMS_DDL.ALTER_COMPILE
The third option is really just an API to the ALTER-COMPILE statement.
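For instance, to recompile a procedure while preserving its stored settings (a sketch; the parameter names below follow the DBMS_DDL documentation, so verify them against your release):

BEGIN
   DBMS_DDL.alter_compile (type           => 'PROCEDURE',
                           schema         => USER,
                           name           => 'AFTERCOMPILE',
                           reuse_settings => TRUE);
END;
/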

When you compile your code via CREATE OR REPLACE, that program unit will inherit all the current settings in your session.

With both ALTER-COMPILE and DBMS_DDL.ALTER_COMPILE, you can either inherit those settings or reuse (re-apply) the settings that are currently associated with the program unit.

All right, then, let's get going. I connect to my schema and immediately change the setting for PL/SQL compile-time warnings. I then compile my procedure and confirm that it used the session setting.

ALTER SESSION SET plsql_warnings = 'ENABLE:ALL'
/

CREATE OR REPLACE PROCEDURE aftercompile
   AUTHID DEFINER
IS
BEGIN
   NULL;
END;
/

SELECT plsql_warnings "From Session"
FROM user_plsql_object_settings
WHERE name = 'AFTERCOMPILE'
/

From Session
------------
ENABLE:ALL

Now I am going to recompile that procedure with an ALTER statement, specifying a value for PL/SQL warnings different from the session's, but reusing all other settings. By default, ALTER-COMPILE takes all the settings values from the environment; if you mention only some settings and don't say "reuse settings", then it takes what you don't mention from the environment. As you can see in the query, the procedure now has PL/SQL warnings set to treat any warning as a compile error.

ALTER PROCEDURE aftercompile COMPILE plsql_warnings = 'ERROR:ALL' REUSE SETTINGS
/

SELECT plsql_warnings "From Procedure Override"
FROM user_plsql_object_settings
WHERE name = 'AFTERCOMPILE'
/

From Procedure Override
-------------------------
ERROR:ALL

Now I CREATE OR REPLACE to demonstrate that the session setting is now applied to the procedure:

CREATE OR REPLACE PROCEDURE aftercompile
   AUTHID DEFINER
IS
BEGIN
   NULL;
END;
/

SELECT plsql_warnings "Compile From Source"
FROM user_plsql_object_settings
WHERE name = 'AFTERCOMPILE'
/

Compile From Source
-------------------
ENABLE:ALL

But what if you want to recompile your program unit and you do not want to pick up the current session settings? You want to keep all settings intact. We offer you the REUSE SETTINGS clause.

From the last sequence of statements we can see that the session setting for PL/SQL warnings is "ENABLE:ALL". Below, I recompile my procedure, specifying "ERROR:ALL". Then I recompile again, but this time I do not specify a value for PL/SQL warnings. Instead I ask to reuse settings for the procedure.

ALTER PROCEDURE aftercompile COMPILE plsql_warnings = 'ERROR:ALL'
/

SELECT plsql_warnings "Back to Procedure Override"
FROM user_plsql_object_settings
WHERE name = 'AFTERCOMPILE'
/

Back to Procedure Override
--------------------------
ERROR:ALL

ALTER SESSION SET plsql_warnings = 'ENABLE:ALL'
/

ALTER PROCEDURE aftercompile COMPILE REUSE SETTINGS
/

SELECT plsql_warnings "Session Change No Impact with REUSE SETTINGS"
FROM user_plsql_object_settings
WHERE name = 'AFTERCOMPILE'
/

Session Change No Impact with REUSE SETTINGS
--------------------------------------------
ERROR:ALL

As you can see, the setting for my procedure did not change.

OK, I think that covers this territory pretty well (until my readers point out what I missed!).

Here are some links you might find helpful.

The COMPILE Clause

Conditional Compilation

Many thanks for the close reading and numerous suggestions for improvement from Bryn Llewellyn.

Declarative PL/SQL

SQL is a set-oriented, declarative language. A language or statement is declarative if it describes the computation to be performed (or in SQL, the set of data to be retrieved/modified) without specifying how to compute it.

PL/SQL is a procedural language, tightly integrated with SQL, and intended primarily to allow us to build powerful, secure APIs to underlying data (via SQL).

If I try hard, I can maximize the procedural side of PL/SQL and minimize the declarative aspect of SQL (primarily by ignoring or discounting the set-orientation of SQL). That's generally a bad idea. Instead, we Oracle Database developers make the most of the many powerful features of SQL (think: analytic functions, pattern matching, joins, etc.), and minimize processing in PL/SQL.

What we should also do, though, is recognize and make the most of the declarative features of PL/SQL. There are two big reasons to do this:

1. When you don't tell the PL/SQL compiler how to do things, the optimizer has more freedom to improve performance.

2. You can write less code, improving your productivity and reducing the cost of maintaining your code in the future.

Here are a few of my favorite declarative constructs of PL/SQL.

The Cursor FOR Loop

Definitely the best showcase for the benefits of declarative programming in PL/SQL.

I've identified a set of rows and columns you need to do something with (in the example below, simply display the last name of all employees in department 100 whose salary > 5000).

I could do it the hard, procedural way:

DECLARE
   CURSOR emps_cur
   IS
      SELECT * FROM hr.employees
       ORDER BY last_name;

   l_emp emps_cur%ROWTYPE;
BEGIN
   OPEN emps_cur;

   LOOP
      FETCH emps_cur INTO l_emp;

      EXIT WHEN emps_cur%NOTFOUND;

      IF l_emp.department_id = 100 AND l_emp.salary > 5000
      THEN
         DBMS_OUTPUT.put_line (l_emp.last_name);
      END IF;
   END LOOP;

   CLOSE emps_cur;
END;

There. Job done. But...really? I am going to fetch all the rows in the employees table and then check the department ID and salary? No, no, no! That should be done in SQL. So it should at least look like this:

DECLARE
   CURSOR emps_cur
   IS
      SELECT * FROM hr.employees
       WHERE department_id = 100 AND salary > 5000
       ORDER BY last_name;

   l_emp emps_cur%ROWTYPE;
BEGIN
   OPEN emps_cur;

   LOOP
      FETCH emps_cur INTO l_emp;

      EXIT WHEN emps_cur%NOTFOUND;

      DBMS_OUTPUT.put_line (l_emp.last_name);
   END LOOP;

   CLOSE emps_cur;
END;

Now at least we are using a little of the power of SQL, and thereby minimizing the number of rows needlessly brought back from the SQL engine (and the context switches that go with them).

I could, however, make things much simpler - and faster - with a cursor FOR loop:

BEGIN
   FOR l_emp IN (SELECT last_name
                   FROM hr.employees
                  WHERE department_id = 100 AND salary > 5000
                  ORDER BY last_name)
   LOOP
      DBMS_OUTPUT.put_line (l_emp.last_name);
   END LOOP;
END;


In this third iteration, I have stepped waaaay back from writing an algorithm (declare cursor, open cursor, fetch next row, stop when no more rows, display data, back to fetch), and instead have told the compiler, in effect:
Please display the last name for all rows identified by that query.
I don't tell it how to get the job done. I let the compiler figure out the best execution path. No need to open a cursor, fetch, check to see if done, close the cursor, etc.

And boy does the compiler figure out a better execution path! Since I no longer explicitly fetch on a row-by-row basis, the PL/SQL optimizer is free to choose a different approach, and it does (with the PL/SQL optimization level set to 2 or higher): it retrieves 100 rows with each "bulk" fetch, resulting in many fewer context switches and much better performance.

Plus fewer lines of code, which will be greatly appreciated by the developers who maintain your code in years to come.

Nested Table Operators

Nested tables are just one of the three different types of collections in PL/SQL (the others are associative arrays and varrays). But there are a whole boatload of set-oriented features available only to nested tables. That's because they were designed from the start to be multisets, like relational tables. We've got....
  • MULTISET UNION - similar to SQL UNION
  • MULTISET EXCEPT - similar to SQL MINUS
  • MULTISET INTERSECT - similar to SQL INTERSECT
  • SET - Removes duplicates from a nested table (multisets can have duplicates)
  • SUBMULTISET - Returns TRUE if one nested table is entirely contained within another
  • MEMBER OF - Is a value an element of (in) the nested table?
  • = - Um, do I need to explain what this does?
You could implement the functionality embedded in each one of these, and you might even have some fun doing it. But almost certainly your code would have bugs, or would be lots slower, or would be...lots of code.
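
To give you a taste, here is a minimal sketch of a few of these operators in action (nums_t is a hypothetical local type):

DECLARE
   TYPE nums_t IS TABLE OF NUMBER;

   l_one nums_t := nums_t (1, 2, 2, 3);
   l_two nums_t := nums_t (2, 3);
   l_diff nums_t;
BEGIN
   -- MULTISET EXCEPT: {1,2,2,3} minus {2,3} leaves {1,2}
   l_diff := l_one MULTISET EXCEPT l_two;
   DBMS_OUTPUT.put_line ('After EXCEPT: ' || l_diff.COUNT || ' elements');

   -- SUBMULTISET: is l_two entirely contained within l_one?
   IF l_two SUBMULTISET OF l_one
   THEN
      DBMS_OUTPUT.put_line ('l_two is contained in l_one');
   END IF;

   -- MEMBER OF: is the value 3 an element of l_one?
   IF 3 MEMBER OF l_one
   THEN
      DBMS_OUTPUT.put_line ('3 is in l_one');
   END IF;
END;
/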

In a way, my favorite of all these is "=". It's the clearest demonstration of the power of declarative programming in this section. Suppose I define two nested tables as follows (that's right: a nested table of nested tables!).

CREATE OR REPLACE TYPE numbers_t IS TABLE OF NUMBER
/

CREATE OR REPLACE TYPE nt_of_numbers_t IS TABLE OF numbers_t
/

CREATE OR REPLACE PACKAGE nts AUTHID DEFINER
IS
   n1 nt_of_numbers_t
      := nt_of_numbers_t (numbers_t (1, 2, 3), numbers_t (4, 5, 6));
   n2 nt_of_numbers_t
      := nt_of_numbers_t (numbers_t (4, 5, 6), numbers_t (1, 2, 3));
END;
/

Now suppose I want to know if n1 and n2 are equal (that is, they contain the same elements - order is not significant). Without "=", I could write something like this:

DECLARE
   l_equals BOOLEAN := TRUE;
BEGIN
   FOR indx IN 1 .. nts.n1.COUNT
   LOOP
      FOR indx2 IN 1 .. nts.n1 (indx).COUNT
      LOOP
         l_equals :=
            nts.n1 (indx)(indx2) = nts.n2 (indx)(indx2);
         EXIT WHEN NOT l_equals;
      END LOOP;

      EXIT WHEN NOT l_equals;
   END LOOP;

   DBMS_OUTPUT.put_line (
      CASE WHEN l_equals THEN '=' ELSE '<>' END);
END;
/

That's already bad enough - but then if you factor in the logic you need to write to ensure that order is not significant....OMG. Sure, we can write that stuff. We all took classes on algorithms in university (or some of us, anyway. My university education in computer science was actually very skimpy). We all know how to type s-t-a-c-k-o-v... in Google.

So, yeah, we could muscle our way through it. But why? Instead I could write nothing more than:

BEGIN
   DBMS_OUTPUT.put_line (
      CASE WHEN nts.n1 = nts.n2 THEN '=' ELSE '<>' END);
END;
/

Ah, so nice.

And again, in addition to the simplicity of the code, you make it possible for your code to get faster over time, as the very smart folks over at Oracle HQ continually work to optimize performance of built-in elements of the PL/SQL language.

Here are some links to LiveSQL scripts demonstrating these features:

MULTISET Union Examples
MULTISET Intersect Examples
MULTISET Except Examples
SUBMULTISET Demonstration


Anchored Declarations

OK, I confess that using anchored declarations doesn't always result in less code. Sometimes you will type a few more characters. But over time you are sure to save many keystrokes, by not having to change your code in the future - and debug it today.

What, you may ask, is an anchored declaration? It's when you anchor or connect the datatype of your variable or constant declaration back to another, previously-defined element.

There are two forms of anchored declarations, %TYPE and %ROWTYPE:

name%TYPE
name%ROWTYPE

You can anchor back to another PL/SQL variable or constant, but the real value of this syntax comes from the ability to anchor back to a column or table. PL/SQL is, after all, a database programming language. So it should be hyper-aware of and able to take advantage of stuff that's in the database.

Suppose I want to declare a variable that "looks like" the last_name column in the employees table. I could look up the definition of the table, see that last_name is VARCHAR2(100), and then write this code:

PROCEDURE do_stuff
IS
   l_last_name VARCHAR2 (100);

Fine. They are in synch. Well, only in your mind. When the DBA or another developer comes along later and issues this DDL statement:

ALTER TABLE employees MODIFY last_name VARCHAR2(200)

your code and your table are now officially out of synch. And if you select a really long last name from the table into that variable, kaboom! Your program fails with a VALUE_ERROR exception.

That's no good. Instead, declare your variable as follows:

PROCEDURE do_stuff
IS
   l_last_name employees.last_name%TYPE;

Now you have declaratively linked your variable to this database structure. And, lo and behold, if the definition of this column changes, Oracle Database will automatically invalidate the DO_STUFF procedure. When it is recompiled, it will pick up the new definition of last_name and your code will be fully in synch with your table.

%TYPE is perfect for individual columns. Use %ROWTYPE when you want to declare a record based on a table, view or cursor (essentially what the cursor FOR loop does for you implicitly).

PROCEDURE do_stuff
IS
   l_employee employees%ROWTYPE;
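
For instance, a %ROWTYPE record can be the target of a single-row SELECT (a minimal sketch against the employees table):

DECLARE
   l_employee employees%ROWTYPE;
BEGIN
   SELECT *
     INTO l_employee
     FROM employees
    WHERE employee_id = 100;

   DBMS_OUTPUT.put_line (l_employee.last_name);
END;
/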

Bottom line: you tell the PL/SQL compiler "Declare this variable to be like that other thing." and you are done. Oracle Database automatically tracks dependencies and ensures that your code always matches the state of your database objects.

How cool is that?

What Else?

Do you have a suggestion for another handy-dandy declarative construct? Let me know and I can add it to the post (with your name up in lights!).

Packaged Cursors Equal Global Cursors

Connor McDonald offered a handy reminder of the ability to declare cursors at the package level in his Public / private cursors blog post.

I offer this follow-on to make sure you are aware of the use cases for doing this, and also the different behavior you will see for package-level cursors.

Generally, first of all, let's remind ourselves of the difference between items declared at the package level ("global") and those declared within subprograms of the package ("local").
  • Items declared locally are automatically cleaned up ("closed" and memory released) when the block terminates.
  • Items declared globally are kept "open" (memory allocated, state preserved) until the session terminates.
Here are two LiveSQL scripts that demonstrate the difference for a numeric and a string variable (not that the types matter).

Global vs Local variable (community contribution)
Global (Package-level) vs Local (Declared in block) Variables (mine)

But this principle also applies to cursors, and the ramifications can be a bit more critical to your application. Let's explore in more detail.

From Understanding the PL/SQL Package Specification:

The scope of a public item is the schema of the package. A public item is visible everywhere in the schema.
and when it comes to the package body:
The package body can also declare and define private items that cannot be referenced from outside the package, but are necessary for the internal workings of the package.
The "scope" describes who/where in your code the packaged element can be referenced. Roughly, any schema or program unit with the execute privilege granted to the package can reference any element in the package specification.

But in this case, we are more interested in what you "see" when you reference that packaged item: the state of that item.

Global items maintain their state (for a variable, its value; for a cursor, as you are about to see, its status of open or closed and, if open, where in the result set the cursor is pointing) until you explicitly change that state, your session terminates, or the package state is reset.

Is The Cursor Closed?

Let's watch this in action, via a little quiz for you drawn from the Oracle Dev Gym. Suppose I execute the following statements below (try it in LiveSQL):

CREATE TABLE employees
(
   employee_id INTEGER,
   last_name VARCHAR2 (100)
)
/

BEGIN
   INSERT INTO employees VALUES (100, 'Thomsoni');
   INSERT INTO employees VALUES (200, 'Edulis');
   COMMIT;
END;
/

CREATE OR REPLACE PACKAGE pkg
IS
   CURSOR emp_cur (id_in IN employees.employee_id%TYPE)
   IS
      SELECT last_name
        FROM employees
       WHERE employee_id = id_in;
END;
/

CREATE OR REPLACE PROCEDURE show_name1 (
   id_in IN employees.employee_id%TYPE)
IS
   l_name employees.last_name%TYPE;
BEGIN
   OPEN pkg.emp_cur (id_in);
   FETCH pkg.emp_cur INTO l_name;
   DBMS_OUTPUT.put_line (l_name);
END;
/

CREATE OR REPLACE PROCEDURE show_name2 (
   id_in IN employees.employee_id%TYPE)
IS
   CURSOR emp_cur (id_in IN employees.employee_id%TYPE)
   IS
      SELECT last_name
        FROM employees
       WHERE employee_id = id_in;

   l_name employees.last_name%TYPE;
BEGIN
   OPEN emp_cur (id_in);
   FETCH emp_cur INTO l_name;
   DBMS_OUTPUT.put_line (l_name);
END;
/

Which of the blocks below will display the following two lines of text after execution?

Thomsoni 
Edulis

Block 1
BEGIN
   show_name1 (100);
   show_name1 (200);
END;

Block 2
BEGIN
   show_name2 (100);
   show_name2 (200);
END;

Block 3
BEGIN
   show_name1 (100);
   CLOSE pkg.emp_cur;
   show_name1 (200);
   CLOSE pkg.emp_cur;
END;

Block 4
BEGIN
   show_name1 (100);
   show_name2 (200);
END;

And the answer is: blocks 2 through 4 will display the desired output, while block 1 fails with:

ORA-06511: PL/SQL: cursor already open

Block 2 works just fine because show_name2 uses a locally-declared cursor. The cursor is opened locally and when the procedure terminates, the cursor is closed.

But in block 1, I am calling show_name1, which opens the package-based cursor. And since the cursor is declared at the package level, once it is opened, it stays open in your session. Even when the procedure in which it was opened terminates.

If you do, however, explicitly close the cursor, then you are able to open it again in that same session, which is why block 3 succeeds.

Block 4 shows the desired output as well, because the packaged cursor is only opened by the first procedure call; the second call uses the local "copy", and so there is no error. If, however, you tried to call show_name1 again in the same session, ORA-06511 would be raised, since the packaged cursor was opened, but not closed, inside the procedure. It remains open when show_name1 is called a second time.

Use Cases for Package-Level Cursors

OK, hopefully you've got a handle on the different behavior of cursors defined at the package level. Why would you want to define a cursor this way, as opposed to just "burying" the cursor/SQL inside a particular block?

1. Hide the SQL

Hiding information is often a very good idea when it comes to software development.

We can so easily get lost in the "weeds" - the details of how things are implemented - rather than stay at a higher level that focuses on how to use the element in question. That's the whole idea behind APIs.

How does that work with cursors? You can declare the cursor header in the package specification and move the SELECT to the package body! As in:

CREATE OR REPLACE PACKAGE pkg
IS
   CURSOR emp_cur (id_in IN employees.employee_id%TYPE)
      RETURN employees%ROWTYPE;
END;
/

CREATE OR REPLACE PACKAGE BODY pkg
IS
   CURSOR emp_cur (id_in IN employees.employee_id%TYPE)
      RETURN employees%ROWTYPE
   IS
      SELECT *
        FROM employees
       WHERE employee_id = id_in;
END;
/

The type of the RETURN clause must be a record type. If you try to return, say, a scalar value type, as in employees.last_name%TYPE, you will get this compilation error:

PLS-00320: the declaration of the type of this expression is incomplete or malformed

2. Share the Cursor

The main driver for going to package-level cursors is that the cursor is not embedded within a single procedure, function or anonymous block. Which means that you can reference (open-fetch-close) that cursor from multiple subprograms and blocks.

That's nice - you avoid repetition of the same cursor.

That's potentially an issue - because you need to make sure the cursor is not open already, before you try to open it. And if it is open, what should you do?

Bottom line for this use case: each user of the cursor must be sure to close the cursor when they are done.
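
One defensive approach is to check the cursor's state with %ISOPEN before opening it (a sketch; show_name_safely is a hypothetical variation on show_name1 above):

CREATE OR REPLACE PROCEDURE show_name_safely (
   id_in IN employees.employee_id%TYPE)
IS
   l_emp pkg.emp_cur%ROWTYPE;
BEGIN
   IF pkg.emp_cur%ISOPEN
   THEN
      -- A previous user of the cursor did not clean up after itself.
      CLOSE pkg.emp_cur;
   END IF;

   OPEN pkg.emp_cur (id_in);
   FETCH pkg.emp_cur INTO l_emp;
   CLOSE pkg.emp_cur;   -- always close before returning

   DBMS_OUTPUT.put_line (l_emp.last_name);
END;
/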

Have you declared cursors at the package level? Any interesting stories you'd like to share with us?

High Performance PL/SQL

PL/SQL is a key enabling technology in Oracle Database. You should make sure that you are aware of, and take appropriate advantage of, the key features of the PL/SQL language focused on performance.

I offer a short list of those features below, along with links to related resources. Most of this is also captured in this slide deck:



Key Performance Features

All of these are covered in the slide deck above; the links will take you to the documentation on these features. Click here for the overall section in the doc on performance and optimization.
But Don't Forget About SQL Optimization

Chances are you could take full advantage of all the features listed above and more in PL/SQL, and still end up with a slow application. That's because at the heart of every application built on Oracle Database are your SQL statements, and if those are not optimized, you can pretty much forget everything else.

Here are some links you might find helpful in this regard:

(list under construction - please offer your suggestions in Comments)




An Introduction to PL/SQL

Just getting started with PL/SQL? You will find PL/SQL to be a very readable and accessible programming language. You'll be productive in a very short amount of time!

I offer this post as a quick way to access a number of resources that will provide a nicely-paced introduction to this powerful database programming language. Of course, it helps a lot to know SQL, too, so check out the Other Useful Links at the bottom of the post.

I wrote a series of "PL/SQL 101" articles for Oracle Magazine several years ago. Here's a convenient index to all those articles:

1. Building with Blocks - an overview of PL/SQL, followed by coverage of some fundamentals
2. Controlling the Flow of Execution - conditional statements and loops
3. Working with Strings
4. Working with Numbers
5. Working with Dates
6. Error Management
7. Working with Records
8. Working with Collections
9. Bulk Processing with BULK COLLECT and FORALL
10. The Data Dictionary: Make Views Work for You
11. Wrap Your Code in a Neat Package
12. Working with Cursors

I added several "PL/SQL 101" posts on this blog as well:

Nulls in PL/SQL
Declaring variables and constants
Writing conditional logic in PL/SQL

Other Useful Links

Oracle PL/SQL home page - latest news, links to lots of other resources

Oracle SQL home page - 'cause what's PL/SQL without SQL?

Database for Developers: Fundamentals - an Oracle Dev Gym class that introduces you to fundamental relational database concepts

Database for Developers: Next Level - an Oracle Dev Gym class that makes sure you have a solid foundation in basic SQL operations

Why won't MULTISET work for me?

I recently got an email from an Oracle Database developer who was trying to get the MULTISET operator to work in his code.

He'd created nested tables of records and found that MULTISET UNION would work but MULTISET EXCEPT would not.

When he ran his code he got this error:

PLS-00306: wrong number or types of arguments in call to 'MULTISET_EXCEPT_ALL'

I will confess that it took me longer than I'd like to admit (but I just did!) to get to the heart of his problem, so I figure others might get similarly befuddled. Time for a blog post!

Let's explore some of the nuances behind using MULTISET, centered around this important statement from the documentation:
Two objects of nonscalar type are comparable if they are of the same named type and there is a one-to-one correspondence between their elements. In addition, nested tables of user-defined object types, even if their elements are comparable, must have MAP methods defined on them to be used in equality or IN conditions.
Note: the code shown below may be executed on LiveSQL here.

First, I create some database objects.

CREATE TABLE limbs
(
   nm VARCHAR2 (100),
   avg_len NUMBER
)
/

BEGIN
   INSERT INTO limbs (avg_len, nm) VALUES (1, 'arm');
   INSERT INTO limbs (avg_len, nm) VALUES (2, 'leg');
   INSERT INTO limbs (avg_len, nm) VALUES (3, 'tail');
   COMMIT;
END;
/

CREATE OR REPLACE TYPE limb_ot IS OBJECT
(
   nm VARCHAR2 (100),
   avg_len NUMBER
)
/

Now let's see if I can get the MULTISET operators to work. First, MULTISET UNION:

DECLARE
   TYPE limbs_t IS TABLE OF limb_ot;

   l_limbs limbs_t;
BEGIN
   SELECT limb_ot (l.nm, l.avg_len)
     BULK COLLECT INTO l_limbs
     FROM limbs l
    ORDER BY l.nm;

   l_limbs := l_limbs MULTISET UNION l_limbs;
   DBMS_OUTPUT.put_line ('Lots of limbs! ' || l_limbs.COUNT);
END;
/

Lots of limbs! 6

So far so good. Now MULTISET EXCEPT:

DECLARE
   TYPE limbs_t IS TABLE OF limb_ot;

   l_limbs limbs_t;
BEGIN
   SELECT limb_ot (l.nm, l.avg_len)
     BULK COLLECT INTO l_limbs
     FROM limbs l
    ORDER BY l.nm;

   l_limbs := l_limbs MULTISET EXCEPT l_limbs;
   DBMS_OUTPUT.put_line ('Lots of limbs! ' || l_limbs.COUNT);
END;
/

PLS-00306: wrong number or types of arguments in call to 'MULTISET_EXCEPT_ALL'

OK, you might now be saying: "Hey, that's a bug! MULTISET EXCEPT is broken." But wait, let's do some more testing. How about a nested table of numbers? Does MULTISET EXCEPT work with that?

DECLARE
   TYPE limbs_t IS TABLE OF NUMBER;

   l_limbs limbs_t;
BEGIN
   SELECT l.avg_len
     BULK COLLECT INTO l_limbs
     FROM limbs l
    ORDER BY l.nm;

   l_limbs := l_limbs MULTISET EXCEPT l_limbs;
   DBMS_OUTPUT.put_line ('Lots of limbs! ' || l_limbs.COUNT);
END;
/

Lots of limbs! 0

No problem there: I "minused" a collection from itself and nothing was left. So MULTISET EXCEPT works - but only under some circumstances. But why then did MULTISET UNION work?

The key thing to remember is this: MULTISET UNION is equivalent to MULTISET UNION ALL. In other words, the MULTISET operators do not by default remove duplicates (which is the case for SQL UNION). You have to specify DISTINCT if you want that to happen. And when I add DISTINCT in the block below, guess what?

DECLARE
   TYPE limbs_t IS TABLE OF limb_ot;

   l_limbs limbs_t;
BEGIN
   SELECT limb_ot (l.nm, l.avg_len)
     BULK COLLECT INTO l_limbs
     FROM limbs l
    ORDER BY l.nm;

   l_limbs := l_limbs MULTISET UNION DISTINCT l_limbs;
   DBMS_OUTPUT.put_line ('Lots of limbs! ' || l_limbs.COUNT);
END;
/

PLS-00306: wrong number or types of arguments in call to 'MULTISET_UNION_DISTINCT'

Now it fails, just like EXCEPT. What's different? Now the PL/SQL engine must compare the contents of the two collections, and to do that....it needs a map method, which returns a value that can be used for comparing and sorting. Let's add one to limb_ot: I will specify a mapping based on the length of the name.

CREATE OR REPLACE TYPE limb_ot AUTHID DEFINER
   IS OBJECT
(
   nm VARCHAR2 (100),
   avg_len NUMBER,
   MAP MEMBER FUNCTION limb_map RETURN NUMBER
)
/

CREATE OR REPLACE TYPE BODY limb_ot
IS
   MAP MEMBER FUNCTION limb_map RETURN NUMBER
   IS
   BEGIN
      RETURN LENGTH (self.nm);
   END;
END;
/

And when I add DISTINCT in the block below, guess what? It works!

DECLARE
   TYPE limbs_t IS TABLE OF limb_ot;

   l_limbs limbs_t;
BEGIN
   SELECT limb_ot (l.nm, l.avg_len)
     BULK COLLECT INTO l_limbs
     FROM limbs l
    ORDER BY l.nm;

   l_limbs := l_limbs MULTISET UNION DISTINCT l_limbs;
   DBMS_OUTPUT.put_line ('Lots of limbs! ' || l_limbs.COUNT);
END;
/

Lots of limbs! 2

Well, I didn't get an error. But did it work? Aren't there three distinct rows in the table? Why does it show a COUNT of 2? Because the map method only uses the length of the name for comparison. Both "arm" and "leg" have three characters, so those two rows are not considered distinct for the purposes of the comparison. Tricky, eh?

What? You don't believe me? OK, fine, let's change the map function so that all three rows return distinct values and then....

CREATE OR REPLACE TYPE BODY limb_ot
IS
   MAP MEMBER FUNCTION limb_map
      RETURN NUMBER
   IS
   BEGIN
      RETURN LENGTH (self.nm) + self.avg_len;
   END;
END;
/

DECLARE
   TYPE limbs_t IS TABLE OF limb_ot;

   l_limbs limbs_t;
BEGIN
   SELECT limb_ot (l.nm, l.avg_len)
     BULK COLLECT INTO l_limbs
     FROM limbs l
    ORDER BY l.nm;

   l_limbs := l_limbs MULTISET UNION DISTINCT l_limbs;
   DBMS_OUTPUT.put_line ('Lots of limbs! ' || l_limbs.COUNT);
END;
/

Lots of limbs! 3

So if you are going to ask Oracle Database to compare object type instances in a nested table, you'd better provide a map method! And in case it is not entirely clear from the code above, it is up to you to come up with a mapping algorithm that makes sense for your object type.

Finally, what if you want to compare records in a nested table with a MULTISET operator? You are out of luck. You cannot do this. There is no mechanism built into PL/SQL to compare two records, and there is no way to create a map method on a record type.

How to Pick the Limit for BULK COLLECT

This question rolled into my In Box today:
In the case of using the LIMIT clause of BULK COLLECT, how do we decide what value to use for the limit?
First I give the quick answer, then I provide support for that answer.

Quick Answer
  • Start with 100. That's the default (and only) setting for cursor FOR loop optimizations. It offers a sweet spot of improved performance over row-by-row and not-too-much PGA memory consumption.
  • Test to see if that's fast enough (likely will be for many cases).
  • If not, try higher values until you reach the performance level you need - and you are not consuming too much PGA memory. 
  • Don't hard-code the limit value: make it a parameter to your subprogram or a constant in a package specification (see the sketch after this list).
  • Remember: each session that runs this code will use that amount of memory.
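
Here's a minimal sketch of what that parameterized-limit recommendation looks like in practice (process_employees and its body are hypothetical placeholders):

PROCEDURE process_employees (limit_in IN PLS_INTEGER DEFAULT 100)
IS
   CURSOR emp_cur IS SELECT * FROM employees;

   TYPE employee_aat IS TABLE OF employees%ROWTYPE INDEX BY PLS_INTEGER;

   l_employees employee_aat;
BEGIN
   OPEN emp_cur;

   LOOP
      FETCH emp_cur BULK COLLECT INTO l_employees LIMIT limit_in;
      EXIT WHEN l_employees.COUNT = 0;

      -- process the current batch of up to limit_in rows here
      NULL;
   END LOOP;

   CLOSE emp_cur;
END;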
Background

When you use BULK COLLECT, you retrieve more than one row with each fetch, reducing context switching between the SQL and PL/SQL engines, and thereby improving performance. Here, for example, is a small piece of code that gets all the rows from the employees table in one round-trip to the SQL engine, displays the number of elements in the collection, and then iterates through the collection displaying the last name.

DECLARE
   TYPE employees_t IS TABLE OF employees%ROWTYPE;

   l_employees employees_t;
BEGIN
   SELECT *
     BULK COLLECT INTO l_employees
     FROM employees;

   DBMS_OUTPUT.put_line (l_employees.COUNT);

   FOR indx IN 1 .. l_employees.COUNT
   LOOP
      DBMS_OUTPUT.put_line (l_employees (indx).last_name);
   END LOOP;
END;
/

As you can see, we fetch those multiple rows into a collection (aka, an array). That collection consumes per-session Program Global Area (PGA) memory. So you face a typical tradeoff when it comes to performance optimization: reduce CPU cycles, but use more memory.

And with the above block of code, using an "unlimited" BULK COLLECT, you really are taking a risk of running out of memory. As the number of rows in the table grows, more memory will be consumed.

So the general recommendation for production code, working with tables that may grow greatly in size, is to avoid SELECT BULK COLLECT INTO (an implicit query) and instead use the FETCH BULK COLLECT with a LIMIT clause.

Here's a rewrite of the above block using the LIMIT clause, retrieving 100 rows with each fetch.

DECLARE
   c_limit CONSTANT PLS_INTEGER DEFAULT 100;

   CURSOR emp_cur IS SELECT * FROM employees;

   TYPE employee_aat IS TABLE OF employees%ROWTYPE INDEX BY BINARY_INTEGER;

   l_employee employee_aat;
BEGIN
   OPEN emp_cur;

   LOOP
      FETCH emp_cur BULK COLLECT INTO l_employee LIMIT c_limit;
      EXIT WHEN l_employee.COUNT = 0;

      DBMS_OUTPUT.put_line ('Retrieved ' || l_employee.COUNT);

      FOR indx IN 1 .. l_employee.COUNT
      LOOP
         DBMS_OUTPUT.put_line (l_employee (indx).last_name);
      END LOOP;
   END LOOP;

   CLOSE emp_cur;
END;
/

Now, no matter how many rows are in the employees table, my session only uses the memory required for 100 rows.

Back in Oracle Database 10g, the (then) brand-new PL/SQL optimizer started playing a neat trick with cursor FOR loops: it automatically converts them to code that retrieves 100 rows with each fetch! The thinking is that the amount of memory needed for 100 rows of most tables is never going to be that much, and you get a really nice burst in performance at that level of "bulk". This LiveSQL script demonstrates that optimization.

And from tests various people have run over the years, increasing that limit to 500 or 1000 doesn't really seem to offer much of an improvement.

But for sure if you are working your way through millions of rows, you might see a very nice boost with a limit of 10000 or more. You just need to keep an eye on memory consumption. And that will be much less of a concern if you are running a batch job, not writing code that will be run by many users simultaneously.

You might also find this StackOverflow Q&A helpful.

Reduce the volume of PL/SQL code you write with these tips

I'm not known for being concise. I'm best known in the world of Oracle Database for my "magnum opus", Oracle PL/SQL Programming, which checks in at 1,340 pages (the index alone is 50 pages long).

But I've picked up a few tips along the way for writing PL/SQL code that is, well, at least not as long, as verbose, as it could have been. And certainly shorter than my books. :-)

You probably have some ideas of your own; please offer them in comments and I will add them to the post.

Qualified Expressions (new to 18c)

In the bad old days before Oracle Database 18c was released (it's now available for free in its "XE" form), if you wanted to initialize an associative array with values, you had to do it in the executable section, as follows:

DECLARE
   TYPE ints_t IS TABLE OF INTEGER
      INDEX BY PLS_INTEGER;

   l_ints ints_t;
BEGIN
   l_ints (1) := 55;
   l_ints (2) := 555;
   l_ints (3) := 5555;

   FOR indx IN 1 .. l_ints.COUNT
   LOOP
      DBMS_OUTPUT.put_line (l_ints (indx));
   END LOOP;
END;

As of 18c, you can use a qualified expression (think: constructor function) as follows:

DECLARE
   TYPE ints_t IS TABLE OF INTEGER
      INDEX BY PLS_INTEGER;

   l_ints ints_t := ints_t (1 => 55, 2 => 555, 3 => 5555);
BEGIN
   FOR indx IN 1 .. l_ints.COUNT
   LOOP
      DBMS_OUTPUT.put_line (l_ints (indx));
   END LOOP;
END;

The same is true for user-defined record types. This feature not only leads to a reduction in lines of code but also allows you to write more intuitive code.
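
For instance, with a record type (a minimal sketch; coordinates_t is a hypothetical type):

DECLARE
   TYPE coordinates_t IS RECORD (x NUMBER, y NUMBER);

   l_point coordinates_t := coordinates_t (x => 1.5, y => 2.5);
BEGIN
   DBMS_OUTPUT.put_line (l_point.x || ', ' || l_point.y);
END;
/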

Check out my LiveSQL script for lots of examples and also my blog post on this topic.

Normalized Overloading (hey I think I just invented that term!)

Normalization of code is just as important as normalization of data. Don't repeat your data, and don't repeat your code (a.k.a., DRY - don't repeat yourself, and SPOD - single point of definition).

A great example of how important this can be is with overloading. Overloading, also known as static polymorphism (sorry, just couldn't help throwing that in), means you have two or more subprograms with the same name, but different parameter lists or program type (procedure vs function).

It's a very nice feature when it comes to reducing the number of moving parts in your API (package specification), and making it easier for developers to use your code. Usually, those multiple subprograms with the same name are doing almost the same thing, which means most of their implementation will be the same, which means....watch out for redundant code!

Here's an example from the thread (discussion) manager package of the Oracle Dev Gym backend. I start off with a single procedure to insert a new thread:

PACKAGE BODY qdb_thread_mgr
IS
   PROCEDURE insert_thread (
      user_id_in IN PLS_INTEGER
    , parent_thread_id_in IN PLS_INTEGER
    , thread_type_in IN VARCHAR2
    , subject_in IN VARCHAR2
    , body_in IN CLOB)
   IS
   BEGIN
      INSERT INTO qdb_threads (...) VALUES (...);
   END;
END qdb_thread_mgr;

That's great, but I now run into a situation in which I need to get the new thread ID back to use in another step of a process. The easiest thing to do is cut and paste.

PACKAGE BODY qdb_thread_mgr
IS
   PROCEDURE insert_thread (
      user_id_in IN PLS_INTEGER
    , parent_thread_id_in IN PLS_INTEGER
    , thread_type_in IN VARCHAR2
    , subject_in IN VARCHAR2
    , body_in IN CLOB)
   IS
   BEGIN
      INSERT INTO qdb_threads (...) VALUES (...);
   END;

   PROCEDURE insert_thread (
      user_id_in IN PLS_INTEGER
    , parent_thread_id_in IN PLS_INTEGER
    , thread_type_in IN VARCHAR2
    , subject_in IN VARCHAR2
    , body_in IN CLOB
    , thread_id_out OUT PLS_INTEGER)
   IS
   BEGIN
      INSERT INTO qdb_threads (...) VALUES (...)
      RETURNING thread_id
           INTO thread_id_out;
   END;
END qdb_thread_mgr;

It's not hard to argue, in this case, "so what, why not?" After all, the procedure consists of just a single INSERT statement. Why not copy-paste it? I get that, but here's the thing to keep in mind always with code:
It's only going to get more complicated over time.
That one statement will grow to three statements, then to 25 statements. And each time, along the way, you must remember to keep the two procedures in synch. And what if there are five of them?

It makes so much more sense to have a single "reference" procedure or function with all of the common logic in it. Each overloading then takes any actions specific to it before calling the reference procedure, followed by any finishing-up actions.

For the thread manager package, this means that the procedure returning the new primary key is the "reference" implementation, and the original procedure (which ignores the new primary key) simply calls it:

PACKAGE BODY qdb_thread_mgr
IS
   PROCEDURE insert_thread (
      user_id_in IN PLS_INTEGER
    , parent_thread_id_in IN PLS_INTEGER
    , thread_type_in IN VARCHAR2
    , subject_in IN VARCHAR2
    , body_in IN CLOB
    , thread_id_out OUT PLS_INTEGER)
   IS
   BEGIN
      INSERT INTO qdb_threads (...)
      VALUES (...)
      RETURNING thread_id
           INTO thread_id_out;
   END;

   PROCEDURE insert_thread (
      user_id_in IN PLS_INTEGER
    , parent_thread_id_in IN PLS_INTEGER
    , thread_type_in IN VARCHAR2
    , subject_in IN VARCHAR2
    , body_in IN CLOB)
   IS
      l_id PLS_INTEGER;
   BEGIN
      insert_thread (
         user_id_in => user_id_in
       , parent_thread_id_in => parent_thread_id_in
       , thread_type_in => thread_type_in
       , subject_in => subject_in
       , body_in => body_in
       , thread_id_out => l_id);
   END;
END qdb_thread_mgr;

This is straightforward stuff, not rocket science. It just comes down to discipline and an aversion to repetition. Of course, sometimes it's a bit more of an effort to identify all the common logic and corral it into its own procedure. But it is a refactoring project that is well worth the effort.

CASE Expressions Not Statements

One of things I like best about CASE over IF is that it comes in two flavors: a statement (like IF) and an expression. CASE expressions help me tighten up my code (check out this LiveSQL script for examples).

Consider the following function, which returns the start date for the specified period (month, quarter or year) and date.

FUNCTION start_date (
   frequency_in IN VARCHAR2,
   date_in IN DATE DEFAULT SYSDATE)
   RETURN VARCHAR2
IS
BEGIN
   IF frequency_in = 'Y'
   THEN
      RETURN TO_CHAR (ADD_MONTHS (date_in, -12), 'YYYY-MM-DD');
   ELSIF frequency_in = 'Q'
   THEN
      RETURN TO_CHAR (ADD_MONTHS (date_in, -3), 'YYYY-MM-DD');
   ELSIF frequency_in = 'M'
   THEN
      RETURN TO_CHAR (ADD_MONTHS (date_in, -1), 'YYYY-MM-DD');
   END IF;
END;

Hmmm. Methinks there's some repetition of logic in there. CASE expression to the rescue!

BEGIN
   RETURN TO_CHAR (
      CASE frequency_in
         WHEN 'Y' THEN ADD_MONTHS (date_in, -12)
         WHEN 'Q' THEN ADD_MONTHS (date_in, -3)
         WHEN 'M' THEN ADD_MONTHS (date_in, -1)
      END,
      'YYYY-MM-DD');
END;

Now I have a single RETURN statement (that always makes me breathe a sigh of relief when I have to debug or maintain a function). But wait! I still see some repetition. Let's take another pass at this one.

BEGIN
   RETURN TO_CHAR (
      ADD_MONTHS (
         date_in,
         CASE frequency_in
            WHEN 'Y' THEN -12
            WHEN 'Q' THEN -3
            WHEN 'M' THEN -1
         END),
      'YYYY-MM-DD');
END;

Now all repetition has been removed and CASE expression simply converts a period type to a number of months to "go back."

You can probably see that using CASE expressions isn't going to result in some massive reduction in code volume (same with qualified expressions).

But:

  • Every little bit counts.
  • The more you get into the habit of paying attention to unnecessary code and finding ways to get rid of it, the more examples you will find.
Well, I bet you've got your own ideas for writing lean PL/SQL code. Let me know!

PL/SQL 101: Defining and managing transactions

If you've got a read-only database, you don't have to worry about transactions. But for almost every application you're ever going to build, that is not the case. The concept and management of transactions is therefore central to the success of your application.

A transaction is a sequence of one or more SQL statements that Oracle Database treats as a unit: either all of the statements are performed, or none of them are. A transaction implicitly begins with any operation that obtains a TX lock:
  • When a statement that modifies data is issued (e.g., insert, update, delete, merge)
  • When a SELECT ... FOR UPDATE statement is issued
  • When a transaction is explicitly started with a SET TRANSACTION statement or the DBMS_TRANSACTION package
Issuing either a COMMIT or ROLLBACK statement explicitly ends the current transaction.

This post reviews how to define, manage and control the transactions in your application with the following statements and features:
  • Commit and Rollback
  • Savepoints
  • Autonomous transactions
  • The SET TRANSACTION statement
You can find lots more details in the Transaction Processing and Control (doc) and in the links to Oracle Live SQL and Oracle Dev Gym resources below.

Commit and Rollback

Recall the definition of a transaction: "A transaction is a sequence of one or more SQL statements that Oracle Database treats as a unit: either all of the statements are performed, or none of them are." When all of the statements are "performed", that means you committed them or saved them to the database.

Use the COMMIT statement to save all changes, and make them visible to other users. Remember: no one can see the changes made in a particular session until they are committed. Once committed, every user with access to the affected tables sees the new "state" of the table.

Use the ROLLBACK statement to reverse all changes since the last commit (or since you started the first transaction in your session).
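
For instance, here is a minimal sketch (it assumes the toys table created a bit later in this post):

BEGIN
   UPDATE toys SET colour = 'red' WHERE toy_id = 8;
   COMMIT;     -- the change is now permanent and visible to other sessions

   UPDATE toys SET colour = 'blue' WHERE toy_id = 8;
   ROLLBACK;   -- the blue change is erased; the colour is 'red' again
END;
/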

This LiveSQL tutorial  (a part of the Databases for Developers course on the Oracle Dev Gym) demonstrates these basic elements of transaction management.

But wait! What if you only want to reverse some of the changes in your session, but leave others in place, ready to commit at some point in the future? Welcome to the world of savepoints.

Savepoints

Savepoints let you roll back part of a transaction instead of the whole transaction. The number of active savepoints for each session is unlimited.

When you roll back to a savepoint, any savepoints marked after that savepoint are erased. The savepoint to which you roll back is not erased. A simple rollback or commit erases all savepoints.

Savepoint names are undeclared identifiers. Reusing a savepoint name in a transaction moves the savepoint from its old position to the current point in the transaction, which means that a rollback to the savepoint affects only the current part of the transaction.

For the recursive programmers among us: If you mark a savepoint in a recursive subprogram, new instances of the SAVEPOINT statement run at each level in the recursive descent, but you can only roll back to the most recently marked savepoint.

Here is an example of using a savepoint (drawn from the same LiveSQL tutorial):

CREATE TABLE toys
(
   toy_id INTEGER,
   toy_name VARCHAR2 (100),
   colour VARCHAR2 (10)
)
/

DECLARE
   l_count INTEGER;
BEGIN
   INSERT INTO toys (toy_id, toy_name, colour)
        VALUES (8, 'Pink Rabbit', 'pink');

   SAVEPOINT after_six;

   INSERT INTO toys (toy_id, toy_name, colour)
        VALUES (9, 'Purple Ninja', 'purple');

   SELECT COUNT (*)
     INTO l_count
     FROM toys
    WHERE toy_id IN (8, 9);

   DBMS_OUTPUT.put_line (l_count);

   ROLLBACK TO SAVEPOINT after_six;

   SELECT COUNT (*)
     INTO l_count
     FROM toys
    WHERE toy_id IN (8, 9);

   DBMS_OUTPUT.put_line (l_count);

   ROLLBACK;

   SELECT COUNT (*)
     INTO l_count
     FROM toys
    WHERE toy_id IN (8, 9);

   DBMS_OUTPUT.put_line (l_count);
END;
/

2
1
0


Autonomous Transactions

By default, when you execute a COMMIT statement, all unsaved changes in your session are saved. And when you roll back, all unsaved changes are erased.

Sometimes, though, we'd like to save just one of our changes, but not the others. The most typical use case for this scenario is error logging. I want to write information out to my error log table and save it, but then I need to roll back the transaction (there is, after all, an error).

It's possible I could use savepoints to do that (see previous section), but that is hard to get right consistently and dependably when you are calling a reusable logging program. Fortunately, I can simply make my error logging procedure an autonomous transaction. Then I can insert the error information and commit that insert, without affecting the business transaction, which will subsequently be rolled back.

And it's so easy to do!

Simply add this statement to the declaration section of a procedure or function:

PRAGMA AUTONOMOUS_TRANSACTION;

The following rule then applies: before the subprogram can terminate and pass control back to the calling block, any DML changes made within that subprogram must be committed or rolled back.

If there are any unsaved changes, the PL/SQL engine raises the ORA-06519 exception, as shown below:

CREATE OR REPLACE FUNCTION nothing
   RETURN INTEGER
IS
   PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
   UPDATE employees SET last_name = 'abc';

   RETURN 1;
END;
/

BEGIN
   DBMS_OUTPUT.put_line (nothing);
END;
/

ORA-06519: active autonomous transaction detected and rolled back
ORA-06512: at "STEVEN.NOTHING", line 10
ORA-06512: at line 2

Here's an example of using this feature in an error logging procedure:

CREATE OR REPLACE PACKAGE BODY error_mgr
IS
   PROCEDURE log_error (app_info_in IN VARCHAR2)
   IS
      PRAGMA AUTONOMOUS_TRANSACTION;
      c_code   CONSTANT INTEGER := SQLCODE;
   BEGIN
      INSERT INTO error_log (created_on,
                             created_by,
                             errorcode,
                             callstack,
                             errorstack,
                             backtrace,
                             error_info)
           VALUES (SYSTIMESTAMP,
                   USER,
                   c_code,
                   DBMS_UTILITY.format_call_stack,
                   DBMS_UTILITY.format_error_stack,
                   DBMS_UTILITY.format_error_backtrace,
                   app_info_in);

      COMMIT;
   END;
END;
/
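
And here's a sketch of how you might call that procedure from an exception handler (the error_mgr specification and error_log table are assumed to match the body above):

BEGIN
   UPDATE employees SET salary = salary * 2;

   -- something goes wrong...
   RAISE PROGRAM_ERROR;
EXCEPTION
   WHEN OTHERS
   THEN
      -- the insert into error_log is committed independently...
      error_mgr.log_error ('While doubling salaries');

      -- ...so the business transaction can still be rolled back
      ROLLBACK;
      RAISE;
END;
/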

This LiveSQL script contains the full (and very basic) error logging package.

This LiveSQL script demonstrates the effect of an autonomous transaction pragma.

The SET TRANSACTION Statement

Use the SET TRANSACTION statement to establish the current transaction as read-only or read/write, establish its isolation level, assign it to a specified rollback segment, or assign a name to the transaction.

When you set your transaction to read-only, queries return data as it existed at the time the transaction began, and you can run only SELECT statements. Here's an example of using this option, drawn from Chris Saxon's excellent LiveSQL module:

set transaction read only;

select * from toys;

update toys
   set price = price + 1;

declare
   pragma autonomous_transaction;
begin
   update toys set price = 99.00;
   commit;
end;
/

select * from toys;

commit;

select * from toys;

Here are the results when run in LiveSQL:


Oracle supports just two isolation levels: Read Committed and Serializable.

Read Committed

This is the default mode for Oracle Database. Using read committed, you have statement-level consistency. This means that each statement (SELECT, INSERT, UPDATE, or DELETE) sees all the data saved before it begins. Any changes committed by other sessions after it starts are hidden.

It does this using multiversion concurrency control (MVCC). When you update or delete a row, the database stores the row's prior state in undo, so other transactions can use that undo to view data as it existed in the past.

Serializable

When you set your transaction to serializable, the database acts as if you are the only user of the database. Changes made by other transactions are hidden from you. Serializable also stops you from changing rows modified by other transactions, raising this error:

ORA-08177 can't serialize access for this transaction

You are, in other words, isolated.

Consider using serializable when a transaction accesses the same rows many times and many sessions will run that transaction at the same time.
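
Here's a minimal sketch of requesting that isolation level (SET TRANSACTION must be the first statement of the transaction; the toys table with its price column is from Chris's module):

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- ORA-08177 is raised here if another transaction has modified
-- and committed these rows since our transaction began
UPDATE toys SET price = price + 1;

COMMIT;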

Chris Saxon covers all of these topics and more in his LiveSQL module. Be sure to check it out!

Working With JSON Arrays in PL/SQL

Oracle Database 12c Release 2 built upon the 12.1 SQL/JSON features by adding a number of built-in object types (similar to classes in object-oriented languages) for manipulating JSON data in PL/SQL blocks.

In this post, I explore some of the array-oriented JSON features, all made available through the JSON_ARRAY_T type and its methods.

Just like a class, an object type offers a pre-defined constructor function to instantiate new instances of that type, static methods and member methods.

Here are the methods you are most likely to use (all of them appear later in this post):
  • PARSE – Convert JSON text into a JSON_ARRAY_T instance
  • GET_SIZE – Return the number of elements in the array
  • GET and GET_STRING – Return the element (or its string value) at a specified position
  • APPEND, APPEND_NULL, PUT and PUT_NULL – Add or modify elements
  • STRINGIFY and TO_STRING – Convert the array back to JSON text
  • ON_ERROR – Control whether errors are raised or NULL is returned

A couple of things to remember about working with JSON elements generally, and JSON arrays specifically, in PL/SQL:

Error Handling Behavior

By default, if an error occurs when you call a member method for your JSON array (or object), NULL is returned. In other words, an exception is not raised back to your block.

If you want errors to be propagated from the method as an exception, call the ON_ERROR method and pass a value greater than 0.
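
Here's a minimal sketch (my own): without the on_error call, the out-of-range get_string below would quietly return NULL.

DECLARE
   l_array   json_array_t := json_array_t.parse ('[1,2,3]');
BEGIN
   -- 0 (the default) returns NULL on error; a value > 0 raises the error
   l_array.on_error (1);

   DBMS_OUTPUT.put_line (l_array.get_string (10));
END;
/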

Array Indexes

In PL/SQL, as you probably know, indexing in nested tables and varrays starts at 1, not 0. With associative arrays, it can start wherever you want it to start. :-)

JSON array indexing starts at 0, as is common in many other programming languages, and we follow that convention with JSON arrays in the Oracle Database. So you don't want to iterate through a JSON array with a loop header like this:

FOR indx IN 1 .. my_array.get_size()

Instead, you should write something like this:

FOR indx IN 0 .. my_array.get_size() - 1

JSON Array Basics

An array is a comma-delimited list of elements inside square brackets, as in:

["SQL", "PL/SQL"]

The index for a JSON array starts at 0, which is different from the norm for PL/SQL collections (nested tables and varrays start at index value 1).

So the array shown above has elements defined at index values 0 and 1, not 1 and 2.

The ordering of elements in an array is significant, in contrast to objects, in which the ordering of members is not significant (similar to relational tables).

A JSON array can contain scalars, objects and arrays within it. These are all valid JSON arrays:

1. An array containing a single scalar value

[1]

2. An array containing three scalars

[1,2,"three"]

3. An array of three JSON objects

[{"object":1},{"inside":2},{"array":3}]

4. An array containing a Boolean literal, an array of scalars, and an object

[true,
 [1,2,3],
 {"name":"steven"}
]

Build Your Own Array

Sometimes the array is provided to you, and you need to go exploring (see Recursive Looping Through An Array, below). Sometimes you need to construct an array from data in a table or your program.

The JSON_ARRAY_T type offers a number of member procedures to BYOA ("build your own array"); a quick sketch of put and append follows this list:
  • APPEND – Append a new item to the end of the array
  • APPEND_NULL – Append a JSON null to the end of the array
  • PUT – Add or modify the element at a specified position in the array
  • PUT_NULL – Set the value of the element at a specified position to JSON null
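
Here's that minimal sketch (mine, not from the LiveSQL script mentioned below); the third argument to put indicates whether to overwrite an existing element at that position:

DECLARE
   l_array   json_array_t := json_array_t.parse ('[1,2,3]');
BEGIN
   l_array.put (0, 100, TRUE);    -- replace the element at index 0
   l_array.append ('four');       -- add a new element at the end

   DBMS_OUTPUT.put_line (l_array.to_string ());
END;
/
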
To demonstrate append, I created a "to JSON" package that converts a string-indexed associative array to a JSON array (it also contains other "to JSON" functions; try it out yourself with this LiveSQL script).

Each element in the JSON array returned is a JSON object in the form

{"index-value":"item-value"}

where index-value is the string index value in the associative array, and item-value is the value of the item at that location in the array.

Here's the package specification; note that the associative array is indexed by a subtype, INDEX_T, which is defined as VARCHAR2(50).

PACKAGE to_json
   AUTHID DEFINER
IS
   SUBTYPE index_t IS VARCHAR2 (50);

   TYPE assoc_array_t IS TABLE OF VARCHAR2 (100)
      INDEX BY index_t;

   FUNCTION to_object (key_in IN VARCHAR2, value_in IN VARCHAR2)
      RETURN json_object_t;

   FUNCTION to_array (assoc_array_in IN assoc_array_t)
      RETURN json_array_t;
END;

And here's the package body:

PACKAGE BODY to_json
IS
   FUNCTION to_object (key_in IN VARCHAR2, value_in IN VARCHAR2)
      RETURN json_object_t
   IS
   BEGIN
      RETURN json_object_t ('{"' || key_in || '":"' || value_in || '"}');
   END;

   FUNCTION to_array (assoc_array_in IN assoc_array_t)
      RETURN json_array_t
   IS
      l_index        index_t := assoc_array_in.FIRST;
      l_json_array   json_array_t := json_array_t ();
   BEGIN
      WHILE l_index IS NOT NULL
      LOOP
         DBMS_OUTPUT.put_line (
            'Appending ' || l_index || ':' || assoc_array_in (l_index));

         l_json_array.append (to_object (l_index, assoc_array_in (l_index)));

         DBMS_OUTPUT.put_line ('Watch it grow! ' || l_json_array.get_size ());

         l_index := assoc_array_in.NEXT (l_index);
      END LOOP;

      RETURN l_json_array;
   END;
END;

The to_object function hides all the details of constructing a valid JSON object from key and value. The to_array function is explained below:

  • Accept an associative array, return a JSON array object type instance.
  • Since this is a string-indexed collection, I cannot use a "FOR indx IN 1 .. array.COUNT" approach. Instead, I start with the lowest-defined index value (retrieved with a call to the FIRST method) and use a WHILE loop.
  • Call the JSON_ARRAY_T append member method to add an element to the end of the JSON array. What am I adding? A JSON object that is constructed from the associative array index and item, using the to_json.to_object function.
  • Find the next defined index value (remember: strings!). The NEXT function returns NULL when going past the last index value, and that will stop the WHILE loop.
  • Return the JSON array.

Time to run some code!

In the following block, I take advantage of the new-to-18c qualified expression feature, allowing me to initialize the contents of a string-indexed array with a single expression. I then convert it to a JSON array, and display the results, all in a single call to DBMS_OUTPUT.put_line:

DECLARE
   l_array   to_json.assoc_array_t :=
      to_json.assoc_array_t ('yes' => 'you',
                             'can' => 'in',
                             'oracledatabase' => '18c',
                             'fullstop' => NULL,
                             'and then' => 'some');
BEGIN
   DBMS_OUTPUT.put_line (to_json.to_array (l_array).to_string ());
END;
/

Here are the results:

Appending and then:some
Watch it grow! 1
Appending can:in
Watch it grow! 2
Appending fullstop:
Watch it grow! 3
Appending oracledatabase:18c
Watch it grow! 4
Appending yes:you
Watch it grow! 5
[{"andthen":"some"},{"can":"in"},{"fullstop":""},{"oracledatabase":"18c"},{"yes":"you"}]

Notice that the items in the JSON array are not in the same order as they appeared in the qualified expression that populated the associative array. That's because string-indexed associative arrays automatically order their elements by index value (in character set order), and to_array iterates through them in that order.

Recursive Looping Through An Array

Some JSON arrays are simple lists of scalars, or even objects. But many arrays have within them other arrays.  And with these arrays-with-nested-arrays, you might want to iterate through all the "leaves" in that hierarchical structure. The easiest way to do that is with recursion. Let's build a procedure to do just that.

All the code in this section may be found, run and played around with on LiveSQL.

First, I will create a helper procedure to display the string, indented to show its place in the JSON array hierarchy:

CREATE OR REPLACE PROCEDURE put_line (
   string_in   IN VARCHAR2,
   pad_in      IN INTEGER DEFAULT 0)
IS
BEGIN
   -- pad a single space (not '', which is NULL) so the indentation appears
   DBMS_OUTPUT.put_line (LPAD (' ', pad_in * 3) || string_in);
END;
/

My version of DBMS_OUTPUT.put_line is used in several places in the json_array_traversal procedure, shown below.

CREATE OR REPLACE PROCEDURE json_array_traversal (
   json_document_in   IN CLOB,
   leaf_action_in     IN VARCHAR2,
   level_in           IN INTEGER DEFAULT 0)
   AUTHID DEFINER
IS
   l_array     json_array_t;
   l_object    json_object_t;
   l_keys      json_key_list;
   l_element   json_element_t;
BEGIN
   l_array := json_array_t.parse (json_document_in);

   put_line ('Traverse: ' || l_array.stringify (), level_in);

   FOR indx IN 0 .. l_array.get_size - 1
   LOOP
      put_line ('Index: ' || indx, level_in);

      CASE
         WHEN l_array.get (indx).is_string
         THEN
            EXECUTE IMMEDIATE leaf_action_in
               USING l_array.get_string (indx), level_in;
         WHEN l_array.get (indx).is_object
         THEN
            l_object := TREAT (l_array.get (indx) AS json_object_t);

            l_keys := l_object.get_keys;

            FOR k_index IN 1 .. l_keys.COUNT
            LOOP
               EXECUTE IMMEDIATE leaf_action_in
                  USING l_keys (k_index), level_in;
            END LOOP;
         WHEN l_array.get (indx).is_array
         THEN
            json_array_traversal (
               TREAT (l_array.get (indx) AS json_array_t).stringify (),
               leaf_action_in,
               level_in + 1);
         ELSE
            DBMS_OUTPUT.put_line (
               '*** No match for type on array index ' || indx);
      END CASE;
   END LOOP;
END;
/

Here's a narrative description of that code:

Pass in a CLOB containing a JSON document, which for this procedure should be an array. The actual value for the "leaf action" parameter is a dynamic PL/SQL block to be executed when a leaf is encountered. It is unlikely you would use anything this generic in production code, but it could be very handy as a utility.

Define a number of instances of JSON object types: an array, an object, key list, and element.

Parse the document (text) into a hierarchical, in-memory representation. At this point, if json_document_in is not a valid array, the following error is raised:

ORA-40587: invalid JSON type

You can verify this with the following block:

DECLARE
   l_doc   CLOB := '{"name":"Spider"}';
BEGIN
   json_array_traversal (
      l_doc,
      q'[BEGIN NULL; END;]');
END;
/

OK, then I display the document passed in, taking advantage of the stringify method.

Iterate through each element in the array. The get_size method returns the number of elements in the array. Remember that JSON array indexes start with zero (0). So this works:

FOR indx IN 0 .. l_array.get_size - 1

But a formulation consistent with iteration through a PL/SQL nested table, such as:

FOR indx IN 1 .. l_array.get_size

is likely to result in this error:

ORA-30625: method dispatch on NULL SELF argument is disallowed

An element in an array can be a scalar, an object, or another array, so I provide a WHEN clause for each possibility. Well, not each and every: there are more types of scalars than strings, but I leave the expansion of the CASE statement to cover all scalar types to my dear readers.
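
To get you started on that expansion, here is a sketch of one more branch (for numeric scalars) that you might add to the CASE statement shown above:

WHEN l_array.get (indx).is_number
THEN
   -- pass the number to the leaf action as text
   EXECUTE IMMEDIATE leaf_action_in
      USING TO_CHAR (l_array.get_number (indx)), level_in;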

If the element is a scalar string, then I use native dynamic SQL to execute the provided PL/SQL block. I pass the string value (by calling the get_string method for that index value) and the level (so that the entry is properly indented in the output).

For an object, I get all of its keys and then take the leaf action for each of the key values. Note: this is the action I chose to perform for an object. In a more complete implementation, you would iterate through the values of the object, and take specific action depending on the value's type. For example, an object could have an array within it, as in:

{"chicken_noises":["click","clack","cluck"]}

Finally, if an array, I call the traversal procedure recursively, passing:

1. This element, cast to an array and then converted back to string format.
2. The same leaf action dynamic block.
3. The level, raised by 1.

When I call the traversal procedure as follows:

DECLARE
   l_doc   CLOB :=
      '["Stirfry",
        {"name":"Spider"},
        "Mosquitos",
        ["finger","toe","nose"]
       ]';
BEGIN
   json_array_traversal (
      l_doc,
      q'[BEGIN put_line ('Leaf: '|| :val, :tlevel); END;]');
END;
/

I see the following output:

Traverse: ["Stirfry",{"name":"Spider"},"Mosquitos",["finger","toe","nose"]]
Index: 0
Leaf: Stirfry
Index: 1
Leaf: name
Index: 2
Leaf: Mosquitos
Index: 3
   Traverse: ["finger","toe","nose"]
   Index: 0
   Leaf: finger
   Index: 1
   Leaf: toe
   Index: 2
   Leaf: nose

And with the following invocation:

DECLARE
   l_doc   CLOB :=
      '["Stirfry",
        {"name":"Spider"},
        "Mosquitos",
        ["finger",
         "toe",
         [{"object":1},{"inside":2},{"array":3}]
        ],
        {"elbow":"tennis"}
       ]';
BEGIN
   json_array_traversal (
      l_doc,
      q'[BEGIN put_line ('Leaf: '|| :val, :tlevel); END;]');
END;
/

I see this output:

Traverse: ["Stirfry",{"name":"Spider"},"Mosquitos",["finger","toe",[{"object":1},{"inside":2},{"array":3}]],{"elbow":"tennis"}]
Index: 0
Leaf: Stirfry
Index: 1
Leaf: name
Index: 2
Leaf: Mosquitos
Index: 3
   Traverse: ["finger","toe",[{"object":1},{"inside":2},{"array":3}]]
   Index: 0
   Leaf: finger
   Index: 1
   Leaf: toe
   Index: 2
      Traverse: [{"object":1},{"inside":2},{"array":3}]
      Index: 0
      Leaf: object
      Index: 1
      Leaf: inside
      Index: 2
      Leaf: array
Index: 4
Leaf: elbow

Summary

JSON arrays are widely and heavily used. They are also extremely flexible, as they can contain scalars, objects and other arrays. The more complex and nested the structure of your JSON array, the more challenging it can be to work with.

The JSON_ARRAY_T object type offers a clean, fast API for interrogating and constructing JSON arrays. Once you are able to correlate PL/SQL arrays with JSON arrays (correcting for differences in indexing, for example), you will find it easy to work productively with JSON arrays in your PL/SQL code.


Time for another Dev Gym PL/SQL Championship!

A new year has arrived, and that means that it's time (or will soon be time) for the PL/SQL Challenge Championship, when up to fifty top players from last year's PL/SQL tournament quizzes compete for top honors.

The following players will be invited to participate in the PL/SQL Challenge Championship for 2018, currently scheduled to take place on 26 February (hey, it takes some time to put together five advanced quizzes without any mistakes in them!).

The number in parentheses after each name is the number of championships in which that player has already participated (note: from 2010 through 2013, we held quarterly championships for our then daily PL/SQL quiz!).

Congratulations to all listed below on their accomplishment and best of luck in the upcoming championship!


Rank | Name
1 | Stelios Vlasopoulos (15)
2 | mentzel.iudith (18)
3 | Tony Winn (7)
4 | NielsHecker (19)
5 | Andrey Zaytsev (7)
6 | patch72 (5)
7 | Ivan Blanarik (12)
8 | siimkask (18)
9 | Rakesh Dadhich (10)
10 | Rytis Budreika (6)
11 | Vyacheslav Stepanov (17)
12 | li_bao (6)
13 | Chad Lee (15)
14 | Oleksiy Ponomarenko (3)
15 | _tiki_4_ (11)
16 | Michal P. (2)
17 | Maxim Borunov (5)
18 | Jan Šerák (4)
19 | seanm95 (5)
20 | msonkoly (3)
21 | Chase Mei (4)
22 | JustinCave (15)
23 | tonyC (4)
24 | Ludovic Szewczyk (3)
25 | Henry_A (5)
26 | mcelaya (3)
27 | PZOL (4)
28 | Talebian (5)
29 | Mike Tessier (2)
30 | swesley_perth (4)
31 | pjas (1)
32 | Aleksei Davletiarov (0)
33 | syukhno (1)
34 | JasonC (3)
35 | Otto Palenicek (2)
36 | Karel_Prech (8)
37 | Sachi (2)
38 | ted (1)
39 | Köteles Zsolt (0)
40 | MarcusM (4)
41 | Sartograph (1)
42 | JeroenR (11)
43 | RalfK (0)
44 | NickL (3)
45 | HSteijntjes (1)
46 | Rimantas Adomauskas (5)
47 | Sandra99 (2)
48 | st_guitar (0)
49 | pablomatico (1)
50 | richdellheim (1)

Logic Reigns in the Oracle Dev Gym Logic Championship

Logic is at the very heart of programming, so we complement our quizzes on SQL, PL/SQL and so on with a weekly Logic tournament. And then at the end of the year, the top 50 ranked players qualify for our annual championship.

The following players will be invited to participate in the Logic Annual Championship for 2018, currently scheduled to take place on 19 February.

The number in parentheses after each name is the number of championships in which that player has already participated.

Congratulations to all listed below on their accomplishment and best of luck in the upcoming competition!

Rank | Name
1 | Stelios Vlasopoulos (5)
2 | Pavel Zeman (4)
3 | mentzel.iudith (5)
4 | Ludovic Szewczyk (1)
5 | James Su (5)
6 | Chad Lee (5)
7 | Tony Winn (3)
8 | Rytis Budreika (5)
9 | ted (5)
10 | Kanellos (4)
11 | Cor van Berkel (4)
12 | Köteles Zsolt (4)
13 | Vijay Mahawar (5)
14 | RalfK (4)
15 | pjas (1)
16 | Mike Tessier (3)
17 | seanm95 (5)
18 | NickL (4)
19 | Michal P. (1)
20 | Eric Levin (4)
21 | Sandra99 (5)
22 | Mehrab (5)
23 | JasonC (5)
24 | Talebian (4)
25 | NielsHecker (5)
26 | richdellheim (5)
27 | Sartograph (3)
28 | tonyC (4)
29 | Kias (2)
30 | craig.mcfarlane (4)
31 | umir (5)
32 | li_bao (3)
33 | Vyacheslav Stepanov (5)
34 | Stanislovas (2)
35 | msonkoly (3)
36 | Patel Sanjay (1)
37 | mcelaya (3)
38 | whab@tele2.at (3)
39 | MarkusId (0)
40 | saddaymay (2)
41 | gabt (0)
42 | Alexuboo (1)
43 | Vladimir13 (3)
44 | JustinCave (5)
45 | TZ (3)
46 | Stephan (0)
47 | Dan Kiser (4)
48 | jamesravid (0)
49 | Arūnas Antanaitis (3)
50 | Henry_A (1)

Three Hot Tips for Working With Collections


Collections in PL/SQL make it easy for you to implement lists, arrays, stacks, queues, etc. They come in three flavors: associative arrays, nested tables, and varrays. The three types of collections share many features, and also have their own special characteristics.

Here are some tips for making the most of collections. At the bottom of the post, I offer links to a number of resources for diving in more deeply on collections.

You Can Query From Collections

Collections are, for the most part, variables you will declare and manipulate in PL/SQL. But you can query from them using the TABLE operator (and in 12.2 and higher you can even leave off that operator).

Use this feature to:
  • Manipulate table data and in-session collection data within a single SELECT.
  • Use the set-oriented power of SQL on your in-session data.
  • Build table functions (functions that return collections and can be called in the FROM clause of a query).
Here's a simple demonstration:

CREATE OR REPLACE TYPE list_of_names_t
   IS TABLE OF VARCHAR2 (100);
/

DECLARE
   happyfamily   list_of_names_t := list_of_names_t ();
BEGIN
   happyfamily.EXTEND (7);
   happyfamily (1) := 'Veva';
   happyfamily (2) := 'Chris';
   happyfamily (3) := 'Lauren';
   happyfamily (4) := 'Loey';
   happyfamily (5) := 'Eli';
   happyfamily (6) := 'Steven';
   happyfamily (7) := 'Juna';

   FOR rec IN (  SELECT COLUMN_VALUE the_name
                   FROM TABLE (happyfamily)
               ORDER BY the_name)
   LOOP
      DBMS_OUTPUT.put_line (rec.the_name);
   END LOOP;
END;
/

Chris
Eli
Juna
Lauren
Loey
Steven
Veva

Prior to Oracle Database 12c, you could only use nested tables and varrays with the TABLE operator. But with 12.1 and above, you can also use it with integer-indexed associative arrays.
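
Here's a minimal sketch of that 12.1-and-higher capability (the package and type names are my own); the associative array type must be declared in a package specification so that the SQL engine can see it:

CREATE OR REPLACE PACKAGE names_pkg AUTHID DEFINER
IS
   TYPE names_t IS TABLE OF VARCHAR2 (100)
      INDEX BY PLS_INTEGER;
END;
/

DECLARE
   l_names   names_pkg.names_t;
BEGIN
   l_names (1) := 'Veva';
   l_names (2) := 'Chris';

   -- TABLE over an integer-indexed associative array (12.1 and higher)
   FOR rec IN (SELECT COLUMN_VALUE the_name FROM TABLE (l_names))
   LOOP
      DBMS_OUTPUT.put_line (rec.the_name);
   END LOOP;
END;
/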

You can do a lot more than simply construct a simple SELECT around your collection (or a function that returns the collection). You can join that collection with other collections or other tables. You can perform set-level operations like UNION and MINUS. You can, in short, treat that collection as a read-only set of rows and columns like any other.

LiveSQL offers a number of scripts demonstrating the TABLE operator here.

Collections Consume Session (PGA) Memory

Just like almost every other type of variable (or constant), collections use PGA - process global area - memory, rather than SGA - system global area - memory. This means that the memory for collections is consumed per session. 

Suppose you have a program that populates a collection with 1000s of elements of data (which could even be records, not simply scalar values). In that case, every session that executes your program will use that same amount of memory. Things could get out of hand quickly.

When writing your program, ask yourself how many sessions might run it simultaneously and if there are ways to manage/limit the amount of memory used to populate the collection.

If, for example, you are using BULK COLLECT to populate a collection from a query, stay away from SELECT-BULK COLLECT-INTO. That approach could cause issues down the line, as the volume of data returned by the query increases. Consider, instead, using an explicit cursor, with a FETCH statement and a LIMIT clause. The program might need to retrieve 1M rows, but you can fetch just 100 or 1000 at a time, and therefore cap the total PGA consumed (and reused).
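
Here's a minimal sketch of that fetch-with-limit pattern (the limit of 100 is arbitrary; the employees table stands in for your own):

DECLARE
   CURSOR emp_cur IS SELECT employee_id FROM employees;

   TYPE employee_ids_t IS TABLE OF employees.employee_id%TYPE;

   l_ids   employee_ids_t;
BEGIN
   OPEN emp_cur;

   LOOP
      -- never more than 100 elements in memory at a time
      FETCH emp_cur BULK COLLECT INTO l_ids LIMIT 100;

      EXIT WHEN l_ids.COUNT = 0;

      FOR indx IN 1 .. l_ids.COUNT
      LOOP
         NULL;   -- process each element here
      END LOOP;
   END LOOP;

   CLOSE emp_cur;
END;
/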

You'll find lots more information about setting that limit value here.

I offer a package to help you analyze how much PGA memory your session has consumed here.


FOR Loops Don't Work with Sparse Collections

Many of your collections will be dense (all index values between lowest and highest are defined), but in some cases (especially with associative arrays), your collections will be sparse. If you try to use a numeric FOR loop to iterate through the collection's elements, you will hit a NO_DATA_FOUND exception.

Instead, use the navigation methods (FIRST, LAST, NEXT, PRIOR) to move from one defined index value to the next, and "skip over" undefined values.

In the first block below, l_animals.FIRST returns -100 and that name is then printed. But the next integer higher than -100 is -99. That is not a defined index value, so the attempt to read l_animals(-99) causes the PL/SQL runtime engine to raise a NO_DATA_FOUND exception.

In the second block, I again start with the index value returned by a call to the FIRST method. But I then use NEXT to find the next highest defined index value. So it takes me straight to 1000 and I display the name of the species at that location. The subsequent call to NEXT returns NULL and the loop stops.

Note: Lupus is the species name for wolf, and Loxodonta is the species name for African elephants. May they all live long and prosper!

DECLARE
   TYPE species_t IS TABLE OF VARCHAR2 (10)
      INDEX BY PLS_INTEGER;

   l_animals   species_t;
BEGIN
   l_animals (-100) := 'Lupus';
   l_animals (1000) := 'Loxodonta';

   FOR indx IN l_animals.FIRST .. l_animals.LAST
   LOOP
      DBMS_OUTPUT.put_line (l_animals (indx));
   END LOOP;
END;
/

Lupus
ORA-01403: no data found

DECLARE
   TYPE species_t IS TABLE OF VARCHAR2 (10)
      INDEX BY PLS_INTEGER;

   l_animals   species_t;
   l_index     PLS_INTEGER;
BEGIN
   l_animals (-100) := 'Lupus';
   l_animals (1000) := 'Loxodonta';

   l_index := l_animals.FIRST;

   WHILE l_index IS NOT NULL
   LOOP
      DBMS_OUTPUT.put_line (l_animals (l_index));
      l_index := l_animals.NEXT (l_index);
   END LOOP;
END;
/

Lupus
Loxodonta


You might think to yourself: OK, I will never use a FOR loop with collections. I will always use a WHILE loop with FIRST and NEXT (or LAST and PRIOR to go backwards). That way, it'll work whether the collection is sparse or dense.

That approach probably will not cause any trauma (performance will be fine), but remember that if you do this, you will never notice that a collection which was supposed to be dense actually became sparse due to a problem in your code (or user error :-) )! In other words, you may be taking out "insurance" that covers up a bug!

More on Collections

How about over 5 hours of free, video-based instruction on collections?

How about over 45 scripts on LiveSQL?

Or Tim Hall's ORACLE-BASE article on collections.

Have at it!


Use PL/SQL to Build and Access Document Stores

What does soda have to do with PL/SQL and Oracle Database? Not much...but SODA. Ah, there we have a different story to tell.

SODA stands for "Simple Oracle Document Access." It's a set of NoSQL-style APIs that let you create and store collections of documents (most importantly JSON) in Oracle Database, retrieve them, and query them, without needing to know SQL or how the documents are stored in the database. Read lots more about SODA here.

As of Oracle Database 18c, we offer SODA APIs for Java, C, Node.js (JavaScript), Python, REST and PL/SQL.



I published an article on SODA for PL/SQL in Oracle Magazine; in this blog post, I focus on some highlights. Please do read the full article (and others still to come!). Also, Tim Hall of Oracle-BASE offers his usual outstanding treatment of this topic here.

SODA for PL/SQL? Whatever for?

First and most important, why would a database developer who writes PL/SQL want to avoid SQL and pretend that the amazing relational Oracle Database is a document store? :-)

Most backend database developers will, of course, stick to the normal way of using PL/SQL: as a way to enhance the SQL language, provide additional security and a means to implement business logic.

In large enterprises that have Oracle Database installed, however, there is an increasing demand from frontend (and/or full stack) developers to work with document databases. With the wide array of SODA APIs now available for Oracle Database, they can have the best of both worlds: the power and security of the world’s best relational database, combined with the ease and flexibility of JSON-based document management through easy-to-use NoSQL-style SODA drivers for various programming languages.

In addition, the PL/SQL SODA API makes it possible for database developers to access collections and documents created through other SODA APIs. Thus, a JavaScript developer could use the Node.js API to load JSON documents into the database. The SQL-savvy backend developer could then bring the full power of SQL to that data: indexing access to the documents and building efficient analytic queries against them.

Getting Started with SODA

All the SODA APIs share the same concepts and flow. First, since the point of SODA is to relieve a developer of the need to know SQL, the APIs are not table-focused. They are document-centric. Use the SODA API to manage (create, read, update, delete) documents of just about anything, including videos, images, and - most commonly - JSON documents.

Documents are organized into collections. You can have one collection for all your documents; you can create a collection for each type of document (my video collection, my song collection, etc.); or you can create collections for different components of your application.

You can query the contents of documents using pattern matching (query-by-example) or by using document keys.

All PL/SQL SODA operations are made available through the new-to-18c DBMS_SODA package and several object types, including SODA_COLLECTION_T and SODA_DOCUMENT_T. To use the package and manage SODA collections and documents in your schema of choice, the SODA_APP role will need to be granted to that schema.

That's all you need to get going with SODA in PL/SQL!
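
For example (my_app_schema is a made-up name; substitute your own):

GRANT SODA_APP TO my_app_schema;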

I show below an example of using elements of the API. I declare several variables based on object types defined for the SODA API, and I use the DBMS_SODA package to create a new collection (which holds one or more documents).

Then I use the insert_one_and_get method of the soda_collection_t type to insert a document, which is built using the constructor function of the soda_document_t type.

I then obtain the key value of that document, along with its media type, using methods of the soda_document_t type.

DECLARE
   l_collection     soda_collection_t;
   l_document       soda_document_t;
   l_new_document   soda_document_t;
BEGIN
   l_collection := dbms_soda.create_collection ('WithDocuments');

   IF l_collection.insert_one (
         soda_document_t (
            b_content => UTL_RAW.cast_to_raw (
               '{"friend_type":1,"friend_name":"Lakshmi"}'))) = 1
   THEN
      DBMS_OUTPUT.put_line ('BLOB document inserted');
   END IF;

   l_new_document :=
      l_collection.insert_one_and_get (
         soda_document_t (
            b_content => UTL_RAW.cast_to_raw (
               '{"friend_type":2,"friend_name":"Samuel"}')));

   DBMS_OUTPUT.put_line ('Samuel''s key: ' || l_new_document.get_key);
   DBMS_OUTPUT.put_line (
      'Samuel''s media_type: ' || l_new_document.get_media_type);
END;
/

BLOB document inserted
Samuel's key: 1697CFFB902A4FC2BFAD61DA31CF3B07
Samuel's media_type: application/json


There's lots more to explore, and I will be exploring it in the coming months. In the meantime, check out my Oracle Magazine article and give it a try!

Results of the Dev Gym Logic Championship for 2018

You will find below the rankings for the Logic Annual Championship for quizzes played in 2018. The number next to the player's name is the number of times that player has participated in a championship. Below the table of results for this championship, you will find another list showing the championship history of each of these players.

Congratulations first and foremost to our top-ranked players:

1st Place: Stelios Vlasopoulos

2nd Place: Pavel Zeman

3rd Place: Sartograph 


Next, congratulations to everyone who played in the championship. We hope you found it entertaining, challenging and educational. And for those who were not able to participate in the championship, you can take the quizzes through the Practice feature. We will also make the championship as a whole available as a Test, so you can take it just like these players did.

Finally, many thanks to Eli Feuerstein, the Logic Quizmaster who provided a very challenging set of quizzes, and our deepest gratitude to our reviewers, especially Livio Curzola, who has once again performed an invaluable service to our community.

Rank | Name | Total Time | % Correct | Total Score
1 | Stelios Vlasopoulos (6) | 15 m | 100% | 4611
2 | Pavel Zeman (5) | 35 m | 100% | 4511
3 | Sartograph (4) | 35 m | 100% | 4511
4 | Sandra99 (6) | 37 m | 100% | 4500
5 | seanm95 (6) | 52 m | 100% | 4425
6 | Vyacheslav Stepanov (6) | 17 m | 96% | 4414
7 | NickL (5) | 56 m | 100% | 4407
8 | umir (6) | 43 m | 92% | 4097
9 | Köteles Zsolt (5) | 44 m | 92% | 4092
10 | richdellheim (6) | 59 m | 92% | 4014
11 | mentzel.iudith (6) | 34 m | 88% | 3953
12 | craig.mcfarlane (5) | 57 m | 88% | 3835
13 | NielsHecker (6) | 59 m | 88% | 3828
14 | Talebian (5) | 53 m | 84% | 3668
15 | gabt (1) | 37 m | 80% | 3564
16 | Tony Winn (4) | 48 m | 80% | 3510
17 | Vijay Mahawar (6) | 59 m | 80% | 3450
18 | whab@tele2.at (4) | 40 m | 76% | 3360
19 | Mike Tessier (4) | 44 m | 72% | 3151
20 | Chad Lee (6) | 50 m | 72% | 3123
21 | msonkoly (4) | 59 m | 72% | 3075
22 | JasonC (6) | 34 m | 68% | 3017
23 | Michal P. (2) | 56 m | 68% | 2906
24 | Stanislovas (3) | 43 m | 60% | 2594
25 | RalfK (5) | 59 m | 60% | 2516
26 | Ludovic Szewczyk (2) | 59 m | 56% | 2325
27 | Cor van Berkel (5) | 44 m | 52% | 2215
28 | Kias (3) | 06 m | 44% | 2028
29 | mcelaya (4) | 25 m | 44% | 1937

Championship Performance History

After each name, the year in which he or she played and the ranking in that championship.

Name | History
Stelios Vlasopoulos | 2013:16th, 2014:29th, 2015:19th, 2016:8th, 2017:1st, 2018:1st
Pavel Zeman | 2014:7th, 2015:1st, 2016:3rd, 2017:2nd, 2018:2nd
Sartograph | 2015:24th, 2016:21st, 2017:5th, 2018:3rd
Sandra99 | 2013:17th, 2014:19th, 2015:7th, 2016:4th, 2017:10th, 2018:4th
seanm95 | 2013:24th, 2014:26th, 2015:33rd, 2016:12th, 2017:22nd, 2018:5th
Vyacheslav Stepanov | 2013:1st, 2014:5th, 2015:2nd, 2016:1st, 2017:11th, 2018:6th
NickL | 2014:14th, 2015:23rd, 2017:20th, 2018:7th
umir | 2016:30th, 2017:12th, 2018:8th
Köteles Zsolt | 2014:25th, 2015:4th, 2016:7th, 2017:15th, 2018:9th
richdellheim | 2013:31st, 2014:6th, 2015:8th, 2016:13th, 2017:31st, 2018:10th
mentzel.iudith | 2013:4th, 2014:18th, 2015:22nd, 2016:6th, 2017:6th, 2018:11th
craig.mcfarlane | 2014:8th, 2015:5th, 2017:16th, 2018:12th
NielsHecker | 2013:3rd, 2014:21st, 2015:11th, 2016:27th, 2017:19th, 2018:13th
Talebian | 2014:10th, 2015:9th, 2018:14th
gabt | 2018:15th
Tony Winn | 2013:25th, 2016:25th, 2017:7th, 2018:16th
Vijay Mahawar | 2015:27th, 2016:24th, 2017:17th, 2018:17th
whab@tele2.at | 2015:30th, 2017:33rd, 2018:18th
Mike Tessier | 2015:40th, 2016:20th, 2017:4th, 2018:19th
Chad Lee | 2013:34th, 2014:31st, 2015:38th, 2016:5th, 2017:8th, 2018:20th
msonkoly | 2015:21st, 2017:23rd, 2018:21st
JasonC | 2013:35th, 2014:12th, 2015:26th, 2016:2nd, 2017:9th, 2018:22nd
Michal P. | 2017:24th, 2018:23rd
Stanislovas | 2016:31st, 2017:26th, 2018:24th
RalfK | 2015:20th, 2016:15th, 2017:13th, 2018:25th
Ludovic Szewczyk | 2018:26th
Cor van Berkel | 2014:36th, 2015:17th, 2018:27th
Kias | 2017:30th, 2018:28th
mcelaya | 2015:25th, 2017:32nd, 2018:29th

Using sparse collections with FORALL

FORALL is a key performance feature of PL/SQL. It helps you avoid row-by-row processing of non-query DML (insert, update, delete, merge) from within a PL/SQL block. Best of all, almost always, is to do all your processing entirely within a single SQL statement. Sometimes, however, that isn't possible (for example, you need to sidestep SQL's "all or nothing" approach) or simply too difficult (not all of us have the insane SQL writing skills of a Tom Kyte or a Chris Saxon or a Connor McDonald).

To dive in deep on FORALL, check out the LiveSQL scripts linked at the end of this post.

In this post, I am going to focus on special features of FORALL that make it easy to work with sparse collections: the INDICES OF and VALUES OF clauses.

Typical FORALL Usage with Dense Bind Array

Here's the format you will most commonly see with FORALL: the header looks just like a numeric FOR loop, but notice: no loop keywords. Two rows will be updated, because the collection is filled sequentially, or densely: every index value between the lowest and the highest is defined.

DECLARE
   TYPE employee_aat IS TABLE OF employees.employee_id%TYPE
      INDEX BY PLS_INTEGER;

   l_employees   employee_aat;
BEGIN
   l_employees (1) := 7839;
   l_employees (2) := 7654;

   FORALL l_index IN 1 .. l_employees.COUNT
      UPDATE employees
         SET salary = 10000
       WHERE employee_id = l_employees (l_index);
END;
/

When We Go Sparse...

But take a close look at the way I assign values in the next block. Now my lowest index value is 1 and my highest is 100, with nothing in between. This is known as a sparse collection.

Now when I run the same code, I get an error: ORA-22160: element at index [2] does not exist.

DECLARE
   TYPE employee_aat IS TABLE OF employees.employee_id%TYPE
      INDEX BY PLS_INTEGER;

   l_employees   employee_aat;
BEGIN
   l_employees (1) := 7839;
   l_employees (100) := 7654;

   FORALL l_index IN 1 .. l_employees.COUNT
      UPDATE employees
         SET salary = 10000
       WHERE employee_id = l_employees (l_index);
END;
/

ORA-22160: element at index [2] does not exist

Notice that this is a SQL error, not a PL/SQL exception (if it were the latter, we might have expected ORA-01403: no data found to be raised): the collection was passed to the SQL engine, which tried to go from first to last, incrementing the counter each time - and then it blew up.

When you are trying to use FORALL with a sparse collection, you must do one of the following:
  1. "Densify" the collection - get rid of the gaps. This was necessary prior to Oracle Database 10g. Hopefully that means you can ignore this option.
  2. Use INDICES OF 
  3. Use VALUES OF
Simplest INDICES OF Use Case

INDICES OF is the solution you will most likely use. Use this approach when you have a collection (the indexing array) whose defined index values can be used to specify the index values in the bind array (referenced within the FORALL's DML statement) that are to be used by FORALL.

In other words, if the element at index value N is not defined in the indexing array, you want the FORALL statement to ignore the element at position N in the bind array.

And in the simplest use case of INDICES OF, the indexing and bind arrays are the same, as you see in the example below.

DECLARE
   TYPE employee_aat IS TABLE OF employees.employee_id%TYPE
      INDEX BY PLS_INTEGER;

   l_employees   employee_aat;
BEGIN
   l_employees (1) := 7839;
   l_employees (2) := 7654;

   FORALL l_index IN INDICES OF l_employees
      UPDATE employees
         SET salary = 10000
       WHERE employee_id = l_employees (l_index);
END;
/

This simply says: only use the defined index values of l_employees, and skip over any gaps. Nice!

More Interesting INDICES OF Usage

But you can do more with INDICES OF than that.

Suppose your bind array has 10,000 elements defined in it. You need to perform three different FORALL operations against different subsets of those elements.

You could copy the selected contents required for each FORALL "run" into its own collection. But that could use more PGA memory than necessary. You could instead construct three different indexing arrays, each of which simply points back to elements in the bind array that are relevant for that run.

In the example below, l_employee_indices is my indexing array. Notice that the actual contents of the elements in this array are of no importance. The PL/SQL engine will look only at the index values.

Notice that I can also use a BETWEEN clause to restrict which index values I want to use. So in this block, I update the rows for employee IDs 7839 and 7950 only.

DECLARE
   TYPE employee_aat IS TABLE OF employees.employee_id%TYPE
      INDEX BY PLS_INTEGER;

   l_employees   employee_aat;

   TYPE boolean_aat IS TABLE OF BOOLEAN
      INDEX BY PLS_INTEGER;

   l_employee_indices   boolean_aat;
BEGIN
   l_employees (1) := 7839;
   l_employees (100) := 7654;
   l_employees (500) := 7950;

   l_employee_indices (1) := TRUE;
   l_employee_indices (500) := TRUE;
   l_employee_indices (799) := TRUE;

   FORALL l_index IN INDICES OF l_employee_indices
                  BETWEEN 1 AND 500
      UPDATE employees
         SET salary = 10000
       WHERE employee_id = l_employees (l_index);
END;
/

And Then There is VALUES OF

I've met lots of developers over the years who have used INDICES OF. I've not yet encountered anyone who took advantage of VALUES OF. So if you ever do find a use for it in your code, please let me know! :-)

Use this clause when you have a collection of integers (again, the indexing array) whose content (the value of the element at a specified position) identifies the position in the binding array that you want to be processed by the FORALL statement.

So while with INDICES OF, the PL/SQL engine uses the index values of the indexing array, with VALUES OF, it uses the values of the elements in the collection.

Here's an example:

DECLARE
   TYPE employee_aat IS TABLE OF employees.employee_id%TYPE
      INDEX BY PLS_INTEGER;

   l_employees   employee_aat;

   TYPE indices_aat IS TABLE OF PLS_INTEGER
      INDEX BY PLS_INTEGER;

   l_employee_indices   indices_aat;
BEGIN
   l_employees (-77) := 7820;
   l_employees (13067) := 7799;
   l_employees (99999999) := 7369;

   l_employee_indices (100) := -77;
   l_employee_indices (200) := 99999999;

   FORALL l_index IN VALUES OF l_employee_indices
      UPDATE employees
         SET salary = 10000
       WHERE employee_id = l_employees (l_index);
END;
/

I populate (sparsely) three rows (-77, 13067, and 99999999) in the collection of employee IDs.

I want to set up the indexing array to identify which of those rows to use in my update. Because I am using VALUES OF, the row numbers that I use are unimportant. Instead, what matters is the value found in each of the rows in the indexing array. Again, I want to skip over that "middle" row of 13067, so here I define just two rows in the l_employee_indices array and assign them the values -77 and 99999999, respectively.

Rather than specify a range of values from FIRST to LAST, I simply specify VALUES OF l_employee_indices. Notice that I populate rows 100 and 200 in the indices collection. VALUES OF does not require a densely filled indexing collection.

VALUES OF also does not support a BETWEEN clause like INDICES OF.

So VALUES OF gives you lots of flexibility - perhaps more than you will ever need!

Sparse is Fine with FORALL

So remember: it's no problem using the powerful FORALL feature with sparse collections. All you have to do is pick between INDICES OF and VALUES OF, and let PL/SQL do all (or most) of the work for you.

Here are LiveSQL scripts covering much the same material as shown above:

INDICES OF
VALUES OF

Results of the Oracle Dev Gym PL/SQL Challenge Championship for 2018

You will find below the rankings for the PL/SQL Challenge Championship for quizzes taken in 2018. The number next to the player's name is the number of times that player has participated in a championship. Below the table of results for this championship, you will find another list showing the championship history of each of these players.

Congratulations first and foremost to our top-ranked players:

1st Place: mentzel.iudith
2nd Place: Andrey Zaytsev
3rd Place: Tony Winn


Next, congratulations to everyone who played in the championship. We hope you found it entertaining, challenging and educational. And for those who were not able to participate in the championship, you can take the quizzes through the Practice feature. We will also make the championship as a whole available as a Test, so you can take it just like these players did.

Finally, many thanks and our deepest gratitude to our reviewers, especially Elic, who has once again performed an invaluable service to our community.

Rank | Name | Total Time | % Correct | Total Score
1 | mentzel.iudith (5) | 33 m | 96% | 3467
2 | Andrey Zaytsev (5) | 32 m | 93% | 3370
3 | Tony Winn (3) | 26 m | 89% | 3193
4 | Ivan Blanarik (5) | 33 m | 86% | 3116
5 | Karel_Prech (5) | 34 m | 86% | 3062
6 | NielsHecker (5) | 40 m | 86% | 3038
7 | JeroenR (4) | 35 m | 86% | 3010
8 | Chase Mei (5) | 24 m | 82% | 2951
9 | MarcusM (3) | 32 m | 82% | 2920
10 | Oleksiy Ponomarenko (3) | 25 m | 82% | 2897
11 | mcelaya (4) | 38 m | 82% | 2845
12 | Maxim Borunov (5) | 36 m | 75% | 2754
13 | Jan Šerák (5) | 40 m | 75% | 2689
14 | Stelios Vlasopoulos (5) | 44 m | 79% | 2674
15 | Aleksei Davletiarov (1) | 44 m | 75% | 2673
16 | Rimantas Adomauskas (3) | 33 m | 75% | 2666
17 | seanm95 (5) | 29 m | 71% | 2582
18 | Mike Tessier (3) | 33 m | 71% | 2567
19 | siimkask (5) | 18 m | 71% | 2525
20 | Henry_A (5) | 18 m | 64% | 2475
21 | Rakesh Dadhich (5) | 21 m | 64% | 2364
22 | NickL (3) | 31 m | 64% | 2324
23 | Talebian (4) | 34 m | 64% | 2314
24 | Köteles Zsolt (1) | 39 m | 64% | 2294
25 | Otto Palenicek (3) | 44 m | 64% | 2270
26 | msonkoly (4) | 44 m | 61% | 2171
27 | Sachi (3) | 10 m | 61% | 2108
28 | RalfK (1) | 13 m | 54% | 2045
29 | PZOL (4) | 30 m | 57% | 2027
30 | richdellheim (1) | 42 m | 61% | 1979
31 | Sartograph (2) | 30 m | 61% | 1976

Championship Performance History

After each name, the year in which he or she played and the ranking in that championship.

Name | History
mentzel.iudith | 2014:1st, 2015:2nd, 2016:18th, 2017:2nd, 2018:1st
Andrey Zaytsev | 2014:2nd, 2015:5th, 2016:1st, 2017:21st, 2018:2nd
Tony Winn | 2016:2nd, 2018:3rd
Ivan Blanarik | 2014:16th, 2015:16th, 2017:17th, 2018:4th
Karel_Prech | 2014:4th, 2015:6th, 2016:11th, 2017:9th, 2018:5th
NielsHecker | 2014:21st, 2015:1st, 2016:15th, 2017:3rd, 2018:6th
JeroenR | 2014:7th, 2015:20th, 2016:6th, 2018:7th
Chase Mei | 2014:25th, 2015:26th, 2016:3rd, 2017:20th, 2018:8th
MarcusM | 2014:17th, 2015:37th, 2018:9th
Oleksiy Ponomarenko | 2016:10th, 2017:4th, 2018:10th
mcelaya | 2015:38th, 2016:34th, 2017:29th, 2018:11th
Maxim Borunov | 2015:9th, 2016:17th, 2017:8th, 2018:12th
Jan Šerák | 2014:24th, 2015:8th, 2016:7th, 2017:22nd, 2018:13th
Stelios Vlasopoulos | 2014:37th, 2015:19th, 2016:24th, 2017:5th, 2018:14th
Aleksei Davletiarov | 2018:15th
Rimantas Adomauskas | 2017:6th, 2018:16th
seanm95 | 2014:34th, 2015:4th, 2016:9th, 2017:18th, 2018:17th
Mike Tessier | 2017:33rd, 2018:18th
siimkask | 2014:15th, 2015:14th, 2016:13th, 2017:12th, 2018:19th
Henry_A | 2014:32nd, 2016:33rd, 2017:11th, 2018:20th
Rakesh Dadhich | 2014:29th, 2015:31st, 2016:35th, 2017:31st, 2018:21st
NickL | 2015:21st, 2018:22nd
Talebian | 2015:23rd, 2018:23rd
Köteles Zsolt | 2018:24th
Otto Palenicek | 2016:29th, 2017:28th, 2018:25th
msonkoly | 2015:15th, 2017:13th, 2018:26th
Sachi | 2015:30th, 2016:28th, 2018:27th
RalfK | 2018:28th
PZOL | 2015:35th, 2017:23rd, 2018:29th
richdellheim | 2018:30th
Sartograph | 2017:10th, 2018:31st

An introduction to conditional compilation

Conditional compilation allows the compiler to compile selected parts of a program based on conditions you specify using $ syntax in PL/SQL. When you see statements like $IF, $ELSE, $END and $ERROR in your PL/SQL code, you are looking at conditional compilation, sometimes also referred to as "ifdef" processing.

There's a really good chance you've never taken advantage of conditional compilation in PL/SQL, so I thought I'd write up a few blog posts about why you might want to use it - and then how to put it to use.

Conditional compilation comes in very handy when you need to do any of the following:
  • Compile and run your PL/SQL code base on different versions of Oracle, taking advantage of features specific to those versions. 
  • Run certain code during testing and debugging, but then omit that code from the production code. Or vice versa. 
  • Install/compile different elements of your application based on user requirements, such as the components for which a user is licensed. 
  • Expose usually private subprograms in the package specification to allow for direct testing on those subprograms.
You implement conditional compilation by placing compiler directives (commands) in your source code.

When your program is compiled, the PL/SQL preprocessor evaluates the directives and selects those portions of your code that should be compiled. This pared-down source code is then passed to the compiler for compilation.

The preprocessor checks the value of the database parameter, PLSQL_CCFLAGS, to see if any application-specific conditional compilation flags have been set.

There are three types of directives:

Selection directives

Use the $IF directive to evaluate expressions and determine which code should be included or avoided.

Inquiry directives

Use the $$identifier syntax to refer to conditional compilation flags. These inquiry directives can be referenced within an $IF directive or used independently in your code.

Error directives

Use the $ERROR directive to report compilation errors based on conditions evaluated when the preprocessor prepares your code for compilation.

I'll show you a simple example of each of these directives, then point you to additional resources. Future blog posts will go into detail on specific use cases, as well as two packages related to conditional compilation, DBMS_DB_VERSION and DBMS_PREPROCESSOR.

In the following block, I use $IF, $ELSE and DBMS_DB_VERSION to determine if I should include the UDF pragma (new to Oracle Database 12c), which improves the performance of functions called from within SQL statements:

CREATE OR REPLACE FUNCTION my_function (n IN NUMBER)
   RETURN VARCHAR2
IS
   $IF DBMS_DB_VERSION.VER_LE_11_2
   $THEN
      /* UDF pragma not available till 12.1 */
   $ELSE
      PRAGMA UDF;
   $END
BEGIN
   RETURN TO_CHAR (n);
END;
/

Next up: use my own application-specific inquiry directive, along with one provided by Oracle:

ALTER SESSION SET PLSQL_CCFLAGS = 'commit_off:true'
/

CREATE OR REPLACE PROCEDURE flexible_commits
IS
BEGIN
   $IF $$commit_off
   $THEN
      -- concatenate $$PLSQL_UNIT; inside a string literal it would print as-is
      DBMS_OUTPUT.put_line ('Commit disabled in ' || $$PLSQL_UNIT);
   $ELSE
      COMMIT;
   $END
END;
/
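
By the way, if you would like to see exactly what source code the preprocessor handed to the compiler, the DBMS_PREPROCESSOR package can show you. A minimal sketch, run after compiling the procedure above:

BEGIN
   DBMS_PREPROCESSOR.print_post_processed_source (
      object_type   => 'PROCEDURE',
      schema_name   => USER,
      object_name   => 'FLEXIBLE_COMMITS');
END;
/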

Finally, I use $ERROR to force a compilation error if anyone tries to compile this code on a version earlier than 12.1.

CREATE OR REPLACE PROCEDURE uses_the_latest_and_greatest
   AUTHID DEFINER
IS
BEGIN
   $IF DBMS_DB_VERSION.VER_LE_11_2
   $THEN
      $ERROR 'This program requires Oracle Database 12.1 or higher.' $END
   $END
   NULL;
END;
/

Conditional Compilation Resources

Comprehensive white paper: a great starting place - and required reading - for anyone planning on using conditional compilation in production code

Conditional compilation scripts on LiveSQL

Tim Hall (Oracle-BASE) coverage of conditional compilation

Conditional compilation documentation

My Oracle Magazine article on this topic




European Union Mandates All Business Logic in Database by 2020

$
0
0
DatelineDB: April 1st 2019

The European Union turned heads today with a surprise announcement:
Starting 1 January 2020, all business logic in applications must be made available via code stored inside the database. While we recommend that you use Oracle Database and PL/SQL, that will not be required.
This position was apparently taken after close review of the groundbreaking research conducted by Toon Koppelaars of Oracle Corporation, in which he showed that by putting business logic in the database, the overall work - and therefore energy consumption - of the application is reduced, sometimes by as much as 235%, while the overall performance of the application improves by 500%.

A close confidant of the President of the European Union told DatelineDB that the EU would soon adopt a resolution stating that we are now in a climate emergency and every effort must be made in every aspect of human activity to slow down the warming of our planet.

"So the decision to require business logic in the database was basically a no-brainer. A win-win for the customer and the planet."

There are rumors that Java developers all over the world are seeking therapy to deal with their years of falsely implanted memories that made them think the database should be used as nothing but a bit bucket.

And in an unprecedented show of unity, all the JavaScript developers in the world announced that they would henceforth only write code in the dark web, because they really don't like databases. And they are building a new framework: darkDB.js

"Don't worry about that," Brendan Each told DatelineDB. "For a whole boatload of JavaScript programmers that just means they are going to run their editors in dark mode. Shhhhhhh. Don't tell them that's not the dark web."

Larry Ellison was not available for comment.

But Prime Minister May of the United Kingdom did further shock all concerned by issuing her own statement:
Now that the EU has shown such great wisdom and concern for life on this planet, I have instructed my ministers to halt all work on Brexit and instead participate fully in this critical EU initiative, titled For All a Beautiful Database.
If anyone has any questions about putting their business logic in the database, Toon Koppelaars will be available live to answer your questions on May 21.









