PL/SQL Performance Best Practices
In this section, we'll cover fundamental best practices that directly impact the performance of your PL/SQL code. These are tried-and-true methods for writing optimized and efficient database programs.
Using NOCOPY
The NOCOPY hint is a powerful optimization technique used with OUT and IN OUT parameters in PL/SQL subprograms (procedures and functions). By default, when a large data structure (like a collection or a REF CURSOR) is passed as an IN OUT or OUT parameter, PL/SQL creates a copy of that data. This copying can incur significant overhead in terms of memory and CPU, especially for large datasets. The NOCOPY hint instructs the PL/SQL engine to pass the parameter by reference instead of by value, avoiding the costly copy operation. This can lead to substantial performance gains and reduced memory usage.
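A minimal syntax sketch (enrich_rows is a hypothetical name; SYS.ODCIVARCHAR2LIST is used only as a convenient built-in collection type): NOCOPY is written after the OUT or IN OUT mode in the parameter declaration. Keep in mind that NOCOPY is a hint, not a directive; the compiler is free to ignore it and fall back to copy semantics.
-- Hypothetical sketch: modify a large collection in place instead of copying it in and out
CREATE OR REPLACE PROCEDURE enrich_rows (
p_rows IN OUT NOCOPY SYS.ODCIVARCHAR2LIST -- NOCOPY follows the parameter mode
)
AS
BEGIN
FOR i IN 1 .. p_rows.COUNT LOOP
p_rows(i) := UPPER(p_rows(i)); -- changes are applied directly to the caller's collection
END LOOP;
END;
/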
Example 1: Using NOCOPY with a Large Collection
-- Enable server output to display the timing results
SET SERVEROUTPUT ON;
-- Define a collection type
CREATE OR REPLACE TYPE vc_array IS TABLE OF VARCHAR2(100);
/
-- Procedure demonstrating NOCOPY for an IN OUT parameter
CREATE OR REPLACE PROCEDURE process_data_nocopy (
p_data IN OUT NOCOPY vc_array -- Using NOCOPY hint
)
AS
l_start_time NUMBER;
l_end_time NUMBER;
BEGIN
l_start_time := DBMS_UTILITY.GET_TIME; -- Get current time in hundredths of seconds
-- Simulate some processing on the collection
FOR i IN 1 .. p_data.COUNT LOOP
p_data(i) := p_data(i) || '_PROCESSED';
END LOOP;
l_end_time := DBMS_UTILITY.GET_TIME;
DBMS_OUTPUT.PUT_LINE('Processing time (NOCOPY): ' || (l_end_time - l_start_time) || ' hundredths of seconds');
END;
/
-- Procedure demonstrating default behavior (without NOCOPY) for comparison
CREATE OR REPLACE PROCEDURE process_data_default (
p_data IN OUT vc_array -- Default behavior (copying occurs)
)
AS
l_start_time NUMBER;
l_end_time NUMBER;
BEGIN
l_start_time := DBMS_UTILITY.GET_TIME;
-- Simulate some processing on the collection
FOR i IN 1 .. p_data.COUNT LOOP
p_data(i) := p_data(i) || '_PROCESSED';
END LOOP;
l_end_time := DBMS_UTILITY.GET_TIME;
DBMS_OUTPUT.PUT_LINE('Processing time (DEFAULT): ' || (l_end_time - l_start_time) || ' hundredths of seconds');
END;
/
-- Anonymous block to test and compare
DECLARE
l_data_nocopy vc_array := vc_array();
l_data_default vc_array := vc_array();
BEGIN
-- Populate collections with a large number of elements
FOR i IN 1 .. 100000 LOOP
l_data_nocopy.EXTEND;
l_data_nocopy(i) := 'Item_' || i;
l_data_default.EXTEND;
l_data_default(i) := 'Item_' || i;
END LOOP;
DBMS_OUTPUT.PUT_LINE('--- Comparing NOCOPY vs. Default Parameter Passing ---');
-- Call procedure with NOCOPY
process_data_nocopy(l_data_nocopy);
-- Call procedure without NOCOPY
process_data_default(l_data_default);
-- Clean up
EXECUTE IMMEDIATE 'DROP TYPE vc_array';
END;
/
Explanation
This example demonstrates the performance benefits of NOCOPY when passing a large collection as an IN OUT parameter. We define two procedures: process_data_nocopy which uses the NOCOPY hint, and process_data_default which uses the default pass-by-value mechanism. An anonymous block populates two large collections and then calls both procedures, measuring the execution time using DBMS_UTILITY.GET_TIME. You will typically observe that the procedure using NOCOPY executes faster because it avoids the overhead of copying the entire collection when entering and exiting the subprogram. This is a crucial optimization for PL/SQL procedures and functions handling substantial data volumes, leading to improved Oracle PL/SQL performance and reduced memory footprint.
Example 2: NOCOPY with a REF CURSOR
SET SERVEROUTPUT ON;
-- Create a sample table for demonstration
CREATE TABLE employees_test (
id NUMBER PRIMARY KEY,
name VARCHAR2(100),
salary NUMBER
);
-- Insert some sample data
BEGIN
FOR i IN 1 .. 10000 LOOP
INSERT INTO employees_test (id, name, salary)
VALUES (i, 'Employee ' || i, ROUND(DBMS_RANDOM.VALUE(30000, 100000), 2));
END LOOP;
COMMIT;
END;
/
-- Procedure to open a REF CURSOR using NOCOPY
CREATE OR REPLACE PROCEDURE get_employees_nocopy (
p_ref_cursor IN OUT NOCOPY SYS_REFCURSOR -- NOCOPY hint for REF CURSOR
)
AS
BEGIN
OPEN p_ref_cursor FOR
SELECT id, name, salary
FROM employees_test
WHERE salary > 50000;
END;
/
-- Procedure to open a REF CURSOR without NOCOPY (default)
CREATE OR REPLACE PROCEDURE get_employees_default (
p_ref_cursor IN OUT SYS_REFCURSOR -- Default behavior
)
AS
BEGIN
OPEN p_ref_cursor FOR
SELECT id, name, salary
FROM employees_test
WHERE salary > 50000;
END;
/
-- Anonymous block to test and compare
DECLARE
l_ref_cursor_nocopy SYS_REFCURSOR;
l_ref_cursor_default SYS_REFCURSOR;
l_id NUMBER;
l_name VARCHAR2(100);
l_salary NUMBER;
l_start_time NUMBER;
l_end_time NUMBER;
BEGIN
DBMS_OUTPUT.PUT_LINE('--- Comparing NOCOPY vs. Default with REF CURSOR ---');
-- Test NOCOPY
l_start_time := DBMS_UTILITY.GET_TIME;
get_employees_nocopy(l_ref_cursor_nocopy);
-- Fetching from cursor to simulate usage, though NOCOPY benefit is in passing
LOOP
FETCH l_ref_cursor_nocopy INTO l_id, l_name, l_salary;
EXIT WHEN l_ref_cursor_nocopy%NOTFOUND;
END LOOP;
CLOSE l_ref_cursor_nocopy;
l_end_time := DBMS_UTILITY.GET_TIME;
DBMS_OUTPUT.PUT_LINE('REF CURSOR time (NOCOPY): ' || (l_end_time - l_start_time) || ' hundredths of seconds');
-- Test Default
l_start_time := DBMS_UTILITY.GET_TIME;
get_employees_default(l_ref_cursor_default);
LOOP
FETCH l_ref_cursor_default INTO l_id, l_name, l_salary;
EXIT WHEN l_ref_cursor_default%NOTFOUND;
END LOOP;
CLOSE l_ref_cursor_default;
l_end_time := DBMS_UTILITY.GET_TIME;
DBMS_OUTPUT.PUT_LINE('REF CURSOR time (DEFAULT): ' || (l_end_time - l_start_time) || ' hundredths of seconds');
-- Clean up
EXECUTE IMMEDIATE 'DROP TABLE employees_test';
END;
/
Explanation
This example showcases NOCOPY's utility with REF CURSOR parameters. While the primary performance gain from NOCOPY for REF CURSORs isn't directly in the data transfer (as a REF CURSOR is already a pointer), it can still optimize internal PL/SQL operations by avoiding unnecessary copying of the cursor handle itself, particularly in complex scenarios involving multiple subprogram calls or when the REF CURSOR itself holds a significant amount of metadata or state. This demonstrates a fine-tuning technique for advanced Oracle PL/SQL optimization.
Example 3: When NOT to use NOCOPY
SET SERVEROUTPUT ON;
-- Procedure where NOCOPY would be problematic due to exception handling
CREATE OR REPLACE PROCEDURE update_inventory_nocopy (
p_item_id IN NUMBER,
p_quantity IN OUT NOCOPY NUMBER -- NOCOPY used here
)
AS
BEGIN
-- Simulate an update that might fail
IF p_quantity < 0 THEN
RAISE_APPLICATION_ERROR(-20001, 'Quantity cannot be negative.');
END IF;
-- Simulate successful update
p_quantity := p_quantity + 10;
DBMS_OUTPUT.PUT_LINE('Inventory updated for item ' || p_item_id || '. New quantity: ' || p_quantity);
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('Error updating inventory: ' || SQLERRM);
-- If an error occurs, p_quantity may be left in an inconsistent state when NOCOPY
-- is used, because any changes already applied are visible to the caller.
-- Re-raise so the caller actually sees the failure.
RAISE;
END;
/
-- Anonymous block to demonstrate the issue
DECLARE
l_item_qty NUMBER := 100;
BEGIN
DBMS_OUTPUT.PUT_LINE('Initial quantity: ' || l_item_qty);
-- Call with a valid quantity
update_inventory_nocopy(1, l_item_qty);
DBMS_OUTPUT.PUT_LINE('Quantity after valid update: ' || l_item_qty);
-- Call with a negative quantity to trigger an error
-- (an IN OUT actual parameter must be a variable, so a literal cannot be passed here)
DECLARE
l_bad_qty NUMBER := -5;
BEGIN
update_inventory_nocopy(2, l_bad_qty); -- This will raise an exception
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('Caught exception for negative quantity.');
-- Because the validation fails before p_quantity is modified, l_bad_qty is unchanged here.
-- The real danger of NOCOPY appears when the subprogram has already partially modified
-- the parameter before failing: those partial changes remain visible to the caller,
-- whereas the default copy-out semantics would have left the caller's variable untouched.
END;
DBMS_OUTPUT.PUT_LINE('Quantity after attempted invalid update (unchanged): ' || l_item_qty);
END;
/
Explanation
This example illustrates a critical scenario where NOCOPY should be avoided: when exception handling is involved and you need to guarantee the "all or nothing" principle for OUT or IN OUT parameters. When NOCOPY is used, if an unhandled exception occurs within the subprogram after the parameter has been partially modified, those partial modifications might still be visible to the calling environment, even if the transaction is rolled back. This breaks the atomicity expected from parameter passing. Without NOCOPY (default behavior), if an exception occurs, the original values of OUT or IN OUT parameters are preserved, as the copy is only written back upon successful completion of the subprogram. Therefore, use NOCOPY judiciously, considering the implications on error handling and data consistency in your Oracle PL/SQL development.
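The difference is easy to see in a minimal, hedged sketch (hypothetical names; because NOCOPY is only a hint, the by-reference behavior it enables is typical rather than guaranteed): the subprogram partially modifies its IN OUT collection and then fails, and the caller inspects the value afterwards.
DECLARE
TYPE num_tab IS TABLE OF NUMBER;
l_vals num_tab := num_tab(1, 2, 3);
PROCEDURE bump_then_fail (p IN OUT NOCOPY num_tab) IS
BEGIN
p(1) := p(1) + 100; -- partial modification before the failure
RAISE_APPLICATION_ERROR(-20001, 'Forced failure');
END;
BEGIN
bump_then_fail(l_vals);
EXCEPTION
WHEN OTHERS THEN
-- With NOCOPY the caller typically sees 101 here (the partial change survived the exception);
-- with the default copy-out semantics it would still see 1.
DBMS_OUTPUT.PUT_LINE('l_vals(1) after the failed call: ' || l_vals(1));
END;
/
Removing the NOCOPY hint restores the all-or-nothing copy-out behavior described above.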
The benefits of FORALL and BULK COLLECT
FORALL and BULK COLLECT are two indispensable PL/SQL features for optimizing DML (Data Manipulation Language) operations and data retrieval. They are designed to minimize context switching between the PL/SQL engine and the SQL engine, which is a major performance bottleneck. Instead of processing rows one by one (row-by-row processing), these constructs allow you to process a collection of rows in a single trip to the SQL engine, significantly improving performance for bulk operations. This leads to faster data loading, updates, and retrievals, making your Oracle PL/SQL applications more efficient and scalable.
FORALL: This statement is used for bulk DML operations (INSERT, UPDATE, DELETE). It allows you to bind entire collections to SQL statements, executing the DML operation for all elements in the collection in a single call to the SQL engine.
BULK COLLECT: This clause is used with SELECT INTO, FETCH, and RETURNING INTO statements to retrieve multiple rows into a collection with a single fetch operation; for very large result sets, pair it with the LIMIT clause, as sketched after this list.
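A caveat worth keeping in mind: an unbounded BULK COLLECT materializes the entire result set in session (PGA) memory. For very large result sets, the usual pattern is an explicit cursor fetched in batches with the LIMIT clause. A minimal sketch against a hypothetical big_table:
DECLARE
CURSOR c_rows IS SELECT id FROM big_table; -- hypothetical source table
TYPE id_tab IS TABLE OF NUMBER;
l_ids id_tab;
BEGIN
OPEN c_rows;
LOOP
FETCH c_rows BULK COLLECT INTO l_ids LIMIT 1000; -- at most 1000 rows per round trip
EXIT WHEN l_ids.COUNT = 0; -- nothing left to fetch
-- process l_ids here, typically feeding a FORALL statement
END LOOP;
CLOSE c_rows;
END;
/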
Example 1: Using FORALL for Bulk INSERT
SET SERVEROUTPUT ON;
-- Create a temporary table for bulk insert
CREATE TABLE sales_data_forall (
sale_id NUMBER PRIMARY KEY,
product_id NUMBER,
sale_date DATE,
quantity NUMBER
);
-- Define a record type and a table type for the collection
CREATE TYPE sales_rec_type IS OBJECT (
sale_id NUMBER,
product_id NUMBER,
sale_date DATE,
quantity NUMBER
);
/
CREATE TYPE sales_table_type IS TABLE OF sales_rec_type;
/
-- Procedure demonstrating FORALL for bulk insert
CREATE OR REPLACE PROCEDURE bulk_insert_forall (
p_num_records IN NUMBER
)
AS
l_sales_data sales_table_type := sales_table_type();
l_start_time NUMBER;
l_end_time NUMBER;
BEGIN
l_start_time := DBMS_UTILITY.GET_TIME;
-- Populate the collection with data
FOR i IN 1 .. p_num_records LOOP
l_sales_data.EXTEND;
l_sales_data(i) := sales_rec_type(i, DBMS_RANDOM.VALUE(100, 999), SYSDATE - DBMS_RANDOM.VALUE(0, 365), DBMS_RANDOM.VALUE(1, 100));
END LOOP;
-- Use FORALL for bulk insert
FORALL i IN 1 .. l_sales_data.COUNT
INSERT INTO sales_data_forall (sale_id, product_id, sale_date, quantity)
VALUES (l_sales_data(i).sale_id, l_sales_data(i).product_id, l_sales_data(i).sale_date, l_sales_data(i).quantity);
l_end_time := DBMS_UTILITY.GET_TIME;
DBMS_OUTPUT.PUT_LINE('FORALL Bulk Inserted ' || SQL%ROWCOUNT || ' records in ' || (l_end_time - l_start_time) || ' hundredths of seconds.');
COMMIT;
END;
/
-- Procedure demonstrating row-by-row insert for comparison
CREATE OR REPLACE PROCEDURE row_by_row_insert (
p_num_records IN NUMBER
)
AS
l_start_time NUMBER;
l_end_time NUMBER;
BEGIN
l_start_time := DBMS_UTILITY.GET_TIME;
FOR i IN 1 .. p_num_records LOOP
INSERT INTO sales_data_forall (sale_id, product_id, sale_date, quantity)
VALUES (i + 1000000, DBMS_RANDOM.VALUE(100, 999), SYSDATE - DBMS_RANDOM.VALUE(0, 365), DBMS_RANDOM.VALUE(1, 100));
END LOOP;
l_end_time := DBMS_UTILITY.GET_TIME;
DBMS_OUTPUT.PUT_LINE('Row-by-row Inserted ' || p_num_records || ' records in ' || (l_end_time - l_start_time) || ' hundredths of seconds.');
COMMIT;
END;
/
-- Anonymous block to test and compare
BEGIN
DBMS_OUTPUT.PUT_LINE('--- Comparing FORALL vs. Row-by-Row Insert ---');
-- Test FORALL with 50,000 records
EXECUTE IMMEDIATE 'TRUNCATE TABLE sales_data_forall'; -- DDL must be issued dynamically inside PL/SQL
bulk_insert_forall(50000);
-- Test row-by-row with 50,000 records (will be much slower)
EXECUTE IMMEDIATE 'TRUNCATE TABLE sales_data_forall';
row_by_row_insert(50000);
-- Clean up
EXECUTE IMMEDIATE 'DROP TABLE sales_data_forall';
EXECUTE IMMEDIATE 'DROP TYPE sales_table_type';
EXECUTE IMMEDIATE 'DROP TYPE sales_rec_type';
END;
/
Explanation
This example clearly illustrates the performance superiority of FORALL for bulk inserts compared to traditional row-by-row INSERT statements within a loop. We define a table and collection types, then create two procedures: one using FORALL and another using a simple FOR loop for insertion. By inserting a significant number of records (e.g., 50,000), the output will show a dramatic difference in execution time, with FORALL being significantly faster. This is because FORALL reduces context switches between PL/SQL and SQL engines, making it a cornerstone for high-performance data loading in Oracle PL/SQL. It's a critical technique for efficient Oracle database programming.
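One robustness note on FORALL: by default the statement stops at the first row that fails. Adding SAVE EXCEPTIONS lets the bulk operation continue and report every failure afterwards through SQL%BULK_EXCEPTIONS. A hedged sketch that reuses the sales_data_forall table from Example 1 and deliberately inserts a duplicate key:
DECLARE
TYPE num_tab IS TABLE OF NUMBER;
l_ids num_tab := num_tab(1, 1, 2); -- the duplicate 1 violates the primary key
BEGIN
FORALL i IN 1 .. l_ids.COUNT SAVE EXCEPTIONS
INSERT INTO sales_data_forall (sale_id, product_id, sale_date, quantity)
VALUES (l_ids(i), 100, SYSDATE, 1);
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -24381 THEN -- ORA-24381: error(s) in array DML
FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
DBMS_OUTPUT.PUT_LINE('Element ' || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX ||
' failed: ' || SQLERRM(-SQL%BULK_EXCEPTIONS(j).ERROR_CODE));
END LOOP;
ELSE
RAISE;
END IF;
END;
/
The rows that did succeed are not rolled back automatically, so decide explicitly whether to commit them or roll back.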
Example 2: Using BULK COLLECT for Bulk SELECT
SET SERVEROUTPUT ON;
-- Create a sample table for bulk select
CREATE TABLE products_data (
product_id NUMBER PRIMARY KEY,
product_name VARCHAR2(100),
price NUMBER,
category VARCHAR2(50)
);
-- Insert some sample data
BEGIN
FOR i IN 1 .. 100000 LOOP
INSERT INTO products_data (product_id, product_name, price, category)
VALUES (i, 'Product ' || i, ROUND(DBMS_RANDOM.VALUE(10, 1000), 2),
CASE MOD(i, 3) WHEN 0 THEN 'Electronics' WHEN 1 THEN 'Clothing' ELSE 'Books' END);
END LOOP;
COMMIT;
END;
/
-- Define collection types for BULK COLLECT
CREATE TYPE product_id_array IS TABLE OF NUMBER;
/
CREATE TYPE product_name_array IS TABLE OF VARCHAR2(100);
/
CREATE TYPE product_price_array IS TABLE OF NUMBER;
/
-- Procedure demonstrating BULK COLLECT
CREATE OR REPLACE PROCEDURE bulk_select_products (
p_category IN VARCHAR2
)
AS
l_product_ids product_id_array;
l_product_names product_name_array;
l_product_prices product_price_array;
l_start_time NUMBER;
l_end_time NUMBER;
BEGIN
l_start_time := DBMS_UTILITY.GET_TIME;
-- Use BULK COLLECT to retrieve multiple rows into collections
SELECT product_id, product_name, price
BULK COLLECT INTO l_product_ids, l_product_names, l_product_prices
FROM products_data
WHERE category = p_category;
l_end_time := DBMS_UTILITY.GET_TIME;
DBMS_OUTPUT.PUT_LINE('BULK COLLECT fetched ' || l_product_ids.COUNT || ' products for category ' || p_category ||
' in ' || (l_end_time - l_start_time) || ' hundredths of seconds.');
-- Optionally, process the fetched data (e.g., display first few)
IF l_product_ids.COUNT > 0 THEN
FOR i IN 1 .. LEAST(l_product_ids.COUNT, 5) LOOP -- Display up to 5 items
DBMS_OUTPUT.PUT_LINE(' ID: ' || l_product_ids(i) || ', Name: ' || l_product_names(i) || ', Price: ' || l_product_prices(i));
END LOOP;
END IF;
END;
/
-- Procedure demonstrating row-by-row SELECT for comparison
CREATE OR REPLACE PROCEDURE row_by_row_select_products (
p_category IN VARCHAR2
)
AS
CURSOR c_products IS
SELECT product_id, product_name, price
FROM products_data
WHERE category = p_category;
l_product_id NUMBER;
l_product_name VARCHAR2(100);
l_price NUMBER;
l_start_time NUMBER;
l_end_time NUMBER;
l_count NUMBER := 0;
BEGIN
l_start_time := DBMS_UTILITY.GET_TIME;
OPEN c_products;
LOOP
FETCH c_products INTO l_product_id, l_product_name, l_price;
EXIT WHEN c_products%NOTFOUND;
l_count := l_count + 1;
END LOOP;
CLOSE c_products;
l_end_time := DBMS_UTILITY.GET_TIME;
DBMS_OUTPUT.PUT_LINE('Row-by-row fetched ' || l_count || ' products for category ' || p_category ||
' in ' || (l_end_time - l_start_time) || ' hundredths of seconds.');
END;
/
-- Anonymous block to test and compare
BEGIN
DBMS_OUTPUT.PUT_LINE('--- Comparing BULK COLLECT vs. Row-by-Row SELECT ---');
-- Test BULK COLLECT
bulk_select_products('Electronics');
-- Test row-by-row select
row_by_row_select_products('Electronics');
-- Clean up
EXECUTE IMMEDIATE 'DROP TABLE products_data';
EXECUTE IMMEDIATE 'DROP TYPE product_id_array';
EXECUTE IMMEDIATE 'DROP TYPE product_name_array';
EXECUTE IMMEDIATE 'DROP TYPE product_price_array';
END;
/
Explanation
This example demonstrates how BULK COLLECT dramatically improves the performance of data retrieval in PL/SQL. We set up a table with a large number of records and then compare two approaches: bulk_select_products uses BULK COLLECT to fetch all matching rows into collections in a single trip, while row_by_row_select_products uses a traditional cursor and fetches one row at a time. The performance difference, especially for large result sets, will be significant, with BULK COLLECT proving to be much faster due to reduced context switching. This is a fundamental technique for optimizing data retrieval and enhancing Oracle PL/SQL performance.
Example 3: Combining FORALL and BULK COLLECT (FORALL with RETURNING INTO)
SET SERVEROUTPUT ON;
-- Create a table for orders
CREATE TABLE orders (
order_id NUMBER PRIMARY KEY,
customer_id NUMBER,
order_date DATE,
order_status VARCHAR2(20)
);
-- Create a table for order history
CREATE TABLE order_history (
history_id NUMBER GENERATED ALWAYS AS IDENTITY,
order_id NUMBER,
status_change_date DATE,
old_status VARCHAR2(20),
new_status VARCHAR2(20)
);
-- Insert some sample orders
BEGIN
FOR i IN 1 .. 10000 LOOP
INSERT INTO orders (order_id, customer_id, order_date, order_status)
VALUES (i, DBMS_RANDOM.VALUE(1, 1000), SYSDATE - DBMS_RANDOM.VALUE(0, 365), 'PENDING');
END LOOP;
COMMIT;
END;
/
-- Define collection types for IDs and statuses
CREATE TYPE number_array IS TABLE OF NUMBER;
/
CREATE TYPE varchar2_array IS TABLE OF VARCHAR2(20);
/
-- Procedure to update orders and log history using FORALL and BULK COLLECT (RETURNING INTO)
CREATE OR REPLACE PROCEDURE update_and_log_orders (
p_old_status IN VARCHAR2,
p_new_status IN VARCHAR2
)
AS
l_order_ids number_array;
l_old_statuses varchar2_array;
l_updated_ids number_array;
l_start_time NUMBER;
l_end_time NUMBER;
BEGIN
l_start_time := DBMS_UTILITY.GET_TIME;
-- Collect the IDs and current (old) statuses of the orders to be changed
SELECT order_id, order_status
BULK COLLECT INTO l_order_ids, l_old_statuses
FROM orders
WHERE order_status = p_old_status;
-- Bulk update the orders; RETURNING ... BULK COLLECT INTO captures the updated order IDs
FORALL i IN 1 .. l_order_ids.COUNT
UPDATE orders
SET order_status = p_new_status
WHERE order_id = l_order_ids(i)
RETURNING order_id BULK COLLECT INTO l_updated_ids;
-- Now, bulk insert into order_history using the collected data
FORALL i IN 1 .. l_updated_ids.COUNT
INSERT INTO order_history (order_id, status_change_date, old_status, new_status)
VALUES (l_updated_ids(i), SYSDATE, l_old_statuses(i), p_new_status);
l_end_time := DBMS_UTILITY.GET_TIME;
DBMS_OUTPUT.PUT_LINE('Updated and logged ' || SQL%ROWCOUNT || ' orders from ' || p_old_status || ' to ' || p_new_status ||
' in ' || (l_end_time - l_start_time) || ' hundredths of seconds.');
COMMIT;
END;
/
-- Anonymous block to test
BEGIN
DBMS_OUTPUT.PUT_LINE('--- Demonstrating FORALL with RETURNING INTO and BULK COLLECT ---');
-- Initial update
update_and_log_orders('PENDING', 'PROCESSING');
-- Further update
update_and_log_orders('PROCESSING', 'SHIPPED');
-- Verify some data
DBMS_OUTPUT.PUT_LINE('--- Sample Order History (first 5) ---');
FOR r IN (SELECT * FROM order_history ORDER BY history_id FETCH FIRST 5 ROWS ONLY) LOOP
DBMS_OUTPUT.PUT_LINE('History ID: ' || r.history_id || ', Order ID: ' || r.order_id ||
', Old Status: ' || r.old_status || ', New Status: ' || r.new_status);
END LOOP;
-- Clean up
EXECUTE IMMEDIATE 'DROP TABLE orders';
EXECUTE IMMEDIATE 'DROP TABLE order_history';
EXECUTE IMMEDIATE 'DROP TYPE number_array';
EXECUTE IMMEDIATE 'DROP TYPE varchar2_array';
END;
/
Explanation
This advanced example demonstrates the powerful combination of FORALL and BULK COLLECT using the RETURNING INTO clause. We first bulk-collect the IDs and current (old) statuses of the orders to be changed, then a FORALL ... UPDATE with a RETURNING ... BULK COLLECT INTO clause applies the status change and captures the order_id of every updated row, and a second FORALL bulk-inserts the corresponding records into the order_history table. This approach is highly efficient because it minimizes context switching: the UPDATE and the subsequent INSERT operations are performed in bulk, drastically reducing the number of round trips between the PL/SQL and SQL engines. This is a common and highly effective pattern for maintaining audit trails or performing complex multi-step DML operations in high-performance Oracle PL/SQL applications.
Using PLSQL_WARNINGS and PLSQL_OPTIMIZE_LEVEL
PLSQL_WARNINGS and PLSQL_OPTIMIZE_LEVEL are two crucial compiler parameters that help you write better and faster PL/SQL code. They are part of Oracle's robust set of tools for PL/SQL development and tuning, enabling developers to identify potential issues and enhance code execution.
PLSQL_WARNINGS: This parameter controls the display of warnings during PL/SQL compilation. These warnings alert you to potential issues that might not be syntax errors but could lead to unexpected behavior, performance problems, or security vulnerabilities. By enabling and addressing these warnings, you can improve code quality, reliability, and maintainability.
PLSQL_OPTIMIZE_LEVEL: This parameter controls the level of optimization applied by the PL/SQL compiler. A higher optimization level generally results in faster code execution, but it might slightly increase compilation time. Oracle provides optimization levels 0 through 3 to balance compilation speed and runtime performance. Both parameters can also be set per program unit; see the sketch after this list.
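Both parameters can be applied to a single program unit at compile time and inspected afterwards. A minimal sketch, where my_proc is a hypothetical procedure name:
-- Recompile one unit with explicit compiler settings, keeping its other stored settings
ALTER PROCEDURE my_proc COMPILE PLSQL_OPTIMIZE_LEVEL = 3 PLSQL_WARNINGS = 'ENABLE:ALL' REUSE SETTINGS;
-- Check the settings each unit was compiled with
SELECT name, type, plsql_optimize_level, plsql_warnings
FROM user_plsql_object_settings
WHERE name = 'MY_PROC';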
Example 1: PLSQL_WARNINGS to Identify Unreachable Code
SET SERVEROUTPUT ON;
-- Set PLSQL_WARNINGS to enable warnings
ALTER SESSION SET PLSQL_WARNINGS = 'ENABLE:ALL';
-- Procedure with unreachable code
CREATE OR REPLACE PROCEDURE calculate_discount (
p_amount IN NUMBER,
p_discount OUT NUMBER
)
AS
BEGIN
p_discount := p_amount * 0.10; -- 10% discount
RETURN; -- This RETURN statement makes the next line unreachable
p_discount := p_amount * 0.05; -- This line is unreachable
END;
/
-- Compile the procedure and observe warnings
-- (The warning will appear during compilation, not during execution of this block)
-- If you compile this in SQL Developer or SQL*Plus, you'll see a warning message like:
-- PLW-06002: Unreachable code
-- This demonstrates how PLSQL_WARNINGS helps identify logical issues.
-- Test the procedure (will only apply 10% discount)
DECLARE
l_amount NUMBER := 100;
l_discount NUMBER;
BEGIN
calculate_discount(l_amount, l_discount);
DBMS_OUTPUT.PUT_LINE('Calculated discount for ' || l_amount || ': ' || l_discount);
END;
/
-- Clean up: Reset PLSQL_WARNINGS (optional, good practice)
ALTER SESSION SET PLSQL_WARNINGS = 'DISABLE:ALL';
Explanation
This example demonstrates how PLSQL_WARNINGS can help identify unreachable code. We set PLSQL_WARNINGS to ENABLE:ALL for the session. When calculate_discount is compiled, Oracle will issue a warning (e.g., PLW-06002: Unreachable code) because the RETURN statement makes the subsequent line of code impossible to execute. This warning indicates a logical flaw or dead code that can be removed, leading to cleaner and more maintainable Oracle PL/SQL code. This feature is invaluable for code quality assurance.
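Two related, hedged follow-ups to the example above: the warnings recorded for a unit can be listed from the USER_ERRORS view (its ATTRIBUTE column distinguishes warnings from errors), and a specific warning can be promoted to a hard compile error so it cannot slip through.
-- List the warnings recorded for the procedure compiled above
SELECT line, position, text
FROM user_errors
WHERE name = 'CALCULATE_DISCOUNT'
AND attribute = 'WARNING'
ORDER BY sequence;
-- Optionally treat PLW-06002 as a compile error from now on
ALTER SESSION SET PLSQL_WARNINGS = 'ENABLE:ALL', 'ERROR:06002';
ALTER PROCEDURE calculate_discount COMPILE; -- now fails instead of merely warning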
Example 2: PLSQL_WARNINGS to Detect Implicit Cursor Performance Issues
SET SERVEROUTPUT ON;
-- Set PLSQL_WARNINGS to detect performance issues (e.g., implicit cursors in loops)
ALTER SESSION SET PLSQL_WARNINGS = 'ENABLE:PERFORMANCE';
-- Create a temporary table
CREATE TABLE temp_employee_salaries (
employee_id NUMBER,
salary NUMBER
);
-- Procedure demonstrating an implicit cursor performance warning
CREATE OR REPLACE PROCEDURE adjust_salaries (
p_increase_percent IN NUMBER
)
AS
l_salary NUMBER;
BEGIN
-- The PERFORMANCE category (PLW-07xxx warnings) flags constructs with performance
-- implications; regardless of whether a warning fires here, SQL inside a row-by-row
-- loop like this is a classic candidate for a set-based UPDATE or BULK COLLECT/FORALL.
FOR emp_rec IN (SELECT employee_id, salary FROM temp_employee_salaries) LOOP
SELECT salary * (1 + p_increase_percent / 100) INTO l_salary
FROM DUAL; -- Simulate complex calculation for each employee
-- Update each row individually (another implicit cursor)
UPDATE temp_employee_salaries
SET salary = l_salary
WHERE employee_id = emp_rec.employee_id;
END LOOP;
END;
/
-- Compile the procedure and check for performance-category (PLW-07xxx) warnings
-- Anonymous block to test (will not show warning, warning is at compile time)
BEGIN
-- Insert some dummy data
INSERT INTO temp_employee_salaries VALUES (1, 50000);
INSERT INTO temp_employee_salaries VALUES (2, 60000);
COMMIT;
DBMS_OUTPUT.PUT_LINE('Adjusting salaries...');
adjust_salaries(5);
DBMS_OUTPUT.PUT_LINE('Salaries adjusted.');
-- Display updated salaries
DBMS_OUTPUT.PUT_LINE('--- Updated Salaries ---');
FOR r IN (SELECT employee_id, salary FROM temp_employee_salaries) LOOP
DBMS_OUTPUT.PUT_LINE('Employee ID: ' || r.employee_id || ', Salary: ' || r.salary);
END LOOP;
-- Clean up
EXECUTE IMMEDIATE 'DROP TABLE temp_employee_salaries';
EXECUTE IMMEDIATE 'ALTER SESSION SET PLSQL_WARNINGS = ''DISABLE:ALL'''; -- Reset (ALTER SESSION must be issued dynamically inside PL/SQL)
END;
/
Explanation
This example showcases how PLSQL_WARNINGS can highlight potential performance problems. Setting PLSQL_WARNINGS = 'ENABLE:PERFORMANCE' enables the performance category of compiler warnings (the PLW-07xxx range), which flags constructs with performance implications. Whether or not a warning is raised for this particular procedure on your release, the pattern it contains, issuing a SELECT and an UPDATE for every row inside a cursor loop, is exactly the kind of row-by-row processing that should be replaced with a single set-based statement or with BULK COLLECT and FORALL, as sketched below. Enabling the performance category gives you proactive guidance for Oracle PL/SQL code tuning and optimization.
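For reference, the row-by-row logic in adjust_salaries collapses into a single set-based statement; where genuine per-row PL/SQL processing is required, the BULK COLLECT and FORALL techniques shown earlier are the next best option. A minimal sketch (hypothetical procedure name, same table as the example):
CREATE OR REPLACE PROCEDURE adjust_salaries_setbased (
p_increase_percent IN NUMBER
)
AS
BEGIN
-- One SQL statement replaces the cursor loop and the per-row SELECT/UPDATE pair
UPDATE temp_employee_salaries
SET salary = salary * (1 + p_increase_percent / 100);
END;
/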
Example 3: Using PLSQL_OPTIMIZE_LEVEL
SET SERVEROUTPUT ON;
-- Reset optimization level to default for comparison
ALTER SESSION SET PLSQL_OPTIMIZE_LEVEL = 2;
-- Function designed to perform some calculations, compiled with default optimization (Level 2)
CREATE OR REPLACE FUNCTION calculate_complex_value (
p_input_a IN NUMBER,
p_input_b IN NUMBER
) RETURN NUMBER
AS
l_intermediate_result NUMBER;
l_final_result NUMBER;
BEGIN
-- Simulate complex calculations
l_intermediate_result := (p_input_a * p_input_a) + (p_input_b * p_input_b);
l_final_result := SQRT(l_intermediate_result) * 1.2345;
RETURN l_final_result;
END;
/
-- Compile the function with a higher optimization level
ALTER SESSION SET PLSQL_OPTIMIZE_LEVEL = 3; -- Highest optimization level (valid values are 0-3; level 3 also enables subprogram inlining)
CREATE OR REPLACE FUNCTION calculate_complex_value_optimized (
p_input_a IN NUMBER,
p_input_b IN NUMBER
) RETURN NUMBER
AS
l_intermediate_result NUMBER;
l_final_result NUMBER;
BEGIN
-- Same logic, but compiled with PLSQL_OPTIMIZE_LEVEL=3
l_intermediate_result := (p_input_a * p_input_a) + (p_input_b * p_input_b);
l_final_result := SQRT(l_intermediate_result) * 1.2345;
RETURN l_final_result;
END;
/
-- Anonymous block to compare execution times (though differences might be subtle for simple functions)
DECLARE
l_start_time_default NUMBER;
l_end_time_default NUMBER;
l_start_time_optimized NUMBER;
l_end_time_optimized NUMBER;
l_result NUMBER;
l_iterations CONSTANT NUMBER := 1000000; -- Large number of iterations
BEGIN
DBMS_OUTPUT.PUT_LINE('--- Comparing PLSQL_OPTIMIZE_LEVEL ---');
-- Test default optimized function
l_start_time_default := DBMS_UTILITY.GET_TIME;
FOR i IN 1 .. l_iterations LOOP
l_result := calculate_complex_value(i, i + 1);
END LOOP;
l_end_time_default := DBMS_UTILITY.GET_TIME;
DBMS_OUTPUT.PUT_LINE('Default optimization (' || l_iterations || ' iterations): ' || (l_end_time_default - l_start_time_default) || ' hundredths of seconds');
-- Test highly optimized function
l_start_time_optimized := DBMS_UTILITY.GET_TIME;
FOR i IN 1 .. l_iterations LOOP
l_result := calculate_complex_value_optimized(i, i + 1);
END LOOP;
l_end_time_optimized := DBMS_UTILITY.GET_TIME;
DBMS_OUTPUT.PUT_LINE('Optimized (Level 3) (' || l_iterations || ' iterations): ' || (l_end_time_optimized - l_start_time_optimized) || ' hundredths of seconds');
-- Clean up: Drop functions
EXECUTE IMMEDIATE 'DROP FUNCTION calculate_complex_value';
EXECUTE IMMEDIATE 'DROP FUNCTION calculate_complex_value_optimized';
EXECUTE IMMEDIATE 'ALTER SESSION SET PLSQL_OPTIMIZE_LEVEL = 2'; -- Reset (ALTER SESSION must be issued dynamically inside PL/SQL)
END;
/
Explanation
This example demonstrates the effect of PLSQL_OPTIMIZE_LEVEL. We create two identical functions, one compiled with the default PLSQL_OPTIMIZE_LEVEL (2) and another with the highest level (3). While the performance difference might be minimal for very simple functions, for complex PL/SQL logic, higher optimization levels can lead to noticeable speedups as the compiler performs more aggressive optimizations such as constant folding, dead code elimination, common subexpression elimination and, at level 3, subprogram inlining. It's recommended to compile production code with an appropriate PLSQL_OPTIMIZE_LEVEL to maximize runtime performance, a key aspect of Oracle PL/SQL performance tuning.
Pipelined Table Functions
Pipelined table functions are a powerful feature in Oracle PL/SQL that lets you treat the output of a PL/SQL function as a table, which means you can use the function directly in the FROM clause of a SQL query. The "pipelined" aspect implies that the function returns rows iteratively, as they are generated, rather than waiting for the entire result set to be computed and returned at once. This streaming capability is highly beneficial for performance and memory usage, especially when dealing with large datasets or complex data transformations, making it a critical component of advanced Oracle PL/SQL programming and database performance optimization.
Benefits of Pipelined Table Functions:
Reduced Memory Usage: Rows are returned as they are produced, avoiding the need to store the entire result set in memory.
Improved Responsiveness: Queries can start processing results immediately, without waiting for the full function execution.
ETL (Extract, Transform, Load) Processes: Ideal for transforming data on the fly, acting as a flexible and efficient ETL stage.
Complex Data Generation: Useful for generating large result sets based on complex PL/SQL logic that cannot be easily expressed in pure SQL.
Functionality as a Table: Seamless integration with SQL, allowing standard SQL operations (JOIN, WHERE, GROUP BY, etc.) on the function's output.
Example 1: Basic Pipelined Table Function
SET SERVEROUTPUT ON;
-- Define an object type for the rows
CREATE OR REPLACE TYPE emp_obj_type IS OBJECT (
employee_id NUMBER,
employee_name VARCHAR2(100),
department VARCHAR2(50),
salary NUMBER
);
/
-- Define a table type for the collection of objects
CREATE OR REPLACE TYPE emp_tab_type IS TABLE OF emp_obj_type;
/
-- Create a pipelined table function to generate employee data
CREATE OR REPLACE FUNCTION generate_employees (
p_num_employees IN NUMBER
) RETURN emp_tab_type PIPELINED
AS
BEGIN
FOR i IN 1 .. p_num_employees LOOP
-- Simulate generating employee data
PIPE ROW (emp_obj_type(
i,
'Employee ' || i,
'Dept_' || CEIL(i / 100), -- Assign to departments
ROUND(DBMS_RANDOM.VALUE(30000, 100000), 2)
));
END LOOP;
RETURN;
END;
/
-- Query the pipelined function as if it were a table
SELECT employee_id, employee_name, department, salary
FROM TABLE(generate_employees(10));
-- Example of using it with a WHERE clause
SELECT employee_id, employee_name, salary
FROM TABLE(generate_employees(1000))
WHERE salary > 80000 AND department = 'Dept_5';
-- Clean up
DROP FUNCTION generate_employees;
DROP TYPE emp_tab_type;
DROP TYPE emp_obj_type;
Explanation
This example demonstrates the fundamental concept of a pipelined table function. We define an object type (emp_obj_type) and a nested table type (emp_tab_type) to represent the structure of the data the function will return. The generate_employees function is then created with the PIPELINED keyword. Inside the loop, PIPE ROW is used to send each generated emp_obj_type instance directly to the calling SQL query. This allows the SQL engine to start processing rows as soon as they are generated, without waiting for all 10 or 1000 employee records to be created and stored in memory. This greatly improves efficiency for Oracle PL/SQL procedures involving large data sets or real-time data generation.
Example 2: Pipelined Table Function with Complex Logic and Parameters
SET SERVEROUTPUT ON;
-- Define an object type for financial transactions
CREATE OR REPLACE TYPE transaction_obj_type IS OBJECT (
transaction_id NUMBER,
account_id NUMBER,
transaction_date DATE,
amount NUMBER,
transaction_type VARCHAR2(20)
);
/
-- Define a table type for the collection of transactions
CREATE OR REPLACE TYPE transaction_tab_type IS TABLE OF transaction_obj_type;
/
-- Create a pipelined table function to simulate financial transactions based on parameters
CREATE OR REPLACE FUNCTION get_account_transactions (
p_account_id_start IN NUMBER,
p_account_id_end IN NUMBER,
p_num_transactions_per_acc IN NUMBER
) RETURN transaction_tab_type PIPELINED
AS
l_transaction_id NUMBER := 0;
BEGIN
FOR acc_id IN p_account_id_start .. p_account_id_end LOOP
FOR i IN 1 .. p_num_transactions_per_acc LOOP
l_transaction_id := l_transaction_id + 1;
PIPE ROW (transaction_obj_type(
l_transaction_id,
acc_id,
SYSDATE - DBMS_RANDOM.VALUE(0, 365), -- Random date within last year
ROUND(DBMS_RANDOM.VALUE(-1000, 1000), 2), -- Random amount (debit/credit)
CASE WHEN DBMS_RANDOM.VALUE < 0.5 THEN 'DEBIT' ELSE 'CREDIT' END
));
END LOOP;
END LOOP;
RETURN;
END;
/
-- Use the pipelined function in a complex SQL query
SELECT
t.account_id,
TO_CHAR(t.transaction_date, 'YYYY-MM') AS transaction_month,
SUM(CASE WHEN t.transaction_type = 'DEBIT' THEN t.amount ELSE 0 END) AS total_debits,
SUM(CASE WHEN t.transaction_type = 'CREDIT' THEN t.amount ELSE 0 END) AS total_credits,
SUM(t.amount) AS net_balance_change
FROM TABLE(get_account_transactions(1, 5, 100)) t -- Generate 100 transactions for accounts 1-5
WHERE t.transaction_date > TRUNC(SYSDATE, 'MM') -- Only current month transactions
GROUP BY t.account_id, TO_CHAR(t.transaction_date, 'YYYY-MM')
ORDER BY t.account_id, transaction_month;
-- Clean up
DROP FUNCTION get_account_transactions;
DROP TYPE transaction_tab_type;
DROP TYPE transaction_obj_type;
Explanation
This example demonstrates a more practical use of pipelined table functions for generating data with complex logic and input parameters. The get_account_transactions function takes a range of account IDs and the number of transactions per account, then generates simulated financial transaction data. The key here is how effortlessly this function is integrated into a SQL query. We can apply WHERE, GROUP BY, and aggregation functions directly to the output of the pipelined function, just as if it were a regular table. This showcases the power of pipelined functions for on-the-fly data generation and complex reporting, a highly efficient technique in Oracle PL/SQL for data warehousing and business intelligence.
Example 3: Pipelined Table Function with External Data (Simulated)
SET SERVEROUTPUT ON;
-- Define types for external log entries (simulated)
CREATE OR REPLACE TYPE log_entry_obj_type IS OBJECT (
log_id NUMBER,
log_timestamp TIMESTAMP,
log_level VARCHAR2(20),
message VARCHAR2(4000)
);
/
CREATE OR REPLACE TYPE log_entry_tab_type IS TABLE OF log_entry_obj_type;
/
-- Simulate an external data source (e.g., a CLOB containing log data)
CREATE TABLE external_log_source (
log_data CLOB
);
INSERT INTO external_log_source (log_data) VALUES ('
<log>
<entry id="1">
<timestamp>2025-06-14 10:00:00</timestamp>
<level>INFO</level>
<message>User login successful for user123</message>
</entry>
<entry id="2">
<timestamp>2025-06-14 10:01:15</timestamp>
<level>WARNING</level>
<message>Disk space low on server XYZ</message>
</entry>
<entry id="3">
<timestamp>2025-06-14 10:02:30</timestamp>
<level>ERROR</level>
<message>Database connection failed to ABC_DB</message>
</entry>
<entry id="4">
<timestamp>2025-06-14 10:03:45</timestamp>
<level>INFO</level>
<message>Application started successfully</message>
</entry>
</log>');
COMMIT;
-- Pipelined table function to parse XML log data from a CLOB
CREATE OR REPLACE FUNCTION parse_xml_logs (
p_clob_data IN CLOB
) RETURN log_entry_tab_type PIPELINED
AS
l_xml_doc XMLTYPE;
BEGIN
IF p_clob_data IS NOT NULL THEN
l_xml_doc := XMLTYPE(p_clob_data);
FOR log_rec IN (
SELECT
EXTRACTVALUE(VALUE(entry_node), '/entry/@id') AS log_id_str,
EXTRACTVALUE(VALUE(entry_node), '/entry/timestamp') AS log_timestamp_str,
EXTRACTVALUE(VALUE(entry_node), '/entry/level') AS log_level,
EXTRACTVALUE(VALUE(entry_node), '/entry/message') AS message
FROM TABLE(XMLSEQUENCE(l_xml_doc.EXTRACT('/log/entry'))) entry_node
)
LOOP
PIPE ROW (log_entry_obj_type(
TO_NUMBER(log_rec.log_id_str),
TO_TIMESTAMP(log_rec.log_timestamp_str, 'YYYY-MM-DD HH24:MI:SS'),
log_rec.log_level,
log_rec.message
));
END LOOP;
END IF;
RETURN;
END;
/
-- Query the pipelined function, passing the CLOB data
SELECT log_id, log_timestamp, log_level, message
FROM TABLE(parse_xml_logs((SELECT log_data FROM external_log_source)))
WHERE log_level = 'ERROR';
-- Clean up
DROP TABLE external_log_source;
DROP FUNCTION parse_xml_logs;
DROP TYPE log_entry_tab_type;
DROP TYPE log_entry_obj_type;
Explanation
This advanced example demonstrates using a pipelined table function to process data from an external source (simulated here by a CLOB column containing XML data). The parse_xml_logs function takes a CLOB as input, parses the XML using XMLTYPE and XMLSEQUENCE/EXTRACTVALUE, and then pipelines each parsed log entry as a row. This approach is incredibly useful for integrating and transforming semi-structured or external data directly within SQL, allowing you to leverage the full power of SQL for querying and analyzing complex datasets without loading them entirely into memory. This highlights the flexibility and efficiency of pipelined table functions for various data processing tasks in Oracle PL/SQL, especially for ETL and data integration.
Profiling and Debugging
Profiling and debugging are indispensable tools for identifying performance bottlenecks, understanding code execution flow, and resolving issues in PL/SQL applications. Oracle provides powerful built-in utilities and integrates seamlessly with development environments like SQL Developer to offer comprehensive profiling and debugging capabilities. Mastering these tools is crucial for any serious Oracle PL/SQL developer aiming to build high-quality, performant, and reliable database applications.
DBMS_PROFILER
DBMS_PROFILER is a built-in Oracle package that allows you to collect performance statistics for PL/SQL applications. It records the execution time of each line of PL/SQL code, as well as the number of times each line is executed. This detailed information helps you identify "hot spots" in your code – areas that consume the most execution time – enabling you to focus your optimization efforts where they will have the greatest impact. It's a fundamental tool for PL/SQL performance tuning and code optimization.
Key Features:
Line-level Statistics: Provides execution time and call count for each executable line.
Hierarchical View: Can show statistics for subprograms called by your main PL/SQL block.
Session-based: Profiling runs for a specific session.
Persistent Data: Results are stored in database tables, allowing for later analysis.
Setup for DBMS_PROFILER:
Before using DBMS_PROFILER, you typically need to run a setup script (usually profload.sql and proftab.sql located in $ORACLE_HOME/rdbms/admin) to create the necessary tables and grant privileges.
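A quick sanity check before your first run: confirm that the repository tables created by proftab.sql are visible from your schema.
-- The three repository tables created by proftab.sql
SELECT table_name
FROM user_tables
WHERE table_name IN ('PLSQL_PROFILER_RUNS', 'PLSQL_PROFILER_UNITS', 'PLSQL_PROFILER_DATA');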
Example 1: Profiling a Simple PL/SQL Procedure
SET SERVEROUTPUT ON;
-- Ensure DBMS_PROFILER tables are set up (run profload.sql and proftab.sql if not)
-- Connect as SYSDBA and run: @?/rdbms/admin/profload.sql
-- Connect as your user and run: @?/rdbms/admin/proftab.sql
-- Procedure to be profiled
CREATE OR REPLACE PROCEDURE calculate_factorial (
p_number IN NUMBER,
p_result OUT NUMBER
)
AS
BEGIN
p_result := 1;
IF p_number < 0 THEN
p_result := NULL;
RETURN;
ELSIF p_number = 0 THEN
p_result := 1;
RETURN;
END IF;
FOR i IN 1 .. p_number LOOP
p_result := p_result * i;
END LOOP;
END;
/
-- Anonymous block to run the profiler
DECLARE
l_run_id BINARY_INTEGER;
l_status BINARY_INTEGER;
l_result NUMBER;
BEGIN
-- Start the profiler; the function returns a status code and reports the run ID via run_number
l_status := DBMS_PROFILER.START_PROFILER(
run_comment => 'Factorial Calculation Test',
run_number => l_run_id
);
DBMS_OUTPUT.PUT_LINE('Profiler started for Run ID: ' || l_run_id);
-- Call the procedure multiple times to get meaningful data
FOR i IN 1 .. 1000 LOOP
calculate_factorial(10, l_result);
-- Simulate some other work
DBMS_LOCK.SLEEP(0.001);
END LOOP;
-- Stop the profiler
DBMS_PROFILER.STOP_PROFILER;
DBMS_OUTPUT.PUT_LINE('Profiler stopped.');
-- You can query the profiler tables for results:
-- SELECT * FROM plsql_profiler_runs;
-- SELECT * FROM plsql_profiler_units;
-- SELECT * FROM plsql_profiler_data ORDER BY total_occur DESC;
-- To analyze in SQL Developer: navigate to Reports -> PL/SQL Profiler
END;
/
-- After running, query the profiler tables:
SELECT runid, run_date, run_comment, run_total_time
FROM plsql_profiler_runs
ORDER BY run_date DESC;
SELECT u.unit_owner, u.unit_name, u.unit_type,
SUM(d.total_time) AS total_time, SUM(d.total_occur) AS total_occur
FROM plsql_profiler_units u
JOIN plsql_profiler_data d ON d.runid = u.runid AND d.unit_number = u.unit_number
WHERE u.runid = (SELECT MAX(runid) FROM plsql_profiler_runs) -- Get latest run
GROUP BY u.unit_owner, u.unit_name, u.unit_type
ORDER BY total_time DESC;
SELECT
d.line#,
d.total_occur,
d.total_time,
d.min_time,
d.max_time,
s.text AS code_line
FROM plsql_profiler_data d
JOIN plsql_profiler_units u ON d.runid = u.runid AND d.unit_number = u.unit_number
JOIN all_source s ON u.unit_owner = s.owner
AND u.unit_name = s.name
AND u.unit_type = s.type
AND d.line# = s.line
WHERE d.runid = (SELECT MAX(runid) FROM plsql_profiler_runs) -- Get latest run
AND u.unit_name = 'CALCULATE_FACTORIAL'
ORDER BY d.line#;
-- Clean up (optional)
DROP PROCEDURE calculate_factorial;
-- For profiler tables, you might want to keep them or truncate for new runs
-- TRUNCATE TABLE plsql_profiler_runs;
-- TRUNCATE TABLE plsql_profiler_units;
-- TRUNCATE TABLE plsql_profiler_data;
Explanation
This example demonstrates how to use DBMS_PROFILER to analyze the performance of a PL/SQL procedure. The anonymous block first calls DBMS_PROFILER.START_PROFILER to begin a profiling session, then executes the calculate_factorial procedure multiple times to collect meaningful data, and finally calls DBMS_PROFILER.STOP_PROFILER. After execution, you can query the PLSQL_PROFILER_RUNS, PLSQL_PROFILER_UNITS, and PLSQL_PROFILER_DATA tables to view the collected statistics. This includes total execution time for the run, time spent in each unit, and crucially, execution counts and times for individual lines of code within CALCULATE_FACTORIAL. This fine-grained detail helps identify specific lines contributing most to execution time, guiding your Oracle PL/SQL optimization efforts.
Example 2: Profiling a Complex PL/SQL Block with Subprogram Calls
SET SERVEROUTPUT ON;
-- Create helper functions for the main procedure (the employees table they reference is created below; the functions revalidate automatically on first call)
CREATE OR REPLACE FUNCTION get_employee_count RETURN NUMBER
AS
l_count NUMBER;
BEGIN
SELECT COUNT(*) INTO l_count FROM employees;
RETURN l_count;
END;
/
CREATE OR REPLACE FUNCTION calculate_bonus (p_salary IN NUMBER) RETURN NUMBER
AS
BEGIN
RETURN p_salary * 0.15; -- 15% bonus
END;
/
-- Main procedure to be profiled, calling other functions
CREATE OR REPLACE PROCEDURE process_payroll (
p_department_id IN NUMBER
)
AS
CURSOR c_employees IS
SELECT employee_id, salary
FROM employees
WHERE department_id = p_department_id;
l_bonus NUMBER;
l_employee_count NUMBER;
BEGIN
-- Simulate some initial setup time
DBMS_LOCK.SLEEP(0.01);
l_employee_count := get_employee_count(); -- Call a helper function
FOR emp_rec IN c_employees LOOP
l_bonus := calculate_bonus(emp_rec.salary); -- Call another helper function
-- Simulate updating bonus for employee
-- UPDATE employees SET bonus = l_bonus WHERE employee_id = emp_rec.employee_id;
DBMS_LOCK.SLEEP(0.0001); -- Simulate processing time per employee
END LOOP;
END;
/
-- Create a dummy employees table
CREATE TABLE employees (
employee_id NUMBER PRIMARY KEY,
department_id NUMBER,
salary NUMBER,
bonus NUMBER
);
-- Insert dummy data
BEGIN
FOR i IN 1 .. 1000 LOOP
INSERT INTO employees (employee_id, department_id, salary, bonus)
VALUES (i, MOD(i, 5) + 1, ROUND(DBMS_RANDOM.VALUE(40000, 90000), 2), 0);
END LOOP;
COMMIT;
END;
/
-- Anonymous block to run the profiler for the payroll process
DECLARE
l_run_id BINARY_INTEGER;
l_status BINARY_INTEGER;
BEGIN
l_status := DBMS_PROFILER.START_PROFILER(
run_comment => 'Payroll Processing for Dept 3',
run_number => l_run_id
);
DBMS_OUTPUT.PUT_LINE('Profiler started for Run ID: ' || l_run_id);
process_payroll(3); -- Profile the procedure for department 3
DBMS_PROFILER.STOP_PROFILER;
DBMS_OUTPUT.PUT_LINE('Profiler stopped.');
-- Analyze the results in plsql_profiler_units and plsql_profiler_data
END;
/
-- Query profiler results to see time spent in each unit
SELECT u.unit_owner, u.unit_name, u.unit_type,
SUM(d.total_time) AS total_time, SUM(d.total_occur) AS total_occur
FROM plsql_profiler_units u
JOIN plsql_profiler_data d ON d.runid = u.runid AND d.unit_number = u.unit_number
WHERE u.runid = (SELECT MAX(runid) FROM plsql_profiler_runs)
GROUP BY u.unit_owner, u.unit_name, u.unit_type
ORDER BY total_time DESC;
-- Clean up
DROP TABLE employees;
DROP PROCEDURE process_payroll;
DROP FUNCTION get_employee_count;
DROP FUNCTION calculate_bonus;
Explanation
This example demonstrates profiling a more complex PL/SQL block that calls other subprograms. The process_payroll procedure interacts with a dummy employees table and calls get_employee_count and calculate_bonus functions. By profiling process_payroll, DBMS_PROFILER captures statistics not only for the main procedure itself but also for the functions it calls. When you query plsql_profiler_units, you'll see the time spent within PROCESS_PAYROLL, GET_EMPLOYEE_COUNT, and CALCULATE_BONUS individually. This hierarchical view is crucial for identifying which specific components or subroutines within a larger application are contributing most to the overall execution time, providing valuable insights for Oracle PL/SQL performance tuning.
Example 3: Controlling a Profiling Run with PAUSE_PROFILER, RESUME_PROFILER, and FLUSH_DATA
SET SERVEROUTPUT ON;
-- Create a simple test procedure
CREATE OR REPLACE PROCEDURE long_running_operation
AS
BEGIN
FOR i IN 1 .. 100000 LOOP
-- Simulate CPU-bound work
IF MOD(i, 100) = 0 THEN
NULL; -- Placeholder for some computation
END IF;
END LOOP;
-- Simulate an I/O or wait operation
DBMS_LOCK.SLEEP(1); -- Sleep for 1 second
END;
/
-- Anonymous block: profile only the parts of the workload we care about
DECLARE
l_run_id BINARY_INTEGER;
l_status BINARY_INTEGER;
BEGIN
DBMS_OUTPUT.PUT_LINE('--- Profiling with PAUSE/RESUME and FLUSH_DATA ---');
l_status := DBMS_PROFILER.START_PROFILER(
run_comment => 'Selective Profiling Run',
run_number => l_run_id
);
-- This execution is profiled
long_running_operation;
-- Pause the profiler while running code we do not want to measure
DBMS_PROFILER.PAUSE_PROFILER;
long_running_operation; -- Not recorded
-- Resume profiling for the code we do care about
DBMS_PROFILER.RESUME_PROFILER;
long_running_operation; -- Recorded again
-- Persist the data collected so far to the profiler tables without ending the run
DBMS_PROFILER.FLUSH_DATA;
DBMS_PROFILER.STOP_PROFILER;
DBMS_OUTPUT.PUT_LINE('Selective profiling run ID: ' || l_run_id);
END;
/
-- Inspect the line-level data: the loop lines show the CPU-bound time,
-- while the DBMS_LOCK.SLEEP line accounts for the elapsed wait time.
SELECT
d.line#,
d.total_occur,
d.total_time,
d.min_time,
d.max_time,
s.text AS code_line
FROM plsql_profiler_data d
JOIN plsql_profiler_units u ON d.runid = u.runid AND d.unit_number = u.unit_number
JOIN all_source s ON u.unit_owner = s.owner
AND u.unit_name = s.name
AND u.unit_type = s.type
AND d.line# = s.line
WHERE d.runid = (SELECT MAX(runid) FROM plsql_profiler_runs WHERE run_comment = 'Selective Profiling Run')
AND u.unit_name = 'LONG_RUNNING_OPERATION'
ORDER BY d.line#;
-- Clean up
DROP PROCEDURE long_running_operation;
Explanation
This example demonstrates finer control over a DBMS_PROFILER run. PAUSE_PROFILER and RESUME_PROFILER suspend and restart data collection, so setup work or code you are not interested in can be excluded from the statistics, while FLUSH_DATA writes the data gathered so far to the profiler tables without ending the run. Because DBMS_PROFILER records elapsed time per line, the line containing DBMS_LOCK.SLEEP dominates the profile of this procedure while the loop lines account for the CPU-bound portion; comparing the two gives a first indication of whether a unit is CPU-bound or spends its time waiting. For detailed wait-event analysis you would complement the profiler with SQL trace or Active Session History, but this kind of targeted measurement is already valuable for focused Oracle PL/SQL performance tuning and bottleneck analysis.
DBMS_HPROF (Hierarchical Profiler)
DBMS_HPROF is a hierarchical profiler introduced in Oracle Database 11g, offering a more advanced and user-friendly way to profile PL/SQL code compared to DBMS_PROFILER. It provides a call-tree (or call-graph) view of your application's execution, showing not only the time spent in individual subprograms but also how much time was spent in their children. This hierarchical perspective makes it easier to pinpoint the root cause of performance issues in complex applications, offering deeper insights into Oracle PL/SQL performance.
Key Features:
Call-Tree View: Shows the calling relationships between subprograms and accumulated time.
Easier Interpretation: The output is generally more intuitive for understanding overall application flow.
Overhead: Can have slightly higher overhead than DBMS_PROFILER due to its hierarchical tracking.
Output File: Results are written to a file in a specified directory, which can then be analyzed using tools like plshprof.
Setup for DBMS_HPROF:
Create a Directory Object: You need a directory object that points to a physical directory on the database server file system where the profiling output file will be written.
Grant Permissions: Grant READ and WRITE privileges on this directory object to the user who will run the profiler.
Ensure the plshprof Utility Is Available: plshprof is a command-line utility, typically located in $ORACLE_HOME/bin, that converts the raw DBMS_HPROF output file into a human-readable format (HTML or text). From 11g onward you can also analyze the trace inside the database; see the sketch after this list.
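If you prefer to stay inside the database, DBMS_HPROF.ANALYZE loads a trace file into the DBMSHP_* analysis tables created by the dbmshptab.sql script. A minimal, hedged sketch that assumes the HPROF_DIR directory object and the trace file name used in Example 1 below; verify the exact signature and table columns on your release:
-- Load the raw trace into the DBMSHP_* tables and get back an analysis run ID
DECLARE
l_runid NUMBER;
BEGIN
l_runid := DBMS_HPROF.ANALYZE(
location => 'HPROF_DIR',
filename => 'hprof_calc_test.trc'
);
DBMS_OUTPUT.PUT_LINE('Analysis run ID: ' || l_runid);
END;
/
-- Function-level results: subtree (inclusive) time vs. time spent in the function itself
SELECT owner, module, function, calls, subtree_elapsed_time, function_elapsed_time
FROM dbmshp_function_info
WHERE runid = (SELECT MAX(runid) FROM dbmshp_runs)
ORDER BY subtree_elapsed_time DESC;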
Example 1: Basic Profiling with DBMS_HPROF
SET SERVEROUTPUT ON;
-- 1. Create a directory object (replace with an actual path on your DB server)
-- As SYSDBA or with CREATE ANY DIRECTORY privilege:
-- CREATE DIRECTORY HPROF_DIR AS '/tmp/hprof_output';
-- GRANT READ, WRITE ON DIRECTORY HPROF_DIR TO YOUR_USER;
-- Assuming HPROF_DIR is created and granted
-- Procedure to be profiled
CREATE OR REPLACE PROCEDURE perform_calculations (
p_iterations IN NUMBER
)
AS
l_sum NUMBER := 0;
BEGIN
FOR i IN 1 .. p_iterations LOOP
l_sum := l_sum + SQRT(i);
END LOOP;
-- Simulate some delay
DBMS_LOCK.SLEEP(0.1);
END;
/
-- Anonymous block to run DBMS_HPROF
DECLARE
l_filename VARCHAR2(100) := 'hprof_calc_test.trc';
BEGIN
-- Start the hierarchical profiler (START_PROFILING is a procedure; it does not return a run ID)
DBMS_HPROF.START_PROFILING(
location => 'HPROF_DIR', -- Name of the directory object
filename => l_filename -- Name of the output file
);
DBMS_OUTPUT.PUT_LINE('DBMS_HPROF started. Output file: ' || l_filename);
-- Call the procedure to be profiled
perform_calculations(100000);
-- Stop the hierarchical profiler
DBMS_HPROF.STOP_PROFILING;
DBMS_OUTPUT.PUT_LINE('DBMS_HPROF stopped.');
DBMS_OUTPUT.PUT_LINE('To analyze the results, run the plshprof utility (in $ORACLE_HOME/bin), e.g.:');
DBMS_OUTPUT.PUT_LINE(' plshprof -output hprof_calc_report /tmp/hprof_output/' || l_filename);
END;
/
-- Clean up (optional)
DROP PROCEDURE perform_calculations;
-- DROP DIRECTORY HPROF_DIR; -- Only if you created it for this example
Explanation
This example demonstrates the basic usage of DBMS_HPROF. After ensuring a directory object is set up, we define a procedure perform_calculations that we want to profile. The anonymous block calls the DBMS_HPROF.START_PROFILING procedure, specifying the directory object and a filename for the trace output. After executing the target procedure, DBMS_HPROF.STOP_PROFILING is called. The profiling data is written to the specified trace file on the database server. To view the results, you would typically run the plshprof utility (found in $ORACLE_HOME/bin) against the generated trace file. This utility generates a set of HTML pages (or text) that provide a hierarchical view of the execution times, making it a powerful tool for Oracle PL/SQL performance analysis and optimization.
Example 2: Profiling a Scenario with Multiple Nested Calls
SET SERVEROUTPUT ON;
-- Re-using HPROF_DIR from Example 1
-- Function 1
CREATE OR REPLACE FUNCTION func_a (p_val IN NUMBER) RETURN NUMBER
AS
BEGIN
DBMS_LOCK.SLEEP(0.01); -- Simulate work
RETURN p_val * 2;
END;
/
-- Function 2 (calls Function 1)
CREATE OR REPLACE FUNCTION func_b (p_val IN NUMBER) RETURN NUMBER
AS
l_result NUMBER;
BEGIN
DBMS_LOCK.SLEEP(0.02); -- Simulate work
l_result := func_a(p_val) + 10;
RETURN l_result;
END;
/
-- Procedure 1 (calls Function 2 multiple times)
CREATE OR REPLACE PROCEDURE proc_x (p_count IN NUMBER)
AS
l_total NUMBER := 0;
BEGIN
FOR i IN 1 .. p_count LOOP
l_total := l_total + func_b(i);
END LOOP;
DBMS_LOCK.SLEEP(0.05); -- Simulate more work
END;
/
-- Anonymous block to run DBMS_HPROF for proc_x
DECLARE
l_filename VARCHAR2(100) := 'hprof_nested_calls.trc';
BEGIN
DBMS_HPROF.START_PROFILING(
location => 'HPROF_DIR',
filename => l_filename
);
DBMS_OUTPUT.PUT_LINE('DBMS_HPROF started for nested calls. Output file: ' || l_filename);
proc_x(50); -- Call the procedure with nested functions
DBMS_HPROF.STOP_PROFILING;
DBMS_OUTPUT.PUT_LINE('DBMS_HPROF stopped.');
DBMS_OUTPUT.PUT_LINE('To analyze the results for nested calls, run:');
DBMS_OUTPUT.PUT_LINE(' plshprof -output nested_calls_report /tmp/hprof_output/' || l_filename);
END;
/
-- Clean up
DROP PROCEDURE proc_x;
DROP FUNCTION func_b;
DROP FUNCTION func_a;
Explanation
This example highlights DBMS_HPROF's ability to capture hierarchical profiling data. We define a chain of subprogram calls: proc_x calls func_b, which in turn calls func_a. When proc_x is profiled with DBMS_HPROF, the generated report (analyzed by plshprof) will clearly show the call tree, indicating how much time was spent in proc_x itself, and how much of that time was consumed by calls to func_b, and further, how much of func_b's time was spent in func_a. This hierarchical breakdown is invaluable for pinpointing performance bottlenecks in complex Oracle PL/SQL applications, allowing you to optimize the slowest parts of your call stack.
Example 3: Analyzing HPROF Output with plshprof
(Conceptual Example - requires external tool)
-- This "code" block is purely conceptual to show how you'd interact with plshprof
-- You would run this command from your operating system's command line, not SQL*Plus or SQL Developer.
-- Prerequisite:
-- 1. Oracle Client or Server installed.
-- 2. Your HPROF_DIR directory exists and contains the .trc file.
-- 3. You are in the $ORACLE_HOME/bin directory or have it in your PATH.
-- Command to analyze the trace file from Example 1:
-- plshprof -output hprof_report /tmp/hprof_output/hprof_calc_test.trc
-- This produces a set of linked pages (hprof_report*.html). The main page looks
-- conceptually like this (times are illustrative):
-- =========================================================================
-- PL/SQL Hierarchical Profiler Report
-- Trace file: /tmp/hprof_output/hprof_calc_test.trc
-- Function Elapsed Time and Call Counts (Call Tree)
-- =========================================================================
-- Function/Subprogram              Subtree Time (ms)  Self Time (ms)   Calls
-- -------------------------------  -----------------  ---------------  -----
-- __anonymous_block                        120.00             5.00         1
--   PERFORM_CALCULATIONS                   115.00            15.00         1
--     DBMS_LOCK.SLEEP                      100.00           100.00         1
-- =========================================================================
-- The actual report is much more detailed: it breaks elapsed time down by
-- namespace (PL/SQL vs. SQL), lists parents and children for every subprogram,
-- and shows percentages of total elapsed time. Note that DBMS_HPROF reports at
-- the subprogram level; for line-level statistics you would use DBMS_PROFILER.
-- This illustrates the typical output you'd expect from plshprof.
Explanation
This "example" is a conceptual illustration of how you would analyze the DBMS_HPROF output using the plshprof utility. It emphasizes that DBMS_HPROF itself only generates a raw trace file; the interpretation happens outside the database using plshprof. The conceptual output demonstrates the key information provided: a call tree showing the total time spent in each function (including time spent in functions it calls) and the "self time" (time spent directly in that function, excluding calls to others). It also provides line-by-line statistics, similar to DBMS_PROFILER, but within the context of the call hierarchy. This combined view is what makes DBMS_HPROF such a powerful tool for deep Oracle PL/SQL performance analysis and identifying optimization opportunities within complex code paths.
Using the SQL Developer Debugger
The SQL Developer Debugger is an integrated, graphical debugging tool that allows you to step through PL/SQL code, inspect variable values, set breakpoints, and examine the call stack. It's an indispensable tool for understanding code execution flow, identifying logical errors, and resolving complex issues in PL/SQL programs. Unlike profiling tools that focus on performance metrics, the debugger is designed for interactive problem-solving and deep insight into runtime behavior, making it a cornerstone for efficient Oracle PL/SQL development and troubleshooting.
Key Features:
Breakpoints: Pause execution at specific lines of code.
Step Into/Over/Out: Control the execution flow to trace through subprograms.
Variable Inspection: View and modify the values of variables during execution.
Call Stack: See the sequence of subprograms that led to the current execution point.
Data Browser: Inspect data in database tables during a debug session.
Exceptions: Handle and analyze exceptions during runtime.
Setup for SQL Developer Debugger:
Grant Debug Privileges: The database user needs the DEBUG CONNECT SESSION and DEBUG ANY PROCEDURE (or DEBUG ON <object>) privileges:
GRANT DEBUG CONNECT SESSION TO YOUR_USER;
GRANT DEBUG ANY PROCEDURE TO YOUR_USER; -- or grant DEBUG on specific objects instead
Compile with Debug: The PL/SQL code (packages, procedures, functions) you want to debug must be compiled with debug information. In SQL Developer, this is done by choosing "Compile for Debug" from the compile drop-down in the object editor (depending on your preferences, it may already be the default).
Disable Optimization: For effective debugging, ensure PLSQL_OPTIMIZE_LEVEL for your session or for the compiled object is set to 0 or 1. The default level of 2 (and the maximum, level 3) can optimize away or reorder code that the debugger expects to see; "Compile for Debug" takes care of this for you. A SQL sketch of this setup follows the list below.
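For scripted environments, the same setup can be done entirely in SQL. The statements below are a minimal sketch: YOUR_PROCEDURE, YOUR_USER, and the client host name are placeholders, and the network ACL step (needed on 12c and later so the JDWP debugger connection back to SQL Developer is allowed) must be run by a DBA.
-- Recompile an existing unit with debug information
ALTER PROCEDURE your_procedure COMPILE DEBUG;
-- Or lower the optimization level for the session before compiling new units
ALTER SESSION SET PLSQL_OPTIMIZE_LEVEL = 1;
-- Check how your units were compiled
SELECT name, type, plsql_optimize_level, plsql_debug
FROM user_plsql_object_settings
WHERE name = 'YOUR_PROCEDURE';
-- 12c and later: allow the database to open a JDWP debug connection back to your workstation (run as a DBA)
BEGIN
DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
host => 'client-host.example.com', -- placeholder: the machine running SQL Developer
ace  => XS$ACE_TYPE(privilege_list => XS$NAME_LIST('jdwp'),
principal_name => 'YOUR_USER',
principal_type => XS_ACL.PTYPE_DB)
);
END;
/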
Example 1: Basic Debugging a Procedure
SET SERVEROUTPUT ON;
-- Ensure debug privileges and compile for debug (often default in SQL Developer)
-- Procedure to be debugged
CREATE OR REPLACE PROCEDURE process_order_status (
p_order_id IN NUMBER,
p_new_status IN VARCHAR2
)
AS
l_current_status VARCHAR2(20);
l_rows_updated NUMBER;
BEGIN
SELECT order_status INTO l_current_status
FROM orders
WHERE order_id = p_order_id;
IF l_current_status = 'SHIPPED' AND p_new_status = 'CANCELLED' THEN
DBMS_OUTPUT.PUT_LINE('Cannot cancel a shipped order: ' || p_order_id);
RETURN;
END IF;
UPDATE orders
SET order_status = p_new_status
WHERE order_id = p_order_id;
l_rows_updated := SQL%ROWCOUNT;
IF l_rows_updated = 0 THEN
DBMS_OUTPUT.PUT_LINE('Order ' || p_order_id || ' not found or already in status ' || p_new_status);
ELSE
DBMS_OUTPUT.PUT_LINE('Order ' || p_order_id || ' status changed from ' || l_current_status || ' to ' || p_new_status);
END IF;
EXCEPTION
WHEN NO_DATA_FOUND THEN
DBMS_OUTPUT.PUT_LINE('Order ' || p_order_id || ' does not exist.');
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('An error occurred: ' || SQLERRM);
END;
/
-- Create a dummy orders table for testing
CREATE TABLE orders (
order_id NUMBER PRIMARY KEY,
order_status VARCHAR2(20)
);
INSERT INTO orders VALUES (101, 'PENDING');
INSERT INTO orders VALUES (102, 'PROCESSING');
INSERT INTO orders VALUES (103, 'SHIPPED');
COMMIT;
-- To debug in SQL Developer:
-- 1. Open the process_order_status procedure in SQL Developer.
-- 2. Click in the left margin next to a line number to set a breakpoint (e.g., line `SELECT order_status...`).
-- 3. Right-click on the procedure name in the Navigator or editor and choose "Debug".
-- 4. In the "Run PL/SQL" dialog, provide values for p_order_id (e.g., 102) and p_new_status (e.g., 'COMPLETED').
-- 5. Click "OK".
-- 6. SQL Developer will enter debug mode, pausing at your breakpoint. You can then use
-- Step Over (F8), Step Into (F7), Step Out (Shift+F8), view variables, etc.
-- Example calls (for non-debug execution to verify setup)
BEGIN
process_order_status(102, 'COMPLETED'); -- Valid update
process_order_status(103, 'CANCELLED'); -- Should be blocked
process_order_status(999, 'SHIPPED'); -- Order not found
END;
/
-- Clean up
DROP TABLE orders;
DROP PROCEDURE process_order_status;
Explanation
This example demonstrates a typical scenario for using the SQL Developer Debugger. We create a process_order_status procedure and a simple orders table. The instructions outline the steps to debug this procedure in SQL Developer: set a breakpoint, initiate debugging by right-clicking and choosing "Debug," and then interact with the debugger's controls (step over, step into, variable inspection window). By stepping through the code line by line, you can observe how l_current_status, l_rows_updated, and other variables change, verify the flow of execution through the IF conditions, and understand exactly why certain DBMS_OUTPUT messages appear or why an EXCEPTION might be raised. This interactive approach is fundamental for quickly finding and fixing logical errors in Oracle PL/SQL code.
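When an interactive debug session is not an option (for example, in an environment where DEBUG privileges are not granted), a complementary technique is conditional-compilation tracing: extra DBMS_OUTPUT calls are guarded by an inquiry directive driven by PLSQL_CCFLAGS, so they exist in the compiled code only when you ask for them. The sketch below is illustrative only; the procedure name is hypothetical and it reuses the orders table from the example above, so recreate that table first if you have already run the clean-up step.
-- Switch the trace flag on for units compiled in this session
ALTER SESSION SET PLSQL_CCFLAGS = 'trace_on:TRUE';
CREATE OR REPLACE PROCEDURE process_order_status_traced (
p_order_id   IN NUMBER,
p_new_status IN VARCHAR2
)
AS
l_current_status VARCHAR2(20);
BEGIN
SELECT order_status INTO l_current_status
FROM orders
WHERE order_id = p_order_id;
$IF $$trace_on $THEN
DBMS_OUTPUT.PUT_LINE('TRACE: order ' || p_order_id || ' current status = ' || l_current_status);
$END
UPDATE orders
SET order_status = p_new_status
WHERE order_id = p_order_id;
$IF $$trace_on $THEN
DBMS_OUTPUT.PUT_LINE('TRACE: rows updated = ' || SQL%ROWCOUNT);
$END
END;
/
-- Recompile with PLSQL_CCFLAGS = 'trace_on:FALSE' to strip the trace lines from the compiled code.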
Example 2: Debugging a Function and Inspecting Return Value
SET SERVEROUTPUT ON;
-- Function to be debugged
CREATE OR REPLACE FUNCTION calculate_total_amount (
p_product_id IN NUMBER,
p_quantity IN NUMBER
) RETURN NUMBER
AS
l_unit_price NUMBER;
l_total_amount NUMBER;
BEGIN
-- Simulate getting unit price from a product table
SELECT price INTO l_unit_price
FROM products
WHERE product_id = p_product_id;
IF l_unit_price IS NULL OR l_unit_price <= 0 THEN
RETURN 0; -- Handle invalid price
END IF;
l_total_amount := l_unit_price * p_quantity;
-- Apply a discount for large quantities
IF p_quantity > 100 THEN
l_total_amount := l_total_amount * 0.90; -- 10% discount
END IF;
RETURN l_total_amount;
EXCEPTION
WHEN NO_DATA_FOUND THEN
DBMS_OUTPUT.PUT_LINE('Product ID ' || p_product_id || ' not found.');
RETURN 0;
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('An error occurred in calculate_total_amount: ' || SQLERRM);
RETURN 0;
END;
/
-- Create a dummy products table
CREATE TABLE products (
product_id NUMBER PRIMARY KEY,
product_name VARCHAR2(100),
price NUMBER
);
INSERT INTO products VALUES (1, 'Laptop', 1200);
INSERT INTO products VALUES (2, 'Mouse', 25);
INSERT INTO products VALUES (3, 'Keyboard', 75);
COMMIT;
-- To debug in SQL Developer:
-- 1. Open the calculate_total_amount function.
-- 2. Set breakpoints (e.g., on the SELECT, on the discount IF).
-- 3. Right-click the function name in the Navigator and choose "Debug".
-- 4. Provide values (e.g., p_product_id: 1, p_quantity: 150).
-- 5. Step through the code. Observe l_unit_price, l_total_amount.
-- 6. Just before the RETURN statement executes, inspect l_total_amount in the debugger's
--    data window: that is the value the function is about to return to the caller.
-- Example calls (for non-debug verification)
DECLARE
v_amount NUMBER;
BEGIN
v_amount := calculate_total_amount(1, 5); -- No discount
DBMS_OUTPUT.PUT_LINE('Total for Laptop (x5): ' || v_amount);
v_amount := calculate_total_amount(1, 150); -- With discount
DBMS_OUTPUT.PUT_LINE('Total for Laptop (x150): ' || v_amount);
v_amount := calculate_total_amount(99, 10); -- Product not found
DBMS_OUTPUT.PUT_LINE('Total for non-existent product: ' || v_amount);
END;
/
-- Clean up
DROP TABLE products;
DROP FUNCTION calculate_total_amount;
Explanation
This example focuses on debugging a PL/SQL function and observing its return value in SQL Developer. The calculate_total_amount function computes a total price, potentially applying a discount. During a debug session you can set breakpoints, step through the logic and, crucially, observe the values of l_unit_price and l_total_amount as they are calculated. Just before the function returns, l_total_amount holds the final value that will be sent back to the calling environment, so inspecting it at that point tells you exactly what the caller will receive. This is extremely helpful for verifying the correctness of complex calculations and ensuring that functions produce the expected output, a vital part of Oracle PL/SQL testing and validation.
Example 3: Debugging an Anonymous Block and Exception Handling
SET SERVEROUTPUT ON;
-- Procedure that might raise an error
CREATE OR REPLACE PROCEDURE insert_new_user (
p_username IN VARCHAR2,
p_email IN VARCHAR2
)
AS
BEGIN
-- Simulate a unique constraint violation if username already exists
INSERT INTO app_users (username, email)
VALUES (p_username, p_email);
DBMS_OUTPUT.PUT_LINE('User ' || p_username || ' created successfully.');
EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN
RAISE_APPLICATION_ERROR(-20001, 'Username ' || p_username || ' already exists.');
WHEN OTHERS THEN
RAISE_APPLICATION_ERROR(-20002, 'An unexpected error occurred: ' || SQLERRM);
END;
/
-- Create a dummy app_users table with a unique constraint
CREATE TABLE app_users (
user_id NUMBER GENERATED ALWAYS AS IDENTITY,
username VARCHAR2(50) UNIQUE NOT NULL,
email VARCHAR2(100)
);
INSERT INTO app_users (username, email) VALUES ('john.doe', 'john@example.com');
COMMIT;
-- Anonymous block to be debugged, including calls and exception handling
DECLARE
l_username VARCHAR2(50);
l_email VARCHAR2(100);
BEGIN
DBMS_OUTPUT.PUT_LINE('--- Starting user creation process ---');
-- Scenario 1: Successful insert
l_username := 'jane.smith';
l_email := 'jane@example.com';
BEGIN
insert_new_user(l_username, l_email);
COMMIT;
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('Caught error for ' || l_username || ': ' || SQLERRM);
ROLLBACK;
END;
-- Scenario 2: Duplicate username (will raise DUP_VAL_ON_INDEX)
l_username := 'john.doe';
l_email := 'john.new@example.com';
BEGIN
insert_new_user(l_username, l_email); -- Set breakpoint here
COMMIT;
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('Caught error for ' || l_username || ': ' || SQLERRM);
ROLLBACK;
END;
DBMS_OUTPUT.PUT_LINE('--- User creation process finished ---');
END;
/
-- To debug in SQL Developer:
-- 1. Open the anonymous block in a worksheet.
-- 2. Set a breakpoint on the `insert_new_user(l_username, l_email);` line for the duplicate scenario.
-- 3. Click the "Debug" button in the worksheet toolbar (green bug icon).
-- 4. Execution will pause at your breakpoint.
-- 5. Use Step Into (F7) to enter insert_new_user. When the INSERT statement violates the unique
--    constraint, DUP_VAL_ON_INDEX is raised and control jumps to the procedure's EXCEPTION handler,
--    which re-raises the error as ORA-20001 via RAISE_APPLICATION_ERROR.
-- 6. Keep stepping to watch the re-raised exception propagate to the EXCEPTION block of the
--    anonymous block, where it is caught and logged.
-- 7. SQLERRM and SQLCODE are functions rather than variables, so to examine the error details in
--    the debugger, assign SQLERRM to a local variable inside the handler and watch that variable,
--    or simply let the handler's DBMS_OUTPUT line report it.
-- Clean up
DROP TABLE app_users;
DROP PROCEDURE insert_new_user;
Explanation
This example demonstrates debugging an anonymous PL/SQL block that calls a procedure, with a focus on exception handling. The insert_new_user procedure raises DUP_VAL_ON_INDEX when a duplicate username is inserted, catches it, and re-raises it as ORA-20001 via RAISE_APPLICATION_ERROR. When debugging the anonymous block in SQL Developer, you set a breakpoint before the problematic call and then step through it: you can follow execution into the procedure, watch control jump to its exception handler when the INSERT fails, and then watch the re-raised error propagate to the EXCEPTION block of the anonymous block, where your handling logic catches and logs it. This capability is vital for robust Oracle PL/SQL application development, ensuring that your code gracefully handles errors and provides meaningful feedback, which contributes directly to code reliability and maintainability.
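Finally, when you cannot attach a debugger at all, much of the "where did this error come from" information is available programmatically. The sketch below is a minimal illustration that reuses insert_new_user and the app_users table from this example (recreate them first if you have already run the clean-up step): DBMS_UTILITY.FORMAT_ERROR_STACK reports what went wrong, and DBMS_UTILITY.FORMAT_ERROR_BACKTRACE reports the unit and line where the error was most recently raised (here, the RAISE_APPLICATION_ERROR inside insert_new_user).
SET SERVEROUTPUT ON;
BEGIN
insert_new_user('john.doe', 'john.dup@example.com'); -- duplicate username, re-raised as ORA-20001
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('Error stack    : ' || DBMS_UTILITY.FORMAT_ERROR_STACK);
DBMS_OUTPUT.PUT_LINE('Error backtrace: ' || DBMS_UTILITY.FORMAT_ERROR_BACKTRACE);
ROLLBACK;
END;
/
Logging both strings in your WHEN OTHERS handlers gives you a permanent record of error origins that complements the interactive view the SQL Developer Debugger provides.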