Oracle SQL Where Condition comparing varchar2 field with number or string

Source: https://www.devze.com, 2023-04-12 04:53 (from the web)
I have two tables with the following columns:

Table1
{   ID            NUMBER(15),
    ROLL_NUM      VARCHAR2(9),
    BATCH_NUM     VARCHAR2(6),
    ACCT_BALANCE  NUMBER(15,2)
}

Table2
{   Table1_ID      NUMBER(15) REFERENCES TABLE1.ID,
    SEQ_NUM       NUMBER(2),
    TRANS_NUM     VARCHAR2(10),
    TRANS_AMT     NUMBER(8,2),
    TRANS_DT      DATE
}

Table1 has 200,000 records and Table2 has 500,000 records.

I have simple joins as follows:

SQL #1:

SELECT A.ROLL_NUM, A.ACCT_BALANCE, B.TRANS_NUM, TRANS_AMT, TRANS_DT
FROM   TABLE1 A, TABLE2 B
WHERE  B.Table1_ID = A.ID
AND    A.BATCH_NUM = 400012

SQL #2:

SELECT A.ROLL_NUM, A.ACCT_BALANCE, B.TRANS_NUM, TRANS_AMT, TRANS_DT
FROM   TABLE1 A, TABLE2 B
WHERE  B.Table1_ID = A.ID
AND    A.BATCH_NUM = '400012'

SQL #3:

SELECT A.ROLL_NUM, A.ACCT_BALANCE, B.TRANS_NUM, TRANS_AMT, TRANS_DT
FROM   TABLE1 A, TABLE2 B
WHERE  B.Table1_ID = A.ID
AND    A.BATCH_NUM = TO_NUMBER('400012')

The expected result from the count should be 500,000 if every BATCH_NUM from Table1 is '400012' and all the IDs match up in Table2.

When I run these queries in Oracle (v10 or v11), SQL #2 seems to take forever; I had to stop it after 10 to 15 minutes. SQL #1 and #3 come back instantaneously, returning the full 500,000 records in less than a second. At first I thought it was an indexing issue, but adding an index did nothing to resolve it. I tried the queries in both TOAD and SQL Developer with the same results.

I'm at a loss here because the BATCH_NUM column in Table1 is a VARCHAR2 and you'd think that an implicit conversion of the data types would cause the query to be slower, not faster than a non-converted comparison. Can someone explain this?


I agree that use of implicit conversion would prevent an index access path.

To be sure what's really going on, do the following:

explain plan for
select ......
/

Then, immediately following that, do:

select * from table(dbms_xplan.display);

and post the results.
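
Concretely, for SQL #2 from the question, the whole sequence would look like this (table and column names taken from the question; run in SQL*Plus, TOAD, or SQL Developer):

```sql
EXPLAIN PLAN FOR
SELECT A.ROLL_NUM, A.ACCT_BALANCE, B.TRANS_NUM, B.TRANS_AMT, B.TRANS_DT
FROM   TABLE1 A, TABLE2 B
WHERE  B.Table1_ID = A.ID
AND    A.BATCH_NUM = '400012';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Repeat the same two steps for SQL #1 and #3 and compare the access paths and the "Predicate Information" sections of the three plans.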

Without looking at the execution plan, anything I say would be guesswork.

Oh, and please post the definitions of all indexes that exist on the tables.

-Mark


I would guess you have fallen into the trap of thinking that index access = fast, full table scan = slow.

Out-of-date statistics could be causing your inconsistent execution times.

you can check for stale stats using:

-- requires SET SERVEROUTPUT ON to see the output
DECLARE
   l_objlist   DBMS_STATS.objecttab;
BEGIN
   DBMS_STATS.gather_schema_stats (ownname      => USER,
                                   options      => 'LIST STALE',
                                   objlist      => l_objlist
                                  );

   FOR i IN 1 .. l_objlist.COUNT
   LOOP
      DBMS_OUTPUT.put_line (   l_objlist (i).objtype
                            || ' .. '
                            || l_objlist (i).objname
                           );
   END LOOP;
END;
/
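
If the report above lists TABLE1 or TABLE2 as stale, the statistics can be refreshed with a call along these lines (a sketch with default sampling options; adjust to your environment):

```sql
BEGIN
   -- cascade => TRUE also regathers statistics on the tables' indexes
   DBMS_STATS.gather_table_stats (ownname => USER, tabname => 'TABLE1', cascade => TRUE);
   DBMS_STATS.gather_table_stats (ownname => USER, tabname => 'TABLE2', cascade => TRUE);
END;
/
```

Then rerun the three queries and compare timings again.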


OK, after sitting down with the colleague looking at the explain plans and looking at the indexes, we finally found the issue. I'd like to thank Mark Bobak and Kevin Burton for their input into this. Here's what we found out:

There was a composite index with BATCH_NUM and two other columns. When the comparison forces a conversion (SQL #1 and #3), Oracle cannot use that index, so it does a full table scan to find the Table1_ID values in the child table matching the BATCH_NUM and ID in the parent table; in this case the full scan is relatively fast. Now, the search condition with BATCH_NUM as a string (SQL #2) is where the index hurts and causes the whole query to "hang": because BATCH_NUM is not selective in any way (every row matches '400012'), the explain plan showed Oracle attempting an index range scan that covers essentially the entire index, which can be quite large when the tables hold 500,000 to 1,000,000 rows. Removing the index actually resolved the issue.


If a type conversion must take place on the column, a "normal" index on that column cannot be used. When a VARCHAR2 column is compared with a number, Oracle converts the column, not the literal, so if you look at the explain plan for your first query, you will likely see a full table scan with the qualification TO_NUMBER(batch_num) = 400012.

If you really must be able to qualify on a number, you can create a function-based index[1] on the converted column. If you can use the string syntax of SQL #2, you can leverage a "normal" index on the batch_num column.

[1] - http://docs.oracle.com/cd/E11882_01/appdev.112/e25518/adfns_indexes.htm
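
A function-based index matching the converted predicate might look like this (hypothetical index name; only worthwhile if numeric comparisons against BATCH_NUM are common):

```sql
CREATE INDEX table1_batch_num_fbi
   ON TABLE1 (TO_NUMBER(BATCH_NUM));

-- Afterwards, a numeric comparison such as
--   WHERE BATCH_NUM = 400012
-- can use this index, because Oracle evaluates it as
--   TO_NUMBER(BATCH_NUM) = 400012
```

Note this assumes every BATCH_NUM value is actually numeric; if any row contains a non-numeric string, the index build fails with ORA-01722.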

