Oracle Advanced Compression – a quick look at performance

Today I was playing around with an Oracle feature called “Oracle Advanced Compression”.
Compression has been around for quite a while now, but it received some improvements in 11g. For example, it now also works for OLTP workloads, which makes it an option beyond pure data warehousing.
According to Oracle, this feature will generally reduce disk space requirements and improve database performance.

Two benefits and no downside? Sounds nice, so I gave it a try.

1.) Testing Environment

I tested on an Oracle 11g single instance running in a Solaris zone.

First, I created two identical tables, one with compression activated, the other one uncompressed. Then I filled each of them with approximately 150,000 rows from dba_objects:

SQL> create table obj_comp as select * from dba_objects where 1=0;
SQL> create table obj_uncomp as select * from dba_objects where 1=0;
SQL> alter table obj_comp compress for oltp;
SQL> insert into obj_comp select * from dba_objects;
SQL> insert into obj_comp select * from dba_objects;
SQL> insert into obj_uncomp select * from dba_objects;
SQL> insert into obj_uncomp select * from dba_objects;

To check if compression is correctly configured, you can use this SQL:

SQL> select table_name
  ,      compression
  ,      compress_for
  from   user_tables;

TABLE_NAME   COMPRESS COMPRESS_FOR
------------ -------- ------------
OBJ_COMP     ENABLED  OLTP
OBJ_UNCOMP   DISABLED
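An existing, already populated table can be converted as well. A minimal sketch (the table and index names here are hypothetical; the move rebuilds the segment and leaves indexes UNUSABLE, so they need a rebuild afterwards):

```sql
-- compress an existing table in place (hypothetical names)
alter table some_table move compress for oltp;

-- indexes on the moved table are marked UNUSABLE and must be rebuilt
alter index some_table_pk rebuild;
```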

2.) Space requirement

That one was easy; you just have to query user_segments:

SQL> select segment_name
  ,      blocks
  from   user_segments
  where  segment_name like 'OBJ%';

SEGMENT_NAME   BLOCKS
------------- --------
OBJ_COMP      896
OBJ_UNCOMP    2176

As you can easily see, the compressed table needs almost 60% fewer blocks (and therefore less disk space) than the uncompressed one.
In practice this depends on how the data is structured: the more repeated column values, the better the compression ratio. But in general, you will get a nice benefit here.
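If you are on 11g Release 2 or later, you can also estimate the expected ratio up front with the DBMS_COMPRESSION package. A sketch (SCRATCH_TS is a placeholder for a scratch tablespace in your database):

```sql
set serveroutput on
declare
  l_blkcnt_cmp   pls_integer;
  l_blkcnt_uncmp pls_integer;
  l_row_cmp      pls_integer;
  l_row_uncmp    pls_integer;
  l_cmp_ratio    number;
  l_comptype_str varchar2(100);
begin
  dbms_compression.get_compression_ratio(
    scratchtbsname => 'SCRATCH_TS',   -- placeholder: scratch tablespace
    ownname        => user,
    tabname        => 'OBJ_UNCOMP',
    partname       => null,
    comptype       => dbms_compression.comp_for_oltp,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  dbms_output.put_line('estimated compression ratio: ' || l_cmp_ratio);
end;
/
```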


3.) Performance tests

First, I did a full table scan on both tables. To rule out buffer-cache effects, I flushed the buffer cache first.

SQL> alter system flush buffer_cache;
System altered.

SQL> select * from obj_uncomp where object_id < 800;
4554 rows selected.
Elapsed: 00:00:02.72

SQL> select * from obj_comp where object_id < 800;
4554 rows selected.
Elapsed: 00:00:03.75

As you can see, the select on the compressed table took a bit longer. When we enable tracing and analyze the trace file with tkprof, we can see the reason for this.
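Tracing was enabled along these lines; a sketch (the tracefile_identifier value is arbitrary, and the trace file path depends on your instance):

```sql
-- in the session that runs the test queries
alter session set tracefile_identifier = 'comp_test';
alter session set sql_trace = true;

-- ... run the test statements ...

alter session set sql_trace = false;
```

Afterwards the raw trace file is formatted on the OS side, e.g. with `tkprof <tracefile>.trc tkprof_out.txt sys=no`.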
First, the uncompressed table:

select *
from
 obj_uncomp where object_id < 800

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          1          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch      610      0.17       0.14       2261       5136          0        9108
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      614      0.17       0.14       2261       5137          0        9108

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 91

Rows     Row Source Operation
-------  ---------------------------------------------------
   4554  TABLE ACCESS FULL OBJ_UNCOMP (cr=2568 pr=0 pw=0 time=14959 us cost=617 size=2346552 card=11336)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                     610        0.00          0.00
  SQL*Net message from client                   610       22.90         32.90
  db file sequential read                         1        0.00          0.00
  db file scattered read                         50        0.00          0.01

And, for comparison, the compressed table:

select *
from
 obj_comp where object_id < 800

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.01       0.00          0          1          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch      610      0.19       0.19        846       2302          0        9108
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      614      0.20       0.19        846       2303          0        9108

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 91

Rows     Row Source Operation
-------  ---------------------------------------------------
   4554  TABLE ACCESS FULL OBJ_COMP (cr=1151 pr=846 pw=0 time=258870 us cost=240 size=367632 card=1776)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                     610        0.00          0.00
  db file sequential read                         4        0.00          0.00
  db file scattered read                         28        0.00          0.00
  SQL*Net message from client                   609       16.34         21.47


So we do in fact save a lot of disk reads here, but at the price of additional CPU time. That makes sense, because Oracle needs the CPU to decompress the data before presenting it. If you are low on CPU, as I seemed to be here, the price surpasses the gain.

We get similar results on DML:

SQL> insert into obj_uncomp select * from dba_objects where rownum < 2001;
2000 rows created.
Elapsed: 00:00:00.09

SQL> insert into obj_comp select * from dba_objects where rownum < 2001;
2000 rows created.
Elapsed: 00:00:00.19

SQL> update obj_uncomp set owner='SOMEONE' where owner='SYS';
20 rows updated.
Elapsed: 00:00:00.53

SQL> update obj_comp set owner='SOMEONE' where owner='SYS';
20 rows updated.
Elapsed: 00:00:02.30

SQL> delete obj_uncomp where owner='SOMEONE';
69634 rows deleted.
Elapsed: 00:00:03.86

SQL> delete obj_comp where owner='SOMEONE';
69633 rows deleted.
Elapsed: 00:00:05.19

Every time, execution on the compressed table took a bit longer than on its uncompressed counterpart. tkprof always shows the same picture: a trade-off between savings in disk I/O and overhead in CPU time. And in my case, CPU simply lost.

4.) Conclusion

Compression is indeed a very interesting feature.

The savings in hard disk space are undeniable and a big plus.
In terms of performance, it depends on how your system is configured. If your performance bottleneck is disk I/O, you will almost certainly benefit from compression, because it saves a lot of disk reads. If, on the other hand, you are short on CPU, you may not.
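To get a rough idea which side of that trade-off your system is on, you can compare cumulative I/O wait time with the CPU time consumed by the database. A sketch (both views exist from 10g on; time_waited is in centiseconds, the time model values in microseconds):

```sql
-- cumulative wait time per wait class since instance start
select wait_class, time_waited
from   v$system_wait_class
order  by time_waited desc;

-- CPU time and overall DB time from the time model
select stat_name, value
from   v$sys_time_model
where  stat_name in ('DB CPU', 'DB time');
```

If the User I/O wait classes dwarf DB CPU, compression is likely a net win; if DB CPU already dominates, the decompression overhead seen above will hurt more.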
