Feed aggregator

Refreshable Clone PDB -- 4 Converting it to a Read Write PDB

Hemant K Chitale - 8 hours 37 min ago

As demonstrated in my first post on Refreshable Clone PDBs, the Clone PDB can be opened only in READ ONLY mode.  This is akin to a Basic Materialized View or a Read Only Standby Database, either of which is updated from the source table(s) / database.

Such a Refreshable Clone PDB is useful as a Reporting Database where you can run queries / extracts -- with data AS OF the last Refresh -- without putting load on the running Production (Transactional, Read Write Database).

But if you want to (finally) open the Clone PDB in Read Write mode (and sever any links with the source, thus preventing further refreshes), you can do so by changing the Refresh Mode.



SQL> alter pluggable database ro_pdb open read only;

Pluggable database altered.

SQL> connect hemant/newhemant@ro_pdb
Connected.
SQL> select count(*) from list_of_objects;

COUNT(*)
----------
73645

SQL> connect / as sysdba
Connected.

SQL> alter pluggable database ro_pdb close; -- close it so that I can do a REFRESH

Pluggable database altered.

SQL>
SQL> alter pluggable database ro_pdb refresh; -- execute the REFRESH

Pluggable database altered.

SQL> alter pluggable database ro_pdb open ; -- attempt to OPEN, defaulting to READ WRITE mode
alter pluggable database ro_pdb open
*
ERROR at line 1:
ORA-65341: cannot open pluggable database in read/write mode


SQL> alter pluggable database ro_pdb open read write;
alter pluggable database ro_pdb open read write
*
ERROR at line 1:
ORA-65341: cannot open pluggable database in read/write mode


SQL>
SQL> alter pluggable database ro_pdb open read only; -- open READ ONLY

Pluggable database altered.

SQL> connect hemant/newhemant@ro_pdb
Connected.
SQL> select count(*) from list_of_objects; -- verify that the database has been refreshed (new rows visible in the source table)

COUNT(*)
----------
83645

SQL>
SQL> connect / as sysdba
Connected.
SQL> alter pluggable database ro_pdb close; -- close it again

Pluggable database altered.

SQL>
SQL> alter pluggable database ro_pdb refresh mode none ; -- ***DISABLE FURTHER REFRESH's***

Pluggable database altered.

SQL>
SQL> alter pluggable database ro_pdb open read write; -- open in READ WRITE mode now !!

Pluggable database altered.

SQL>
SQL> connect hemant/newhemant@ro_pdb
Connected.
SQL> select count(*) from list_of_objects;

COUNT(*)
----------
83645

SQL> delete list_of_objects where owner = 'HEMANT'; -- proof that the database is now WRITABLE

32 rows deleted.

SQL> commit;

Commit complete.

SQL>
SQL> alter pluggable database ro_pdb close;

Pluggable database altered.

SQL> alter pluggable database ro_pdb refresh; -- check if it can be REFRESH'ed from the source
alter pluggable database ro_pdb refresh
*
ERROR at line 1:
ORA-65261: pluggable database RO_PDB not enabled for refresh


SQL>


Thus, to make the PDB WRITABLE, REFRESH has to be disabled.
Once REFRESH is disabled, the PDB can no longer be refreshed from the source; it has diverged from the source.
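
For reference, here is the whole sequence condensed into one script. This is only a minimal sketch of the steps demonstrated above (my own recap, assuming the same PDB name RO_PDB and that you accept the divergence from the source):

#!/bin/bash
# Minimal sketch: refresh RO_PDB one last time, then cut it over to READ WRITE.
# Assumes the environment of the CDB (ORACLE_SID, ORACLE_HOME) is already set.
sqlplus -s / as sysdba <<'EOF'
alter pluggable database ro_pdb close;
alter pluggable database ro_pdb refresh;            -- final refresh from the source
alter pluggable database ro_pdb refresh mode none;  -- disable further refreshes
alter pluggable database ro_pdb open read write;    -- now allowed
EOF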

These are the alert log messages when REFRESH is disabled:


2022-01-23T15:35:55.766486+08:00
alter pluggable database ro_pdb refresh mode none
2022-01-23T15:35:55.846041+08:00
RO_PDB(6):Pluggable database RO_PDB pseudo opening
RO_PDB(6):SUPLOG: Initialize PDB SUPLOG SGA, old value 0x0, new value 0x18
RO_PDB(6):Autotune of undo retention is turned on.
RO_PDB(6):Endian type of dictionary set to little
RO_PDB(6):Undo initialization recovery: Parallel FPTR failed: start:1511350 end:1511367 diff:17 ms (0.0 seconds)
RO_PDB(6):Undo initialization recovery: err:0 start: 1511329 end: 1511464 diff: 135 ms (0.1 seconds)
RO_PDB(6):[6305] Successfully onlined Undo Tablespace 2.
RO_PDB(6):Undo initialization online undo segments: err:0 start: 1511465 end: 1511507 diff: 42 ms (0.0 seconds)
RO_PDB(6):Undo initialization finished serial:0 start:1511329 end:1511509 diff:180 ms (0.2 seconds)
RO_PDB(6):Database Characterset for RO_PDB is AL32UTF8
2022-01-23T15:35:57.144146+08:00
RO_PDB(6):Pluggable database RO_PDB pseudo closing
RO_PDB(6):JIT: pid 6305 requesting stop
RO_PDB(6):Closing sequence subsystem (1512195729).
RO_PDB(6):Buffer Cache flush started: 6
RO_PDB(6):Buffer Cache flush finished: 6
Completed: alter pluggable database ro_pdb refresh mode none


And these are the messages when it is OPENed in READ WRITE mode:



2022-01-23T15:36:56.085938+08:00
alter pluggable database ro_pdb open read write
2022-01-23T15:36:56.088166+08:00
RO_PDB(6):Pluggable database RO_PDB opening in read write
RO_PDB(6):SUPLOG: Initialize PDB SUPLOG SGA, old value 0x0, new value 0x18
RO_PDB(6):Autotune of undo retention is turned on.
RO_PDB(6):Endian type of dictionary set to little
RO_PDB(6):Undo initialization recovery: Parallel FPTR complete: start:1571711 end:1571732 diff:21 ms (0.0 seconds)
RO_PDB(6):Undo initialization recovery: err:0 start: 1571710 end: 1571732 diff: 22 ms (0.0 seconds)
2022-01-23T15:36:58.249919+08:00
RO_PDB(6):[6305] Successfully onlined Undo Tablespace 2.
RO_PDB(6):Undo initialization online undo segments: err:0 start: 1571732 end: 1573115 diff: 1383 ms (1.4 seconds)
RO_PDB(6):Undo initialization finished serial:0 start:1571710 end:1573161 diff:1451 ms (1.5 seconds)
RO_PDB(6):Deleting old file#9 from file$
RO_PDB(6):Deleting old file#10 from file$
RO_PDB(6):Deleting old file#11 from file$
RO_PDB(6):Deleting old file#12 from file$
RO_PDB(6):Deleting old file#26 from file$
RO_PDB(6):Deleting old file#31 from file$
RO_PDB(6):Deleting old file#32 from file$
RO_PDB(6):Deleting old file#33 from file$
RO_PDB(6):Deleting old file#34 from file$
RO_PDB(6):Adding new file#50 to file$(old file#9). fopr-1, newblks-48640, oldblks-19200
RO_PDB(6):Adding new file#51 to file$(old file#10). fopr-1, newblks-61440, oldblks-15360
RO_PDB(6):Adding new file#52 to file$(old file#11). fopr-1, newblks-58240, oldblks-12800
RO_PDB(6):Adding new file#53 to file$(old file#12). fopr-1, newblks-46880, oldblks-640
RO_PDB(6):Adding new file#54 to file$(old file#26). fopr-1, newblks-12800, oldblks-12800
RO_PDB(6):Adding new file#56 to file$(old file#31). fopr-1, newblks-12800, oldblks-12800
RO_PDB(6):Adding new file#55 to file$(old file#32). fopr-1, newblks-12800, oldblks-12800
RO_PDB(6):Adding new file#57 to file$(old file#34). fopr-1, newblks-115200, oldblks-12800
RO_PDB(6):Successfully created internal service RO_PDB at open
2022-01-23T15:36:59.694482+08:00
****************************************************************
Post plug operations are now complete.
Pluggable database RO_PDB with pdb id - 6 is now marked as NEW.
****************************************************************
RO_PDB(6):Pluggable database RO_PDB dictionary check beginning
RO_PDB(6):Pluggable Database RO_PDB Dictionary check complete
RO_PDB(6):Database Characterset for RO_PDB is AL32UTF8
2022-01-23T15:37:05.752451+08:00
RO_PDB(6):JIT: pid 6305 requesting full stop
2022-01-23T15:37:11.910855+08:00
RO_PDB(6):SUPLOG: Set PDB SUPLOG SGA at PDB OPEN, old 0x18, new 0x0 (no suplog)
2022-01-23T15:37:20.655852+08:00
RO_PDB(6):Opening pdb with no Resource Manager plan active
RO_PDB(6):joxcsys_required_dirobj_exists: directory object exists with required path /opt/oracle/product/19c/dbhome_1/javavm/admin/, pid 6305 cid 6
2022-01-23T15:37:22.578807+08:00
Pluggable database RO_PDB opened read write
Completed: alter pluggable database ro_pdb open read write


Note : To understand FILE# values, see my previous post on Datafile Names for Refresh Clone PDB.
(Some of the FILE# values have changed since that post because I have since added a new tablespace in the source database and recreated RO_PDB as a Refreshable Clone PDB.)

Categories: DBA Blogs

SIMPLIFY A COMPLICATED PROCESS USING SED

Michael Dinh - Fri, 2022-01-21 22:08

For every PDB, there is a Perl script used to report tablespace free space for that PDB.

While I am not able to change how the process was implemented, I can make it easier.

Here is the current process.

Edit the script "tablespace_free_PDB_NAME.pl".

Change the PDB name at the below SQL command:
"alter session set container=<Your PDB name_1>"
 
Rename the script to match your PDB name 
like "tablespace_free_<Your PDB name>.pl".

The above instructions work, but the process is time-consuming, not scalable, and error prone.

Here’s a demo of how to simplify the process.

1. Create template temp_tablespace_free_PDB.pl:

[oracle@ol7-19-dg1 ~]$ cat temp_tablespace_free_PDB.pl
alter session set container=vPDB;

2. Export variable PDB with <Your PDB name>

[oracle@ol7-19-dg1 ~]$ export PDB=SOAP

3. Create tablespace_free_<Your PDB name>.pl

[oracle@ol7-19-dg1 ~]$ ls tablespace_free_$PDB.pl
ls: cannot access tablespace_free_SOAP.pl: No such file or directory

[oracle@ol7-19-dg1 ~]$ sed "s/vPDB/$PDB/g" temp_tablespace_free_PDB.pl > tablespace_free_$PDB.pl

[oracle@ol7-19-dg1 ~]$ ls tablespace_free_$PDB.pl
tablespace_free_SOAP.pl
[oracle@ol7-19-dg1 ~]$

[oracle@ol7-19-dg1 ~]$ cat tablespace_free_$PDB.pl
alter session set container=SOAP;
[oracle@ol7-19-dg1 ~]$

The above solution is better but far from perfect.

If there are a dozen PDBs to implement, then the manual work will have to be done a dozen times.

Here is an example using an array and a for loop.

There are 2 PDBs: SOAP and SCUM.

Here is the template:

[oracle@ol7-19-dg1 ~]$ cat temp_tablespace_free_PDB.pl
alter session set container=vPDB;
select sysdate from dual;
[oracle@ol7-19-dg1 ~]$

1. Create script to loop through list of PDBs:

[oracle@ol7-19-dg1 ~]$ cat create_tablespace_free_PDB.sh
#!/bin/bash
array=( SOAP SCUM )
for i in "${array[@]}"
do
  echo "$i"
  export PDB=$i
  sed "s/vPDB/$PDB/g" temp_tablespace_free_PDB.pl > tablespace_free_$PDB.pl
  ls -l tablespace_free_$PDB.pl
done
exit
[oracle@ol7-19-dg1 ~]$

2. Run create_tablespace_free_PDB.sh:

[oracle@ol7-19-dg1 ~]$ ./create_tablespace_free_PDB.sh
SOAP
-rw-r--r--. 1 oracle oinstall 60 Jan 22 03:52 tablespace_free_SOAP.pl
SCUM
-rw-r--r--. 1 oracle oinstall 60 Jan 22 03:52 tablespace_free_SCUM.pl
[oracle@ol7-19-dg1 ~]$

3. Review results:

[oracle@ol7-19-dg1 ~]$ cat tablespace_free_SOAP.pl
alter session set container=SOAP;
select sysdate from dual;
[oracle@ol7-19-dg1 ~]$

[oracle@ol7-19-dg1 ~]$ cat tablespace_free_SCUM.pl
alter session set container=SCUM;
select sysdate from dual;
[oracle@ol7-19-dg1 ~]$
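
Going one step further (my own variation, not part of the original post), the PDB list itself could be pulled from the CDB instead of being hard-coded in the array. A minimal sketch, assuming a local SYSDBA connection on the host and the same template file:

#!/bin/bash
# Sketch: build the PDB list from v$pdbs instead of hard-coding it.
pdbs=$(sqlplus -s / as sysdba <<'EOF'
set heading off feedback off pagesize 0
select name from v$pdbs where name <> 'PDB$SEED';
EOF
)
for PDB in $pdbs
do
  echo "$PDB"
  sed "s/vPDB/$PDB/g" temp_tablespace_free_PDB.pl > tablespace_free_$PDB.pl
done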

Oracle Database Appliance (ODA): Adjusting to the new ORACLE_BASE-setting from 19.11. onwards

Yann Neuhaus - Fri, 2022-01-21 15:58
By Clemens Bleile

From Oracle Database Appliance (ODA) version 19.11 onwards, the ORACLE_HOMEs and the ORACLE_BASE are created on ACFS, i.e. they take space from an ASM diskgroup instead of taking filesystem space from /u01. See this blog for details.

If the ODA has been upgraded from a release below 19.11 to 19.11 or newer, then you usually have diag and admin data in 2 ORACLE_BASE locations:

/u01/app/oracle

for databases created before migrating to ODA 19.11. or newer and

/u01/app/odaorabase/oracle

for databases created after the migration to ODA 19.11. or newer.

An ORACLE_HOME is associated with an ORACLE_BASE. This can be shown by running the command orabase:

oracle@dbi-oda-x8:/home/oracle/ [rdbms191300] orabase
/u01/app/odaorabase/oracle

orabase takes its info from

$ORACLE_HOME/install/orabasetab
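
To quickly see what is recorded there for a given home, a simple read-only check (my own suggestion) is enough:

# show the ORACLE_BASE recorded for this ORACLE_HOME
cat $ORACLE_HOME/install/orabasetab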

Migrating databases created before ODA 19.11 to an ORACLE_HOME created after the installation of ODA 19.11 causes the diagnostics and admin data to remain in the “old” ORACLE_BASE /u01/app/oracle, while the command orabase points to the “new” ORACLE_BASE /u01/app/odaorabase/oracle. Tools which rely on a global ORACLE_BASE setting may have problems handling 2 ORACLE_BASE locations on a machine.
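
A quick way to spot databases that still keep their diagnostic data below the “old” ORACLE_BASE is to compare the two diag trees (my own check, not part of any ODA tooling):

# databases with diagnostic data below the "old" ORACLE_BASE
ls -l /u01/app/oracle/diag/rdbms/
# databases with diagnostic data below the "new" ORACLE_BASE
ls -l /u01/app/odaorabase/oracle/diag/rdbms/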

REMARK: The dbi-services tool DMK Management Kit can handle the situation of different ORACLE_BASE locations on a machine by specifying the ORACLE_BASE location per database in separate sections of $DMK_HOME/etc/dmk.conf.

The following instructions show the steps to move the diag and admin data of a database from the “old” ORACLE_BASE /u01/app/oracle to the “new” ORACLE_BASE /u01/app/odaorabase/oracle:

E.g. assume we have a Database CBLTEST which has an ORACLE_BASE /u01/app/oracle and a newly created database CBLTEST2 with an ORACLE_BASE /u01/app/odaorabase0:

[root@dbi-oda-x8 log]# odacli list-databases
 
ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID                                
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
f9e51c56-fcd2-47a3-bcb4-56587a710f1e     CBLTEST    SI       19.9.0.0.201020      false      OLTP     odb1     ASM        CONFIGURED   d9d1b8dd-4abc-46fe-aae4-d52162d66dab    
34fd593b-7586-41c9-a58d-71dfc0cf1d90     CBLTEST2   SI       19.13.0.0.211019     true       OLTP     odb1s    ASM        CONFIGURED   4dda330d-feed-491b-bace-006c22b75672

After bringing CBLTEST to 19.13. with

# odacli update-dbhome -i d9d1b8dd-4abc-46fe-aae4-d52162d66dab -v 19.13.0.0.0

it has the following ORACLE_BASE-setting:

oracle@dbi-oda-x8:/home/oracle/ [CBLTEST] orabase
/u01/app/odaorabase/oracle

but the diagnostics data is still below the “old” ORACLE_BASE.

Here are the steps to copy your diag and admin data for CBLTEST to the new ORACLE_BASE:

REMARK 1: The situation with Read-Only ORACLE_HOMEs is not covered here.
REMARK 2: Before doing this in production you should test it carefully on a test system. If in doubt, open a Service Request with Oracle.

1. Change diagnostic_dest and the audit_file_dest in the spfile

sqlplus / as sysdba
SQL> alter system set diagnostic_dest='/u01/app/odaorabase/oracle' scope=spfile;
SQL> alter system set audit_file_dest='/u01/app/odaorabase/oracle/admin/CBLTEST/adump' scope=spfile;

2. Shutdown the DB

srvctl stop database -db CBLTEST

3. Copy the rdbms-diag-data to new ORACLE_BASE

cp -pR /u01/app/oracle/diag/rdbms/cbltest /u01/app/odaorabase/oracle/diag/rdbms

4. Copy the admin-directory to the new ORACLE_BASE

cp -pR /u01/app/oracle/admin/CBLTEST /u01/app/odaorabase/oracle/admin

5. Copy the audit data (spillover files from unified auditing).

cp -pR /u01/app/oracle/audit/CBLTEST /u01/app/odaorabase/oracle/audit

6. Change ORACLE_BASE to the new value and adjust your scripts which set ORACLE_BASE. E.g. for DMK, adjust $DMK_HOME/etc/dmk.conf

export ORACLE_BASE=/u01/app/odaorabase/oracle

REMARK: You may check if the environment variable ORACLE_BASE has been stored in the Oracle Cluster registry:

oracle@dbi-oda-x8:/home/oracle/ [CBLTEST] srvctl getenv database -d CBLTEST -t "ORACLE_BASE"
CBLTEST:
PRKF-1128 : Environment variable ORACLE_BASE is not defined.

If it has been set (which would be very unusual on an ODA), then you can overwrite it with

oracle@dbi-oda-x8:/home/oracle/ [CBLTEST] srvctl setenv database -d CBLTEST -t "ORACLE_BASE=/u01/app/odaorabase/oracle"
oracle@dbi-oda-x8:/home/oracle/ [CBLTEST] srvctl getenv database -d CBLTEST -t "ORACLE_BASE"
CBLTEST:
ORACLE_BASE=/u01/app/odaorabase/oracle

7. Startup the DB

srvctl start database -db CBLTEST

8. Check that everything is correct

8.1. Re-login as oracle, set the environment and connect “as sysdba”. Here with DMK sourced:

oracle@dbi-oda-x8:/home/oracle/ [rdbms191300] CBLTEST
********* dbi services Ltd. *********
STATUS                 : OPEN
DB_UNIQUE_NAME         : CBLTEST
OPEN_MODE              : READ WRITE
LOG_MODE               : ARCHIVELOG
DATABASE_ROLE          : PRIMARY
FLASHBACK_ON           : NO
FORCE_LOGGING          : YES
VERSION                : 19.13.0.0.0
CDB Enabled            : NO
*************************************
oracle@dbi-oda-x8:/home/oracle/ [CBLTEST] echo $ORACLE_BASE
/u01/app/odaorabase/oracle
oracle@dbi-oda-x8:/home/oracle/ [CBLTEST] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jan 21 20:51:48 2022
Version 19.13.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.13.0.0.0

SQL> select value from v$diag_info where value like '/u01%';

VALUE
---------------------------------------------------------------------------------
/u01/app/odaorabase/oracle
/u01/app/odaorabase/oracle/diag/rdbms/cbltest/CBLTEST
/u01/app/odaorabase/oracle/diag/rdbms/cbltest/CBLTEST/trace
/u01/app/odaorabase/oracle/diag/rdbms/cbltest/CBLTEST/alert
/u01/app/odaorabase/oracle/diag/rdbms/cbltest/CBLTEST/incident
/u01/app/odaorabase/oracle/diag/rdbms/cbltest/CBLTEST/cdump
/u01/app/odaorabase/oracle/diag/rdbms/cbltest/CBLTEST/hm
/u01/app/odaorabase/oracle/diag/rdbms/cbltest/CBLTEST/trace/CBLTEST_ora_45735.trc
/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_4

9 rows selected.

SQL> quit
Disconnected from Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.13.0.0.0
oracle@dbi-oda-x8:/home/oracle/ [CBLTEST] cda
oracle@dbi-oda-x8:/u01/app/odaorabase/oracle/admin/CBLTEST/ [CBLTEST] ls -l
total 364
drwxr-x--- 2 oracle oinstall 20480 Jan 13 22:01 adump
drwxr-xr-x 2 oracle oinstall 20480 Jan 13 22:48 backup
drwxr-x--- 2 oracle oinstall 20480 Jan 13 12:27 dpdump
drwxr-xr-x 2 oracle oinstall 20480 Jan 13 18:01 etc
drwxr-xr-x 2 oracle oinstall 20480 Jan 13 22:48 log
drwxr-x--- 2 oracle oinstall 20480 Jan 13 12:29 pfile
drwxr-x--- 2 oracle oinstall 20480 Jan 13 12:23 xdb_wallet
oracle@dbi-oda-x8:/u01/app/odaorabase/oracle/admin/CBLTEST/ [CBLTEST] 

8.2. Check adrci

oracle@dbi-oda-x8:/home/oracle/ [CBLTEST] adrci

ADRCI: Release 19.0.0.0.0 - Production on Fri Jan 21 19:51:30 2022

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/u01/app/odaorabase/oracle"
adrci> show base
ADR base is "/u01/app/odaorabase/oracle"
adrci> 

-> The base setting in adrci comes from the file $ORACLE_HOME/log/diag/adrci_dir.mif
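
If adrci still showed the old base after such a move, that file would be the first thing to check. A simple read-only look at it (my own suggestion):

# show where adrci takes its ADR base from
cat $ORACLE_HOME/log/diag/adrci_dir.mif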

9. Cleanup

If everything is correct, then you may delete the data from the “old” ORACLE_BASE:

rm -rf /u01/app/oracle/diag/rdbms/cbltest
rm -rf /u01/app/oracle/admin/CBLTEST
rm -rf /u01/app/oracle/audit/CBLTEST
References

https://marcelpils.de/orabase-doesnt-show-the-current-oracle-base-path
https://www.thegeekdiary.com/oracle-database-how-to-set-environment-variables-using-srvctl
https://logic.edchen.org/how-to-resolve-no-adr-base-is-set

MOS Notes

How To Change The Value For ORACLE_BASE In The Inventory Of A 12.1 RDBMS Home (Doc ID 2010941.1)
orabase command returns no value instead of ORACLE_BASE value (Doc ID 2225573.1)

The article Oracle Database Appliance (ODA): Adjusting to the new ORACLE_BASE-setting from 19.11. onwards first appeared on Blog dbi services.

Patching an IBM Db2 instance

Yann Neuhaus - Fri, 2022-01-21 13:12

The next task in my journey with DB2 is to apply a patch. In the world of DB2 there are Modification Packs and Fix Packs. As far as I understand it, Modification Packs come with new features, while Fix Packs come with bug fixes only. Keeping your database installations up to date is important; that applies to all products. As my system is running 11.5, the page which gives me all the information is this one, and the downloads are here. Let's go.

I’ll use the installation from the last post, which means this is a root installation. The current version of my Db2 instance is 11.5.6.0:

db2inst1@sles15-db2-root:~> db2ls

Install Path                       Level   Fix Pack   Special Install Number   Install Date                  Installer UID 
---------------------------------------------------------------------------------------------------------------------
/opt/ibm/db2/V11.5               11.5.6.0        0                            Thu Jan 20 12:05:28 2022 CET             0 

After downloading the latest pack and putting it in the home directory of the root user it looks like this:

db2 => quit
DB20000I  The QUIT command completed successfully.
db2inst1@sles15-db2-root:~> su -
Password: 
sles15-db2-root:~ $ ls -l
total 1826112
-rw------- 1 root root         61 Jan 20 13:14 .Xauthority
-rw------- 1 root root       2376 Jan 20 15:25 .bash_history
drwx------ 1 root root          0 Mar  3  2021 .gnupg
-rw------- 1 root root       1402 Jan 20 15:25 .viminfo
drwxr-xr-x 1 root root          0 Mar  3  2021 bin
drwxr-xr-x 1 root root         36 Jul  9  2021 inst-sys
-rw-r--r-- 1 root root 1869925829 Jan 21 09:42 v11.5.7_linuxx64_server_dec.tar.gz

Not much different from the initial installation. Once you extract it, you'll get the "server_dec" directory:

sles15-db2-root:~ $ tar axf v11.5.7_linuxx64_server_dec.tar.gz 
sles15-db2-root:~ $ ls -l
total 1826112
-rw------- 1 root root         61 Jan 20 13:14 .Xauthority
-rw------- 1 root root       2376 Jan 20 15:25 .bash_history
drwx------ 1 root root          0 Mar  3  2021 .gnupg
-rw------- 1 root root       1402 Jan 20 15:25 .viminfo
drwxr-xr-x 1 root root          0 Mar  3  2021 bin
drwxr-xr-x 1 root root         36 Jul  9  2021 inst-sys
drwxr-xr-x 1 root root        240 Nov 22 19:36 server_dec
-rw-r--r-- 1 root root 1869925829 Jan 21 09:42 v11.5.7_linuxx64_server_dec.tar.gz
sles15-db2-root:~ $ rm v11.5.7_linuxx64_server_dec.tar.gz  

Inside this directory you'll find "installFixPack", which is a plain shell script:

sles15-db2-root:~ # cd server_dec/
sles15-db2-root:~/server_dec # ls -l
total 84
drwxr-xr-x 1 bin bin   112 Nov 22 19:36 db2
-r-xr-xr-x 1 bin bin  5207 Nov 22 19:36 db2_deinstall
-r-xr-xr-x 1 bin bin  5077 Nov 22 19:36 db2_install
-r--r--r-- 1 bin bin 16809 Nov 22 19:36 db2checkCOL.tar.gz
-r--r--r-- 1 bin bin  4987 Nov 22 19:36 db2checkCOL_readme.txt
-r-xr-xr-x 1 bin bin  5254 Nov 22 19:36 db2ckupgrade
-r-xr-xr-x 1 bin bin  5041 Nov 22 19:36 db2ls
-r-xr-xr-x 1 bin bin  5059 Nov 22 19:36 db2prereqcheck
-r-xr-xr-x 1 bin bin  5059 Nov 22 19:36 db2setup
-r-xr-xr-x 1 bin bin  5095 Nov 22 19:36 installFixPack

The parameters you need to pass to that script are the current installation location and a new directory where the pack gets installed. Let's try:

sles15-db2-root:~/server_dec $ ./installFixPack -b /opt/ibm/db2/V11.5/ -p /opt/ibm/db2/fixpack/
Requirement not matched for DB2 database "Server" . Version: "11.5.7.0". 

Summary of prerequisites that are not met on the current system: 

DBT3507E  The db2prereqcheck utility failed to find the following package or file: "mksh". 


DBT3619W  The db2prereqcheck utility detected that ksh is not linked to ksh or ksh93. This is required for Db2 High Availability Feature with Tivoli SA MP. 

  Aborting the current installation ...
  Run installation with the option "-f sysreq" parameter to force the installation.

Ok, apparently we need a KornShell to install that:

sles15-db2-root:~/server_dec $ zypper in -y mksh ksh

Next try:

sles15-db2-root:~/server_dec $ ./installFixPack -b /opt/ibm/db2/V11.5/ -p /opt/ibm/db2/fixpack/
Read the license agreement file in the db2/license directory.

***********************************************************
To accept those terms, enter "yes". Otherwise, enter "no" to cancel the install process. [yes/no]
yes
DBI1017I  installFixPack is updating the database products installed in
      location /opt/ibm/db2/V11.5/.


DB2 installation is being initialized.

 Total number of tasks to be performed: 53 
Total estimated time for all tasks to be performed: 2552 second(s) 

Task #1 start
Description: Preparing the system 
Estimated time 120 second(s) 
Task #1 end 

Task #2 start
Description: Base Client Support for installation with root privileges 
Estimated time 3 second(s) 
Task #2 end 

Task #3 start
Description: Product Messages - English 
Estimated time 14 second(s) 
Task #3 end 

Task #4 start
Description: Base client support 
Estimated time 360 second(s) 
Task #4 end 

Task #5 start
Description: Java Runtime Support 
Estimated time 215 second(s) 
Task #5 end 

Task #6 start
Description: Java Help (HTML) - English 
Estimated time 7 second(s) 
Task #6 end 

Task #7 start
Description: Base server support for installation with root privileges 
Estimated time 6 second(s) 
Task #7 end 

Task #8 start
Description: Global Secure ToolKit 
Estimated time 76 second(s) 
Task #8 end 

Task #9 start
Description: Java support 
Estimated time 11 second(s) 
Task #9 end 

Task #10 start
Description: SQL procedures 
Estimated time 3 second(s) 
Task #10 end 

Task #11 start
Description: ICU Utilities 
Estimated time 59 second(s) 
Task #11 end 

Task #12 start
Description: Java Common files 
Estimated time 18 second(s) 
Task #12 end 

Task #13 start
Description: Base server support 
Estimated time 610 second(s) 
Task #13 end 

Task #14 start
Description: Control Center Help (HTML) - English 
Estimated time 13 second(s) 
Task #14 end 

Task #15 start
Description: Relational wrappers common 
Estimated time 3 second(s) 
Task #15 end 

Task #16 start
Description: DB2 data source support 
Estimated time 6 second(s) 
Task #16 end 

Task #17 start
Description: ODBC data source support 
Estimated time 260 second(s) 
Task #17 end 

Task #18 start
Description: Teradata data source support 
Estimated time 3 second(s) 
Task #18 end 

Task #19 start
Description: Spatial Extender server support 
Estimated time 21 second(s) 
Task #19 end 

Task #20 start
Description: Scientific Data Sources 
Estimated time 5 second(s) 
Task #20 end 

Task #21 start
Description: JDBC data source support 
Estimated time 185 second(s) 
Task #21 end 

Task #22 start
Description: IBM Software Development Kit (SDK) for Java(TM) 
Estimated time 49 second(s) 
Task #22 end 

Task #23 start
Description: DB2 LDAP support 
Estimated time 4 second(s) 
Task #23 end 

Task #24 start
Description: DB2 Instance Setup wizard 
Estimated time 23 second(s) 
Task #24 end 

Task #25 start
Description: Structured file data sources 
Estimated time 5 second(s) 
Task #25 end 

Task #26 start
Description: Integrated Flash Copy Support 
Estimated time 3 second(s) 
Task #26 end 

Task #27 start
Description: Oracle data source support 
Estimated time 4 second(s) 
Task #27 end 

Task #28 start
Description: Connect support 
Estimated time 3 second(s) 
Task #28 end 

Task #29 start
Description: Application data sources 
Estimated time 4 second(s) 
Task #29 end 

Task #30 start
Description: Spatial Extender client 
Estimated time 3 second(s) 
Task #30 end 

Task #31 start
Description: SQL Server data source support 
Estimated time 4 second(s) 
Task #31 end 

Task #32 start
Description: Communication support - TCP/IP 
Estimated time 3 second(s) 
Task #32 end 

Task #33 start
Description: Base application development tools 
Estimated time 36 second(s) 
Task #33 end 

Task #34 start
Description: Parallel Extension 
Estimated time 3 second(s) 
Task #34 end 

Task #35 start
Description: EnterpriseDB code 
Estimated time 3 second(s) 
Task #35 end 

Task #36 start
Description: Replication tools 
Estimated time 60 second(s) 
Task #36 end 

Task #37 start
Description: Sample database source 
Estimated time 4 second(s) 
Task #37 end 

Task #38 start
Description: itlm 
Estimated time 3 second(s) 
Task #38 end 

Task #39 start
Description: DB2 Text Search 
Estimated time 108 second(s) 
Task #39 end 

Task #40 start
Description: Command Line Processor Plus 
Estimated time 8 second(s) 
Task #40 end 

Task #41 start
Description: Sybase data source support 
Estimated time 3 second(s) 
Task #41 end 

Task #42 start
Description: Informix data source support 
Estimated time 4 second(s) 
Task #42 end 

Task #43 start
Description: Federated Data Access Support 
Estimated time 3 second(s) 
Task #43 end 

Task #44 start
Description: First Steps 
Estimated time 3 second(s) 
Task #44 end 

Task #45 start
Description: Pacemaker 
Estimated time 100 second(s) 
Task #45 end 

Task #46 start
Description: Product Signature for DB2 Server Edition 
Estimated time 6 second(s) 
Task #46 end 

Task #47 start
Description: Guardium Installation Manager Client 
Estimated time 36 second(s) 
Task #47 end 

Task #48 start
Description: Setting DB2 library path 
Estimated time 180 second(s) 
Task #48 end 

Task #49 start
Description: Installing or updating Db2 resource agent scripts for Pacemaker 
Estimated time 20 second(s) 
Task #49 end 

Task #50 start
Description: Executing control tasks 
Estimated time 20 second(s) 
Task #50 end 

Task #51 start
Description: Updating global registry 
Estimated time 20 second(s) 
Task #51 end 

Task #52 start
Description: Updating the db2ls and db2greg link 
Estimated time 1 second(s) 
Task #52 end 

Task #53 start
Description: Registering DB2 licenses 
Estimated time 5 second(s) 
Task #53 end 

The execution completed successfully.

For more information see the DB2 installation log at
"/tmp/installFixPack.log.4317".

Very much the same output/steps as with the initial installation. All looks fine. These are all the files that were installed:

sles15-db2-root:~/server_dec $ cd /opt/ibm/db2/fixpack/
sles15-db2-root:/opt/ibm/db2/fixpack $ ls -l
total 4
drwx--x--x 1 root root    30 Jan 21 09:53 .licbkup
drwxr-xr-x 1 root root  1696 Nov 22 19:31 .metadata
drwxr-xr-x 1 bin  bin      6 Nov 22 19:28 R
drwxr-xr-x 1 bin  bin    296 Nov 22 19:27 Readme
drwxr-xr-x 1 bin  bin     18 Nov 22 19:27 acs
drwxr-xr-x 1 bin  bin    734 Nov 22 19:29 adm
drwxr-xr-x 1 bin  bin     76 Nov 22 19:28 adsm
drwxr-xr-x 1 bin  bin   4368 Nov 22 19:28 bin
drwxr-xr-x 1 bin  bin   5024 Nov 22 19:28 bnd
drwxr-xr-x 1 bin  bin   1090 Jan 21 09:53 cfg
drwxr-xr-x 1 bin  bin     10 Nov 22 19:27 conv
drwxr-xr-x 1 bin  bin     92 Nov 22 19:28 das
drwxr-xr-x 1 bin  bin    156 Nov 22 19:28 dasfcn
drwxr-xr-x 1 bin  bin     48 Nov 22 19:29 db2tss
drwxr-xr-x 1 bin  bin     84 Nov 22 19:28 desktop
drwxr-xr-x 1 bin  bin     12 Nov 22 19:28 doc
drwxr-xr-x 1 bin  bin     52 Nov 22 19:28 federation
drwxr-xr-x 1 bin  bin    100 Nov 22 19:28 function
drwxr-xr-x 1 bin  bin     24 Nov 22 19:28 gse
drwxr-xr-x 1 bin  bin     64 Nov 22 19:28 gskit
drwxr-xr-x 1 bin  bin    548 Nov 22 19:28 guardium
drwxr-xr-x 1 bin  bin     34 Nov 22 19:28 ha
drwxr-xr-x 1 bin  bin   2786 Nov 22 19:27 include
drwxr-xr-x 1 bin  bin     54 Nov 22 19:29 infopop
drwxr-xr-x 1 bin  bin    156 Jan 21 09:53 install
drwxr-xr-x 1 bin  bin    722 Nov 22 19:28 instance
drwxr-xr-x 1 bin  bin    352 Nov 22 19:27 java
drwxr-xr-x 1 bin  bin     38 Nov 22 19:27 json
drwxr-xr-x 1 bin  bin   1564 Jan 21 09:52 lib32
drwxr-xr-x 1 bin  bin  10328 Jan 21 09:52 lib64
drwxr-xr-x 1 bin  bin    932 Jan 21 09:53 license
lrwxrwxrwx 1 root root    12 Nov 22 19:27 logs -> install/logs
drwxr-xr-x 1 bin  bin     66 Nov 22 19:28 map
drwxr-xr-x 1 bin  bin    746 Nov 22 19:28 misc
drwxr-xr-x 1 bin  bin     28 Nov 22 19:28 msg
drwxr-xr-x 1 bin  bin    246 Nov 22 19:28 pd
drwxr-xr-x 1 bin  bin     14 Nov 22 19:28 properties
drwxr-xr-x 1 bin  bin     30 Nov 22 19:28 python
drwxr-xr-x 1 bin  bin    368 Nov 22 19:28 samples
drwxr-xr-x 1 bin  bin     12 Nov 22 19:28 security32
drwxr-xr-x 1 bin  bin    108 Nov 22 19:28 security64
drwxr-xr-x 1 bin  bin   1796 Nov 22 19:28 tools

Now we need to patch the instance. This is done with db2iupdt, which is located in the "instance" directory:

sles15-db2-root:/opt/ibm/db2/fixpack/instance $ ./db2iupdt -d db2inst1
DBI1446I  The db2iupdt command is running.


DB2 installation is being initialized.

 The DB2 installer detects that one or more DB2 instances "db2inst1" are still
active. Stop the active instances and rerun the command again.

A major error occurred during the execution that caused this program to
terminate prematurely. If the problem persists, contact your technical service
representative.

For more information see the DB2 installation log at "/tmp/db2iupdt.log.13332".
DBI1264E  This program failed. Errors encountered during execution were
      written to the installation log file. Program name:
      db2iupdt. Log file name: /tmp/db2iupdt.log.13332.

Explanation: 

This message is returned when some processes and operations have failed.
Detailed information about the error was written to the log file.

User response: 

Contact IBM support to get assistance in resolving this issue. Keep the
log file intact as this file is an important reference for IBM support.


   Related information:
   Contacting IBM Software Support

I would have been surprised if you could update the instance without shutting it down first, so this is expected:

db2inst1@sles15-db2-root:~> db2stop
01/21/2022 09:57:39     0   0   SQL1064N  DB2STOP processing was successful.
SQL1064N  DB2STOP processing was successful.
db2inst1@sles15-db2-root:~> ps -ef | grep db2
root      1051  1049  0 09:41 ?        00:00:00 sshd: db2inst1 [priv]
db2inst1  1055     1  0 09:41 ?        00:00:00 /usr/lib/systemd/systemd --user
db2inst1  1056  1055  0 09:41 ?        00:00:00 (sd-pam)
db2inst1  1063  1051  0 09:41 ?        00:00:00 sshd: db2inst1@pts/0
db2inst1  1064  1063  0 09:41 pts/0    00:00:00 -bash
db2inst1  1476     1  0 09:41 pts/0    00:00:00 /home/db2inst1/sqllib/bin/db2bp 1064A1004 5 A
root     14155     1  0 09:56 ?        00:00:00 /opt/ibm/db2/V11.5/bin/db2fmcd
db2inst1 14180 28023  0 09:57 pts/1    00:00:00 ps -ef
db2inst1 14181 28023  0 09:57 pts/1    00:00:00 grep --color=auto db2
root     27014  1049  0 09:52 ?        00:00:00 sshd: db2inst1 [priv]
db2inst1 27995 27014  0 09:52 ?        00:00:00 sshd: db2inst1@pts/1
db2inst1 28023 27995  0 09:52 pts/1    00:00:00 -bash

Next try:

sles15-db2-root:/opt/ibm/db2/fixpack/instance $ ./db2iupdt -d db2inst1
DBI1446I  The db2iupdt command is running.


DB2 installation is being initialized.

 Total number of tasks to be performed: 4 
Total estimated time for all tasks to be performed: 309 second(s) 

Task #1 start
Description: Setting default global profile registry variables 
Estimated time 1 second(s) 
Task #1 end 

Task #2 start
Description: Initializing instance list 
Estimated time 5 second(s) 
Task #2 end 

Task #3 start
Description: Configuring DB2 instances 
Estimated time 300 second(s) 
Task #3 end 

Task #4 start
Description: Updating global profile registry 
Estimated time 3 second(s) 
Task #4 end 

The execution completed successfully.

For more information see the DB2 installation log at "/tmp/db2iupdt.log.14194".
DBI1070I  Program db2iupdt completed successfully.

All seems to be fine, so let's start it up and check the version:

db2inst1@sles15-db2-root:~> db2start
01/21/2022 09:58:44     0   0   SQL1063N  DB2START processing was successful.
SQL1063N  DB2START processing was successful.
db2inst1@sles15-db2-root:~> db2ls

Install Path                       Level   Fix Pack   Special Install Number   Install Date                  Installer UID 
---------------------------------------------------------------------------------------------------------------------
/opt/ibm/db2/V11.5               11.5.6.0        0                            Thu Jan 20 12:05:28 2022 CET             0 
/opt/ibm/db2/fixpack             11.5.7.0        0                            Fri Jan 21 09:53:04 2022 CET             0 



db2inst1@sles15-db2-root:~> db2
(c) Copyright IBM Corporation 1993,2007
Command Line Processor for DB2 Client 11.5.7.0

You can issue database manager commands and SQL statements from the command 
prompt. For example:
    db2 => connect to sample
    db2 => bind sample.bnd

For general help, type: ?.
For command help, type: ? command, where command can be
the first few keywords of a database manager command. For example:
 ? CATALOG DATABASE for help on the CATALOG DATABASE command
 ? CATALOG          for help on all of the CATALOG commands.

To exit db2 interactive mode, type QUIT at the command prompt. Outside 
interactive mode, all commands must be prefixed with 'db2'.
To list the current command option settings, type LIST COMMAND OPTIONS.

For more detailed help, refer to the Online Reference Manual.

Looks good. Finally, you should update the catalogs in your databases:

db2inst1@sles15-db2-root:~> db2updv115 -d db1


_________________________________________________________________________     
                                                                              
                    _____   DB2 Service Tools   _____                         
                                                                              
                            I      B      M                                   
                                                                              
                               db2updv115                                      
                                                                              
   This tool is a service utility designed to update a DB2 Version 11.5       
   database to the current fix pack level.                                    
                                                                              
_________________________________________________________________________     


DB2 Universal Database Version 11.5, 5622-044 (c) Copyright IBM Corp. 2019
Licensed Material - Program Property of IBM
IBM DATABASE 2 Database update to current fix pack tool

db2updv115 completed successfully for database 'db1'. 
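
If the instance hosts more than one database, the same tool has to be run against each of them. A small loop like the following would do it (my own sketch; the database list is an assumption and must be adjusted to your environment):

#!/bin/bash
# Sketch: run db2updv115 against every database of the instance.
for db in db1 db2
do
  db2updv115 -d $db
done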

Looking back at the output of db2ls, I wondered why there are two lines in there:

db2inst1@sles15-db2-root:~> db2ls

Install Path                       Level   Fix Pack   Special Install Number   Install Date                  Installer UID 
---------------------------------------------------------------------------------------------------------------------
/opt/ibm/db2/V11.5               11.5.6.0        0                            Thu Jan 20 12:05:28 2022 CET             0 
/opt/ibm/db2/fixpack             11.5.7.0        0                            Fri Jan 21 09:53:04 2022 CET             0 

Checking the open files of one of the processes gives this:

sles15-db2-root:~/server_dec $ ps -ef | grep db2 | grep db2ckpwd
root      2809 32564  0 10:58 pts/0    00:00:00 grep --color=auto db2ckpwd
root     30065 30057  0 09:58 pts/1    00:00:00 db2ckpwd 0
root     30066 30057  0 09:58 pts/1    00:00:00 db2ckpwd 0
root     30067 30057  0 09:58 pts/1    00:00:00 db2ckpwd 0
sles15-db2-root:~/server_dec $ lsof -p 30065
COMMAND    PID USER   FD   TYPE DEVICE  SIZE/OFF     NODE NAME
db2syscr 30065 root  cwd    DIR   0,46       156      256 /
db2syscr 30065 root  rtd    DIR   0,46       156      256 /
db2syscr 30065 root  txt    REG  253,3    197000 12583070 /home/db2inst1/sqllib/adm/db2syscr
db2syscr 30065 root  DEL    REG    0,1                  7 /SYSV276cac61
db2syscr 30065 root  DEL    REG    0,1                  8 /SYSV00000000
db2syscr 30065 root  DEL    REG    0,1                  6 /SYSV276cac74
db2syscr 30065 root  mem    REG   0,46    340640    45411 /usr/lib/locale/en_US.utf8/LC_CTYPE
db2syscr 30065 root  mem    REG   0,46        54    45152 /usr/lib/locale/en_US.utf8/LC_NUMERIC
db2syscr 30065 root  mem    REG   0,46   2586930    45146 /usr/lib/locale/en_US.utf8/LC_COLLATE
db2syscr 30065 root  mem    REG   0,46     41240    40191 /lib64/libnss_compat-2.31.so
db2syscr 30065 root  mem    REG   0,46    235576     1702 /usr/lib64/liblzma.so.5.2.3
db2syscr 30065 root  mem    REG   0,46     92216    40244 /lib64/libz.so.1.2.11
db2syscr 30065 root  mem    REG   0,46    112624     1859 /usr/lib64/libaudit.so.1.0.0
db2syscr 30065 root  mem    REG   0,56   1932400    10339 /opt/ibm/db2/fixpack/lib64/libicuucdb2.so.55.1
db2syscr 30065 root  mem    REG   0,56   1629408    10336 /opt/ibm/db2/fixpack/lib64/libicuucdb2.so.44.2
db2syscr 30065 root  mem    REG   0,56   1542960    10337 /opt/ibm/db2/fixpack/lib64/libicuucdb2.so.38.1
db2syscr 30065 root  mem    REG   0,56   1232064     8885 /opt/ibm/db2/fixpack/lib64/libicuucdb2.so.32.1
db2syscr 30065 root  mem    REG   0,56     72528    10320 /opt/ibm/db2/fixpack/lib64/libiculxdb2.so.55.1
db2syscr 30065 root  mem    REG   0,56     73632    10341 /opt/ibm/db2/fixpack/lib64/libiculxdb2.so.44.2
db2syscr 30065 root  mem    REG   0,56     72096    10342 /opt/ibm/db2/fixpack/lib64/libiculxdb2.so.38.1
db2syscr 30065 root  mem    REG   0,56     55680     8830 /opt/ibm/db2/fixpack/lib64/libiculxdb2.so.32.1
db2syscr 30065 root  mem    REG   0,56    462224    10316 /opt/ibm/db2/fixpack/lib64/libiculedb2.so.55.1
db2syscr 30065 root  mem    REG   0,56    315536    10335 /opt/ibm/db2/fixpack/lib64/libiculedb2.so.44.2
db2syscr 30065 root  mem    REG   0,56    296344    10311 /opt/ibm/db2/fixpack/lib64/libiculedb2.so.38.1
db2syscr 30065 root  mem    REG   0,56    312096     8889 /opt/ibm/db2/fixpack/lib64/libiculedb2.so.32.1
db2syscr 30065 root  mem    REG   0,56     71192    10338 /opt/ibm/db2/fixpack/lib64/libicuiodb2.so.55.1
db2syscr 30065 root  mem    REG   0,56     65936    10328 /opt/ibm/db2/fixpack/lib64/libicuiodb2.so.44.2
db2syscr 30065 root  mem    REG   0,56     65576    10331 /opt/ibm/db2/fixpack/lib64/libicuiodb2.so.38.1
db2syscr 30065 root  mem    REG   0,56     65600     8903 /opt/ibm/db2/fixpack/lib64/libicuiodb2.so.32.1
db2syscr 30065 root  mem    REG   0,56   3148400    10321 /opt/ibm/db2/fixpack/lib64/libicui18ndb2.so.55.1
db2syscr 30065 root  mem    REG   0,56   2354488    10310 /opt/ibm/db2/fixpack/lib64/libicui18ndb2.so.44.2
db2syscr 30065 root  mem    REG   0,56   1648208    10322 /opt/ibm/db2/fixpack/lib64/libicui18ndb2.so.38.1
db2syscr 30065 root  mem    REG   0,56   1507104     8900 /opt/ibm/db2/fixpack/lib64/libicui18ndb2.so.32.1
db2syscr 30065 root  mem    REG   0,56  20194496    10334 /opt/ibm/db2/fixpack/lib64/libicudatadb2.so.55.1
db2syscr 30065 root  mem    REG   0,56  11048128    10317 /opt/ibm/db2/fixpack/lib64/libicudatadb2.so.44.2
db2syscr 30065 root  mem    REG   0,56   6980792    10325 /opt/ibm/db2/fixpack/lib64/libicudatadb2.so.38.1
db2syscr 30065 root  mem    REG   0,56   9780656     8866 /opt/ibm/db2/fixpack/lib64/libicudatadb2.so.32.1
db2syscr 30065 root  mem    REG   0,56    612696     8875 /opt/ibm/db2/fixpack/lib64/libdb2sdbin.so.1
db2syscr 30065 root  mem    REG   0,56    156632     8878 /opt/ibm/db2/fixpack/lib64/libdb2locale.so.1
db2syscr 30065 root  mem    REG   0,56    228488     8854 /opt/ibm/db2/fixpack/lib64/libdb2genreg.so.1
db2syscr 30065 root  mem    REG   0,56    657544    10852 /opt/ibm/db2/fixpack/lib64/libdb2dstf.so.1
db2syscr 30065 root  mem    REG   0,56   5850192     8890 /opt/ibm/db2/fixpack/lib64/libDB2xslt4c.so.112.0
db2syscr 30065 root  mem    REG   0,56    536376     8919 /opt/ibm/db2/fixpack/lib64/libDB2xml4c-depdom.so.58.0
db2syscr 30065 root  mem    REG   0,56   4259416     8868 /opt/ibm/db2/fixpack/lib64/libDB2xml4c.so.58.0
db2syscr 30065 root  mem    REG   0,56     46920     8837 /opt/ibm/db2/fixpack/lib64/libDB2xalanMsg.so.112.0
db2syscr 30065 root  mem    REG   0,46     48288    62219 /usr/lib64/libnuma.so.1.0.0
db2syscr 30065 root  mem    REG   0,46   1473368     2648 /usr/lib64/libxml2.so.2.9.7
db2syscr 30065 root  mem    REG   0,46     64232    45768 /lib64/libpam.so.0.84.2
db2syscr 30065 root  mem    REG   0,46     40240    40205 /lib64/librt-2.31.so
db2syscr 30065 root  mem    REG   0,46     17840    40183 /lib64/libdl-2.31.so
db2syscr 30065 root  mem    REG   0,46    202736     1810 /usr/lib64/libcrypt.so.1.1.0
db2syscr 30065 root  mem    REG   0,46      5608     1874 /lib64/libaio.so.1.0.1
db2syscr 30065 root  mem    REG   0,56   8316600     8839 /opt/ibm/db2/fixpack/lib64/libdb2g11n.so.1
db2syscr 30065 root  mem    REG   0,46   2163888    40181 /lib64/libc-2.31.so
db2syscr 30065 root  mem    REG   0,46    101024    40311 /lib64/libgcc_s.so.1
db2syscr 30065 root  mem    REG   0,46   1419880    40185 /lib64/libm-2.31.so
db2syscr 30065 root  mem    REG   0,46   2161776    40439 /usr/lib64/libstdc++.so.6.0.29
db2syscr 30065 root  mem    REG   0,56   6207992     8842 /opt/ibm/db2/fixpack/lib64/libdb2osse.so.1
db2syscr 30065 root  mem    REG   0,56 302123664    10859 /opt/ibm/db2/fixpack/lib64/libdb2e.so.1
db2syscr 30065 root  mem    REG   0,56   4327000    10870 /opt/ibm/db2/fixpack/lib64/libdb2thrift.so.1
db2syscr 30065 root  mem    REG   0,46    147304    40201 /lib64/libpthread-2.31.so
db2syscr 30065 root  mem    REG   0,46    192696    40173 /lib64/ld-2.31.so
db2syscr 30065 root  mem    REG   0,46      3284    45155 /usr/lib/locale/en_US.utf8/LC_TIME
db2syscr 30065 root  mem    REG   0,46       286    45150 /usr/lib/locale/en_US.utf8/LC_MONETARY
db2syscr 30065 root  mem    REG   0,46        57    45149 /usr/lib/locale/en_US.utf8/LC_MESSAGES/SYS_LC_MESSAGES
db2syscr 30065 root  mem    REG   0,46        34    45153 /usr/lib/locale/en_US.utf8/LC_PAPER
db2syscr 30065 root  mem    REG   0,46        77    45151 /usr/lib/locale/en_US.utf8/LC_NAME
db2syscr 30065 root  mem    REG   0,56     22360     8899 /opt/ibm/db2/fixpack/lib64/libdb2install.so.1
db2syscr 30065 root  mem    REG   0,56     16976    10919 /opt/ibm/db2/fixpack/lib64/libdb2encr.so.1
db2syscr 30065 root  mem    REG   0,56    125184     8887 /opt/ibm/db2/fixpack/lib64/libdb2dascmn.so.1
db2syscr 30065 root  mem    REG   0,56     17080    10900 /opt/ibm/db2/fixpack/lib64/libdb2compr_encr.so.1
db2syscr 30065 root  mem    REG   0,56     20504    10868 /opt/ibm/db2/fixpack/lib64/libdb2compr.so.1
db2syscr 30065 root  mem    REG   0,56    485200     8844 /opt/ibm/db2/fixpack/lib64/libdb2cftrace.so.1
db2syscr 30065 root  mem    REG   0,56    532336     8881 /opt/ibm/db2/fixpack/lib64/libdb2cf_api.so.1
db2syscr 30065 root  mem    REG   0,56     70488     8857 /opt/ibm/db2/fixpack/lib64/libdb2trcapi.so.1
db2syscr 30065 root  mem    REG   0,56    576944     8880 /opt/ibm/db2/fixpack/lib64/libdb2osse_db2.so.1
db2syscr 30065 root  mem    REG   0,46       167    45145 /usr/lib/locale/en_US.utf8/LC_ADDRESS
db2syscr 30065 root  mem    REG   0,46        59    45154 /usr/lib/locale/en_US.utf8/LC_TELEPHONE
db2syscr 30065 root  mem    REG   0,46        23    45148 /usr/lib/locale/en_US.utf8/LC_MEASUREMENT
db2syscr 30065 root  mem    REG   0,46     26988    45412 /usr/lib64/gconv/gconv-modules.cache
db2syscr 30065 root  mem    REG   0,46       369    45147 /usr/lib/locale/en_US.utf8/LC_IDENTIFICATION
db2syscr 30065 root    0u   CHR    1,3       0t0     9522 /dev/null
db2syscr 30065 root    1u   CHR    1,3       0t0     9522 /dev/null
db2syscr 30065 root    2u   CHR    1,3       0t0     9522 /dev/null

This confirms that the files in the new Fix Pack location are used, and no longer those of the initial installation. I probably should have chosen a better name for the directory, as "fixpack" really is not a good choice here.
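
A quick way to cross-check which installation an instance is really running on, without going through lsof, is db2level (shown here only as a sketch of what to run; the exact output depends on your system):

# as the instance owner: show the code level and the installation path in use
db2level
# list all Db2 copies registered on the host
db2ls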

The article Patching an IBM Db2 instance first appeared on Blog dbi services.

HTML DB

Tom Kyte - Fri, 2022-01-21 10:06
Tom - I read a little bit about the new features in 10g for HTML DB. I'm pretty excited about it if it effectively delivers on these new features (e.g., the whole meta-data driven framework, the Application Builder, Session Management, Templates, Security, etc.). I have been thinking for a while that there is opportunity in smaller markets to roll out rock-solid applications with a single Oracle database running mod/plsql. I was considering all the infrastructure services that would need to be built in order to quickly build one or more applications using mod/plsql. With HTML DB, it is looking like I don't have to worry about building these services on my own anymore, and that is great news.

A feature that I didn't see that I would also consider very useful: a security framework where you can map Users to one or more Roles (I know this is in Oracle), then further map one or more Roles to one or more defined "Transaction codes" (e.g., CreateOrder, ViewOrder, DeleteOrder), and finally associate a group of "HTML Pages" with a transaction. This kind of framework could apply to many information system applications. It would both enforce security at the server and influence the look of the application (e.g., limited menu options based on Roles).

Anyway, seeing these new HTML DB features along with 10g's ability to transparently scale in its clustering/grid configurations, Oracle 10g with HTML DB could be a solution for any size of application. Finally, some questions...

1.) Does Oracle have any plans to build out-of-box applications/products on top of HTML DB?

2.) Do you see any potential confusion from your customers as to whether they should use HTML DB vs. the iAS? I did Java development for a few years and I like using core Java a lot. I still struggle to decide whether or not the EJB architecture is that good. I think I've come to the conclusion that if you are building an "information system" application where most of the work is selecting, inserting, updating, and deleting from the database, then a simpler architecture is better, and HTML DB may be onto something here. At the end of the day I would think companies want to get ROBUST applications working QUICKLY, and J2EE is not necessarily the right choice in all cases. A couple more questions...

3.) Will HTML DB be available in the Standard edition? As I said before, I see opportunity in smaller markets for HTML DB as a replacement for spreadsheets, MS-Access, and SQLServer applications, however, the cost of Oracle could prohibit this. Does Oracle have any interest in this market?

4.) Maybe I am just paranoid, but I see a strategy here: an alternative to .NET or J2EE, a simpler and faster approach (for certain types of "information system apps"), it gets Oracle in the door, and allows them to sell additional licenses (10g grid) as the customers' applications need to scale further. Is this a stretch?

Always curious, Dave
Categories: DBA Blogs

Jenkins – Public cloud installation

Yann Neuhaus - Fri, 2022-01-21 03:37

Hi Team,

Sometimes I struggled with my VMs when installing and testing my favourite software.
Many problems can occur: the VM can crash and need to be reinstalled, and selecting the correct machine image and managing resource allocation and consumption can sometimes be a nightmare.
Luckily, one of my dbi colleagues advised me to use a public cloud, so... thanks a lot @Kevin BAUDE !!!
Let's see how to create a public cloud instance and install Jenkins on it.

Connect to the Public cloud site

Go to https://www.infomaniak.com/fr/hebergement/public-cloud

  • Subscribe to the site (you can have 3 months for free in order to test the features of the site)
  • Create an account

  • name your public cloud

  • Go to your interface manager
    You can create a new public cloud or manage your current public cloud

  • Create a new project

  • log on to your public cloud

  • Double click on your project and start configuration

Before creating your instance:
  • create your security rules to allow SSH
    To use your machine with PuTTY or MobaXterm, for example, you must add a new rule
  • import an SSH key pair
    On your machine, generate the key pair (see the sketch after this section)

  • Import the public key to your public cloud

  • Create your new instance

  • Name your instance

  • select your source

  • select your OS specifics: RAM / disk space

Note: Flavors manage the sizing for the compute, memory and storage capacity of the instance

  • select your network (I left the default)
  • add your security groups (including the SSH rule added before)
  • add the key pair defined before (each time you create a VM you must add your key pair)
Connect to your new instance
  • Click on launch instance

  • connect from your terminal using the IP address displayed

Your VM is now available for Jenkins installation
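
For the key pair and the first connection mentioned above, the commands typically look like this (a generic sketch; the key file name, the user name and the IP address are assumptions and depend on the image you chose):

# generate a key pair on your workstation; the public key is then imported in the cloud console
ssh-keygen -t rsa -b 4096 -f ~/.ssh/jenkins_lab
# once the instance is running, connect using the IP address displayed in the console
ssh -i ~/.ssh/jenkins_lab centos@<instance-ip>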

Install Jenkins
  • Install java
    sudo dnf install java-1.8.0-openjdk-devel

  • Check that the installation succeeded

  • Add Jenkins Software Repository

Jenkins isn’t included in the default CentOS software repositories. To add the Jenkins repository, open a terminal window, and enter the following:

sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo

Note:
If you have an issue with that command, install wget first:

[root@2022-jenkins-instance centos]# sudo wget -O /etc/yum.repos.d/jenkins.repo \
> https://pkg.jenkins.io/redhat-stable/jenkins.repo
sudo: wget: command not found
[root@2022-jenkins-instance centos]# yum install wget
Last metadata expiration check: 0:40:28 ago on Wed 12 Jan 2022 09:21:01 AM UTC.

Install Jenkins

[root@2022-jenkins-instance centos]# sudo rpm --import https://pkg.jenkins.io/redhat/jenkins.io.key
[root@2022-jenkins-instance centos]# sudo dnf install jenkins

If that does not work, use the option --nobest.

Or you can do the following steps:

sudo wget -O /etc/yum.repos.d/jenkins.repo \
https://pkg.jenkins.io/redhat-stable/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
sudo yum upgrade
sudo yum install epel-release java-11-openjdk-devel
sudo yum install jenkins
sudo systemctl daemon-reload

Start Jenkins services

To start the Jenkins service and enable it at startup, enter the following:

sudo systemctl start jenkins
sudo systemctl enable jenkins

To display the status of the Jenkins service, enter the following:

sudo systemctl status jenkins

Important:

When connecting from your browser, don't forget to open a rule for your Jenkins port (I left the default, 8080).

  • Open rule for Jenkins port

  • You can now configure your Jenkins

  • Get the secret password with:
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
  • Install your plugins: select the default option (install suggested plugins)

  • Create your first admin user (you can also skip this, but it's not recommended)

  • Configure your instance and choose your port

  • Jenkins is now ready

  • You can now access the Dashboard

Conclusion:

Working on a public cloud is a nice way to get many features and to avoid overloading our own computers. Thanks to the Infomaniak site for offering a free trial of their features; I invite you to try it. I guess I will subscribe to enhance my work and to store my labs safely!

The article Jenkins – Public cloud installation first appeared on Blog dbi services.

New options for pg_basebackup in PostgreSQL 15

Yann Neuhaus - Fri, 2022-01-21 01:42

If you are doing base backups of your PostgreSQL instances (and you should), you've probably used pg_basebackup. Another use case for pg_basebackup is to create a starting point for replicas when you are setting up streaming replication. Up to now, pg_basebackup always sends the result to the node/machine where you started it. That means: if you started pg_basebackup on your client or backup machine, then the resulting base backup will go there. No other options are available in recent versions of PostgreSQL. This will change with PostgreSQL 15.

The commit which implemented this is this (don’t be confused by the commit date, it really happened yesterday, not in 2021):

commit 3500ccc39b0dadd1068a03938e4b8ff562587ccc (HEAD -> master, origin/master, origin/HEAD)
Author: Robert Haas 
Date:   Tue Nov 16 15:20:50 2021 -0500

    Support base backup targets.
    
    pg_basebackup now has a --target=TARGET[:DETAIL] option. If specfied,
    it is sent to the server as the value of the TARGET option to the
    BASE_BACKUP command. If DETAIL is included, it is sent as the value of
    the new TARGET_DETAIL option to the BASE_BACKUP command.  If the
    target is anything other than 'client', pg_basebackup assumes that it
    will now be the server's job to write the backup in a location somehow
    defined by the target, and that it therefore needs to write nothing
    locally. However, the server will still send messages to the client
    for progress reporting purposes.
    
    On the server side, we now support two additional types of backup
    targets.  There is a 'blackhole' target, which just throws away the
    backup data without doing anything at all with it. Naturally, this
    should only be used for testing and debugging purposes, since you will
    not actually have a backup when it finishes running. More usefully,
    there is also a 'server' target, so you can now use something like
    'pg_basebackup -Xnone -t server:/SOME/PATH' to write a backup to some
    location on the server. We can extend this to more types of targets
    in the future, and might even want to create an extensibility
    mechanism for adding new target types.
    
    Since WAL fetching is handled with separate client-side logic, it's
    not part of this mechanism; thus, backups with non-default targets
    must use -Xnone or -Xfetch.
    
    Patch by me, with a bug fix by Jeevan Ladhe.  The patch set of which
    this is a part has also had review and/or testing from Tushar Ahuja,
    Suraj Kharage, Dipesh Pandit, and Mark Dilger.
    
    Discussion: http://postgr.es/m/CA+TgmoaYZbz0=Yk797aOJwkGJC-LK3iXn+wzzMx7KdwNpZhS5g@mail.gmail.com

To demonstrate the feature I’ve setup two nodes. This is my client:

postgres@debian11latest:/u01/app/postgres/product/DEV/db_0/bin$ ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:ca:ce:73 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.240/24 brd 192.168.100.255 scope global dynamic enp1s0
       valid_lft 3155sec preferred_lft 3155sec
    inet6 fe80::5054:ff:feca:ce73/64 scope link 
       valid_lft forever preferred_lft forever

… and this is my server:

postgres@debian11pg:/u01/app/postgres/product/DEV/db_0/ [pgdev] ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:c7:93:6c brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.241/24 brd 192.168.100.255 scope global dynamic enp1s0
       valid_lft 2278sec preferred_lft 2278sec
    inet6 fe80::5054:ff:fec7:936c/64 scope link 
       valid_lft forever preferred_lft forever

What happens if I start a base backup on the client is that the data is sent from the server to my client, and I’ll have the base backup stored locally on the client:

postgres@debian11latest:/u01/app/postgres/product/DEV/db_0/bin$ mkdir /var/tmp/backup
postgres@debian11latest:/u01/app/postgres/product/DEV/db_0/bin$ ./pg_basebackup -h 192.168.100.241 -F t -D /var/tmp/backup/
postgres@debian11latest:/u01/app/postgres/product/DEV/db_0/bin$ ls -l /var/tmp/backup/
total 39624
-rw------- 1 postgres postgres   138469 Jan 21 07:41 backup_manifest
-rw------- 1 postgres postgres 23652864 Jan 21 07:41 base.tar
-rw------- 1 postgres postgres 16778752 Jan 21 07:41 pg_wal.tar

What the commit above introduces is the ability to tell the server to store the backup itself. The option for this is the following:

postgres@debian11latest:/u01/app/postgres/product/DEV/db_0/bin$ ./pg_basebackup --help | grep -A 1 TARGET
  -t, --target=TARGET[:DETAIL]
                         backup target (if other than client)

So now, again executing pg_basebackup from the client, we can do this:

postgres@debian11latest:/u01/app/postgres/product/DEV/db_0/bin$ ssh 192.168.100.241 'mkdir /var/tmp/bb'
postgres@debian11latest:/u01/app/postgres/product/DEV/db_0/bin$ ./pg_basebackup -h 192.168.100.241 -X none -t server:/var/tmp/bb/
postgres@debian11latest:/u01/app/postgres/product/DEV/db_0/bin$ ssh 192.168.100.241 'ls -l /var/tmp/bb/'
total 23236
-rw------- 1 postgres postgres   138469 Jan 21 07:57 backup_manifest
-rw------- 1 postgres postgres 23654400 Jan 21 07:57 base.tar

Now the backup is generated on the server without sending it to the client. For testing purposes there is also the “blackhole” target, which just throws away the backup:

postgres@debian11latest:/u01/app/postgres/product/DEV/db_0/bin$ ./pg_basebackup -h 192.168.100.241 -X none -t blackhole
postgres@debian11latest:/u01/app/postgres/product/DEV/db_0/bin$ 

Why is this feature cool? Because the infrastructure is now in place to implement other targets, maybe S3, or whatever else comes up.
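
As the commit message notes, backups with a non-default target must use -X none or -X fetch. So if you also want the WAL required for recovery, presumably you would combine the server target with -X fetch so that the WAL ends up in the backup the server writes (untested here, so treat this as an assumption based on the commit message rather than a verified example):

pg_basebackup -h 192.168.100.241 -X fetch -t server:/SOME/PATH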

Another feature that just got committed is this: Extend the options of pg_basebackup to control compression.

This gives you more options for compression:

postgres@debian11pg:/u01/app/postgres/product/DEV/db_0/bin/ [pgdev] ./pg_basebackup --help | grep -A 1 LEVEL
  -Z, --compress={gzip,none}[:LEVEL] or [LEVEL]
                         compress tar output with given compression method or level

For the moment there are only “none” and “gzip”:

postgres@debian11pg:/u01/app/postgres/product/DEV/db_0/bin/ [pgdev] mkdir /var/tmp/aa
postgres@debian11pg:/u01/app/postgres/product/DEV/db_0/bin/ [pgdev] ./pg_basebackup -D /var/tmp/aa/ -Z none
postgres@debian11pg:/u01/app/postgres/product/DEV/db_0/bin/ [pgdev] ls /var/tmp/aa/
backup_label     base              global        pg_dynshmem  pg_ident.conf  pg_logical    pg_notify    pg_serial     pg_stat      pg_subtrans  pg_twophase  pg_wal   postgresql.auto.conf
backup_manifest  current_logfiles  pg_commit_ts  pg_hba.conf  pg_log         pg_multixact  pg_replslot  pg_snapshots  pg_stat_tmp  pg_tblspc    PG_VERSION   pg_xact  postgresql.conf
postgres@debian11pg:/u01/app/postgres/product/DEV/db_0/bin/ [pgdev] rm -rf /var/tmp/aa/*
postgres@debian11pg:/u01/app/postgres/product/DEV/db_0/bin/ [pgdev] ./pg_basebackup -D /var/tmp/aa/ -F t -Z gzip:4
postgres@debian11pg:/u01/app/postgres/product/DEV/db_0/bin/ [pgdev] ls -l /var/tmp/aa/
total 3316
-rw------- 1 postgres postgres  138314 Jan 21 08:39 backup_manifest
-rw------- 1 postgres postgres 3235329 Jan 21 08:39 base.tar.gz
-rw------- 1 postgres postgres   17075 Jan 21 08:39 pg_wal.tar.gz

The same applies here: The infrastructure is now in place, and other options can be added. Nice features, thanks to all involved.
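
To actually use such a tar-format backup you simply extract the archives into an empty data directory, roughly like this (a sketch with hypothetical paths):

mkdir -p /u02/pgdata/restored
tar -xzf /var/tmp/aa/base.tar.gz -C /u02/pgdata/restored
tar -xzf /var/tmp/aa/pg_wal.tar.gz -C /u02/pgdata/restored/pg_wal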

This article New options for pg_basebackup in PostgreSQL 15 first appeared on Blog dbi services.

Upgrading Plex Media Server on a Raspberry Pi

The Anti-Kyte - Thu, 2022-01-20 14:37

I’m running Plex on my Raspberry Pi so that I can stream my movies to multiple devices on my home network.
The periodic upgrades to the Plex software can get a bit fiddly.
Therefore, to save me looking up how to do the same things every time I need to upgrade, I thought I’d put them in one place…

Finding information about my Pi

I’m running Raspberry Pi OS (formerly Raspbian) on my Pi.

Connecting to the Pi via ssh, I can run the following commands to discover information about my system.

To find out what OS version I’m running :

cat /etc/os-release |grep 'PRETTY_NAME'

the model of the Pi :

cat /proc/cpuinfo |grep Model

the specific ARM architecture of your Pi

cat /proc/cpuinfo |grep 'model name'

the name of your Pi on your local (home) network

hostname -f

the IP address of your Pi on your local network

hostname -I
Upgrading Raspbian

This is not usually a requirement for upgrading Plex, but I thought I’d mention it here just in case.

There is a big warning about in-place upgrades in the documentation.

In my case, I only really use my Pi for the Plex server, so the in-place upgrade does not present a huge risk.
If I happen to lose everything, re-installation and configuration would not be too onerous a task.
Therefore, I followed the link provided in the documentation and ended up with these instructions.

Upgrading Plex Server

I run my Pi headless (i.e. no monitor attached), so I’m usually accessing the Plex management web page on another machine when I see it’s upgrade time :

I can click on the link and download the update file here.
Alternatively, you can get it directly from the Plex Downloads site.

On the download site, I need to select “Linux” when asked to choose a platform :

Then click on the “Choose Distribution” button :

Now I want to select one of the ARM distros (in my case v7) :

I can now copy the file from my machine to the pi via SFTP.
There is a good tutorial here about using the freeware sftp client Filezilla to transfer files to and from a pi via sftp.
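
If you prefer the command line, scp does the job as well (a quick sketch; the default "pi" user and the 192.168.1.nnn address are assumptions to adapt to your own setup):

scp plexmediaserver_1.25.2.5319-c43dc0277_armhf.deb pi@192.168.1.nnn:/home/pi/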

Installing the upgrade

For this bit, I need to have a terminal session on the pi (I usually use SSH to remote in).

We don’t want to go to all the trouble of running the upgrade if we’re already running the latest available version. One way to confirm the current version is :

apt-cache policy plexmediaserver

First, if we’ve not done so already, make sure we have the most up-to-date packages on the Pi :

sudo apt update -y && sudo apt upgrade -y

Now re-start the pi to make sure any changes take effect

sudo reboot

Obviously, you’ll lose your connection to the Pi whilst it’s rebooting and will need to reconnect once it’s back up.

Next we need to stop the plexmediaserver service (assuming it starts on boot, which mine does) :

sudo service plexmediaserver stop

Now we need to manually install the deb file we’ve downloaded :

sudo apt install ./plexmediaserver_1.25.2.5319-c43dc0277_armhf.deb

The plexmediaserver should now be up and running. You can check by running :

sudo service plexmediaserver status

…and it should be running the version we’ve just installed :

Plex Clients

Once you’ve completed the Server install, you may need to “remind” the Plex clients running on your TVs of the IP address of your server.

Remember, you can get the IP address of your pi by running :

hostname -I

You’re looking for the first bit of information returned by this command.
On a home network, the address will usually be in the format :

192.168.1.nnn

…where nnn is a number between 1 and 254.

Setting up IBM Db2 on Linux – root installation

Yann Neuhaus - Thu, 2022-01-20 12:11

In the first post about Db2 I went for the non-root installation method. While this is great if you want to get started fast and do some tests, it also comes with limitations. Most installations, I guess, are root installations, especially if you want to run more than one instance on a host and have them started automatically when the system comes up. Let’s see how that works.

As last time, I’ll start with a SLES 15 SP3 minimal installation:

sles15-db2-root:~ $ uname -a
Linux sles15-db2-root 5.3.18-59.37-default #1 SMP Mon Nov 22 12:29:04 UTC 2021 (d10168e) x86_64 x86_64 x86_64 GNU/Linux
sles15-db2-root:~ $ cat /etc/os-release
NAME="SLES"
VERSION="15-SP3"
VERSION_ID="15.3"
PRETTY_NAME="SUSE Linux Enterprise Server 15 SP3"
ID="sles"
ID_LIKE="suse"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sles:15:sp3"
DOCUMENTATION_URL="https://documentation.suse.com/"

Instead of trial and error as in the last post, let’s directly install the packages which were already required for the non-root installation:

sles15-db2-root:~ $ zypper in -y xorg-x11-xauth sudo vi binutils libnuma1 libstdc++6-32bit pam-32bit

I’ll use exactly the same Db2 distribution as in the previous post:

sles15-db2-root:~ $ ls -l
total 2034632
-rw------- 1 root root        268 Jan 20 11:23 .bash_history
drwxr-xr-x 1 root root          0 Mar  3  2021 bin
drwx------ 1 root root          0 Mar  3  2021 .gnupg
drwxr-xr-x 1 root root         36 Jul  9  2021 inst-sys
-rw-r--r-- 1 root root 2083458415 Jan 20 11:38 v11.5.6_linuxx64_server_dec.tar.gz
sles15-db2-root:~ $ tar axf v11.5.6_linuxx64_server_dec.tar.gz 
sles15-db2-root:~ $ rm v11.5.6_linuxx64_server_dec.tar.gz 
sles15-db2-root:~ $ ls -l
total 4
-rw------- 1 root root 268 Jan 20 11:23 .bash_history
drwxr-xr-x 1 root root   0 Mar  3  2021 bin
drwx------ 1 root root   0 Mar  3  2021 .gnupg
drwxr-xr-x 1 root root  36 Jul  9  2021 inst-sys
drwxr-xr-x 1 root root 240 Jun 11  2021 server_dec
sles15-db2-root:~ $ cd server_dec
sles15-db2-root:~/server_dec $ ls -l
total 84
drwxr-xr-x 1 bin bin   112 Jun 11  2021 db2
-r--r--r-- 1 bin bin  4987 Jun 11  2021 db2checkCOL_readme.txt
-r--r--r-- 1 bin bin 16808 Jun 11  2021 db2checkCOL.tar.gz
-r-xr-xr-x 1 bin bin  5254 Jun 11  2021 db2ckupgrade
-r-xr-xr-x 1 bin bin  5207 Jun 11  2021 db2_deinstall
-r-xr-xr-x 1 bin bin  5077 Jun 11  2021 db2_install
-r-xr-xr-x 1 bin bin  5041 Jun 11  2021 db2ls
-r-xr-xr-x 1 bin bin  5059 Jun 11  2021 db2prereqcheck
-r-xr-xr-x 1 bin bin  5059 Jun 11  2021 db2setup
-r-xr-xr-x 1 bin bin  5095 Jun 11  2021 installFixPack

This time we’ll use “db2prereqcheck” to check if all requirements are met for the installation:

sles15-db2-root:~/server_dec $ ./db2prereqcheck

==========================================================================

Thu Jan 20 11:50:39 2022
Checking prerequisites for DB2 installation. Version "11.5.6.0". Operating system "Linux" 
   
Validating "Linux distribution " ... 

Requirement not matched for DB2 database "Server" with pureScale feature . Version: "10.5.0.0". 
Summary of prerequisites that are not met on the current system: 
DBT3613E  The db2prereqcheck utility failed to verify the prerequisites for TSA. Ensure your machine meets all the TSA installation prerequisites. 
...

DBT3507E  The db2prereqcheck utility failed to find the following package or file: "kernel-source". 
DBT3507E  The db2prereqcheck utility failed to find the following package or file: "". 
DBT3507E  The db2prereqcheck utility failed to find the following package or file: "gcc-c++". 
DBT3507E  The db2prereqcheck utility failed to find the following package or file: "cpp". 
DBT3507E  The db2prereqcheck utility failed to find the following package or file: "gcc". 

It seems we are missing a few more packages:

sles15-db2-root:~/server_dec $ zypper in kernel-source gcc-c++ cpp gcc

Next try (“-i” tells db2prereqcheck to only check for the current version of Db2, not all versions as above):

sles15-db2-root:~/server_dec $ ./db2prereqcheck -i
...
Validating "/lib/libpam.so*" ... 
   Requirement matched. 
DBT3533I  The db2prereqcheck utility has confirmed that all installation prerequisites were met. 

All fine, so let’s go:

sles15-db2-root:~/server_dec $ ./db2_install 
Read the license agreement file in the db2/license directory.

***********************************************************
To accept those terms, enter "yes". Otherwise, enter "no" to cancel the install process. [yes/no]
yes
 
 
Default directory for installation of products - /opt/ibm/db2/V11.5

***********************************************************
Install into default directory (/opt/ibm/db2/V11.5) ? [yes/no] 
yes
 
 
Specify one of the following keywords to install DB2 products.

  SERVER 
  CONSV 
  CLIENT 
  RTCL 
 
Enter "help" to redisplay product names.

Enter "quit" to exit.

***********************************************************
SERVER
***********************************************************
Do you want to install the DB2 pureScale Feature? [yes/no] 
no
DB2 installation is being initialized.

 Total number of tasks to be performed: 59 
Total estimated time for all tasks to be performed: 2775 second(s) 

Task #1 start
Description: Checking license agreement acceptance 
Estimated time 1 second(s) 
Task #1 end 

Task #2 start
Description: Base Client Support for installation with root privileges 
Estimated time 3 second(s) 
Task #2 end 

Task #3 start
Description: Product Messages - English 
Estimated time 14 second(s) 
Task #3 end 

Task #4 start
Description: Base client support 
Estimated time 355 second(s) 
Task #4 end 

Task #5 start
Description: Java Runtime Support 
Estimated time 217 second(s) 
Task #5 end 

Task #6 start
Description: Java Help (HTML) - English 
Estimated time 7 second(s) 
Task #6 end 

Task #7 start
Description: Base server support for installation with root privileges 
Estimated time 6 second(s) 
Task #7 end 

Task #8 start
Description: Global Secure ToolKit 
Estimated time 76 second(s) 
Task #8 end 

Task #9 start
Description: Java support 
Estimated time 11 second(s) 
Task #9 end 

Task #10 start
Description: SQL procedures 
Estimated time 3 second(s) 
Task #10 end 

Task #11 start
Description: ICU Utilities 
Estimated time 59 second(s) 
Task #11 end 

Task #12 start
Description: Java Common files 
Estimated time 18 second(s) 
Task #12 end 

Task #13 start
Description: Base server support 
Estimated time 582 second(s) 
Task #13 end 

Task #14 start
Description: Control Center Help (HTML) - English 
Estimated time 13 second(s) 
Task #14 end 

Task #15 start
Description: Relational wrappers common 
Estimated time 3 second(s) 
Task #15 end 

Task #16 start
Description: DB2 data source support 
Estimated time 6 second(s) 
Task #16 end 

Task #17 start
Description: ODBC data source support 
Estimated time 260 second(s) 
Task #17 end 

Task #18 start
Description: Teradata data source support 
Estimated time 3 second(s) 
Task #18 end 

Task #19 start
Description: Spatial Extender server support 
Estimated time 21 second(s) 
Task #19 end 

Task #20 start
Description: Scientific Data Sources 
Estimated time 5 second(s) 
Task #20 end 

Task #21 start
Description: JDBC data source support 
Estimated time 200 second(s) 
Task #21 end 

Task #22 start
Description: IBM Software Development Kit (SDK) for Java(TM) 
Estimated time 49 second(s) 
Task #22 end 

Task #23 start
Description: DB2 LDAP support 
Estimated time 4 second(s) 
Task #23 end 

Task #24 start
Description: DB2 Instance Setup wizard 
Estimated time 23 second(s) 
Task #24 end 

Task #25 start
Description: Structured file data sources 
Estimated time 5 second(s) 
Task #25 end 

Task #26 start
Description: Integrated Flash Copy Support 
Estimated time 3 second(s) 
Task #26 end 

Task #27 start
Description: Oracle data source support 
Estimated time 4 second(s) 
Task #27 end 

Task #28 start
Description: Connect support 
Estimated time 3 second(s) 
Task #28 end 

Task #29 start
Description: Application data sources 
Estimated time 4 second(s) 
Task #29 end 

Task #30 start
Description: Spatial Extender client 
Estimated time 3 second(s) 
Task #30 end 

Task #31 start
Description: SQL Server data source support 
Estimated time 4 second(s) 
Task #31 end 

Task #32 start
Description: Communication support - TCP/IP 
Estimated time 3 second(s) 
Task #32 end 

Task #33 start
Description: Tivoli SA MP 
Estimated time 300 second(s) 
Task #33 end 

Task #34 start
Description: Base application development tools 
Estimated time 36 second(s) 
Task #34 end 

Task #35 start
Description: Parallel Extension 
Estimated time 3 second(s) 
Task #35 end 

Task #36 start
Description: EnterpriseDB code 
Estimated time 3 second(s) 
Task #36 end 

Task #37 start
Description: Replication tools 
Estimated time 60 second(s) 
Task #37 end 

Task #38 start
Description: Sample database source 
Estimated time 4 second(s) 
Task #38 end 

Task #39 start
Description: itlm 
Estimated time 3 second(s) 
Task #39 end 

Task #40 start
Description: DB2 Text Search 
Estimated time 107 second(s) 
Task #40 end 

Task #41 start
Description: Command Line Processor Plus 
Estimated time 8 second(s) 
Task #41 end 

Task #42 start
Description: Sybase data source support 
Estimated time 3 second(s) 
Task #42 end 

Task #43 start
Description: Informix data source support 
Estimated time 4 second(s) 
Task #43 end 

Task #44 start
Description: Federated Data Access Support 
Estimated time 3 second(s) 
Task #44 end 

Task #45 start
Description: First Steps 
Estimated time 3 second(s) 
Task #45 end 

Task #46 start
Description: Pacemaker 
Estimated time 100 second(s) 
Task #46 end 

Task #47 start
Description: Product Signature for DB2 Server Edition 
Estimated time 6 second(s) 
Task #47 end 

Task #48 start
Description: Guardium Installation Manager Client 
Estimated time 36 second(s) 
Task #48 end 

Task #49 start
Description: Setting DB2 library path 
Estimated time 180 second(s) 
Task #49 end 

Task #50 start
Description: Installing or updating DB2 HA scripts for IBM Tivoli System Automation for Multiplatforms (Tivoli SA MP) 
Estimated time 40 second(s) 
Task #50 end 

Task #51 start
Description: Installing or updating Db2 resource agent scripts for Pacemaker 
Estimated time 20 second(s) 
Task #51 end 

Task #52 start
Description: Executing control tasks 
Estimated time 20 second(s) 
Task #52 end 

Task #53 start
Description: Updating global registry 
Estimated time 20 second(s) 
Task #53 end 

Task #54 start
Description: Starting DB2 Fault Monitor 
Estimated time 10 second(s) 
Task #54 end 

Task #55 start
Description: Updating the db2ls and db2greg link 
Estimated time 1 second(s) 
Task #55 end 

Task #56 start
Description: Registering DB2 licenses 
Estimated time 5 second(s) 
Task #56 end 

Task #57 start
Description: Setting default global profile registry variables 
Estimated time 1 second(s) 
Task #57 end 

Task #58 start
Description: Initializing instance list 
Estimated time 5 second(s) 
Task #58 end 

Task #59 start
Description: Updating global profile registry 
Estimated time 3 second(s) 
Task #59 end 

The execution completed with warnings.

For more information see the DB2 installation log at
"/tmp/db2_install.log.7770".

That is quite a lot of output. There is one warning in the log file under /tmp:

WARNING: DBI20105E  An error occurred while installing the following file set:
"TSAMP". Because these files were not successfully installed, functionality
that depends on these files might not work as expected.

As this seems to be related to Tivoli System Automation, I don’t care. As an additional step you can validate the installation:

sles15-db2-root:~/server_dec $ /opt/ibm/db2/V11.5/bin/db2val
DBI1379I  The db2val command is running. This can take several minutes.

DBI1335I  Installation file validation for the DB2 copy installed at
      /opt/ibm/db2/V11.5 was successful.

DBI1343I  The db2val command completed successfully. For details, see
      the log file /tmp/db2val-220120_120906.log.

A root installation does not create an instance automatically; this is a manual post-installation task. Before doing that we need to create the groups and users:

sles15-db2-root:~ $ groupadd -g 999 db2iadm1
sles15-db2-root:~ $ groupadd -g 998 db2fsdm1
sles15-db2-root:~ $ groupadd -g 997 dasadm1
sles15-db2-root:~ $ useradd -u 1004 -g db2iadm1 -m -d /home/db2inst1 db2inst1 
sles15-db2-root:~ $ useradd -u 1003 -g db2fsdm1 -m -d /home/db2fenc1 db2fenc1 
sles15-db2-root:~ $ useradd -u 1002 -g dasadm1 -m -d /home/dasusr1 dasusr1
sles15-db2-root:~ $ passwd db2inst1
sles15-db2-root:~ $ passwd db2fenc1
sles15-db2-root:~ $ passwd dasusr1

Next step: instance creation:

sles15-db2-root:~ $ /opt/ibm/db2/V11.5/instance/db2icrt -a server -u db2fenc1 db2inst1
DBI1446I  The db2icrt command is running.

DB2 installation is being initialized.

 Total number of tasks to be performed: 4 
Total estimated time for all tasks to be performed: 309 second(s) 

Task #1 start
Description: Setting default global profile registry variables 
Estimated time 1 second(s) 
Task #1 end 

Task #2 start
Description: Initializing instance list 
Estimated time 5 second(s) 
Task #2 end 

Task #3 start
Description: Configuring DB2 instances 
Estimated time 300 second(s) 
Task #3 end 

Task #4 start
Description: Updating global profile registry 
Estimated time 3 second(s) 
Task #4 end 

The execution completed successfully.

For more information see the DB2 installation log at "/tmp/db2icrt.log.32397".
DBI1070I  Program db2icrt completed successfully.

Have a look at the generated log file under /tmp, because it contains the port number for the instance:

sles15-db2-root:~ $ cat /tmp/db2icrt.log.32397
...
Required steps:
You can connect to the DB2 instance "db2inst1" using the port number "25010". Record it for future reference.
...
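
If you want to double-check the port later on, the instance service name should also show up in /etc/services and in the SVCENAME database manager parameter (a quick sketch, assuming the default db2c_db2inst1 naming):

grep db2inst1 /etc/services
db2 get dbm cfg | grep -i svcename     # run as db2inst1 after sourcing sqllib/db2profile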

All the files of the instance went to the home of the “db2inst1” user:

sles15-db2-root:~ $ ls -al /home/db2inst1/sqllib/
total 88
drwxrwsr-t 20 db2inst1 db2iadm1  4096 Jan 20 12:14 .
drwxr-xr-x  9 db2inst1 db2iadm1   182 Jan 20 12:14 ..
lrwxrwxrwx  1 root     db2iadm1     5 Jan 20 12:14 .32 -> lib32
lrwxrwxrwx  1 root     db2iadm1     5 Jan 20 12:14 .64 -> lib64
drwxr-sr-x  2 db2inst1 db2iadm1    23 Jun 11  2021 acs
drwxr-sr-x  2 db2inst1 db2iadm1  4096 Jan 20 12:14 adm
lrwxrwxrwx  1 root     db2iadm1    23 Jan 20 12:14 adsm -> /opt/ibm/db2/V11.5/adsm
drwxr-s---  2 db2inst1 db2iadm1     6 Jan 20 12:14 backup
lrwxrwxrwx  1 root     db2iadm1    22 Jan 20 12:14 bin -> /opt/ibm/db2/V11.5/bin
lrwxrwxrwx  1 root     db2iadm1    22 Jan 20 12:14 bnd -> /opt/ibm/db2/V11.5/bnd
drwxrwsr-t  2 db2inst1 db2iadm1  4096 Jan 20 12:14 cfg
drwxrwsr-t  2 db2inst1 db2iadm1    24 Jan 20 12:14 cfgcache
lrwxrwxrwx  1 root     db2iadm1    23 Jan 20 12:14 conv -> /opt/ibm/db2/V11.5/conv
drwxrwsr-t  2 db2inst1 db2iadm1    23 Jan 20 12:14 ctrl
drwxrwsr-t  2 db2inst1 db2iadm1     6 Jan 20 12:14 ctrlha
drwxrwsr-t  2 db2inst1 db2iadm1   148 Jan 20 12:14 dasfcn
-rwxr-xr-x  1 db2inst1 db2iadm1 13116 Jan 20 12:14 db2cshrc
drwxrwsrwt  3 db2inst1 db2iadm1    22 Jan 20 12:14 db2dump
-rw-rw-r--  1 db2inst1 db2iadm1    20 Jan 20 12:14 db2nodes.cfg
-rwxr-xr-x  1 db2inst1 db2iadm1  9088 Jan 20 12:14 db2profile
-rw-rw-r--  1 db2inst1 db2iadm1 28672 Jan 20 12:14 db2systm
drwxr-sr-x  5 db2inst1 db2iadm1    95 Jan 20 12:14 db2tss
lrwxrwxrwx  1 root     db2iadm1    22 Jan 20 12:14 doc -> /opt/ibm/db2/V11.5/doc
lrwxrwxrwx  1 root     db2iadm1    29 Jan 20 12:14 federation -> /opt/ibm/db2/V11.5/federation
-rw-r--r--  1 db2inst1 db2iadm1     4 Jan 20 12:14 .ftok
drwxrwsr-t  4 db2inst1 db2iadm1   127 Jan 20 12:14 function
lrwxrwxrwx  1 root     db2iadm1    22 Jan 20 12:14 gse -> /opt/ibm/db2/V11.5/gse
lrwxrwxrwx  1 root     db2iadm1    24 Jan 20 12:14 gskit -> /opt/ibm/db2/V11.5/gskit
lrwxrwxrwx  1 root     db2iadm1    26 Jan 20 12:14 include -> /opt/ibm/db2/V11.5/include
lrwxrwxrwx  1 root     db2iadm1    26 Jan 20 12:14 infopop -> /opt/ibm/db2/V11.5/infopop
-rw-rw-r--  1 db2inst1 db2iadm1     2 Jan 20 12:14 .insttype
-rw-r--r--  1 db2inst1 db2iadm1   120 Jan 20 12:14 .instuse
lrwxrwxrwx  1 root     db2iadm1    23 Jan 20 12:14 java -> /opt/ibm/db2/V11.5/java
lrwxrwxrwx  1 root     db2iadm1    23 Jan 20 12:14 json -> /opt/ibm/db2/V11.5/json
lrwxrwxrwx  1 root     db2iadm1     5 Jan 20 12:14 lib -> lib64
lrwxrwxrwx  1 root     db2iadm1    24 Jan 20 12:14 lib32 -> /opt/ibm/db2/V11.5/lib32
lrwxrwxrwx  1 root     db2iadm1    24 Jan 20 12:14 lib64 -> /opt/ibm/db2/V11.5/lib64
drwxrwsr-t  2 db2inst1 db2iadm1    26 Jan 20 12:14 log
lrwxrwxrwx  1 root     db2iadm1    22 Jan 20 12:14 map -> /opt/ibm/db2/V11.5/map
lrwxrwxrwx  1 root     db2iadm1    23 Jan 20 12:14 misc -> /opt/ibm/db2/V11.5/misc
lrwxrwxrwx  1 root     db2iadm1    22 Jan 20 12:14 msg -> /opt/ibm/db2/V11.5/msg
drwxrwsr-x  2 db2inst1 db2iadm1     6 Jan 20 12:14 .netls
lrwxrwxrwx  1 root     db2iadm1    21 Jan 20 12:14 pd -> /opt/ibm/db2/V11.5/pd
-rw-rw-r--  1 db2inst1 db2iadm1    16 Jan 20 12:14 profile.env
lrwxrwxrwx  1 root     db2iadm1    25 Jan 20 12:14 python -> /opt/ibm/db2/V11.5/python
lrwxrwxrwx  1 root     db2iadm1    20 Jan 20 12:14 R -> /opt/ibm/db2/V11.5/R
lrwxrwxrwx  1 root     db2iadm1    25 Jan 20 12:14 Readme -> /opt/ibm/db2/V11.5/Readme
lrwxrwxrwx  1 root     db2iadm1    26 Jan 20 12:14 samples -> /opt/ibm/db2/V11.5/samples
drwxr-sr-x  3 db2inst1 db2iadm1   119 Jan 20 12:14 security
drwxr-sr-x  3 db2inst1 db2iadm1    20 Jan 20 12:14 security32
drwxr-sr-x  3 db2inst1 db2iadm1    20 Jan 20 12:14 security64
drwxrwsrwx  2 db2inst1 db2iadm1     6 Jan 20 12:14 tmp
lrwxrwxrwx  1 root     db2iadm1    24 Jan 20 12:14 tools -> /opt/ibm/db2/V11.5/tools
drwxrwsrwx  2 db2inst1 db2iadm1     6 Jan 20 12:14 uif
-rwxr-xr-x  1 db2inst1 db2iadm1     0 Jan 20 12:14 usercshrc
-rwxr-xr-x  1 db2inst1 db2iadm1     0 Jan 20 12:14 userprofile

Let’s bring the instance up and create a database, exactly as in the last post:

sles15-db2-root:~ $ su - db2inst1
db2inst1@sles15-db2-root:~> db2ls

Install Path                       Level   Fix Pack   Special Install Number   Install Date                  Installer UID 
---------------------------------------------------------------------------------------------------------------------
/opt/ibm/db2/V11.5               11.5.6.0        0                            Thu Jan 20 12:05:28 2022 CET             0 

      
      

db2inst1@sles15-db2-root:~> . sqllib/userprofile 
db2inst1@sles15-db2-root:~> db2ilist 
db2inst1
db2inst1@sles15-db2-root:~> db2start
01/20/2022 12:20:41     0   0   SQL1063N  DB2START processing was successful.
SQL1063N  DB2START processing was successful.


db2inst1@sles15-db2-root:~> db2
(c) Copyright IBM Corporation 1993,2007
Command Line Processor for DB2 Client 11.5.6.0

You can issue database manager commands and SQL statements from the command 
prompt. For example:
    db2 => connect to sample
    db2 => bind sample.bnd

For general help, type: ?.
For command help, type: ? command, where command can be
the first few keywords of a database manager command. For example:
 ? CATALOG DATABASE for help on the CATALOG DATABASE command
 ? CATALOG          for help on all of the CATALOG commands.

To exit db2 interactive mode, type QUIT at the command prompt. Outside 
interactive mode, all commands must be prefixed with 'db2'.
To list the current command option settings, type LIST COMMAND OPTIONS.

For more detailed help, refer to the Online Reference Manual.

db2 => create database db1
DB20000I  The CREATE DATABASE command completed successfully.
db2 => connect to db1

   Database Connection Information

 Database server        = DB2/LINUXX8664 11.5.6.0
 SQL authorization ID   = DB2INST1
 Local database alias   = DB1

db2 => 

Finally we want this instance to start automatically when the system comes up:

db2inst1@sles15-db2-root:~> db2greg -getinstrec instancename='db2inst1'
Retrieved record:
   Service      = |DB2|
   Version      = |11.5.6.0|
   InstanceName = |db2inst1|
   InstancePath = |/home/db2inst1/sqllib|
   Usage        = |N/A|
   StartAtBoot  = 1
   Maintenance  = 0
   InstallPath  = |/opt/ibm/db2/V11.5|
   RemoteProf   = |N/A|
   Comment      = |N/A|
db2inst1@sles15-db2-root:~> db2iauto -on db2inst1 
db2inst1@sles15-db2-root:~> exit
sles15-db2-root:~ $ systemctl reboot

After rebooting we can confirm that the instance is started:

sles15-db2-root:~ # su - db2inst1
db2inst1@sles15-db2-root:~> ps -ef | grep db2
root       543     1  0 13:13 ?        00:00:00 /opt/ibm/db2/V11.5/bin/db2fmcd
root      1354     1  8 13:14 ?        00:00:00 db2wdog 0 [db2inst1]
db2inst1  1356  1354 10 13:14 ?        00:00:00 db2sysc 0
root      1425  1354  0 13:14 ?        00:00:00 db2ckpwd 0
root      1426  1354  0 13:14 ?        00:00:00 db2ckpwd 0
root      1427  1354  0 13:14 ?        00:00:00 db2ckpwd 0
db2inst1  1429  1354  1 13:14 ?        00:00:00 db2vend (PD Vendor Process - 1) 0
db2inst1  1438  1354  1 13:14 ?        00:00:00 db2acd 0 ,0,0,0,1,0,0,00000000,0,0,0000000000000000,0000000000000000,00000000,00000000,00000000,00000000,00000000,00000000,0000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,000000002d85b000,0000000000000000,0000000000000000,1,0,0,,,,,a89e68,14,1e014,2,0,1,0000000000041fc0,0x240000000,0x240000000,1600000,2,2,10
root      1458  1374  0 13:14 pts/0    00:00:00 su - db2inst1
db2inst1  1459  1458  0 13:14 pts/0    00:00:00 -bash
db2inst1  1867  1459 99 13:14 pts/0    00:00:00 ps -ef
db2inst1  1868  1459  0 13:14 pts/0    00:00:00 grep --color=auto db2
db2inst1@sles15-db2-root:~> db2
(c) Copyright IBM Corporation 1993,2007
Command Line Processor for DB2 Client 11.5.6.0

You can issue database manager commands and SQL statements from the command 
prompt. For example:
    db2 => connect to sample
    db2 => bind sample.bnd

For general help, type: ?.
For command help, type: ? command, where command can be
the first few keywords of a database manager command. For example:
 ? CATALOG DATABASE for help on the CATALOG DATABASE command
 ? CATALOG          for help on all of the CATALOG commands.

To exit db2 interactive mode, type QUIT at the command prompt. Outside 
interactive mode, all commands must be prefixed with 'db2'.
To list the current command option settings, type LIST COMMAND OPTIONS.

For more detailed help, refer to the Online Reference Manual.

db2 => connect to db1

   Database Connection Information

 Database server        = DB2/LINUXX8664 11.5.6.0
 SQL authorization ID   = DB2INST1
 Local database alias   = DB1

db2 => 

As this is on Linux there should be a systemd service definition somewhere:

db2inst1@sles15-db2-root:~> systemctl list-unit-files | grep db2
db2fmcd.service                                                           enabled         disabled     
db2inst1@sles15-db2-root:~> systemctl status db2fmcd.service
● db2fmcd.service - DB2 v11.5.6.0
     Loaded: loaded (/etc/systemd/system/db2fmcd.service; enabled; vendor preset: disabled)
     Active: active (running) since Thu 2022-01-20 13:13:16 CET; 1min 54s ago
   Main PID: 543 (db2fmcd)
      Tasks: 44
     CGroup: /system.slice/db2fmcd.service
             ├─ 543 /opt/ibm/db2/V11.5/bin/db2fmcd
             ├─1354 db2wdog 0 [db2inst1]
             ├─1356 db2sysc 0
             ├─1425 db2ckpwd 0
             ├─1426 db2ckpwd 0
             ├─1427 db2ckpwd 0
             ├─1429 db2vend (PD Vendor Process - 1) 0
             └─1438 db2acd 0 ,0,0,0,1,0,0,00000000,0,0,0000000000000000,0000000000000000,00000000,00000000,00000000,00000000,00000000,00000000,0000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,000000002d85b000,0000000000000000,0000000000000000,1,0,0,,,,,a89e68,14,1e014,2,0,1,0000000000041fc0,0x240000000,0x240000000,1600000,2,2,10

That’s it for the root installation. For me, personally, there is not much benefit compared to the non-root installation. For real systems, however, this is the way to go, especially when you check the limitations of the non-root installation.

This article Setting up IBM Db2 on Linux – root installation first appeared on Blog dbi services.

ODA KVM Machines : Adding a Network Interface on a Virtual Machine

Yann Neuhaus - Thu, 2022-01-20 10:55

In a previous blog, I was talking about virtual machines on ODA servers. Sometimes we want to add another network interface to such a VM for another purpose.
In this blog I am describing the main tasks to add a second network interface card to a KVM virtual machine. Let’s describe the context:
I have an ODA X8-2M with appliance 19.12. This ODA contains virtual machines for the applications. The virtual machine only has one NIC, used for the public connection. The idea is to add a second network interface that will be used for the backups.
-The ODA server : oda-dev-01
-The virtual machine : vmserver-appli
Let’s show the current network configuration for the virtual machine vmserver-appli. Only eth0 is present in the VM.

[root@vmserver-appli ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:e9:26:18 brd ff:ff:ff:ff:ff:ff
    inet 10.18.35.139/24 brd 10.18.35.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f7a:d225:d568:5afa/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@vmserver-appli ~]#

As we can see below, eth0 is currently using the pubnet vNetwork of the physical ODA server. The pubnet bridge is built on btbond1.

[root@oda-dev-01 ~]# odacli list-vnetworks
Name                  Type             Interface        Bridge                Uniform   Created                  Updated
--------------------  ---------------  ---------------  --------------------  --------  -----------------------  -----------------------
pubnet                Bridged          btbond1          pubnet                NO        2021-09-27 10:55:41      2021-09-27 10:55:41
                                                                                        CEST                     CEST

[root@oda-dev-01 ~]#

On the ODA there is a second bonding interface already configured, btbond2.

[root@oda-dev-01 network-scripts]# cat ifcfg-btbond2
# This file is automatically created by the ODA software.
DEVICE=btbond2
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
TYPE=BOND
IPV6INIT=no
NM_CONTROLLED=no
PEERDNS=no
BONDING_OPTS="mode=active-backup miimon=100 primary=p7p3"
[root@oda-dev-01 network-scripts]#

As we can see, btbond2 is not used yet, unlike btbond1 which is used for the pubnet vNetwork:

[root@oda-dev-01 network-scripts]# cat ifcfg-btbond1
# This file is automatically created by the ODA software.
DEVICE=btbond1
ONBOOT=yes
USERCTL=no
TYPE=BOND
IPV6INIT=no
NM_CONTROLLED=no
PEERDNS=no
BONDING_OPTS="mode=active-backup miimon=100 primary=p7p1"
BRIDGE=pubnet
[root@oda-dev-01 network-scripts]#

So the idea is to create a backupnet vNetwork based on btbond2. Later, the backupnet vNetwork will be attached to the virtual machine. The command odacli create-vnetwork is used to create backupnet:

[root@oda-dev-01 network-scripts]# odacli create-vnetwork --name backupnet --bridge backupnet --type bridged --interface btbond2 --ip 10.18.45.55 --gateway 10.18.45.1 --netmask 255.255.255.128

Job details
----------------------------------------------------------------
                     ID:  f3490869-1d38-4562-928f-644e971b24cd
            Description:  vNetwork backupnet creation
                 Status:  Created
                Created:  January 20, 2022 10:39:12 AM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@oda-dev-01 network-scripts]#

We verify that the job completed successfully:

[root@oda-dev-01 network-scripts]# odacli describe-job -i f3490869-1d38-4562-928f-644e971b24cd

Job details
----------------------------------------------------------------
                     ID:  f3490869-1d38-4562-928f-644e971b24cd
            Description:  vNetwork backupnet creation
                 Status:  Success
                Created:  January 20, 2022 10:39:12 AM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate Virtual Network doesn't exist   January 20, 2022 10:39:12 AM CET    January 20, 2022 10:39:12 AM CET    Success
Validate interface to use exists         January 20, 2022 10:39:12 AM CET    January 20, 2022 10:39:12 AM CET    Success
Validate interfaces to create not exist  January 20, 2022 10:39:12 AM CET    January 20, 2022 10:39:12 AM CET    Success
Validate interface is not default        January 20, 2022 10:39:12 AM CET    January 20, 2022 10:39:12 AM CET    Success
Create bridge from interface             January 20, 2022 10:39:12 AM CET    January 20, 2022 10:39:18 AM CET    Success
Create metadata                          January 20, 2022 10:39:18 AM CET    January 20, 2022 10:39:18 AM CET    Success
Persist metadata                         January 20, 2022 10:39:18 AM CET    January 20, 2022 10:39:18 AM CET    Success

[root@oda-dev-01 network-scripts]#

And we can then list the vNetworks on the ODA:

[root@oda-dev-01 network-scripts]# odacli list-vnetworks
Name                  Type             Interface        Bridge                Uniform   Created                  Updated
--------------------  ---------------  ---------------  --------------------  --------  -----------------------  -----------------------
pubnet                Bridged          btbond1          pubnet                NO        2021-09-27 10:55:41      2021-09-27 10:55:41
                                                                                        CEST                     CEST
backupnet             Bridged          btbond2          backupnet             NO        2022-01-20 10:39:18 CET  2022-01-20 10:39:18 CET

[root@oda-dev-01 network-scripts]#

Let’s describe the VM to which I want to add a second network interface. We can see that only one vNetwork (pubnet) is attached to the VM:

[root@oda-dev-01 network-scripts]# odacli describe-vm -n vmserver-appli
VM details
--------------------------------------------------------------------------------
                       ID:  14f10064-a048-419d-a850-18b1d9b2b69e
                     Name:  vmserver-appli
                  Created:  2021-10-21 14:51:50 CEST
                  Updated:  2021-10-21 14:51:50 CEST
               VM Storage:  storevms
              Description:  Web site standby
            VM image path:  /u05/app/sharedrepo/storevms/.ACFS/snaps/vm_vmserver-appli/vmserver-appli
                  VM size:  20.00 GB
                   Source:  V1009690-01.iso
                  OS Type:  NONE
               OS Variant:  NONE
        Graphics settings:  vnc,listen=127.0.0.1
             Display Port:  127.0.0.1:1

 Status
--------------------------
             Current node:  oda-dev-01
            Current state:  ONLINE
             Target state:  ONLINE

 Parameters
--------------------------
           Preferred node:  NONE
              Boot option:  NONE
               Auto start:  YES
                Fail over:  NO
             NUMA enabled:  NO

                            Config                     Live
                            -------------------------  -------------------------
                   Memory:  4.00 GB                    4.00 GB
               Max Memory:  8.00 GB                    8.00 GB
               vCPU count:  3                          3
           Max vCPU count:  6                          6
                 CPU Pool:  cpuvms                     cpuvms
        Effective CPU set:  1-9,17-25,33-41,49-57      1-9,17-25,33-41,49-57
                    vCPUs:  0:1-9,17-25,33-41,49-57    0:1-9,17-25,33-41,49-57
                            1:1-9,17-25,33-41,49-57    1:1-9,17-25,33-41,49-57
                            2:1-9,17-25,33-41,49-57    2:1-9,17-25,33-41,49-57
                   vDisks:  vdisk2webs21:vdc           vdisk2webs21:vdc
                            vdisk1webs21:vdb           vdisk1webs21:vdb
                vNetworks:  pubnet:52:54:00:e9:26:18   pubnet:52:54:00:e9:26:18

[root@oda-dev-01 network-scripts]#

Again, on the virtual machine only one NIC (eth0) is configured:

[root@vmserver-appli ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:e9:26:18 brd ff:ff:ff:ff:ff:ff
    inet 10.18.35.139/24 brd 10.18.35.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f7a:d225:d568:5afa/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@vmserver-appli ~]#

To add a second network interface, let’s attach the backupnet vNetwork to the VM. Just note the use of both --live and --config:
--live: The change is applied to the running VM instance, but does not persist after the VM stops.
--config: The change persists in the configuration of the VM instance and takes effect on subsequent reboots.

[root@oda-dev-01 network-scripts]# odacli modify-vm -n vmserver-appli -avn backupnet --live --config

Job details
----------------------------------------------------------------
                     ID:  8da2a4cc-528c-47d9-bcb0-4481010c73f3
            Description:  VM vmserver-appli modification
                 Status:  Created
                Created:  January 20, 2022 10:48:25 AM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@oda-dev-01 network-scripts]#

Let’s verify that the job finished with status Success:

[root@oda-dev-01 network-scripts]# odacli describe-job -i 8da2a4cc-528c-47d9-bcb0-4481010c73f3

Job details
----------------------------------------------------------------
                     ID:  8da2a4cc-528c-47d9-bcb0-4481010c73f3
            Description:  VM vmserver-appli modification
                 Status:  Success
                Created:  January 20, 2022 10:48:25 AM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate VM exists                       January 20, 2022 10:48:25 AM CET    January 20, 2022 10:48:25 AM CET    Success
Validate network attachment pre-reqs     January 20, 2022 10:48:25 AM CET    January 20, 2022 10:48:25 AM CET    Success
Attach vNetworks                         January 20, 2022 10:48:25 AM CET    January 20, 2022 10:48:25 AM CET    Success
Save configuration in ACFS               January 20, 2022 10:48:25 AM CET    January 20, 2022 10:48:25 AM CET    Success
Save live VM configuration in ACFS       January 20, 2022 10:48:25 AM CET    January 20, 2022 10:48:26 AM CET    Success
Modify VM metadata                       January 20, 2022 10:48:26 AM CET    January 20, 2022 10:48:26 AM CET    Success
Persist metadata                         January 20, 2022 10:48:26 AM CET    January 20, 2022 10:48:26 AM CET    Success

[root@oda-dev-01 network-scripts]#

Describing the VM again, we can see that the backupnet vNetwork is now attached and persistent:

[root@oda-dev-01 network-scripts]# odacli describe-vm -n vmserver-appli
VM details
--------------------------------------------------------------------------------
                       ID:  14f10064-a048-419d-a850-18b1d9b2b69e
                     Name:  vmserver-appli
                  Created:  2021-10-21 14:51:50 CEST
                  Updated:  2022-01-20 10:48:26 CET
               VM Storage:  storevms
              Description:  Web site standby
            VM image path:  /u05/app/sharedrepo/storevms/.ACFS/snaps/vm_vmserver-appli/vmserver-appli
                  VM size:  20.00 GB
                   Source:  V1009690-01.iso
                  OS Type:  NONE
               OS Variant:  NONE
        Graphics settings:  vnc,listen=127.0.0.1
             Display Port:  127.0.0.1:1

 Status
--------------------------
             Current node:  oda-dev-01
            Current state:  ONLINE
             Target state:  ONLINE

 Parameters
--------------------------
           Preferred node:  NONE
              Boot option:  NONE
               Auto start:  YES
                Fail over:  NO
             NUMA enabled:  NO

                            Config                     Live
                            -------------------------  -------------------------
                   Memory:  4.00 GB                    4.00 GB
               Max Memory:  8.00 GB                    8.00 GB
               vCPU count:  3                          3
           Max vCPU count:  6                          6
                 CPU Pool:  cpuvms                     cpuvms
        Effective CPU set:  1-9,17-25,33-41,49-57      1-9,17-25,33-41,49-57
                    vCPUs:  0:1-9,17-25,33-41,49-57    0:1-9,17-25,33-41,49-57
                            1:1-9,17-25,33-41,49-57    1:1-9,17-25,33-41,49-57
                            2:1-9,17-25,33-41,49-57    2:1-9,17-25,33-41,49-57
                   vDisks:  vdisk2webs21:vdc           vdisk2webs21:vdc
                            vdisk1webs21:vdb           vdisk1webs21:vdb
                vNetworks:  backupnet:52:54:00:b2:ae:94  backupnet:52:54:00:b2:ae:94
                            pubnet:52:54:00:e9:26:18   pubnet:52:54:00:e9:26:18

[root@oda-dev-01 network-scripts]#

Connecting to the VM, we can see that there is now a second network interface card, eth1:

[root@vmserver-appli ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:e9:26:18 brd ff:ff:ff:ff:ff:ff
    inet 10.18.35.139/24 brd 10.18.35.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f7a:d225:d568:5afa/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: eth1:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:b2:ae:94 brd ff:ff:ff:ff:ff:ff
[root@vmserver-appli ~]#

Now you can configure this second network card according to your needs.
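
For a RHEL/OL-style guest like this one, a minimal sketch of such a configuration file could look like the following (the IP address and the exact values are assumptions to adapt to your backup network), followed by a restart of the network service or an ifup eth1:

# /etc/sysconfig/network-scripts/ifcfg-eth1 -- hypothetical values
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.18.45.60
NETMASK=255.255.255.128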

Conclusion

I hope this blog will help. One recommendation: always restart your VMs (odacli stop-vm and then start them again) after modifications; this will validate whether your changes are really persistent.
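
Something along these lines should do it, using the VM from this post:

odacli stop-vm -n vmserver-appli
odacli start-vm -n vmserver-appli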

This article ODA KVM Machines : Adding a Network Interface on a Virtual Machine first appeared on Blog dbi services.

The Contents of the Database Controlfile -- 3 : Recreating the Controlfile

Hemant K Chitale - Thu, 2022-01-20 03:57

 In my previous posts, I have shown the Logical Structure and the Physical Contents of the Controlfile.  It is clear that some of the sections (e.g. for "ARCHIVED LOG" entries) have many slots (or "records") not all of which are currently in use.
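
A quick way to see how full each section is, is to derive a usage percentage from v$controlfile_record_section (a small helper query, not part of the original listings):

select type, records_used, records_total,
       round(100*records_used/nullif(records_total,0),1) as pct_used  -- nullif guards against empty sections
from v$controlfile_record_section
order by pct_used desc;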

What happens if you recreate the controlfile?  Since my first post, I have generated some more Backup Pieces and ArchiveLogs (and purged ArchiveLogs), so the RECORDS_USED or INDEXes of "BACKUP DATAFILE", "BACKUP PIECE" and "ARCHIVED LOG" in this listing are different from the first listing.



SQL> select *
2 from v$controlfile_record_section
3 order by type
4 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
ACM OPERATION 104 64 11 0 0 0 0
ARCHIVED LOG 584 383 383 198 197 1444 0
AUXILIARY DATAFILE COPY 584 128 0 0 0 0 0
BACKUP CORRUPTION 44 1115 0 0 0 0 0
BACKUP DATAFILE 200 1063 210 1 210 210 0
BACKUP PIECE 780 1006 160 1 160 160 0
BACKUP REDOLOG 76 215 215 51 50 265 0
BACKUP SET 96 1022 149 1 149 149 0
BACKUP SPFILE 124 131 33 1 33 33 0
CKPT PROGRESS 8180 11 0 0 0 0 0
COPY CORRUPTION 40 1227 0 0 0 0 0
DATABASE 316 1 1 0 0 0 0
DATABASE BLOCK CORRUPTION 80 8384 0 0 0 0 0
DATABASE INCARNATION 56 292 4 1 4 4 0
DATAFILE 520 1024 34 0 0 1532 0
DATAFILE COPY 736 1000 3 1 3 3 0
DATAFILE HISTORY 568 57 0 0 0 0 0
DELETED OBJECT 20 818 818 14 13 831 0
FILENAME 524 4146 29 0 0 0 0
FLASHBACK LOG 84 2048 2 0 0 0 0
FOREIGN ARCHIVED LOG 604 1002 0 0 0 0 0
GUARANTEED RESTORE POINT 256 2048 1 0 0 2 0
INSTANCE SPACE RESERVATION 28 1055 1 0 0 0 0
LOG HISTORY 56 292 292 54 53 637 0
MTTR 100 8 1 0 0 0 0
MULTI INSTANCE REDO APPLY 556 1 0 0 0 0 0
OFFLINE RANGE 200 1063 696 1 696 696 0
PDB RECORD 780 10 5 0 0 26 0
PDBINC RECORD 144 113 0 0 0 0 0
PROXY COPY 928 1004 0 0 0 0 0
RECOVERY DESTINATION 180 1 1 0 0 0 0
REDO LOG 72 16 4 0 0 15 0
REDO THREAD 256 8 1 0 0 0 0
REMOVABLE RECOVERY FILES 32 1000 6 0 0 0 0
RESTORE POINT 256 2108 0 0 0 0 0
RMAN CONFIGURATION 1108 50 3 0 0 7 0
RMAN STATUS 116 141 141 98 97 379 0
STANDBY DATABASE MATRIX 400 128 128 0 0 0 0
TABLESPACE 180 1024 22 0 0 75 0
TABLESPACE KEY HISTORY 108 151 0 0 0 0 0
TEMPORARY FILENAME 56 1024 7 0 0 36 0
THREAD INSTANCE NAME MAPPING 80 8 8 0 0 0 0

42 rows selected.

SQL>
SQL> alter database backup controlfile to trace as '/home/oracle/controlfile_creation.txt';

Database altered.

SQL>

SQL> !cat /home/oracle/controlfile_creation.txt
-- The following are current System-scope REDO Log Archival related
-- parameters and can be included in the database initialization file.
--
-- LOG_ARCHIVE_DEST=''
-- LOG_ARCHIVE_DUPLEX_DEST=''
--
-- LOG_ARCHIVE_FORMAT=%t_%s_%r.dbf
--
-- DB_UNIQUE_NAME="ORCLCDB"
--
-- LOG_ARCHIVE_CONFIG='SEND, RECEIVE, NODG_CONFIG'
-- LOG_ARCHIVE_MAX_PROCESSES=4
-- STANDBY_FILE_MANAGEMENT=MANUAL
-- FAL_CLIENT=''
-- FAL_SERVER=''
--
-- LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
-- LOG_ARCHIVE_DEST_1='OPTIONAL REOPEN=300 NODELAY'
-- LOG_ARCHIVE_DEST_1='ARCH NOAFFIRM NOVERIFY SYNC'
-- LOG_ARCHIVE_DEST_1='REGISTER'
-- LOG_ARCHIVE_DEST_1='NOALTERNATE'
-- LOG_ARCHIVE_DEST_1='NODEPENDENCY'
-- LOG_ARCHIVE_DEST_1='NOMAX_FAILURE NOQUOTA_SIZE NOQUOTA_USED NODB_UNIQUE_NAME'
-- LOG_ARCHIVE_DEST_1='VALID_FOR=(PRIMARY_ROLE,ONLINE_LOGFILES)'
-- LOG_ARCHIVE_DEST_STATE_1=ENABLE

--
-- Below are two sets of SQL statements, each of which creates a new
-- control file and uses it to open the database. The first set opens
-- the database with the NORESETLOGS option and should be used only if
-- the current versions of all online logs are available. The second
-- set opens the database with the RESETLOGS option and should be used
-- if online logs are unavailable.
-- The appropriate set of statements can be copied from the trace into
-- a script file, edited as necessary, and executed when there is a
-- need to re-create the control file.
--
-- Set #1. NORESETLOGS case
--
-- The following commands will create a new control file and use it
-- to open the database.
-- Data used by Recovery Manager will be lost.
-- Additional logs may be required for media recovery of offline
-- Use this only if the current versions of all online logs are
-- available.

-- After mounting the created controlfile, the following SQL
-- statement will place the database in the appropriate
-- protection mode:
-- ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "ORCLCDB" NORESETLOGS FORCE LOGGING ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 1024
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/opt/oracle/oradata/ORCLCDB/redo01.log' SIZE 200M BLOCKSIZE 512,
GROUP 2 '/opt/oracle/oradata/ORCLCDB/redo02.log' SIZE 200M BLOCKSIZE 512,
GROUP 3 '/opt/oracle/oradata/ORCLCDB/redo03.log' SIZE 200M BLOCKSIZE 512
-- STANDBY LOGFILE
-- GROUP 4 '/opt/oracle/oradata/ORCLCDB/stdbredo01.log' SIZE 200M BLOCKSIZE 512
DATAFILE
'/opt/oracle/oradata/ORCLCDB/system01.dbf',
'/opt/oracle/oradata/ORCLCDB/sysaux01.dbf',
'/opt/oracle/oradata/ORCLCDB/undotbs01.dbf',
'/opt/oracle/oradata/ORCLCDB/pdbseed/system01.dbf',
'/opt/oracle/oradata/ORCLCDB/pdbseed/sysaux01.dbf',
'/opt/oracle/oradata/ORCLCDB/users01.dbf',
'/opt/oracle/oradata/ORCLCDB/pdbseed/undotbs01.dbf',
'/opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf',
'/opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf',
'/opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf',
'/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf',
'/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_data_min_j2p8z0qn_.dbf',
'/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_system_j80dl8qy_.dbf',
'/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_sysaux_j80dl94j_.dbf',
'/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_undotbs1_j80dl96d_.dbf',
'/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_rman_tbs_j80f07n8_.dbf',
'/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoo5nr_.dbf',
'/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoojqn_.dbf',
'/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_tpcctab_jyl0mzsp_.dbf'
CHARACTER SET AL32UTF8
;

-- Configure RMAN configuration record 1
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('ARCHIVELOG DELETION POLICY','TO BACKED UP 1 TIMES TO DISK');
-- Configure RMAN configuration record 2
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET');
-- Configure RMAN configuration record 3
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO REDUNDANCY 2');
-- Commands to re-create incarnation table
-- Below log names MUST be changed to existing filenames on
-- disk. Any one log file from each branch can be used to
-- re-create incarnation records.
-- ALTER DATABASE REGISTER LOGFILE '/opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_1_%u_.arc';
-- ALTER DATABASE REGISTER LOGFILE '/opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_1_%u_.arc';
-- ALTER DATABASE REGISTER LOGFILE '/opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_1_%u_.arc';
-- ALTER DATABASE REGISTER LOGFILE '/opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_1_%u_.arc';
-- Recovery is required if any of the datafiles are restored backups,
-- or if the last shutdown was not normal or immediate.
RECOVER DATABASE

-- All logs need archiving and a log switch is needed.
ALTER SYSTEM ARCHIVE LOG ALL;

-- Database can now be opened normally.
ALTER DATABASE OPEN;

-- Open all the PDBs.
ALTER PLUGGABLE DATABASE ALL OPEN;

-- Commands to add tempfiles to temporary tablespaces.
-- Online tempfiles have complete space information.
-- Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/temp01.dbf'
SIZE 138412032 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
ALTER SESSION SET CONTAINER = "PDB$SEED";
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/pdbseed/temp012019-05-04_23-32-15-038-PM.dbf'
SIZE 76546048 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
ALTER SESSION SET CONTAINER = "ORCLPDB1";
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/temp01.dbf'
SIZE 135266304 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/temp02.dbf'
SIZE 52428800 REUSE AUTOEXTEND OFF;
ALTER SESSION SET CONTAINER = "RMANCAT";
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_temp_j80dl97t_.dbf'
SIZE 77594624 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
ALTER SESSION SET CONTAINER = "CDB$ROOT";
-- End of tempfile additions.
--
--
--
----------------------------------------------------------
-- The following script can be used on the standby database
-- to re-populate entries for a standby controlfile created
-- on the primary and copied to the standby site.
----------------------------------------------------------
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 '/opt/oracle/oradata/ORCLCDB/stdbredo01.log'
SIZE 200M BLOCKSIZE 512 REUSE;
-- Set #2. RESETLOGS case
--
-- The following commands will create a new control file and use it
-- to open the database.
-- Data used by Recovery Manager will be lost.
-- The contents of online logs will be lost and all backups will
-- be invalidated. Use this only if online logs are damaged.

-- After mounting the created controlfile, the following SQL
-- statement will place the database in the appropriate
-- protection mode:
-- ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "ORCLCDB" RESETLOGS FORCE LOGGING ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 1024
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/opt/oracle/oradata/ORCLCDB/redo01.log' SIZE 200M BLOCKSIZE 512,
GROUP 2 '/opt/oracle/oradata/ORCLCDB/redo02.log' SIZE 200M BLOCKSIZE 512,
GROUP 3 '/opt/oracle/oradata/ORCLCDB/redo03.log' SIZE 200M BLOCKSIZE 512
-- STANDBY LOGFILE
-- GROUP 4 '/opt/oracle/oradata/ORCLCDB/stdbredo01.log' SIZE 200M BLOCKSIZE 512
DATAFILE
'/opt/oracle/oradata/ORCLCDB/system01.dbf',
'/opt/oracle/oradata/ORCLCDB/sysaux01.dbf',
'/opt/oracle/oradata/ORCLCDB/undotbs01.dbf',
'/opt/oracle/oradata/ORCLCDB/pdbseed/system01.dbf',
'/opt/oracle/oradata/ORCLCDB/pdbseed/sysaux01.dbf',
'/opt/oracle/oradata/ORCLCDB/users01.dbf',
'/opt/oracle/oradata/ORCLCDB/pdbseed/undotbs01.dbf',
'/opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf',
'/opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf',
'/opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf',
'/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf',
'/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_data_min_j2p8z0qn_.dbf',
'/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_system_j80dl8qy_.dbf',
'/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_sysaux_j80dl94j_.dbf',
'/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_undotbs1_j80dl96d_.dbf',
'/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_rman_tbs_j80f07n8_.dbf',
'/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoo5nr_.dbf',
'/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoojqn_.dbf',
'/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_tpcctab_jyl0mzsp_.dbf'
CHARACTER SET AL32UTF8
;

-- Configure RMAN configuration record 1
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('ARCHIVELOG DELETION POLICY','TO BACKED UP 1 TIMES TO DISK');
-- Configure RMAN configuration record 2
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET');
-- Configure RMAN configuration record 3
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO REDUNDANCY 2');
-- Commands to re-create incarnation table
-- Below log names MUST be changed to existing filenames on
-- disk. Any one log file from each branch can be used to
-- re-create incarnation records.
-- ALTER DATABASE REGISTER LOGFILE '/opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_1_%u_.arc';
-- ALTER DATABASE REGISTER LOGFILE '/opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_1_%u_.arc';
-- ALTER DATABASE REGISTER LOGFILE '/opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_1_%u_.arc';
-- ALTER DATABASE REGISTER LOGFILE '/opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_1_%u_.arc';
-- Recovery is required if any of the datafiles are restored backups,
-- or if the last shutdown was not normal or immediate.
RECOVER DATABASE USING BACKUP CONTROLFILE

-- Database can now be opened zeroing the online logs.
ALTER DATABASE OPEN RESETLOGS;

-- Open all the PDBs.
ALTER PLUGGABLE DATABASE ALL OPEN;

-- Commands to add tempfiles to temporary tablespaces.
-- Online tempfiles have complete space information.
-- Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/temp01.dbf'
SIZE 138412032 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
ALTER SESSION SET CONTAINER = "PDB$SEED";
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/pdbseed/temp012019-05-04_23-32-15-038-PM.dbf'
SIZE 76546048 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
ALTER SESSION SET CONTAINER = "ORCLPDB1";
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/temp01.dbf'
SIZE 135266304 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/temp02.dbf'
SIZE 52428800 REUSE AUTOEXTEND OFF;
ALTER SESSION SET CONTAINER = "RMANCAT";
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_temp_j80dl97t_.dbf'
SIZE 77594624 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
ALTER SESSION SET CONTAINER = "CDB$ROOT";
-- End of tempfile additions.
--
--
--
----------------------------------------------------------
-- The following script can be used on the standby database
-- to re-populate entries for a standby controlfile created
-- on the primary and copied to the standby site.
----------------------------------------------------------
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 '/opt/oracle/oradata/ORCLCDB/stdbredo01.log'
SIZE 200M BLOCKSIZE 512 REUSE;

SQL>


The trace generated by the "alter database backup controlfile to trace ..." command contains the SQL statements that can be used to recreate the controlfile, together with a number of useful comments.
Essentially, there are two sets of SQL commands.

The first set, titled "Set #1" is for the NORESETLOGS scenario where all the Online Redo Log files are available.
The second set, titled "Set #2" is for a RESETLOGS scenario where Incomplete Recovery is required because not all the Online Redo Log files are available.

In my case, I have all my Online Redo Log files so I do not lose any data and I can use the commands from Set #1.
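As an aside, a trace like this can also be written straight to a named file; a minimal sketch follows (the target path is only an illustration, not the path used for this demo):

-- illustrative path only
alter database backup controlfile to trace as '/tmp/Recreate_Controlfile_trace.SQL';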

Recreating the Controlfile :


SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>

SQL> @Recreate_Controlfile.SQL
SQL> spool Recreate_Controlfile
SQL>
SQL> STARTUP NOMOUNT
ORACLE instance started.

Total System Global Area 1207955544 bytes
Fixed Size 9134168 bytes
Variable Size 671088640 bytes
Database Buffers 520093696 bytes
Redo Buffers 7639040 bytes
SQL> CREATE CONTROLFILE REUSE DATABASE "ORCLCDB" NORESETLOGS FORCE LOGGING ARCHIVELOG
2 MAXLOGFILES 16
3 MAXLOGMEMBERS 3
4 MAXDATAFILES 1024
5 MAXINSTANCES 8
6 MAXLOGHISTORY 292
7 LOGFILE
8 GROUP 1 '/opt/oracle/oradata/ORCLCDB/redo01.log' SIZE 200M BLOCKSIZE 512,
9 GROUP 2 '/opt/oracle/oradata/ORCLCDB/redo02.log' SIZE 200M BLOCKSIZE 512,
10 GROUP 3 '/opt/oracle/oradata/ORCLCDB/redo03.log' SIZE 200M BLOCKSIZE 512
11 -- STANDBY LOGFILE
12 -- GROUP 4 '/opt/oracle/oradata/ORCLCDB/stdbredo01.log' SIZE 200M BLOCKSIZE 512
13 DATAFILE
14 '/opt/oracle/oradata/ORCLCDB/system01.dbf',
15 '/opt/oracle/oradata/ORCLCDB/sysaux01.dbf',
16 '/opt/oracle/oradata/ORCLCDB/undotbs01.dbf',
17 '/opt/oracle/oradata/ORCLCDB/pdbseed/system01.dbf',
18 '/opt/oracle/oradata/ORCLCDB/pdbseed/sysaux01.dbf',
19 '/opt/oracle/oradata/ORCLCDB/users01.dbf',
20 '/opt/oracle/oradata/ORCLCDB/pdbseed/undotbs01.dbf',
21 '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf',
22 '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf',
23 '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf',
24 '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf',
25 '/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_data_min_j2p8z0qn_.dbf',
26 '/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_system_j80dl8qy_.dbf',
27 '/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_sysaux_j80dl94j_.dbf',
28 '/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_undotbs1_j80dl96d_.dbf',
29 '/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_rman_tbs_j80f07n8_.dbf',
30 '/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoo5nr_.dbf',
31 '/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoojqn_.dbf',
32 '/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_tpcctab_jyl0mzsp_.dbf'
33 CHARACTER SET AL32UTF8
34 ;

Control file created.

SQL>
SQL> -- Configure RMAN configuration record 1
SQL> VARIABLE RECNO NUMBER;
SQL> EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('ARCHIVELOG DELETION POLICY','TO BACKED UP 1 TIMES TO DISK');

PL/SQL procedure successfully completed.

SQL> -- Configure RMAN configuration record 2
SQL> VARIABLE RECNO NUMBER;
SQL> EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET');

PL/SQL procedure successfully completed.

SQL> -- Configure RMAN configuration record 3
SQL> VARIABLE RECNO NUMBER;
SQL> EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO REDUNDANCY 2');

PL/SQL procedure successfully completed.

SQL> -- Recovery is required if any of the datafiles are restored backups,
SQL> -- or if the last shutdown was not normal or immediate.
SQL> -- I will be doing a SHUTDOWN IMMEDIATE so I do not need a RECOVER DATABASE
SQL> -- RECOVER DATABASE
SQL>
SQL> -- All logs need archiving and a log switch is needed.
SQL> ALTER SYSTEM ARCHIVE LOG ALL;

System altered.

SQL>
SQL> -- Database can now be opened normally.
SQL> ALTER DATABASE OPEN;

Database altered.

SQL>
SQL> -- Open all the PDBs.
SQL> ALTER PLUGGABLE DATABASE ALL OPEN;

Pluggable database altered.

SQL>
SQL> -- Commands to add tempfiles to temporary tablespaces.
SQL> -- Online tempfiles have complete space information.
SQL> -- Other tempfiles may require adjustment.
SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/temp01.dbf'
2 SIZE 138412032 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;

Tablespace altered.

SQL> ALTER SESSION SET CONTAINER = "PDB$SEED";

Session altered.

SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/pdbseed/temp012019-05-04_23-32-15-038-PM.dbf'
2 SIZE 76546048 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;

Tablespace altered.

SQL> ALTER SESSION SET CONTAINER = "ORCLPDB1";

Session altered.

SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/temp01.dbf'
2 SIZE 135266304 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;

Tablespace altered.

SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/temp02.dbf'
2 SIZE 52428800 REUSE AUTOEXTEND OFF;

Tablespace altered.

SQL> ALTER SESSION SET CONTAINER = "RMANCAT";

Session altered.

SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_temp_j80dl97t_.dbf'
2 SIZE 77594624 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;

Tablespace altered.

SQL> ALTER SESSION SET CONTAINER = "CDB$ROOT";

Session altered.

SQL> -- End of tempfile additions.
SQL>
SQL> spool off
SQL>
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 71
Next log sequence to archive 73
Current log sequence 73
SQL>


The ARCHIVE LOG LIST output shows that I did NOT have to do a RESETLOGS. The CREATE CONTROLFILE and ALTER DATABASE OPEN commands succeeded.

What do I now see in the Logical Entries in the Controlfile ?


SQL> select *
2 from v$controlfile_record_section
3 order by type
4 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
ACM OPERATION 104 64 11 0 0 0 0
ARCHIVED LOG 584 308 3 1 3 3 0
AUXILIARY DATAFILE COPY 584 128 0 0 0 0 0
BACKUP CORRUPTION 44 1115 0 0 0 0 0
BACKUP DATAFILE 200 1063 0 0 0 0 0
BACKUP PIECE 780 1006 0 0 0 0 0
BACKUP REDOLOG 76 215 0 0 0 0 0
BACKUP SET 96 1022 0 0 0 0 0
BACKUP SPFILE 124 131 0 0 0 0 0
CKPT PROGRESS 8180 11 0 0 0 0 0
COPY CORRUPTION 40 1227 0 0 0 0 0
DATABASE 316 1 1 0 0 0 0
DATABASE BLOCK CORRUPTION 80 8384 0 0 0 0 0
DATABASE INCARNATION 56 292 2 1 2 2 0
DATAFILE 520 1024 34 0 0 39 0
DATAFILE COPY 736 1000 0 0 0 0 0
DATAFILE HISTORY 568 57 0 0 0 0 0
DELETED OBJECT 20 818 0 0 0 0 0
FILENAME 524 4146 27 0 0 0 0
FLASHBACK LOG 84 2048 0 0 0 0 0
FOREIGN ARCHIVED LOG 604 1002 0 0 0 0 0
GUARANTEED RESTORE POINT 256 2048 0 0 0 0 0
INSTANCE SPACE RESERVATION 28 1055 1 0 0 0 0
LOG HISTORY 56 292 1 1 1 1 0
MTTR 100 8 1 0 0 0 0
MULTI INSTANCE REDO APPLY 556 1 0 0 0 0 0
OFFLINE RANGE 200 1063 12 1 12 12 0
PDB RECORD 780 10 4 0 0 6 0
PDBINC RECORD 144 113 0 0 0 0 0
PROXY COPY 928 1004 0 0 0 0 0
RECOVERY DESTINATION 180 1 1 0 0 0 0
REDO LOG 72 16 3 0 0 0 0
REDO THREAD 256 8 1 0 0 0 0
REMOVABLE RECOVERY FILES 32 1000 0 0 0 0 0
RESTORE POINT 256 2108 0 0 0 0 0
RMAN CONFIGURATION 1108 50 3 0 0 3 0
RMAN STATUS 116 141 0 0 0 0 0
STANDBY DATABASE MATRIX 400 128 128 0 0 0 0
TABLESPACE 180 1024 22 0 0 7 0
TABLESPACE KEY HISTORY 108 151 0 0 0 0 0
TEMPORARY FILENAME 56 1024 5 0 0 8 0
THREAD INSTANCE NAME MAPPING 80 8 8 0 0 0 0

42 rows selected.

SQL>


Recreating the Controlfile resulted in loss of information on all ArchivedLogs and RMAN Backups !
Compare the RECORDS_USED and INDEXes for the "ARCHIVED LOG" and "BACKUP %" entries against the listing obtained before recreating the Controlfile.  (A few ArchivedLogs were generated by the "ALTER SYSTEM ARCHIVE LOG ALL" command in the script itself).
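If you only want to watch those particular sections, a narrower query against the same view (a minimal sketch, using only columns already shown above) keeps the comparison easy:

select type, records_total, records_used, last_recid
from   v$controlfile_record_section
where  type = 'ARCHIVED LOG'
or     type like 'BACKUP%'
order by type
/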

Can I add that information back ?


RMAN> catalog start with '/opt/oracle/FRA/ORCLCDB';

using target database control file instead of recovery catalog
searching for all files that match the pattern /opt/oracle/FRA/ORCLCDB

List of Files Unknown to the Database
=====================================
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2021_09_26/o1_mf_s_1084311179_jo0xnh8o_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_16/o1_mf_s_1094143716_jy7pz5dd_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_16/o1_mf_s_1094143804_jy7q1xp1_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_16/o1_mf_s_1094145522_jy7rqmov_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094469527_jyko4r0n_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094469675_jyko9dj3_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094469696_jykob1pp_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094477941_jykxcrnx_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094481585_jyl0xoq0_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko09yl_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko0b3k_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T111944_jyko6m5w_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T111944_jyko7djl_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T111944_jyko98yc_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_ncnnf_TAG20220120T112134_jyko9zkl_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T133854_jykxch2m_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T133854_jykxchwb_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_16/o1_mf_nnndf_TAG20220116T164908_jy7q04z2_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_16/o1_mf_nnndf_TAG20220116T164908_jy7q08kj_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko1qll_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko20ty_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/88129263B99F4BBDE0530100007F7BDF/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko3xgo_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/88129263B99F4BBDE0530100007F7BDF/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko3y0y_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko33fq_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko33wn_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_60_jyl0zmoz_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_61_jyl10qd1_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_62_jyl11rbq_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_63_jyl12p73_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_64_jyl13l2w_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_65_jyl14s7l_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_66_jyl1qrjm_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_67_jyl81z9z_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_68_jyl861ll_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_69_jyl8l6mo_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_70_jyl8p3d5_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_71_jyl8szdf_.arc

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2021_09_26/o1_mf_s_1084311179_jo0xnh8o_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_16/o1_mf_s_1094143716_jy7pz5dd_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_16/o1_mf_s_1094143804_jy7q1xp1_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_16/o1_mf_s_1094145522_jy7rqmov_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094469527_jyko4r0n_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094469675_jyko9dj3_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094469696_jykob1pp_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094477941_jykxcrnx_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094481585_jyl0xoq0_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko09yl_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko0b3k_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T111944_jyko6m5w_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T111944_jyko7djl_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T111944_jyko98yc_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_ncnnf_TAG20220120T112134_jyko9zkl_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T133854_jykxch2m_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T133854_jykxchwb_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_16/o1_mf_nnndf_TAG20220116T164908_jy7q04z2_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_16/o1_mf_nnndf_TAG20220116T164908_jy7q08kj_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko1qll_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko20ty_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/88129263B99F4BBDE0530100007F7BDF/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko3xgo_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/88129263B99F4BBDE0530100007F7BDF/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko3y0y_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko33fq_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko33wn_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_60_jyl0zmoz_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_61_jyl10qd1_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_62_jyl11rbq_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_63_jyl12p73_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_64_jyl13l2w_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_65_jyl14s7l_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_66_jyl1qrjm_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_67_jyl81z9z_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_68_jyl861ll_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_69_jyl8l6mo_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_70_jyl8p3d5_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_71_jyl8szdf_.arc

RMAN>
RMAN> list backup summary;


List of Backups
===============
Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
1 B F A DISK 26-SEP-21 1 1 NO TAG20210926T213259
2 B F A DISK 16-JAN-22 1 1 NO TAG20220116T164836
3 B F A DISK 16-JAN-22 1 1 NO TAG20220116T165004
4 B F A DISK 16-JAN-22 1 1 NO TAG20220116T171842
5 B F A DISK 20-JAN-22 1 1 NO TAG20220120T111847
6 B F A DISK 20-JAN-22 1 1 NO TAG20220120T112115
7 B F A DISK 20-JAN-22 1 1 NO TAG20220120T112136
8 B F A DISK 20-JAN-22 1 1 NO TAG20220120T133901
9 B F A DISK 20-JAN-22 1 1 NO TAG20220120T143945
10 B F A DISK 20-JAN-22 1 1 YES TAG20220120T111624
11 B F A DISK 20-JAN-22 1 1 YES TAG20220120T111624
12 B A A DISK 20-JAN-22 1 1 YES TAG20220120T111944
13 B A A DISK 20-JAN-22 1 1 YES TAG20220120T111944
14 B A A DISK 20-JAN-22 1 1 YES TAG20220120T111944
15 B F A DISK 20-JAN-22 1 1 NO TAG20220120T112134
16 B A A DISK 20-JAN-22 1 1 YES TAG20220120T133854
17 B A A DISK 20-JAN-22 1 1 YES TAG20220120T133854
18 B F A DISK 16-JAN-22 1 1 NO TAG20220116T164908
19 B F A DISK 16-JAN-22 1 1 NO TAG20220116T164908
20 B F A DISK 20-JAN-22 1 1 YES TAG20220120T111624
21 B F A DISK 20-JAN-22 1 1 YES TAG20220120T111624
22 B F A DISK 20-JAN-22 1 1 YES TAG20220120T111624
23 B F A DISK 20-JAN-22 1 1 YES TAG20220120T111624
24 B F A DISK 20-JAN-22 1 1 YES TAG20220120T111624
25 B F A DISK 20-JAN-22 1 1 YES TAG20220120T111624

RMAN> list archivelog all;

List of Archived Log Copies for database with db_unique_name ORCLCDB
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
4 1 60 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_60_jyl0zmoz_.arc

5 1 61 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_61_jyl10qd1_.arc

6 1 62 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_62_jyl11rbq_.arc

7 1 63 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_63_jyl12p73_.arc

8 1 64 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_64_jyl13l2w_.arc

9 1 65 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_65_jyl14s7l_.arc

10 1 66 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_66_jyl1qrjm_.arc

11 1 67 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_67_jyl81z9z_.arc

12 1 68 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_68_jyl861ll_.arc

13 1 69 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_69_jyl8l6mo_.arc

14 1 70 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_70_jyl8p3d5_.arc

1 1 70 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_70_jylbnvgo_.arc

15 1 71 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_71_jyl8szdf_.arc

2 1 71 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_71_jylbo521_.arc

3 1 72 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_72_jylbofq3_.arc


RMAN>


Yes ! RMAN can "add the information" back into the Controlfile using the CATALOG command (also see this earlier post on the CATALOG command), as long as you know where the Backups and ArchiveLogs ARE ! (Another good reason to use a single FRA location !)
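Having re-catalogued everything, it is also worth asking RMAN to verify that the pieces it now knows about really do exist on disk. A short sketch (standard RMAN commands; output not shown here):

RMAN> crosscheck backup;
RMAN> crosscheck archivelog all;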


SQL> select *
2 from v$controlfile_record_section
3 order by type
4 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
ACM OPERATION 104 64 11 0 0 0 0
ARCHIVED LOG 584 308 15 1 15 15 0
AUXILIARY DATAFILE COPY 584 128 0 0 0 0 0
BACKUP CORRUPTION 44 1115 0 0 0 0 0
BACKUP DATAFILE 200 1063 37 1 37 37 0
BACKUP PIECE 780 1006 26 1 26 26 0
BACKUP REDOLOG 76 215 112 1 112 112 0
BACKUP SET 96 1022 26 1 26 26 0
BACKUP SPFILE 124 131 10 1 10 10 0
CKPT PROGRESS 8180 11 0 0 0 0 0
COPY CORRUPTION 40 1227 0 0 0 0 0
DATABASE 316 1 1 0 0 0 0
DATABASE BLOCK CORRUPTION 80 8384 0 0 0 0 0
DATABASE INCARNATION 56 292 2 1 2 2 0
DATAFILE 520 1024 34 0 0 39 0
DATAFILE COPY 736 1000 0 0 0 0 0
DATAFILE HISTORY 568 57 0 0 0 0 0
DELETED OBJECT 20 818 0 0 0 0 0
FILENAME 524 4146 28 0 0 0 0
FLASHBACK LOG 84 2048 0 0 0 0 0
FOREIGN ARCHIVED LOG 604 1002 0 0 0 0 0
GUARANTEED RESTORE POINT 256 2048 0 0 0 0 0
INSTANCE SPACE RESERVATION 28 1055 1 0 0 0 0
LOG HISTORY 56 292 1 1 1 1 0
MTTR 100 8 1 0 0 0 0
MULTI INSTANCE REDO APPLY 556 1 0 0 0 0 0
OFFLINE RANGE 200 1063 12 1 12 12 0
PDB RECORD 780 10 4 0 0 6 0
PDBINC RECORD 144 113 0 0 0 0 0
PROXY COPY 928 1004 0 0 0 0 0
RECOVERY DESTINATION 180 1 1 0 0 0 0
REDO LOG 72 16 3 0 0 0 0
REDO THREAD 256 8 1 0 0 0 0
REMOVABLE RECOVERY FILES 32 1000 8 0 0 0 0
RESTORE POINT 256 2108 0 0 0 0 0
RMAN CONFIGURATION 1108 50 3 0 0 3 0
RMAN STATUS 116 141 3 1 3 3 0
STANDBY DATABASE MATRIX 400 128 128 0 0 0 0
TABLESPACE 180 1024 22 0 0 7 0
TABLESPACE KEY HISTORY 108 151 0 0 0 0 0
TEMPORARY FILENAME 56 1024 5 0 0 8 0
THREAD INSTANCE NAME MAPPING 80 8 8 0 0 0 0

42 rows selected.

SQL>
SQL> select dest_id, standby_dest, deleted, status, count(*)
2 from v$archived_log
3 group by dest_id, standby_dest, deleted, status
4 order by 1,2,3,4
5 /

DEST_ID STA DEL S COUNT(*)
---------- --- --- - ----------
1 NO NO A 15

SQL>


The RECORDS_USED entries for "ARCHIVED LOG", "BACKUP DATAFILE" and "BACKUP PIECE" are now populated again.
I would NOT advise that you periodically recreate your Controlfile.

But this is an available method to "reset" the controlfile. I could also take this opportunity to increase controlfile limits like MAXLOGFILES, MAXLOGMEMBERS and MAXDATAFILES (or even MAXINSTANCES !).


SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> quit

SQL> @Recreate_Controlfile.SQL
SQL> spool Recreate_Controlfile
SQL>
SQL> STARTUP NOMOUNT
ORACLE instance started.

Total System Global Area 1207955544 bytes
Fixed Size 9134168 bytes
Variable Size 671088640 bytes
Database Buffers 520093696 bytes
Redo Buffers 7639040 bytes
SQL> CREATE CONTROLFILE REUSE DATABASE "ORCLCDB" NORESETLOGS FORCE LOGGING ARCHIVELOG
2 MAXLOGFILES 24
3 MAXLOGMEMBERS 4
4 MAXDATAFILES 2048
5 MAXINSTANCES 8
6 MAXLOGHISTORY 512
7 LOGFILE
8 GROUP 1 '/opt/oracle/oradata/ORCLCDB/redo01.log' SIZE 200M BLOCKSIZE 512,
9 GROUP 2 '/opt/oracle/oradata/ORCLCDB/redo02.log' SIZE 200M BLOCKSIZE 512,
10 GROUP 3 '/opt/oracle/oradata/ORCLCDB/redo03.log' SIZE 200M BLOCKSIZE 512
....
....
....


RMAN> catalog start with '/opt/oracle/FRA/ORCLCDB';
....
....
....



SQL> select *
2 from v$controlfile_record_section
3 order by type
4 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
ACM OPERATION 104 64 11 0 0 0 0
ARCHIVED LOG 584 532 18 1 18 18 0
AUXILIARY DATAFILE COPY 584 128 0 0 0 0 0
BACKUP CORRUPTION 44 1115 0 0 0 0 0
BACKUP DATAFILE 200 1063 37 1 37 37 0
BACKUP PIECE 780 1006 26 1 26 26 0
BACKUP REDOLOG 76 215 112 1 112 112 0
BACKUP SET 96 1022 26 1 26 26 0
BACKUP SPFILE 124 131 10 1 10 10 0
CKPT PROGRESS 8180 11 0 0 0 0 0
COPY CORRUPTION 40 1227 0 0 0 0 0
DATABASE 316 1 1 0 0 0 0
DATABASE BLOCK CORRUPTION 80 8384 0 0 0 0 0
DATABASE INCARNATION 56 292 2 1 2 2 0
DATAFILE 520 2048 34 0 0 39 0
DATAFILE COPY 736 1000 0 0 0 0 0
DATAFILE HISTORY 568 57 0 0 0 0 0
DELETED OBJECT 20 818 0 0 0 0 0
FILENAME 524 6242 27 0 0 0 0
FLASHBACK LOG 84 2048 0 0 0 0 0
FOREIGN ARCHIVED LOG 604 1002 0 0 0 0 0
GUARANTEED RESTORE POINT 256 2048 0 0 0 0 0
INSTANCE SPACE RESERVATION 28 1055 1 0 0 0 0
LOG HISTORY 56 584 1 1 1 1 0
MTTR 100 8 1 0 0 0 0
MULTI INSTANCE REDO APPLY 556 1 0 0 0 0 0
OFFLINE RANGE 200 1063 12 1 12 12 0
PDB RECORD 780 10 4 0 0 6 0
PDBINC RECORD 144 113 0 0 0 0 0
PROXY COPY 928 1004 0 0 0 0 0
RECOVERY DESTINATION 180 1 1 0 0 0 0
REDO LOG 72 24 3 0 0 0 0
REDO THREAD 256 8 1 0 0 0 0
REMOVABLE RECOVERY FILES 32 1000 9 0 0 0 0
RESTORE POINT 256 2108 0 0 0 0 0
RMAN CONFIGURATION 1108 50 3 0 0 3 0
RMAN STATUS 116 141 1 1 1 1 0
STANDBY DATABASE MATRIX 400 128 128 0 0 0 0
TABLESPACE 180 2048 22 0 0 7 0
TABLESPACE KEY HISTORY 108 151 0 0 0 0 0
TEMPORARY FILENAME 56 2048 5 0 0 8 0
THREAD INSTANCE NAME MAPPING 80 8 8 0 0 0 0

42 rows selected.

SQL>


Note how the RECORDS_TOTAL values have increased: "DATAFILE" from 1024 to 2048, "REDO LOG" from 16 to 24 and "LOG HISTORY" from 292 to 584, reflecting the higher MAXDATAFILES, MAXLOGFILES and MAXLOGHISTORY in the new CREATE CONTROLFILE statement.


Categories: DBA Blogs

Getting explain plans out of Db2

Yann Neuhaus - Thu, 2022-01-20 03:03

The last post about Db2 was about getting it up and running. As the issue we had to solve was about performance, getting an explain plan for the problematic statement(s) was the obvious next step. In PostgreSQL you can just use EXPLAIN for that, and you're done. In Db2 the procedure is a bit different, but once you know the tools, it is quite easy as well.

Let’s start by creating a new database:

db2@sles15db2:~> . sqllib/db2profile 
db2@sles15db2:~> db2
(c) Copyright IBM Corporation 1993,2007
Command Line Processor for DB2 Client 11.5.6.0

You can issue database manager commands and SQL statements from the command 
prompt. For example:
    db2 => connect to sample
    db2 => bind sample.bnd

For general help, type: ?.
For command help, type: ? command, where command can be
the first few keywords of a database manager command. For example:
 ? CATALOG DATABASE for help on the CATALOG DATABASE command
 ? CATALOG          for help on all of the CATALOG commands.

To exit db2 interactive mode, type QUIT at the command prompt. Outside 
interactive mode, all commands must be prefixed with 'db2'.
To list the current command option settings, type LIST COMMAND OPTIONS.

For more detailed help, refer to the Online Reference Manual.

db2 => create database db1
DB20000I  The CREATE DATABASE command completed successfully.

As we need something to explain, we’ll copy the “tables” catalog table without any data:

db2 => connect to db1

   Database Connection Information

 Database server        = DB2/LINUXX8664 11.5.6.0
 SQL authorization ID   = DB2
 Local database alias   = DB1
db2 => create table t like syscat.tables
DB20000I  The SQL command completed successfully.
db2 => select count(*) from t;

1          
-----------
          0

  1 record(s) selected.

Generate some data:

db2 => insert into t select * from syscat.tables
DB20000I  The SQL command completed successfully.
db2 => insert into t select * from syscat.tables
DB20000I  The SQL command completed successfully.
db2 => insert into t select * from t
DB20000I  The SQL command completed successfully.
db2 => insert into t select * from t
DB20000I  The SQL command completed successfully.
db2 => insert into t select * from t
DB20000I  The SQL command completed successfully.
db2 => insert into t select * from t
DB20000I  The SQL command completed successfully.
db2 => insert into t select * from t
DB20000I  The SQL command completed successfully.
db2 => select count(*) from t

1          
-----------
      14016

  1 record(s) selected.

The structure of the table is like this:

db2 => describe table t

                                Data type                     Column
Column name                     schema    Data type name      Length     Scale Nulls
------------------------------- --------- ------------------- ---------- ----- ------
TABSCHEMA                       SYSIBM    VARCHAR                    128     0 No    
TABNAME                         SYSIBM    VARCHAR                    128     0 No    
OWNER                           SYSIBM    VARCHAR                    128     0 No    
OWNERTYPE                       SYSIBM    CHARACTER                    1     0 No    
TYPE                            SYSIBM    CHARACTER                    1     0 No    
STATUS                          SYSIBM    CHARACTER                    1     0 No    
BASE_TABSCHEMA                  SYSIBM    VARCHAR                    128     0 Yes   
BASE_TABNAME                    SYSIBM    VARCHAR                    128     0 Yes   
ROWTYPESCHEMA                   SYSIBM    VARCHAR                    128     0 Yes   
ROWTYPENAME                     SYSIBM    VARCHAR                    128     0 Yes   
CREATE_TIME                     SYSIBM    TIMESTAMP                   10     6 No    
ALTER_TIME                      SYSIBM    TIMESTAMP                   10     6 No    
INVALIDATE_TIME                 SYSIBM    TIMESTAMP                   10     6 No    
STATS_TIME                      SYSIBM    TIMESTAMP                   10     6 Yes   
COLCOUNT                        SYSIBM    SMALLINT                     2     0 No    
TABLEID                         SYSIBM    SMALLINT                     2     0 No    
TBSPACEID                       SYSIBM    SMALLINT                     2     0 No    
CARD                            SYSIBM    BIGINT                       8     0 No    
NPAGES                          SYSIBM    BIGINT                       8     0 No    
MPAGES                          SYSIBM    BIGINT                       8     0 No    
FPAGES                          SYSIBM    BIGINT                       8     0 No    
NPARTITIONS                     SYSIBM    BIGINT                       8     0 No    
NFILES                          SYSIBM    BIGINT                       8     0 No    
TABLESIZE                       SYSIBM    BIGINT                       8     0 No    
OVERFLOW                        SYSIBM    BIGINT                       8     0 No    
TBSPACE                         SYSIBM    VARCHAR                    128     0 Yes   
INDEX_TBSPACE                   SYSIBM    VARCHAR                    128     0 Yes   
LONG_TBSPACE                    SYSIBM    VARCHAR                    128     0 Yes   
PARENTS                         SYSIBM    SMALLINT                     2     0 Yes   
CHILDREN                        SYSIBM    SMALLINT                     2     0 Yes   
SELFREFS                        SYSIBM    SMALLINT                     2     0 Yes   
KEYCOLUMNS                      SYSIBM    SMALLINT                     2     0 Yes   
KEYINDEXID                      SYSIBM    SMALLINT                     2     0 Yes   
KEYUNIQUE                       SYSIBM    SMALLINT                     2     0 No    
CHECKCOUNT                      SYSIBM    SMALLINT                     2     0 No    
DATACAPTURE                     SYSIBM    CHARACTER                    1     0 No    
CONST_CHECKED                   SYSIBM    CHARACTER                   32     0 No    
PMAP_ID                         SYSIBM    SMALLINT                     2     0 Yes   
PARTITION_MODE                  SYSIBM    CHARACTER                    1     0 No    
LOG_ATTRIBUTE                   SYSIBM    CHARACTER                    1     0 No    
PCTFREE                         SYSIBM    SMALLINT                     2     0 No    
APPEND_MODE                     SYSIBM    CHARACTER                    1     0 No    
REFRESH                         SYSIBM    CHARACTER                    1     0 No    
REFRESH_TIME                    SYSIBM    TIMESTAMP                   10     6 Yes   
LOCKSIZE                        SYSIBM    CHARACTER                    1     0 No    
VOLATILE                        SYSIBM    CHARACTER                    1     0 No    
ROW_FORMAT                      SYSIBM    CHARACTER                    1     0 No    
PROPERTY                        SYSIBM    VARCHAR                     32     0 No    
STATISTICS_PROFILE              SYSIBM    CLOB                  10485760     0 Yes   
COMPRESSION                     SYSIBM    CHARACTER                    1     0 No    
ROWCOMPMODE                     SYSIBM    CHARACTER                    1     0 No    
ACCESS_MODE                     SYSIBM    CHARACTER                    1     0 No    
CLUSTERED                       SYSIBM    CHARACTER                    1     0 Yes   
ACTIVE_BLOCKS                   SYSIBM    BIGINT                       8     0 No    
DROPRULE                        SYSIBM    CHARACTER                    1     0 No    
MAXFREESPACESEARCH              SYSIBM    SMALLINT                     2     0 No    
AVGCOMPRESSEDROWSIZE            SYSIBM    SMALLINT                     2     0 No    
AVGROWCOMPRESSIONRATIO          SYSIBM    REAL                         4     0 No    
AVGROWSIZE                      SYSIBM    SMALLINT                     2     0 No    
PCTROWSCOMPRESSED               SYSIBM    REAL                         4     0 No    
LOGINDEXBUILD                   SYSIBM    VARCHAR                      3     0 Yes   
CODEPAGE                        SYSIBM    SMALLINT                     2     0 No    
COLLATIONSCHEMA                 SYSIBM    VARCHAR                    128     0 No    
COLLATIONNAME                   SYSIBM    VARCHAR                    128     0 Yes   
COLLATIONSCHEMA_ORDERBY         SYSIBM    VARCHAR                    128     0 No    
COLLATIONNAME_ORDERBY           SYSIBM    VARCHAR                    128     0 Yes   
ENCODING_SCHEME                 SYSIBM    CHARACTER                    1     0 No    
PCTPAGESSAVED                   SYSIBM    SMALLINT                     2     0 No    
LAST_REGEN_TIME                 SYSIBM    TIMESTAMP                   10     6 Yes   
SECPOLICYID                     SYSIBM    INTEGER                      4     0 No    
PROTECTIONGRANULARITY           SYSIBM    CHARACTER                    1     0 No    
AUDITPOLICYID                   SYSIBM    INTEGER                      4     0 Yes   
AUDITPOLICYNAME                 SYSIBM    VARCHAR                    128     0 Yes   
AUDITEXCEPTIONENABLED           SYSIBM    CHARACTER                    1     0 No    
DEFINER                         SYSIBM    VARCHAR                    128     0 No    
ONCOMMIT                        SYSIBM    CHARACTER                    1     0 No    
LOGGED                          SYSIBM    CHARACTER                    1     0 No    
ONROLLBACK                      SYSIBM    CHARACTER                    1     0 No    
LASTUSED                        SYSIBM    DATE                         4     0 No    
CONTROL                         SYSIBM    CHARACTER                    1     0 No    
TEMPORALTYPE                    SYSIBM    CHARACTER                    1     0 No    
TABLEORG                        SYSIBM    CHARACTER                    1     0 No    
EXTENDED_ROW_SIZE               SYSIBM    CHARACTER                    1     0 No    
PCTEXTENDEDROWS                 SYSIBM    REAL                         4     0 No    
REMARKS                         SYSIBM    VARCHAR                    254     0 Yes   

  85 record(s) selected.

If we create an index on the “tabname” column and later filter on that column we should get an index access:

db2 => create index i on t(tabname)
DB20000I  The SQL command completed successfully.
db2 => quit
DB20000I  The QUIT command completed successfully.

You have several options to create explain plans in Db2 and one of the options is to use the CURRENT EXPLAIN MODE special register, so let’s try that:

db2@sles15db2:~> echo "select count(*) from t where tabname='t';" > 1.sql
db2@sles15db2:~> db2 set current explain mode explain
DB20000I  The SQL command completed successfully.
db2@sles15db2:~> db2 -tvf  1.sql     
select count(*) from t where tabname='t'
SQL0219N  The required Explain table "DB2.EXPLAIN_INSTANCE" does not exist.  
SQLSTATE=42704

The error message is pretty clear: we need to create the explain tables first. There are two options for this, and we'll use the EXPLAIN.DDL script that comes with the Db2 installation (a sketch of the alternative follows the script output below):

db2@sles15db2:~> ls -l ./sqllib/misc/EXPLAIN.DDL
-r--r--r-- 1 db2 db2 48371 Jun 11  2021 ./sqllib/misc/EXPLAIN.DDL
db2@sles15db2:~> grep -i "create table" ./sqllib/misc/EXPLAIN.DDL
CREATE TABLE EXPLAIN_INSTANCE ( EXPLAIN_REQUESTER VARCHAR(128 OCTETS) NOT NULL,
CREATE TABLE EXPLAIN_STATEMENT ( EXPLAIN_REQUESTER VARCHAR(128 OCTETS) NOT NULL,
CREATE TABLE EXPLAIN_ARGUMENT ( EXPLAIN_REQUESTER   VARCHAR(128 OCTETS)  NOT NULL,
CREATE TABLE EXPLAIN_OBJECT ( EXPLAIN_REQUESTER    VARCHAR(128 OCTETS) NOT NULL,
CREATE TABLE EXPLAIN_OPERATOR ( EXPLAIN_REQUESTER VARCHAR(128 OCTETS) NOT NULL,
CREATE TABLE EXPLAIN_PREDICATE ( EXPLAIN_REQUESTER VARCHAR(128 OCTETS) NOT NULL,
CREATE TABLE EXPLAIN_STREAM ( EXPLAIN_REQUESTER VARCHAR(128 OCTETS) NOT NULL,
CREATE TABLE EXPLAIN_DIAGNOSTIC ( EXPLAIN_REQUESTER VARCHAR(128 OCTETS) NOT NULL,
CREATE TABLE EXPLAIN_DIAGNOSTIC_DATA ( EXPLAIN_REQUESTER VARCHAR(128 OCTETS) NOT NULL,
CREATE TABLE OBJECT_METRICS ( EXECUTABLE_ID     VARCHAR(32 OCTETS) FOR BIT DATA NOT NULL,
CREATE TABLE ADVISE_INSTANCE (
CREATE TABLE ADVISE_INDEX(
CREATE TABLE ADVISE_WORKLOAD (
CREATE TABLE ADVISE_MQT (
CREATE TABLE ADVISE_PARTITION (
CREATE TABLE ADVISE_TABLE (
CREATE TABLE EXPLAIN_ACTUALS ( EXPLAIN_REQUESTER VARCHAR(128 OCTETS) NOT NULL,

Quite a few tables get created by this script. We'll not go into the details of which table contains what (that is out of scope for this post), but just execute it:

db2@sles15db2:~> db2 -tvf ./sqllib/misc/EXPLAIN.DDL

******* IMPORTANT ********** 

USAGE: db2 -tf EXPLAIN.DDL   

******* IMPORTANT ********** 


UPDATE COMMAND OPTIONS USING C OFF
DB20000I  The UPDATE COMMAND OPTIONS command completed successfully.
...
COMMIT WORK
DB20000I  The SQL command completed successfully.
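For reference, the alternative to running EXPLAIN.DDL is the SYSPROC.SYSINSTALLOBJECTS procedure. A hedged sketch, not executed here, which should create the same set of tables in the current schema:

db2 "CALL SYSPROC.SYSINSTALLOBJECTS('EXPLAIN', 'C', CAST(NULL AS VARCHAR(128)), CAST(NULL AS VARCHAR(128)))"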

All done, let's try again:

db2@sles15db2:~> db2 set current explain mode explain
DB20000I  The SQL command completed successfully.
db2@sles15db2:~> db2 -tvf  1.sql
select count(*) from t where tabname='t'
SQL0217W  The statement was not executed as only Explain information requests 
are being processed.  SQLSTATE=01604

Looks better, but how do we get the explain plan? One option is to use db2exfmt:

db2@sles15db2:~> db2exfmt -d DB1 -# 0 -w -1 -g TIC -n % -s % -o explain.txt
DB2 Universal Database Version 11.5, 5622-044 (c) Copyright IBM Corp. 1991, 2017
Licensed Material - Program Property of IBM
IBM DATABASE 2 Explain Table Format Tool

Connecting to the Database.
Connect to Database Successful.
Binding package - Bind was Successful
Output is in explain.txt.
Executing Connect Reset -- Connect Reset was Successful.
db2@sles15db2:~> db2 set current explain mode no
DB20000I  The SQL command completed successfully.

This generates the “explain.txt” file with lots of information. What we are looking for is the explain plan and there is a graphical representation in the file, which is exactly what we need:

Access Plan:
-----------
        Total Cost:             6.77696
        Query Degree:           1


      Rows 
     RETURN
     (   1)
      Cost 
       I/O 
       |
        1 
     GRPBY 
     (   2)
     6.77689 
        1 
       |
     35.8378 
     IXSCAN
     (   3)
     6.77614 
        1 
       |
      14016 
 INDEX: DB2     
        I
       Q1

Another option is to use db2expln:

db2@sles15db2:~> db2expln -database db1 -statement "select count(*) from t where tabname = 't'" -terminal

DB2 Universal Database Version 11.5, 5622-044 (c) Copyright IBM Corp. 1991, 2017
Licensed Material - Program Property of IBM
IBM DB2 Universal Database SQL and XQUERY Explain Tool

DB2 Universal Database Version 11.5, 5622-044 (c) Copyright IBM Corp. 1991, 2017
Licensed Material - Program Property of IBM
IBM DB2 Universal Database SQL and XQUERY Explain Tool

******************** DYNAMIC ***************************************

==================== STATEMENT ==========================================

        Isolation Level          = Cursor Stability
        Blocking                 = Block Unambiguous Cursors
        Query Optimization Class = 5

        Partition Parallel       = No
        Intra-Partition Parallel = No

        SQL Path                 = "SYSIBM", "SYSFUN", "SYSPROC", "SYSIBMADM", 
                                   "DB2"


Statement:
  
  select count(*)
  from t 
  where tabname ='t' 


Section Code Page = 1208

Estimated Cost = 6.775021
Estimated Cardinality = 1.000000

Access Table Name = DB2.T  ID = 2,4
|  Index Scan:  Name = DB2.I  ID = 1
|  |  Regular Index (Not Clustered)
|  |  Index Columns:
|  |  |  1: TABNAME (Ascending)
|  #Columns = 0
|  Skip Inserted Rows
|  Avoid Locking Committed Data
|  Currently Committed for Cursor Stability
|  #Key Columns = 1
|  |  Start Key: Inclusive Value
|  |  |  1: 't' 
|  |  Stop Key: Inclusive Value
|  |  |  1: 't' 
|  Index-Only Access
|  Index Prefetch: Sequential(1), Readahead
|  Lock Intents
|  |  Table: Intent Share
|  |  Row  : Next Key Share
|  Sargable Index Predicate(s)
|  |  Predicate Aggregation
|  |  |  Column Function(s)
Aggregation Completion
|  Column Function(s)
Return Data to Application
|  #Columns = 1

End of section

Compared to db2exfmt this gives a much more compact output.
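One practical follow-up: repeated explain runs keep adding rows to the explain tables. A small sketch for clearing them between experiments, assuming the tables were created by EXPLAIN.DDL (whose child tables reference EXPLAIN_INSTANCE with ON DELETE CASCADE, so one delete empties the whole set):

db2 "delete from explain_instance"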


Hierarchical Aggregation of Columns

Tom Kyte - Wed, 2022-01-19 21:26
Hello all, Greetings of the new year! Goal - generate an output which can be directly used for pasting hierarchical data into a flat file. Link to liveSQL with sample data and scripts - https://livesql.oracle.com/apex/livesql/s/mw9olr17o7dir21jysn5jni1u

Expected output:
-#HDR#- 1 - HEADER1 - HEADER2 - -#Line#- 1 - 1 - DETAIL11 - DETAIL12 ; -#Line#- 1 - 2 - DETAIL21 - DETAIL22 ;

Hierarchy example:
Purchase Order HEADER - Only 1
Purchase Order LINE - 1-many under 1 HEADER
Purchase Order Schedule - 1-many under 1 LINE
Purchase Order Distribution - 1-many under 1 SCHEDULE

SQL included in LiveSQL currently produces the following:
-#HDR#- 1 - HEADER1 - HEADER2 - -#Line#- 1 - 1 - DETAIL11 - DETAIL12 ;
-#HDR#- 1 - HEADER1 - HEADER2 - -#Line#- 1 - 2 - DETAIL21 - DETAIL22 ;

Idea is to have a single column which shows the header and detail level information. Assumption - single level detail. However, data exists with 3 levels of detail. Request your inputs on generating expected output so that it can be extended to support multiple detail levels. Thank you and stay safe! BR, Prashant ATMAN
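No answer is attached to this feed entry, but for a single detail level the usual shape of a solution is to aggregate the detail rows per header with LISTAGG. A minimal sketch with hypothetical table and column names (hdr, line, hdr_id, line_no and so on; not the poster's actual LiveSQL objects):

select '-#HDR#- ' || h.hdr_id || ' - ' || h.header1 || ' - ' || h.header2 || ' - ' ||
       listagg('-#Line#- ' || l.hdr_id || ' - ' || l.line_no || ' - ' ||
               l.detail1 || ' - ' || l.detail2 || ' ;', ' ')
         within group (order by l.line_no) as flat_row
from   hdr  h
join   line l on l.hdr_id = h.hdr_id
group by h.hdr_id, h.header1, h.header2;

Extending this to schedules and distributions generally means aggregating the lowest level first and feeding that result into the LISTAGG one level up.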
Categories: DBA Blogs

VirtualBox 6.1.32

Tim Hall - Wed, 2022-01-19 07:38

VirtualBox 6.1.32 has been released. The downloads and changelog are in the usual places. I’ve installed it on Windows 11 and macOS Big Sur hosts with no dramas. Some time in the next 24 hours I’ll upload updated versions of my Oracle Linux 7 and Oracle Linux 8 vagrant boxes. https://app.vagrantup.com/oraclebase Cheers Tim…


Hash Aggregation – 2

Jonathan Lewis - Wed, 2022-01-19 06:03

In the note I wrote a couple of days ago about the way the optimizer switches from hash group by to sort group by if you add an order by X,Y clause to a query that has a group by X,Y, I mentioned that this had been something I'd noticed about 15 years ago (shortly after Oracle introduced hash aggregation, in fact), but it was only the second of two details I'd noted when experimenting with this new operation. The first thing I'd noticed came from an even simpler example, and here's a (cryptic) clue to what it was:


column operation_type format a24

select
        operation_type, count(*) 
from
        V$sql_workarea 
group by 
        operation_type
order by 
        operation_type
;

OPERATION_TYPE             COUNT(*)
------------------------ ----------
APPROXIMATE NDV                   1
BUFFER                          130
CONNECT-BY (SORT)                10
GROUP BY (HASH)                  12
GROUP BY (SORT)                 240
HASH-JOIN                       466
IDX MAINTENANCE (SORT)           39
LOAD WRITE BUFFERS               10
RANGE TQ COMPUTATION             13
RESULT CACHE                      4
SORT (v1)                        10
SORT (v2)                       147
WINDOW (SORT)                    35


The clue isn’t in the query, it’s in what’s missing from the result set, so here’s some code to create and query some data to make it easier to spot the anomaly:

rem
rem     Script:         hash_agg.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Sept 2007
rem

create table t1
as
with generator as (
        select  rownum id
        from    dual
        connect by
                level <= 1e4 -- > comment to avoid wordpress format issue
)
select
        lpad(mod(rownum-1,1000),6)      small_vc_K,
        lpad(rownum-1,6)                small_vc_M
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid wordpress format issue
;
    
set serveroutput off

prompt  ===========================
prompt  1000 distinct values (hash)
prompt  ===========================

select
        /*+ gather_plan_statistics 1000 */
        count(*)
from
        (
        select  /*+ no_merge */
                distinct small_vc_K
        from
                t1
        )
;

select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));

prompt  ===========================
prompt  1000 distinct values (sort)
prompt  ===========================

select
        /*+ gather_plan_statistics 1000 */
        count(*)
from
        (
        select  /*+ no_merge no_use_hash_aggregation */
                distinct small_vc_K
        from
                t1
        )
;

select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));


I’ve added the /*+ gather_plan_statistics */ hint to the query so that I can check on the rowsource execution stats and (particularly) the memory and/or temporary space used; and I’ve blocked hash aggregation in one of the two queries, so I expect to see a “hash unique” operation in the first query and a “sort unique” operation in the second. Here’s what I get from 19.11.0.0:


===========================
1000 distinct values (hash)
===========================

  COUNT(*)
----------
      1000

SQL_ID  1baadqgv02r6b, child number 0
-------------------------------------
select  /*+ gather_plan_statistics 1000 */  count(*) from  (  select
/*+ no_merge */   distinct small_vc_K  from   t1  )

Plan hash value: 171694178

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |      1 |        |   765 (100)|      1 |00:00:00.07 |    2637 |   2632 |       |       |          |
|   1 |  SORT AGGREGATE      |      |      1 |      1 |            |      1 |00:00:00.07 |    2637 |   2632 |       |       |          |
|   2 |   VIEW               |      |      1 |   1000 |   765  (56)|   1000 |00:00:00.07 |    2637 |   2632 |       |       |          |
|   3 |    HASH UNIQUE       |      |      1 |   1000 |   765  (56)|   1000 |00:00:00.07 |    2637 |   2632 |  1889K|  1889K| 1417K (0)|
|   4 |     TABLE ACCESS FULL| T1   |      1 |   1000K|   405  (17)|   1000K|00:00:00.04 |    2637 |   2632 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------


17 rows selected.

===========================
1000 distinct values (sort)
===========================

  COUNT(*)
----------
      1000

SQL_ID  a66rqhgw7a6dk, child number 0
-------------------------------------
select  /*+ gather_plan_statistics 1000 */  count(*) from  (  select
/*+ no_merge no_use_hash_aggregation */   distinct small_vc_K  from
t1  )

Plan hash value: 1750119335

-------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |      1 |        |   765 (100)|      1 |00:00:00.22 |    2637 |       |       |          |
|   1 |  SORT AGGREGATE      |      |      1 |      1 |            |      1 |00:00:00.22 |    2637 |       |       |          |
|   2 |   VIEW               |      |      1 |   1000 |   765  (56)|   1000 |00:00:00.22 |    2637 |       |       |          |
|   3 |    SORT UNIQUE       |      |      1 |   1000 |   765  (56)|   1000 |00:00:00.22 |    2637 | 48128 | 48128 |43008  (0)|
|   4 |     TABLE ACCESS FULL| T1   |      1 |   1000K|   405  (17)|   1000K|00:00:00.02 |    2637 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------


Look back at the summary of v$sql_workarea. Can you now spot something that might be missing? Since we’ve now got two execution plans with their sql_ids, let’s run a much more precise query against the view.

select
        sql_id, child_number, operation_id, operation_type , 
        total_executions, last_memory_used/1024 last_mem_kb
from
        v$sql_workarea
where
        sql_id in (
                '1baadqgv02r6b',
                'a66rqhgw7a6dk'
)
order by
        sql_id, child_number, operation_id
/

SQL_ID        CHILD_NUMBER OPERATION_ID OPERATION_TYPE       TOTAL_EXECUTIONS LAST_MEM_KB
------------- ------------ ------------ -------------------- ---------------- -----------
1baadqgv02r6b            0            3 GROUP BY (HASH)                     1       1,417
a66rqhgw7a6dk            0            3 GROUP BY (SORT)                     1          42

The first thing to note, of course, is that the sort aggregate at operation 1 doesn’t report a workarea at all; but this shouldn’t be a surprise: the operation is simply counting rows as they arrive, so there’s no real sorting going on.

Operation 3 in both cases is the more interesting one. In the plan it’s reported as a “hash/sort unique” but the workarea operation has changed this to a “group by (hash/sort)”.  It’s a little detail that probably won’t matter to most people most of the time – but it’s the sort of thing that can waste a couple of hours when you’re trying to analyze a performance oddity.
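
If you want to see the two naming conventions side by side, the following query (just a sketch, using one of the sql_ids above) outer-joins v$sql_plan to v$sql_workarea on sql_id, child_number and operation id so that the plan operation and the workarea operation_type line up:

select
        p.id, p.operation, p.options,
        w.operation_type, w.last_memory_used/1024 last_mem_kb
from
        v$sql_plan      p,
        v$sql_workarea  w
where
        p.sql_id            = '1baadqgv02r6b'
and     p.child_number      = 0
and     w.sql_id       (+)  = p.sql_id
and     w.child_number (+)  = p.child_number
and     w.operation_id (+)  = p.id
order by
        p.id
/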

And another thing …

You’ll have noticed, by the way, that the hash unique operation actually demanded far more memory than the sort unique operation; 1.4MB compared to 42KB. In a large production system this may be fairly irrelevant, especially since the discrepancy tends to disappear for higher volumes, and hardly matters if the operation spills to disc. However, if you have a large number of processes doing a lot of small queries using hash aggregation you may find that cumulatively they use up an unexpected fraction of your pga_aggregate_target.
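
If you suspect this is happening on your system, the following query (a sketch only, with the usual caveat that v$sql_workarea_active reports just the work areas in use at the moment you run it) gives a rough idea of how much PGA the currently active hash aggregation operations are holding:

select
        operation_type,
        count(*)                                active_workareas,
        round(sum(actual_mem_used)/1024)        total_mem_kb
from
        v$sql_workarea_active
group by
        operation_type
order by
        total_mem_kb desc
/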

There is some scope (with the usual “confirm with Oracle support” caveat) for modifying this behaviour with a fix_control:

16792882    QKSFM_COMPILATION_16792882  Cardinality threshold for hash aggregation(raised to power of 10)

alter session set "_fix_control"='16792882:3';

The default value is 0, the maximum value seems to be 3, and the description about raising to the power of 10 seems to be the wrong way round, but I haven’t done any careful testing. When I set the value to 3 the hash unique with an estimated output of 1,000 rows switched to a sort unique (hence my suggestion that the threshold behaves like 10^N rather than N^10).
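
If you do experiment with it, you can check the value your session is actually using by querying v$session_fix_control (a minimal sketch):

select  bugno, value, is_default, description
from    v$session_fix_control
where   bugno      = 16792882
and     session_id = sys_context('userenv', 'sid')
/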

Footnote

In the previous note I pointed out that Oracle would use a single sort group by operation rather than a consecutive hash group by / sort order by pair of operations. This simpler example helps to explain why. If you examine the CBO trace files for the two queries you can compare the costs of the inline aggregate view.
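
If you want to generate the trace files yourself, here’s one way to do it (a sketch using the traditional 10053 event; the final query just reports the name of the trace file for the current session):

alter session set tracefile_identifier = 'hash_agg_cost';
alter session set events '10053 trace name context forever, level 1';

-- run the statement you want the optimizer to cost here

alter session set events '10053 trace name context off';

select  value
from    v$diag_info
where   name = 'Default Trace File'
/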

From a 19.11.0.0 trace file here are two costing fragments for the hash group by strategy; the first bit is the cost of the tablescan that acquires the rows (operation 4), the second is the final cost of the inline view / query block:


  Best:: AccessPath: TableScan
         Cost: 404.639881  Degree: 1  Resp: 404.639881  Card: 1000000.000000  Bytes: 0.000000


Final cost for query block SEL$2 (#2) - All Rows Plan:
  Best join order: 1
  Cost: 764.843155  Degree: 1  Card: 1000000.000000  Bytes: 7000000.000000
  Resc: 764.843155  Resc_io: 337.000000  Resc_cpu: 1069607888
  Resp: 764.843155  Resp_io: 337.000000  Resc_cpu: 1069607888


I won’t show you the equivalent extracts for the example where I blocked hash aggregation because the numbers are identical. So there’s no way that Oracle will want to do a hash group by followed by a sort order by when it can do just a single sort group by that costs exactly the same as the hash group by operation alone. This is a shame, and a little ironic, because when Oracle Corp introduced hash aggregation they made a bit of a fuss about how much more efficient it was than sorting, but it looks like no-one told the optimizer about this.

Why the Oak Table Was So Great

Cary Millsap - Tue, 2022-01-18 17:57

This weekend, I watched a wonderful TEDx video by Barbara Sher, called “Isolation Is the Dream-Killer, Not Your Attitude.” Please watch this video. It’s 21 minutes, 18 seconds long.

It reminded me about what was so great about the Oak Table. That’s right: was. It’s not anymore.

Here’s what it was. People I admired, trusted, and liked would gather at a home in Denmark owned by a man named Mogens Nørgaard. Mogens is the kindest and most generous host I have ever encountered. He would give his whole home—every inch—to keep as many of us as he could, for a week, once or twice a year. Twenty, maybe thirty of us. We ate, drank, and slept, all for free, as much and for as long as we wanted. 

And the “us” in that sentence was no normal, regular, everyday “us.” It was Tom Kyte, Lex de Haan, Anjo Kolk, Jonathan Lewis, Graham Wood, Tanel Põder, Toon Koppelaars, Chris Antognini, Steve Adams, Stephan Haisley, James Morle, John Beresniewicz, Jože Senegačnik, Bryn Llewellyn, Tuomas Pystynen, Andy Zitelli, Johannes Djernæs, Michael Möller, Dan Norris, Carel Jan-Engel, Pete Sharman, Tim Gorman, Kellyn Pot'Vin, Alex Gorbachev, Frits Hoogland, Karen Morton, Robyn Sands, Greg Rahn, and—my goodness—I’m leaving out even more people than I’m listing.

We spent a huge amount of our time sitting together at Mogens’s big oak table, which was big enough for about eight people. Or, in actuality, about twice that. We’d just work. And talk. If there wasn’t a meal on the table, then it would be filled with laptops and power cords covering every square inch. Oops, I mean millimeter. That table had millimeters.

And here’s what was so great about the Oak Table: you could say what you wanted—whatever it was!—and you could have it. You could just say your dream and your obstacle, and someone around the table would know how to make your dream come true.

It’s tricky even trying to remember good examples of people’s dreams, because I’m so far removed from it now. Some of them were nerdy things like, “I wonder how long an Oracle PARSE call would take if we did a 256-table join?” You’d hear, “Hmm, interesting. I think I have a test for that,” and then the next thing you know, Jonathan Lewis would be working on your problem. Or, “Hey, does anyone know how to do such-and-such in vim?” And Johannes Djernæs or Michael Möller would show you how easy it was.

I got into a career-saving conversation late one night with Robyn Sands. She had asked, “Is anybody else having trouble finding good PL/SQL developers? I can’t figure out where they are, if there even are any. Are there?” We talked for a while about why they were so scarce, and then I connected the dots that, hey, I have two superb PL/SQL developers at home on the bench, and I had been desperately trying to find them good work. The story that Robyn and I started at around 3:00am over beers resulted in a superb consumer femtocell device for Robyn and a year’s worth of much-needed revenue for my tiny little team.

It was a world where you could have anything you want. Better yet, it was a world where you could dream properly. Today, in isolation, it’s hard to even dream right. After nearly two years of being locked away, I can barely conceive of a world that’s plentiful and joyous like those Oak Table years. I feel much smaller now. (Oh, and it wasn’t COVID-19 that killed that Oak Table experience. It died years before that—but, obviously, it’s a factor today.)

I want it back. I want my friends back. How are we going to do this?

What kind of countertop should you get for your kitchen? 

OraQA - Tue, 2022-01-18 09:22

Remodeling your dream home can be quite a task. There is so much to change, so we thought we would extend a little helping hand. 

The kitchen is a space that sees a lot of foot traffic, heat, scratches, and water. You need some material that withstands all of this. You need a material that’s robust, durable, and good-looking. 

Most homeowners fall into the trap of buying a good-looking countertop and end up paying too much for it. Does it guarantee durability? They don't bother to check. 

What about you? Do you wish to make the same mistakes? If not, then keep reading this post. We will shed light on the kind of countertop you should choose for your kitchen. 

#Marble 

Marble is an exquisite material. It is pleasing to the eyes and has a luxurious feel as well. 

You will find that a lot of people save up to invest in marble countertops. They say that marble is naturally heat-resistant, but it is not exactly scratch-resistant, so don't make the mistake of chopping directly on it. 

Marble is expensive; thus, you need to take good care of the material. If you are determined to take good care of it, feel free to buy it from one of the reputed stores. 

#Granite 

Granite is durable, good-looking, and popular. You will find many people investing in granite countertops. The interesting thing about this material is that it’s available in both light and dark colors. 

Moreover, you can get them in a variety of shapes too. Don’t miss the variations and flecks on this material. 

#Quartz 

Quartz is a very durable and beautiful material. If you have a large family and enjoy cooking, consider it for your kitchen countertop: it withstands heat, water, and stains. 

Stain removal is easy, which you can't say about marble. Just wipe the stain away with soap and water. 

Quartz is also available in many variations. Since marble is expensive, manufacturers produce engineered stone, and quartz is one of them; some varieties even look like marble. 

Concluding Thoughts 

Do you want granite, marble, or a quartz countertop in your kitchen? If yes, check out the countertops from Granite au Sommet. They have a wonderful collection! 

You can compare the texture, colors, and durability on your own. Everyone has a different opinion and choice, so feel free to choose the material you like and can afford.

The post What kind of countertop should you get for your kitchen?  appeared first on ORA QA.

How can I register my SMTP service in Oracle Apex?

Tom Kyte - Tue, 2022-01-18 08:46
In the company where I work we have JD Edwards and Orchestrator Studio, but we use APEX for quick applications. The problem is that we have not managed to use the JD Edwards SMTP server (or an external one) from APEX to confirm that apex_mail.send() works. We have watched videos and followed the documented steps without success, and the documentation offered by Oracle is not entirely clear, even confusing, about what has to be done. We also tried creating a service in Orchestrator Studio: it works from JavaScript, but the drawback is that the code runs on the client side rather than the server. The alternatives were UTL_HTTP or apex_web_service.make_rest_request(), and neither worked: I read that UTL_HTTP has some limits with APEX, and apex_web_service.make_rest_request() blocks the call because the URL is not secure. I have read most of the forum and still cannot get APEX to send mail; it really depends on a database developer, and I have been using this tool for a very short time. Could you help me with the steps I must follow to send mail with APEX, both in the Cloud and on my local machine (localhost)? Sources consulted: https://docs.oracle.com/en/database/oracle/application-express/21.1/aeapi/Configuring-Oracle-Application-Express-to-Send-Email.html#GUID-596E11FE-9289-4238-A3CA-D33F508E40F7 https://stackoverflow.com/questions/65528885/invoke-a-rest-api-json-in-plsql https://www.youtube.com/watch?v=sI37Be2EZhk https://technology.amis.nl/database/invoke-a-rest-service-from-plsql-make-an-http-post-request-using-utl_http-in-oracle-database-11g-xe/
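
For what it's worth, here is a minimal sketch of the usual starting point (it assumes an APEX instance administrator can set the instance parameters; the host name and addresses below are placeholders, not real values):

-- Instance-level SMTP settings (run as an APEX instance administrator;
-- the host and port are placeholders for your own SMTP server).
begin
    apex_instance_admin.set_parameter('SMTP_HOST_ADDRESS', 'smtp.example.com');
    apex_instance_admin.set_parameter('SMTP_HOST_PORT',    '25');
    commit;
end;
/

-- Queue a test mail from an APEX application session and flush the queue.
begin
    apex_mail.send(
        p_to   => 'someone@example.com',
        p_from => 'apex@example.com',
        p_subj => 'APEX mail test',
        p_body => 'If this arrives, SMTP is configured correctly.');
    apex_mail.push_queue;
end;
/

Note that apex_mail has to run in the context of an APEX workspace or application session, and the database also needs a network ACL that allows the APEX owner to reach the SMTP host; both are common reasons why apex_mail.send appears to do nothing.
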
Categories: DBA Blogs

Performance issues in production

Tom Kyte - Tue, 2022-01-18 08:46
Hi Tom, we are supporting a Siebel application with Oracle 11g as the backend. Last month we upgraded the database to 19c and since then we have been facing performance issues on a daily basis. When we checked DB reports such as AWR and ASH, we made the following observations: a) In the AWR report random queries come out on top with no consistency, so I am assuming the issue is not related to a particular SQL. b) In the DB reports the top event most of the time is "gc buffer busy acquire", and this event is related to an insert statement. c) All the top SQL visible in the AWR/ASH reports have multiple plans. Below is the snapshot for one of the SQLs:

   SNAP_ID   NODE BEGIN_INTERVAL_TIME            SQL_ID        PLAN_HASH_VALUE        EXECS    AVG_ETIME        AVG_LIO
---------- ------ ------------------------------ ------------- --------------- ------------ ------------ --------------
      2459      2 12-JAN-22 06.00.34.887         4hdhq06bahcnq       912142910          207       17.761             .0
      2460      1 12-JAN-22 06.30.10.663         4hdhq06bahcnq       728991164          293         .024        1,376.6
      2460      2 12-JAN-22 06.30.10.670         4hdhq06bahcnq       912142910          107       95.319    2,889,181.3
      2461      1 12-JAN-22 07.00.04.996         4hdhq06bahcnq       728991164          439         .035        1,251.3
      2461      2 12-JAN-22 07.00.04.990         4hdhq06bahcnq       912142910          149       66.469    2,106,097.7

My questions are: 1. To investigate the "gc buffer busy acquire" event, what should the sequence of investigation be? 2. Since most of the SQLs at the top of the DB reports generate multiple plans, are there any parameters in 19c that could be responsible for so many plans? Thanks
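
For question 2, a query along these lines against dba_hist_sqlstat (a sketch only, and it assumes you are licensed for the Diagnostics Pack) summarises how each plan of a given sql_id has performed since the upgrade:

select
        sql_id,
        plan_hash_value,
        sum(executions_delta)                                   execs,
        round(sum(elapsed_time_delta)/1e6
              / nullif(sum(executions_delta),0), 3)             avg_etime_secs,
        round(sum(buffer_gets_delta)
              / nullif(sum(executions_delta),0), 1)             avg_lio
from
        dba_hist_sqlstat
where
        sql_id = '4hdhq06bahcnq'
group by
        sql_id, plan_hash_value
order by
        avg_etime_secs desc
/
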
Categories: DBA Blogs

Library cache lock in wait class Concurrency

Tom Kyte - Tue, 2022-01-18 08:46
In the Production Application we randomly notice a hiccup (slowness) in the database. The AWR report shows waits for event "library cache lock" in wait class "Concurrency". The SQL statement involved is one of the most frequently executed statements in the application (15,000 executions per minute). The SQL is a simple read by primary key from a table that is cached in memory; the table holds just 40,000 records and doesn't grow at runtime. During normal operation the elapsed time of this SQL statement is in milliseconds, but during the hiccup (slowness) the same statement takes minutes to return results. The other important thing to note is that the issue most often starts at the 0th, 30th or 45th minute of an hour and resolves on its own within a couple of minutes; however, sometimes it lasts more than an hour before it resolves. We would appreciate it if you could help with what could make the SQL perform slowly for a few minutes at random.
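
As a starting point, a query like the following against v$active_session_history (again just a sketch, and it assumes the Diagnostics Pack licence) shows whether the sessions waiting on "library cache lock" during the last hour were hard parsing and which sessions were blocking them:

select
        event,
        blocking_session,
        in_hard_parse,
        sql_id,
        count(*)        samples
from
        v$active_session_history
where
        event = 'library cache lock'
and     sample_time > sysdate - 1/24
group by
        event, blocking_session, in_hard_parse, sql_id
order by
        samples desc
/
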
Categories: DBA Blogs
