Feed aggregator

Hakan Factor

Jonathan Lewis - Mon, 2022-11-28 09:14

There’s a question on the MOSC forum (needs an account) at present that started with the performance of the datapump API over a database link, but moved on to the topic of how to handle a scenario that I’ve described in the past: a table where rows are initially short and eventually become much longer, and a requirement comes up to rebuild the table.

In this case the OP has to use datapump (selecting truncate as the “action on existence”) to copy the table data from one place to another, rather than doing the more common “alter table move” variant of rebuilding the table.

The underlying problem in this case is that:

  • the table has 84 columns made up of (pk_col1, pk_col2, flag, change_date) plus 20 groups of 4 “value” columns.
  • rows are inserted with just the four main columns and the first group of four values.
  • over time each subsequent group of 4 values in a row is updated in a separate statement

We haven’t been given numbers but a row probably ends up taking about 10 times the space it started with – and if that’s the case you would normally need to set the table’s pctfree to something like 90 to avoid getting a lot of migrated rows in the table. But that’s not the whole of the story.
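As a rough illustration of where a figure like 90 comes from (my numbers, not the OP’s): the fraction of the block to keep free is the fraction of the final row size that has not yet been inserted.

```python
def suggested_pctfree(initial_row_bytes, final_row_bytes):
    # Reserve the fraction of the final row size that is still to arrive,
    # expressed as a percentage of the block.
    growth_fraction = (final_row_bytes - initial_row_bytes) / final_row_bytes
    return round(growth_fraction * 100)

# A row inserted at roughly 1/10th of its final size:
print(suggested_pctfree(80, 800))  # rows growing 10x suggest pctfree of about 90
```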

Things to go wrong

If you don’t set pctfree to 90 you get lots of migrated rows. If you then do an export (expdp) in direct_path mode expdp will do a large number of single block reads following the migrated rows, and Oracle won’t cache the follow-on blocks, so you may re-read them several times in the course of reading one block in the direct path tablescan. (For cached reads the codepath for a tablescan will simply ignore the “head pointer” to a migrated row because it “knows” that it will find the whole row in some other block eventually.)

If you do set pctfree to 90 then when you rebuild the table (or recreate it with pctfree set to 90) you end up with a much larger table with lots of blocks that are only 10% used, because most of the rows are now big and aren’t going to grow any more.

Best strategy – the Hakan factor.

Work out how many rows in their final state will fit into a block and recreate the table telling Oracle that that’s the maximum number of rows it’s allowed to put in a block. (You could also set pctfree to zero at the same time to minimise the chance of Oracle inserting fewer rows than your target.)

The devil, of course, is in the detail. Part of the devilry comes from a bug that was fixed some time ago. Part comes from the fact that Oracle doesn’t give us a documented API to set the magic number – we have to find a way to teach Oracle about the number or hack the data dictionary. Part, inevitably, comes from the fact that when dealing with undocumented (or barely documented) mechanisms you ought to set up some test cases to check that the latest version of Oracle behaves the same way as your previous versions when you’re playing dirty tricks.

Part 1 – Teaching Oracle.

You may know your data so well that you can immediately say how many “full-length” rows should fit in a block. If you can’t do this you could simply create a copy of the original table structure with a pctfree of zero then copy into it a few hundred rows from the original table, using a predicate to limit the selected rows to ones that would not be updated any further. For example (using the table definition supplied by the OP) you might say:

create table test_tab_clone
pctfree 0
as
select  *
from    test_tab
where   rownum = 0
;

insert into test_tab_clone
select  *
from    test_tab
where   rownum <= 400
and     fourthvalue19 is not null
;


I’m assuming in this case that column “fourthvalue19” will be non-null only if the whole of the 19th set of values is populated and all the other sets of values are populated. From the OP’s perspective there may be a more sensible way of identifying fully populated rows. You do need to ensure that the table has at least one full block, otherwise some odd things can happen when you try to set the Hakan factor.

Once you’ve got a small table of full size rows a simple analysis of rows per block is the next step:

select
        rows_starting_in_block,
        count(*)        blocks
from    (
        select
                dbms_rowid.rowid_block_number(rowid)    block_no,
                count(*)                                rows_starting_in_block
        from
                test_tab_clone
        group by
                dbms_rowid.rowid_block_number(rowid)
        )
group by
        rows_starting_in_block
order by
        rows_starting_in_block
;

ROWS_STARTING_IN_BLOCK     BLOCKS
---------------------- ----------
                     3          1
                    18         22
                    19          1
sum                            24

Looking at these results I can see that there’s a slight variation in the number of rows that could be crammed into a block – and one block which holds the last few rows of my insert statement which I can ignore. In a more realistic case you might need to tweak the selection predicate to make sure that you’ve picked only full-size rows; or you might simply need to decide that you’ve got a choice of two or three possible values for the Hakan factor and see what the results are from using them.

With the figures above I’d be strongly inclined to set a Hakan factor of 18. That does mean I might be “wasting” roughly 1/19th of every block (for the relatively rare cases where a 19th row would have fitted), but it’s better than setting the Hakan factor to 19 and finding I get roughly 1 row in every 19 migrating for 22 blocks out of 23 where I should have restricted the number of rows per block to 18; the choice is not always that straightforward.
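The choice described above can be sketched mechanically (an illustration only, using the counts from my sample): prefer the rows-per-block value seen in the most blocks over the maximum value observed.

```python
from collections import Counter

# rows_starting_in_block -> number_of_blocks, from the analysis above
distribution = Counter({3: 1, 18: 22, 19: 1})

# Prefer the value seen in the most blocks rather than the maximum:
# a slightly low Hakan factor wastes a little space per block, while a
# slightly high one risks a migrated row in almost every block.
hakan_factor = max(distribution, key=lambda rows: distribution[rows])
print(hakan_factor)  # 18
```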

So here’s how we now “train” Oracle, then test that it learned the lesson:

truncate table test_tab_clone;
insert into test_tab_clone select * from test_tab where rownum <= 18;
alter table test_tab_clone minimize records_per_block;

truncate table test_tab_clone;
insert into test_tab_clone select * from test_tab where rownum <= 10000;

start rowid_count test_tab_clone

ROWS_STARTING_IN_BLOCK     BLOCKS
---------------------- ----------
                    10          1
                    18        555
sum                           556

In the first three statements I’ve emptied the table, inserted 18 rows (I ought to check they all went into the same block, really) and set the Hakan factor.

Once the Hakan factor is set I’ve emptied the table again then populated it with the “full” data set. In fact for demo purposes I’ve copied exactly 10,000 rows so that we can see that every block (except, we can safely assume, the last one written to) has acquired exactly 18 rows.
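As a quick sanity check on those numbers (trivial arithmetic, not part of the original demo): 10,000 rows at exactly 18 rows per block should give 555 full blocks plus one partial block of 10 rows, matching the output above.

```python
# 10,000 rows with a Hakan factor of 18: every block takes exactly 18 rows
# until the final, partially filled block.
rows, hakan_factor = 10_000, 18
full_blocks, leftover_rows = divmod(rows, hakan_factor)
print(full_blocks, leftover_rows)  # 555 full blocks, final block of 10 rows
```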

Part 2 – applying the strategy

It’s often easy to sketch out something that looks as if it’s exactly what you need, but there are always peripheral considerations that might cause problems, and an important part of examining a problem is to consider the overheads and penalties. How, for example, is our OP going to apply the method in production?

There are two problems:

  • It’s a large table, and we’re cloning it because we can’t hack the data dictionary to modify the table directly. What are the side effects?
  • We want the imported export to acquire the same Hakan factor. Do we have to take any special action?

The import is the simpler problem to consider since it’s not open-ended. As far as impdp is concerned we could import “data_only” or “include_metadata”, and the “table_exists_action” could be either replace or truncate, so there are only 4 combinations to investigate.

The bad news is that none of the options behaves nicely – impdp seems to import the data and then execute the “minimize records_per_block” command, when really it should transfer the Hakan factor before importing the data. So it seems to be necessary to go through the same convoluted steps at least once to precreate a target table with the desired Hakan factor, and thereafter use only the truncate option for the import if you want to make the target behave in every way like the source. (Even then you will need to watch out for extreme cases where the export holds fewer rows than the value you’ve set for the Hakan factor – with the special case that if the exported table is empty the attempt by the import to set the Hakan factor raises error “ORA-28603: statement not permitted on empty tables”.)
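Assuming the target table has already been precreated with the right Hakan factor, the truncate-and-load import described above might be driven by a parameter file along these lines (the directory, dump file and table names are purely illustrative):

```
DIRECTORY=dp_dir
DUMPFILE=test_tab.dmp
TABLES=test_tab
CONTENT=DATA_ONLY
TABLE_EXISTS_ACTION=TRUNCATE
```

which you would invoke with something like “impdp {credentials} parfile=imp_test_tab.par”.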

Let’s get back to the side effects of our cloning exercise on the source table. We’ve created a copy of the original data with a suitable Hakan factor so that blocks holding “completed” rows are full, blocks holding “in-transit” rows have enough space for their rows to grow to their “completed” size, and there are no migrated rows – and we don’t expect to see migrated rows in the future. But it’s not the right table, and to ensure we had a complete copy we would have had to stop all processing of the source table.

Could we have avoided the stoppage? Maybe we could use the dbms_redefinition package – the OP is running Standard Edition so can’t do online redefinition any other way – and use the Hakan hack mechanism on the “interim” table immediately after creating it.

If we find that the online redefinition mechanism generates too much undo and redo we’ll have to use the blocking method – but then we have to do some table renaming and worry about PL/SQL packages becoming invalid, and foreign key constraints, synonyms, views etc. being associated with the wrong table.

So even though we can sketch out an outline strategy there are still plenty of details to worry about around the edges. To a large degree this is because Oracle has not yet made the Hakan factor a “proper” property of a table that you can explicitly set in a “move” or “create table” operation. There is a function embedded in the executable (kkdxFixTableHAKAN) that looks as if it should set the Hakan factor, and there is presumably some piece of code that sets the Hakan factor when you execute a call to “create table for exchange”, but it would be nice if there were an API visible to DBAs.


If you have a table where rows grow significantly over their lifetime, you ought to ensure that you’ve set a suitable pctfree for the table. But if you anticipate copying or moving the table at any time then there’s no way to pick a pctfree that is good for all stages of the data’s lifetime.

There is a feature that you can impose on the data to avoid the problems of extreme changes in row length, and it’s fairly straightforward to impose on a single table; but there is no API available to manipulate the feature directly, and if you don’t anticipate the need during the initial design stage then applying the feature after the event can be an irritating and resource-intensive operation.


For those not familiar with it, the Hakan Factor was introduced by Oracle to allow a little extra efficiency in the compression and use of bitmap indexes. If Oracle has information about the largest number of rows that can appear in any block in a table it can minimise the number of bits needed per block (space saving) and avoid having to expand and compare unnecessarily long sequences of zero bits when comparing entries across bitmap indexes. Given their intended use it should come as no surprise that you can’t call “minimize records_per_block” for a table that has an existing bitmap index.

Responsive UI with Streamlit/Python

Andrejus Baranovski - Mon, 2022-11-28 01:28
I explain how to get the current UI width in Streamlit and use this value to calculate the number of columns to generate for the form block. Streamlit is a server-side UI framework and it runs on Python.
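The width-to-columns idea can be sketched as follows (my own illustration, not the post’s actual code; the pixel width would come from the browser, e.g. via a component such as streamlit-js-eval, and the thresholds here are assumptions):

```python
def columns_for_width(width_px, min_col_px=350, max_cols=4):
    """Number of form columns that fit in the viewport, clamped to [1, max_cols]."""
    return max(1, min(max_cols, width_px // min_col_px))

print(columns_for_width(1400))  # wide desktop window -> 4 columns
print(columns_for_width(390))   # phone-sized viewport -> 1 column
```

In a Streamlit app the result would then feed `st.columns(...)` to lay out the form responsively.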


Retrieving and understanding a response file for Oracle GoldenGate Deployment configuration

DBASolved - Sun, 2022-11-27 12:46

To say that Oracle GoldenGate has undergone a transformation over the last seven years is an understatement.  Back in 2017, […]

The post Retrieving and understanding a response file for Oracle GoldenGate Deployment configuration appeared first on DBASolved.

Categories: DBA Blogs

Warning: ODA HA disk enclosure is not smart!

Yann Neuhaus - Fri, 2022-11-25 10:05

Apart from the number of servers (1 vs 2), the main difference between Oracle Database Appliance lite (S/L) and High-Availability (HA) is the data disks location. They are inside the server on lite ODAs, and in a dedicated disk enclosure on HA ODAs. Obviously, this is because when 2 nodes want to use the same disks, these disks have to be shared. And this is why HA needs a SAS disk enclosure.

Disk technology on ODA

On lite ODAs, disks are SSDs inside the server and connected to the PCI Express bus without any interface, it’s the NVMe technology. This is very fast. There are faster technologies, like NVRam, but price/performance ratio made NVMe technology a game changer.

HA ODAs are not that fast regarding disk bandwidth. This is because NVMe only works for disks locally connected to the server’s motherboard. Both HA ODA nodes come with SAS controllers, these being connected to a SAS disk enclosure with SAS SSDs in it. As this enclosure is quite big (the same height as the 2 nodes together), disk capacity is much higher than on lite ODAs. A fully loaded X9-2HA ODA with SSDs has 184TB, more than twice the 81TB capacity of a fully loaded ODA X9-2L. Furthermore, you can add another storage enclosure to the X9-2HA to double the disk capacity to 369TB. And if you need even more capacity, there is a high-capacity version of this enclosure with a mix of SSDs and HDDs for a maximum raw capacity of 740TB. This is huge!

Hardware monitoring on ODA

Monitoring the ODA hardware is done from ILOM, the management console. ILOM can send SNMP traps and raise an alert if something is wrong. For an HA ODA, you have 2 ILOMs to monitor, as the 2 nodes are separate hardware. There’s a catch when it comes to monitoring the storage enclosure. This enclosure is not active, meaning that it doesn’t have any intelligence, and therefore cannot raise any alert. And ILOM from the nodes is not aware of hardware outside the nodes. You may think that it’s not really a problem because data disks are monitored by ASM. But this enclosure also has SAS interfaces to get connected with the nodes. And if one of these interfaces is down, you may not detect the problem.

The use case

My customer has multiple HA ODAs, and I was doing a sanity check of these ODAs. Everything was fine until I ran orachk on an X6-2HA:

odaadmcli orachk
INFO: 2022-11-16 16:41:11: Running orachk under /usr/bin/orachk Searching for running databases . . . . .
List of running databases registered in OCR
1. XXX
3. YYY
4. ZZZ 
5. All of above
6. None of above
Select databases from list for checking best practices. For multiple databases, select 5 for All or comma separated number like 1,2 etc [1-6][5]. 6
RDBMS binaries found at /u01/app/oracle/product/ and ORACLE_HOME not set. Do you want to set ORACLE_HOME to "/u01/app/oracle/product/"?[y/n][y] y
FAIL => Several enclosure components controllers might be down

This is not something nice to see. My storage enclosure has a problem.

I will do another check with odaadmcli:

odaadmcli show enclosure

        NAME        SUBSYSTEM         STATUS      METRIC

        E0_FAN0     Cooling           OK          4910 rpm
        E0_FAN1     Cooling           OK          4530 rpm
        E0_FAN2     Cooling           OK          4920 rpm
        E0_FAN3     Cooling           OK          4570 rpm
        E0_IOM0     Encl_Electronics  OK          -
        E0_IOM1     Encl_Electronics  Not availab -
        E0_PSU0     Power_Supply      OK          -
        E0_PSU1     Power_Supply      OK          -
        E0_TEMP0    Amb_Temp          OK          23 C
        E0_TEMP1    Midplane_Temp     OK          23 C
        E0_TEMP2    PCM0_Inlet_Temp   OK          29 C
        E0_TEMP3    PCM0_Hotspot_Temp OK          26 C
        E0_TEMP4    PCM1_Inlet_Temp   OK          44 C
        E0_TEMP5    PCM1_Hotspot_Temp OK          28 C
        E0_TEMP6    IOM0_Temp         OK          22 C
        E0_TEMP7    IOM1_Temp         OK          28 C

The enclosure is not visible through one of the SAS controllers. Maybe there is a failure, but the node is not able to say so. It may be related to an unplugged SAS cable, as I found on MOS.

Let’s do a validate storage topology:

odacli validate-storagetopology
INFO    : ODA Topology Verification
INFO    : Running on Node0
INFO    : Check hardware type
SUCCESS : Type of hardware found : X6-2
INFO    : Check for Environment(Bare Metal or Virtual Machine)
SUCCESS : Type of environment found : Bare Metal
INFO    : Check number of Controllers
SUCCESS : Number of Internal RAID bus controllers found : 1
SUCCESS : Number of External SCSI controllers found : 2
INFO    : Check for Controllers correct PCIe slot address
SUCCESS : Internal RAID controller   : 23:00.0
SUCCESS : External LSI SAS controller 0 : 03:00.0
SUCCESS : External LSI SAS controller 1 : 13:00.0
INFO    : Check if JBOD powered on
SUCCESS : JBOD : Powered-on
INFO    : Check for correct number of EBODS(2 or 4)
FAILURE : Check for correct number of EBODS(2 or 4) : 1
ERROR   : 1 EBOD found on the system, which is less than 2 EBODS with 1 JBOD
INFO    : Above details can also be found in the log file=/opt/oracle/oak/log/srvxxx/storagetopology/StorageTopology-2022-11-16-17:21:43_34790_17083.log

EBOD stands for Expanded Bunch Of Disks, which is not very clear. But as disks are OK, this is probably related to cabling or controller in the enclosure.


My customer went to the datacenter and first checked the cabling, but it was fine. Opening an SR on My Oracle Support quickly solved the problem: a new controller was sent and swapped in the enclosure with the defective one without any downtime, and everything has been fine since.


There is absolutely no problem with the HA storage enclosure not being smart. You don’t need a smart storage for this kind of server, as ODA is a “Simple. Reliable. Affordable” solution.

In this particular case, it’s hard to detect that the failure is a real one. But my customer was running a RAC setup with a failure in one of the redundant components, possibly for months. That’s definitely not satisfying. From time to time, manual, human checks are still needed!

L’article Warning: ODA HA disk enclosure is not smart! est apparu en premier sur dbi Blog.

JBoss EAP (and WildFly) cli Scripting via Ansible

Yann Neuhaus - Fri, 2022-11-25 07:57

As I am working on a new YaK component to deploy a JBoss-EAP/WildFly server, I decided to configure it with the help of the jboss-cli.sh scripting tool instead of directly modifying XML files. The idea is to create a script on the server via Jinja templates and then run it.


The role has only one main.yml file which is doing the following:

  1. Create a temporary script using ansible.builtin.tempfile
  2. Write operations into this file
  3. Run it
  4. Remove the file

These tasks are in a block with an associated rescue block to catch any error we could have.

Note that I decided to set “changed_when” to false for steps 1, 2 and 4, as working with a temporary script on the server is not really a modification of the server itself. This helps produce a relevant playbook recap at the end of execution.

Also, to support idempotency, I have added a changed_when clause:

changed_when: "'changed' in jboss_cli_result.stdout"

Whenever a jboss-cli script logs “changed”, the associated Ansible task will be shown as changed.
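Pulling the four steps and the changed_when handling together, the role’s main.yml might look something like this (a sketch under my own variable names, not the role’s actual code):

```yaml
- block:
    - name: Create temporary script
      ansible.builtin.tempfile:
        state: file
        suffix: .cli
      register: jboss_cli_script
      changed_when: false

    - name: Write operations into script
      ansible.builtin.template:
        src: "{{ operation_name }}.j2"
        dest: "{{ jboss_cli_script.path }}"
      changed_when: false

    - name: Run script
      ansible.builtin.command: >-
        {{ jboss_home }}/bin/jboss-cli.sh --connect
        --file={{ jboss_cli_script.path }}
      register: jboss_cli_result
      changed_when: "'changed' in jboss_cli_result.stdout"

    - name: Remove temporary script
      ansible.builtin.file:
        path: "{{ jboss_cli_script.path }}"
        state: absent
      changed_when: false
  rescue:
    - name: Show cli output on error
      ansible.builtin.debug:
        var: jboss_cli_result
```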


As I can’t have a single template for all jboss-cli operations, I decided to name each template after its operation. So far, the role supports the following operations:

  • add-alias.j2
  • add-handler.j2
  • add.j2
  • datasource-add.j2
  • deploy.j2
  • echo.j2
  • module-add.j2
  • reload.j2
  • remove-handler.j2
  • remove.j2
  • security.j2
  • undefine-attribute.j2
  • write-attribute.j2

As the add and remove resource templates are very similar, they are symbolic links to a common file (_ressource.j2). We will not cover all the templates in this blog, so let’s focus on one.

Deploy a Java Application

Deploying a Java application consists of the following steps:

  1. Create a folder to store the file on remote server
  2. Copy the file in it
  3. Deploy it in JBoss-EAP/WildFly

Calling the jboss-cli role will look like this:

- name: Deploy WebApp
  vars:
    name: "deploy webapp"
    operations:
      - deploy_webapp:
          operation: "deploy"
          parameters: "{{ deployments }}"
  include_role:
    name: "jboss-cli"
Where deployments is a list of Java applications (ear, war) to deploy.

Parameters for deployment are limited compared to other operations:

  • operation: specifies what we are going to do. This also indicates which Jinja template to use.
  • parameters: the list of files to deploy, given as full paths.

deploy.j2 file contains the following code:

{% for dep in operation.parameters %}
{% set filename = (dep | basename) %}
{% set path_to_file = webapp_folder + '/' + filename %}
if (outcome != success) of /deployment={{ filename }}:read-resource()
    deploy {{ path_to_file }}
    echo changed
else
    echo nochange
end-if
{% endfor %}

Line 1: Instead of using a loop in Ansible, I found it more convenient to loop in the template itself.

Lines 2 and 3: I am populating variables used in the jboss-cli commands to make the code a bit easier to read.

Line 4 to 9 are jboss-cli commands.

  • Line 4: We are reading-resource. If this fails (outcome different from success), …
  • We deploy the Java application (line 5) and write “changed” to the script output.
  • Or else (line 7), the application already exists and we echo “nochange”.

Such a role will output the following lines:

TASK [deploy-webapp : Include deploy-webapp-WildFly role] **************************
Friday 25 November 2022  06:56:54 +0000 (0:00:01.161)       0:02:31.697 *******
included: /workspace/yak/components/middleware_webserver/roles/deploy-webapp/tasks/deploy-webapp-WildFly.yml for oci_dbi_test/srv-linux-mdw-ols-1/WILDFLY

TASK [deploy-webapp : Call common tasks] **************************
Friday 25 November 2022  06:56:54 +0000 (0:00:00.105)       0:02:31.802 *******
included: /workspace/yak/components/middleware_webserver/roles/deploy-webapp/tasks/deploy-webapp-JBoss-WildFly-common.yml for oci_dbi_test/srv-linux-mdw-ols-1/WILDFLY

TASK [deploy-webapp : Create /app/installers/webapp folders] **************************
Friday 25 November 2022  06:56:54 +0000 (0:00:00.146)       0:02:31.949 *******
changed: [oci_dbi_test/srv-linux-mdw-ols-1/WILDFLY]

TASK [deploy-webapp : Copy File] **********************************
Friday 25 November 2022  06:56:55 +0000 (0:00:00.556)       0:02:32.505 *******
changed: [oci_dbi_test/srv-linux-mdw-ols-1/WILDFLY] => (item=/workspace/.ssh/helloworld.war)

TASK [Deploy WebApp] **********************************************
Friday 25 November 2022  06:56:56 +0000 (0:00:00.970)       0:02:33.476 *******

TASK [jboss-cli : Create Temporary Script] *****************************************
Friday 25 November 2022  06:56:56 +0000 (0:00:00.111)       0:02:33.587 *******
ok: [oci_dbi_test/srv-linux-mdw-ols-1/WILDFLY]

TASK [jboss-cli : Write Operations into Script] ************************************
Friday 25 November 2022  06:56:56 +0000 (0:00:00.503)       0:02:34.091 *******
ok: [oci_dbi_test/srv-linux-mdw-ols-1/WILDFLY] => (item=deploy on deploy)

TASK [jboss-cli : Run Script deploy webapp] ****************************************
Friday 25 November 2022  06:56:57 +0000 (0:00:00.802)       0:02:34.894 *******
changed: [oci_dbi_test/srv-linux-mdw-ols-1/WILDFLY]

TASK [jboss-cli : Debug] ******************************************
Friday 25 November 2022  06:57:00 +0000 (0:00:03.175)       0:02:38.070 *******
skipping: [oci_dbi_test/srv-linux-mdw-ols-1/WILDFLY]

TASK [jboss-cli : Remove Temporary Script] *****************************************
Friday 25 November 2022  06:57:00 +0000 (0:00:00.070)       0:02:38.140 *******
skipping: [oci_dbi_test/srv-linux-mdw-ols-1/WILDFLY]

Next Steps

For this particular operation, we could imagine implementing a “force” option for whenever we want to overwrite the deployed webapp. I could also remove deployed web applications when they are not declared in host variables.

For now, the role already supports enough operations to install and configure a JBoss or WildFly server with dbi services best practices. The component also takes care of configuring datasources with their associated drivers, as well as the SSL setup for the administration console and secured web application access.

The role supports JBoss-EAP and WildFly interchangeably.

L’article JBoss EAP (and WildFly) cli Scripting via Ansible est apparu en premier sur dbi Blog.

Ansible deployment code for Oracle Fusion Middleware Infrastructure

Yann Neuhaus - Thu, 2022-11-24 10:11

During the YaK weblogic_domain component development phase, I added the Oracle Fusion Middleware Forms & Reports and the Oracle Fusion Middleware Infrastructure for Application Development Framework (ADF) deployments to the weblogic_domain component. I encountered an issue where the WebLogic domain was created but was missing the Metadata Services (MDS) repository connection, so deploying ADF applications was failing. Even after configuring the WebLogic domain with Java Required Files (JRF), as described in the blog “Using Ansible to install WebLogic 12c R2 (12.2.1) and Fusion Middleware Infrastructure on Oracle Linux 7.1”, the MDS connection was still not configured.

Doing some research I learned that the WebLogic domain templates below are not enough to fully configure such a WebLogic Domain:

wls_template=middleware_home + '/wls_server/common/templates/wls/wls.jar';
em_template=middleware_home + '/em/common/templates/wls/oracle.em_wls_template.jar';

The Oracle WSM Policy Manager extension template is required too, but it needs to be applied properly as there is no automatic domain configuration of the kind you get when using the Forms & Reports domain extension templates.

I found the nice GitHub Oracle Fusion Middleware Infrastructure on Docker project, which gave me all the information needed for my Ansible project. The needed WebLogic Domain extension templates are the following:

- name: 'Oracle JRF -'
  location: '/oracle_common/common/templates/wls/oracle.jrf_template.jar'
- name: 'Oracle JRF WS Async -'
  location: '/oracle_common/common/templates/wls/oracle.jrf.ws.async_template.jar'
- name: 'Oracle WSM Policy Manager -'
  location: '/oracle_common/common/templates/wls/oracle.wsmpm_template.jar'
- name: 'Oracle Enterprise Manager -'
  location: '/em/common/templates/wls/oracle.em_wls_template.jar'

These are in addition to the standard WebLogic wls template used to create the basic WebLogic Domain.


Note that all these template locations are relative to the Oracle Home directory.

But once the right templates are used, the FMW deployments and services need to be applied to the WebLogic Servers. If not, the following error is raised:

CFGFWK-64254: Error occurred in "Artifacts Generation" phase execution
Encountered error: CFGFWK-64038: The app-svc-name "wsm-pm" must have target.
CFGFWK-64038: The app-svc-name "wsm-pm" must have target.
CFGFWK-64038: Provide a valid target During ConfigMapping

No error is reported for JRF because the Enterprise Manager template automatically configures JRF on the Administration Server, but the managed servers would miss JRF.
The application of the FMW deployments and services to the WebLogic Managed Servers is done by using the setServerGroups WLST command. The two JRF-MAN-SVR and WSMPM-MAN-SVR server groups ensure that the Oracle JRF and Oracle Web Services Manager (OWSM) services are targeted to the Managed Servers you are creating.

groups=[ "JRF-MAN-SVR", "WSMPM-MAN-SVR" ]

The ansible role part to create the WebLogic Domain is using two templates. The first one is a shell script using the Repository Creation Utility (RCU) to create the repository and the second one is a python script to create the FMW WebLogic Domain.

CreateRepo.j2 jinja template

SCRIPT=$(readlink -f $0)

JAVA_HOME={{ pv_java_home }}
export JAVA_HOME

{{ pv_mw_home }}/oracle_common/bin/rcu \
  -silent \
  -createRepository \
  -databaseType ORACLE \
  -connectString {{ pv_weblogic_domain_details.dbserver_name }}:{{ pv_weblogic_domain_details.dbserver_port }}/{{ pv_weblogic_domain_details.dbserver_service }} \
  -dbUser sys \
  -dbRole SYSDBA \
  -schemaPrefix {{ pv_weblogic_domain_details.repository_prefix }} \
  -useSamePasswordForAllSchemaUsers true \
  -component IAU \
  -component IAU_APPEND \
  -component IAU_VIEWER \
  -component OPSS \
  -component STB \
  -component WLS \
  -component MDS \
  -f < {{ pv_domain_config_location }}/passwords.txt

CreateDomain.j2 jinja template (not the complete code but only the interesting parts after the basic domain was created).

# Read the domain
print '>>>Domain read.'
# Apply the FMW extensions templates
{% if pv_fusion_template is defined %}
{% for key,value in pv_fusion_template_list.items() %}
addTemplate('{{ pv_mw_home }}/{{ value.location }}')
{% endfor %}
setOption('AppDir', applicationpath )
{% endif %}
# Create the machines
  for mm in MACHINES.split('|'):
    if mm :
      # create the machine
      m = mm.split(',')

except Exception, e:
  print "The machines creation failed, check reason",e
# Create the clusters and assigned managed servers
  for cluster in CLUSTERS.split('|'):
    if cluster :
      found = 0
      for cc in cluster.split(':'):
        if found == 0 :
          clusterName = cc
          found = 1
          server = cc.split(',')

except Exception, e:
  print "The clusters creation failed, check reason",e
# Create the standalone managed servers
  for server in SERVERS.split('|'):
    if server :
      serverAttributes = server.split(',')

except Exception, e:
  print "The servers creation failed, check reason",e
# Update the JDBC data sources defined by the FMW templates
  print '>>>JDBC updated.'
  # Place all Managed Servers in the Server Groups
  groups=[ "JRF-MAN-SVR", "WSMPM-MAN-SVR" ]
  for server in SERVERS.split('|'):
    if server :
      serverAttributes = server.split(',')

  for cluster in CLUSTERS.split('|'):
    if cluster :
      found = 0
      for cc in cluster.split(':'):
        if found == 0 :
          clusterName = cc
          found = 1
          server = cc.split(',')
# Create Applications dedicated data sources
{% if pv_weblogic_domain_details.create_JDBC_Connection is defined %}
{% for key,value in pv_weblogic_domain_details.ds_connections.items() %}
  createJDBCDataSource( "{{ value.ds_name }}", JDBC_REPO_DATA_SOURCE_URL, "{{ value.ds_jndi_name }}", "{{ value.ds_schema }}", "{{ value.ds_target }}", JDBC_REPO_DATA_SOURCE_TEST, JDBC_DSPassword)
except Exception, e:
  print "The JDBCDataSource {{ value.ds_name }} creation failed, check reason",e
{% endfor %}
{% endif %}

Now the extract of the ansible role to create the WebLogic Domain

- name: Execute create repository script
  become: true
  become_user: '{{ pv_oracle_user }}'
  shell: "{{ pv_domain_config_location }}/create_repo.sh"
  when: pv_create_repo_schemas is defined and pv_create_repo_schemas and pv_adminFlag
  register: rcu_results
  failed_when: rcu_results.stdout | regex_search('ERROR') and not( rcu_results.stdout | regex_search('RCU-6016') )
  #ignore_errors: true
  tags:
    - wls-domain-creation

- name: Remove password response file for repository creation
  become: true
  become_user: '{{ pv_oracle_user }}'
  file:
    path: "{{ pv_domain_config_location }}/passwords.txt"
    state: absent
  when: pv_create_repo_schemas is defined and pv_create_repo_schemas
  tags:
    - wls-domain-creation

- name: Execute Create Domain Script
  become: true
  become_user: '{{ pv_oracle_user }}'
  command: "{{ pv_mw_home }}/oracle_common/common/bin/wlst.sh {{ pv_domain_config_location }}/CreateDomain.py {{ pv_domain_name }}"
  environment:
    WLS_ADMIN_USER: '{{ pv_weblogic_domain_details.weblogic_admin }}'
    WLS_ADMIN_PASSWORD: '{{ pv_weblogic_secrets.weblogic_admin_pass }}'
    WLS_NM_USER: '{{ pv_weblogic_domain_details.nm_user }}'
    WLS_NM_PASSWORD: '{{ pv_weblogic_secrets.nodemanager_password }}'
    DS_PASSWORD: "{{ pv_weblogic_secrets.datasource_password | default('') }}"
    JDBC_DS_PASSWORD: "{{ pv_weblogic_secrets.jdbc_datasource_password | default('') }}"
    WEBLOGIC_DOMAIN_DEF_DIR: '{{ pv_domain_config_base }}'
    MW_CONFIG_HOME: '{{ pv_config_base }}'
  args:
    creates: '{{ pv_domains_base }}/{{ pv_domain_name }}/startWebLogic.sh'
  when: pv_adminFlag
  register: createdomain_output
  tags:
    - wls-domain-creation

With the YaK weblogic_domain component, we can deploy WebLogic domains (clustered or not), Fusion Middleware Forms & Reports, and Fusion Middleware Infrastructure for ADF applications, either to on-premises platforms or to different cloud platforms (AWS, Azure, OCI). The YaK core takes care of the infrastructure and the YaK weblogic_domain component takes care of the WebLogic or Fusion Middleware application servers.

The article Ansible deployment code for Oracle Fusion Middleware Infrastructure first appeared on the dbi Blog.

Latency test from Client to Oracle DB: Minimum number of rows fetched in sqlplus and sqlcl

Yann Neuhaus - Thu, 2022-11-24 10:08

Some time ago I wrote a blog post on how to calculate the network latency of fetching a row from an Oracle database to a client. The tool I used was sqlcl, because it just requires Java and hence no Oracle Client needs to be installed. Some people executed the script provided in that post with sqlplus instead (also because current versions of sqlcl require Java 11) and saw only 1669 network round trips with 5000 rows and an arraysize of 1:

test_netlat@orclcdb1@PDB1> exec dbms_output.put_line(to_char(:roundtrips_end - :roundtrips_begin)||' network round trips.');
1669 network round trips.

test_netlat@orclcdb1@PDB1> exec dbms_output.put_line(to_char((:time_end - :time_begin)*10)||' ms elapsed time.');
790 ms elapsed time.

test_netlat@orclcdb1@PDB1> exec dbms_output.put_line(to_char((:db_time_end - :db_time_start)/1000)||' ms DB time.');
165.841 ms DB time.

test_netlat@orclcdb1@PDB1> exec dbms_output.put_line(to_char(round((((:time_end - :time_begin)*10)-((:db_time_end - :db_time_start)/1000))/(:roundtrips_end - :roundtrips_begin),3))||' ms latency per round trip.');
.374 ms latency per round trip.

test_netlat@orclcdb1@PDB1> exec dbms_output.put_line('--> (Elapsed Time - DB Time) / network round trips');
--> (Elapsed Time - DB Time) / network round trips

REMARK: rowprefetch was set to the default of 1 during the sqlplus tests.
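The computation printed above can be sketched as a tiny helper; the figures plugged in below are the ones from the sqlplus run shown:

```python
def per_roundtrip_latency_ms(elapsed_ms, db_time_ms, roundtrips):
    """(Elapsed Time - DB Time) / network round trips, rounded to 3 decimals."""
    return round((elapsed_ms - db_time_ms) / roundtrips, 3)

# Figures from the sqlplus run above: 790 ms elapsed, 165.841 ms DB time, 1669 round trips
sqlplus_latency = per_roundtrip_latency_ms(790, 165.841, 1669)   # 0.374 ms
```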

With sqlcl I can see a number of network round trips closer to the expected 5000:

SQL> exec dbms_output.put_line(to_char(:roundtrips_end - :roundtrips_begin)||' network round trips.');
4953 network round trips.

SQL> exec dbms_output.put_line(to_char((:time_end - :time_begin)*10)||' ms elapsed time.');
2340 ms elapsed time.

SQL> exec dbms_output.put_line(to_char((:db_time_end - :db_time_start)/1000)||' ms DB time.');
987.403 ms DB time.

SQL> exec dbms_output.put_line(to_char(round((((:time_end - :time_begin)*10)-((:db_time_end - :db_time_start)/1000))/(:roundtrips_end - :roundtrips_begin),3))||' ms latency per round trip.');
.273 ms latency per round trip.

SQL> exec dbms_output.put_line('--> (Elapsed Time - DB Time) / network round trips');
--> (Elapsed Time - DB Time) / network round trips

So how can we explain the 1669 network round trips with sqlplus and the 4953 network round trips with sqlcl?

The number of rows fetched can be checked with sql_trace:

set feed only
alter session set sql_trace=true;
select filler from tlat ;
alter session set sql_trace=false;
set feed on

The trace file shows the following lines when running this with 19.16 sqlplus against a 19.16 database:

FETCH #139731510199544:c=67,e=66,p=0,cr=3,cu=0,mis=0,r=2,dep=0,og=1,plh=3090445234,tim=20585748787
FETCH #139731510199544:c=9,e=9,p=0,cr=1,cu=0,mis=0,r=3,dep=0,og=1,plh=3090445234,tim=20585749126
FETCH #139731510199544:c=7,e=7,p=0,cr=1,cu=0,mis=0,r=3,dep=0,og=1,plh=3090445234,tim=20585749293
FETCH #139731510199544:c=8,e=8,p=0,cr=1,cu=0,mis=0,r=3,dep=0,og=1,plh=3090445234,tim=20585749521
FETCH #139731510199544:c=8,e=8,p=0,cr=1,cu=0,mis=0,r=3,dep=0,og=1,plh=3090445234,tim=20585749706
FETCH #139731510199544:c=0,e=8,p=0,cr=1,cu=0,mis=0,r=3,dep=0,og=1,plh=3090445234,tim=20585749874

The important information here is the “r=” value: the first fetch returned 2 rows and all subsequent fetches returned 3 rows.

With sqlcl I can see 50 rows fetched on the first fetch and then the expected 1 row fetched at a time:

FETCH #140181223387392:c=52,e=52,p=0,cr=3,cu=0,mis=0,r=50,dep=0,og=1,plh=3090445234,tim=5975977699
FETCH #140181223387392:c=17,e=17,p=0,cr=1,cu=0,mis=0,r=1,dep=0,og=1,plh=3090445234,tim=5975978365
FETCH #140181223387392:c=0,e=19,p=0,cr=1,cu=0,mis=0,r=1,dep=0,og=1,plh=3090445234,tim=5975978679
FETCH #140181223387392:c=0,e=13,p=0,cr=1,cu=0,mis=0,r=1,dep=0,og=1,plh=3090445234,tim=5975978968
FETCH #140181223387392:c=13,e=13,p=0,cr=1,cu=0,mis=0,r=1,dep=0,og=1,plh=3090445234,tim=5975979473
FETCH #140181223387392:c=13,e=13,p=0,cr=1,cu=0,mis=0,r=1,dep=0,og=1,plh=3090445234,tim=5975979909

So sqlplus in 19c has a minimum fetchsize (arraysize) of 3 except for the first fetch and sqlcl fetches the expected 1 row except for the first fetch, which takes 50 rows.
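Counting the rows per fetch from a raw trace file can be scripted rather than read by eye. Here is a small sketch that pulls the r= value out of each FETCH line (trace format as shown above):

```python
import re

def fetch_row_counts(trace_lines):
    """Return the r= (rows fetched) value of each FETCH line in a raw SQL trace."""
    counts = []
    for line in trace_lines:
        # \b prevents matching the "r=" inside "cr="
        m = re.search(r'^FETCH #\d+:.*?\br=(\d+)', line)
        if m:
            counts.append(int(m.group(1)))
    return counts

# First two FETCH lines of the 19.16 sqlplus trace shown earlier
sample = [
    "FETCH #139731510199544:c=67,e=66,p=0,cr=3,cu=0,mis=0,r=2,dep=0,og=1,plh=3090445234,tim=20585748787",
    "FETCH #139731510199544:c=9,e=9,p=0,cr=1,cu=0,mis=0,r=3,dep=0,og=1,plh=3090445234,tim=20585749126",
]
print(fetch_row_counts(sample))  # [2, 3]
```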

REMARK: In earlier releases of sqlplus the minimum for arraysize was 2, with a first fetch of 1 row:

FETCH #139833692592448:c=0,e=60,p=0,cr=4,cu=0,mis=0,r=1,dep=0,og=1,plh=3090445234,tim=422087587
FETCH #139833692592448:c=0,e=23,p=0,cr=1,cu=0,mis=0,r=2,dep=0,og=1,plh=3090445234,tim=422088651
FETCH #139833692592448:c=0,e=16,p=0,cr=1,cu=0,mis=0,r=2,dep=0,og=1,plh=3090445234,tim=422089371
FETCH #139833692592448:c=0,e=15,p=0,cr=1,cu=0,mis=0,r=2,dep=0,og=1,plh=3090445234,tim=422090092

In 18c it changed:

FETCH #140635627289560:c=62,e=61,p=0,cr=3,cu=0,mis=0,r=2,dep=0,og=1,plh=3090445234,tim=559611770
FETCH #140635627289560:c=22,e=22,p=0,cr=1,cu=0,mis=0,r=3,dep=0,og=1,plh=3090445234,tim=559612282
FETCH #140635627289560:c=15,e=14,p=0,cr=1,cu=0,mis=0,r=3,dep=0,og=1,plh=3090445234,tim=559612368
FETCH #140635627289560:c=34,e=34,p=0,cr=1,cu=0,mis=0,r=3,dep=0,og=1,plh=3090445234,tim=559612583

The old behavior up to 12.2 is documented in MOS Note Pipelined Function with Pipe Row() Pipes out 15 Rows at a Time (Doc ID 1265916.1):

SQL*Plus is written in OCI, and OCI has a default prefetch value of 1 row.

The first fetch is 1 row, as 1 row is prefetched on the execute.
Then it either performs a scalar fetch, so one requested row plus one prefetched row,
or it performs an array fetch so you see eg :

arraysize = 1, fetches are: 1, 2, 2, …

On the initial execute it fetches one row. So on the first fetch call SQLPlus makes it returns that row. On the second fetch call SQLPlus makes, asking for one row, it has to go to
the database. This means it fetches the row requested plus it prefetches one additional row,
and you see two fetched. The next fetch call SQL*Plus makes uses the prefetched row,
then the next goes to the database and fetches two more and so on.

So up to 12.2 sqlplus just relied on OCI’s prefetch mechanism, hence the minimum fetch size of 2. Why do we see a minimum of 3 rows fetched from 18c onwards?

I made a test with an OCI-program I took from MOS Note
OCI: Sample Program That Prefetches Rows (Doc ID 1126015.1)

The provided C program was adjusted to use:

MAX_FETCH_COUNT 1 –> fetchsize, i.e. arraysize in sqlplus
prefetch_cnt = 1 –> prefetch-size of OCI

The trace file running the program in 19c looks as follows:

FETCH #139759481436136:c=115,e=115,p=0,cr=3,cu=0,mis=0,r=2,dep=0,og=1,plh=3090445234,tim=17304993374
FETCH #139759481436136:c=0,e=50,p=0,cr=1,cu=0,mis=0,r=2,dep=0,og=1,plh=3090445234,tim=17304993696
FETCH #139759481436136:c=0,e=73,p=0,cr=1,cu=0,mis=0,r=2,dep=0,og=1,plh=3090445234,tim=17304994078
FETCH #139759481436136:c=41,e=41,p=0,cr=1,cu=0,mis=0,r=2,dep=0,og=1,plh=3090445234,tim=17304994447

After changing prefetch_cnt = 0 I really can see only 1 row per fetch:

FETCH #140542285248488:c=93,e=93,p=0,cr=3,cu=0,mis=0,r=1,dep=0,og=1,plh=3090445234,tim=17507201670
FETCH #140542285248488:c=74,e=74,p=0,cr=1,cu=0,mis=0,r=1,dep=0,og=1,plh=3090445234,tim=17507202044
FETCH #140542285248488:c=47,e=46,p=0,cr=1,cu=0,mis=0,r=1,dep=0,og=1,plh=3090445234,tim=17507202294
FETCH #140542285248488:c=28,e=28,p=0,cr=1,cu=0,mis=0,r=1,dep=0,og=1,plh=3090445234,tim=17507202563

From the tests it’s not clear what caused the behavior change in 18c. The new features described in the sqlplus 18c release notes seem unrelated.

Summary: The script provided in the blog post Script to calculate the network latency between the application and the Oracle DB-server is correct, but you should bear in mind that sqlplus and sqlcl perform a different number of fetches (with arraysize 1), i.e. the amount of data transported may be a bit different, although that should not have any impact on the result. To be 100% correct about the latency, an OCI program can be used with a fetch size of 1 and a prefetch size of 0. Alternatively you may test with an arraysize of 3 in both sqlplus and sqlcl.

The article Latency test from Client to Oracle DB: Minimum number of rows fetched in sqlplus and sqlcl first appeared on the dbi Blog.

Azure SQL Managed Instance link feature overview

Yann Neuhaus - Thu, 2022-11-24 06:30

In one of my last blog posts I wrote about Striim, a data replication platform that can be used for migrations but also for offloaded reporting or even real-time analytics.
A new Azure feature, currently in preview, named Link feature for Azure SQL Managed Instance is available. It gives the possibility to connect a SQL Server instance hosted on-premises or in the cloud to a SQL Managed Instance.
This feature keeps the replica up to date with near real-time data replication to the cloud.
It gives the opportunity to have a read-only secondary in the cloud to offload the workload and take advantage of the Azure environment: built-in security, scalability, performance…

This new feature is based on Distributed Availability Groups. There is no prerequisite to already have an Availability Group or multiple nodes: a single node works, but multiple nodes with multiple availability groups work as well.

The link can be kept forever, or removed after a transition period when it is used for a migration with near-zero downtime.

Before configuring a Managed Instance link, as I’m working with SQL Server 2019, we need:

  • SQL Server 2019 Enterprise or Developer Edition with CU15 or above
  • An Azure SQL Managed Instance

And also prepare our SQL Server instance with some prerequisites:

  • Check CU15
  • Create a database master key in the master database
  • Enable Availability group feature
  • Enable trace flags:
    • -T1800: This trace flag optimizes performance when the log files for the primary and secondary replicas in an availability group are hosted on disks with different sector sizes, such as 512 bytes and 4K.
    • -T9567: This trace flag enables compression of the data stream for availability groups during automatic seeding. The compression increases the load on the processor but can significantly reduce transfer time during seeding.

Don’t forget to restart the SQL engine to validate the new configuration.
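As a rough sketch, the instance-side part of these prerequisites looks like the fragment below. The password, instance name and exact procedure are illustrative assumptions, not the official step-by-step guide:

```sql
-- Illustrative sketch of the prerequisites (values are placeholders)
USE master;
GO
-- Database master key in the master database
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword>';
GO
-- The Availability Groups feature is enabled outside T-SQL, e.g. in PowerShell:
--   Enable-SqlAlwaysOn -ServerInstance 'SRV\INST' -Force
-- Trace flags -T1800 and -T9567 are added as startup parameters
-- (SQL Server Configuration Manager), followed by an engine restart.
```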

Let’s start replicating the AdventureWorks database from our on-premises instance to an Azure SQL Managed Instance. The source database must be in Full recovery model and have at least one full backup.
Go to the source database, right-click on it and select “Azure SQL Managed Instance link” and “Replicate database”:

An introduction slide explains the goal, some scenarios and the requirements of this new feature:

Requirements are now checked and all of them are met here, for the server but also for the Availability Group, where the database master key is available:

We can now select one or more databases to replicate to the Azure SQL Managed Instance via the link feature. We select here the AdventureWorks2019 database, which meets the requirements, i.e. it is in Full recovery model and a full backup has been executed:

It’s time to sign in to our Azure Subscription:

Once done we need to select the Managed Instance information which will be our target:

On the following screen we have lots of information:

  • The name and the port of the endpoint which will be created for the database mirroring
  • the certificate which will be created with an expiry date
  • the names for the Availability group and for the Distributed Availability group

A last check to the choices made and click Finish to start the process:

All steps succeeded:

I have now my AdventureWorks database replicated on my Managed instance and ready for read-only workload.
If I check my Distributed Availability group dashboard, I can see that my first replica is my Availability group and my second is the Managed Instance. My AdventureWorks database is synchronized and the last hardened time occurs during my first synchronization.

If I update the column JobTitle on my Employee table on my Source instance, the update will be replicated asynchronously to my secondary database.
My replica database before the update:

I run my script on my primary database:

After some seconds my secondary is again synchronized:

And if I look on my Distributed Availability group, I can see the details with my last hardened time:

We cannot see from the database list that the database on the Managed Instance is not a “standard” one: there is no information written after the database name (such as “Synchronized”) as there would be for a secondary database in an Availability Group. Nevertheless, if we try to run an update statement against this database we receive an error message:

This new Azure Managed Instance link feature is really interesting for offloading analytics and reporting to Azure, but it can also be used in a migration scenario with near-zero downtime.
It’s also a good way to create a first hybrid scenario before moving to the cloud.

The article Azure SQL Managed Instance link feature overview first appeared on the dbi Blog.

ODA X9-2 and CPU speed

Yann Neuhaus - Wed, 2022-11-23 03:30

It’s been several months now that Oracle released new Oracle Database Appliance X9-2 series. And my first project with 4 of these servers is coming to an end as moving to production is planned for the next weeks. What is the real speed of the Xeon CPUs inside these new servers? Let’s find out.

Intel Xeon and CPU speed

I already addressed this topic in a previous blog post, and it’s still relevant today:

Basically, Intel Xeon CPUs have a variable CPU speed depending on the number of enabled cores. The fewer cores you enable, the higher the speed you get on the enabled cores.

Why is core speed important?

You may argue that with the many cores in today’s CPUs, single-core speed is not so important. But it is. First, if you run Standard Edition there is no parallelism: each statement is processed by a single core, and as soon as everything the statement needs is in the DB cache, duration mainly depends on core speed. A 3GHz core will be 50% faster than a 2GHz core, without any surprise.

Regarding Enterprise Edition, it’s slightly different. Yes, Enterprise Edition supports parallelism, but parallelism has to be configured to work properly, and it’s not suitable for statements taking fractions of a second to execute. Most statements will not use parallelism.

Core speed for Xeon Gold (X8) vs Xeon Silver (X9)

X9-2 now has Silver Xeons instead of the Gold Xeons of the previous X8-2. It means the CPU is no longer a high-end part. With X9-2, Oracle wanted to keep the same level of performance as the previous generation, because it’s enough for this kind of platform. Actually, the ODA is an entry-level engineered system, and it’s not supposed to be the best platform available. If your target is maximum CPU speed, you’d better look at Exadata X9-2M (based on 32-core Xeon Platinum) or build your own server with Gold or Platinum Xeons, although it may not make a significant difference for most of us.

Real core speed according to technical specs sheet

In the X8-2 specs sheet, you may have seen that the Xeons are given for a 2.3GHz core speed. The X9-2 specs sheet announces 2.4GHz cores. But these speeds are not exactly those you will observe. Looking at the Intel spec sheets for these processors, these speeds are the “Base Frequency”, meaning that you can expect (much) more than these figures.

Real core speed on X8-2 series

Your system is aware of CPU min and max speeds:

lscpu | grep -e min  -e max
CPU max MHz:           3900.0000
CPU min MHz:           1000.0000

It’s easy to find the actual CPU speed:

lscpu | grep -e "CPU MHz" -e "CPU(s)" | grep -v NUMA

Here are the core speeds I’ve noticed on an X8-2M:

Enabled cores    CPU MHz
64               2800
48               3100
32               3600
24               3600
16               3700
12               3700
8                3900
4                3900
2                3900

Basically, disabling half of the cores will bring you a significant speed bump (nearly +30%). And disabling 3/4 of the cores will bring you the maximum speed on the remaining cores (+40%).
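The claimed speed bumps follow directly from the table; a quick check, with core counts and MHz values taken from the X8-2M table above:

```python
# CPU MHz by enabled core count on the X8-2M (from the table above)
speed_by_cores = {64: 2800, 48: 3100, 32: 3600, 24: 3600,
                  16: 3700, 12: 3700, 8: 3900, 4: 3900, 2: 3900}

def speed_gain_pct(from_cores, to_cores):
    """Percentage core-speed gain when fewer cores are enabled."""
    return round((speed_by_cores[to_cores] / speed_by_cores[from_cores] - 1) * 100, 1)

half_gain = speed_gain_pct(64, 32)   # 28.6 -> "nearly +30%"
max_gain = speed_gain_pct(64, 8)     # 39.3 -> "+40%" at 8 cores or fewer
```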

Depending on your ODA, you can then expect:

  • 3600MHz: X8-2S with 8 cores, X8-2M with 16 cores, X8-2HA with 2x 16 cores
  • 3900MHz: X8-2S with 4 cores, X8-2M with 8 cores, X8-2HA with 2x 8 cores
Real core speed on X9-2 series

I didn’t test all core configurations as X9-2 is quite new, but I will start cutting core numbers by half on my ODA X9-2S with one Xeon:

lscpu | grep CPU
CPU op-mode(s):        32-bit, 64-bit
CPU(s):                32
On-line CPU(s) list:   0-31
CPU family:            6
Model name:            Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz
CPU MHz:               2900.000
CPU max MHz:           3400.0000
CPU min MHz:           800.0000
NUMA node0 CPU(s):     0-31

odacli update-cpucore -c 8

lscpu | grep CPU
CPU op-mode(s):        32-bit, 64-bit
CPU(s):                16
On-line CPU(s) list:   0-15
CPU family:            6
Model name:            Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz
CPU MHz:               3400.000
CPU max MHz:           3400.0000
CPU min MHz:           800.0000
NUMA node0 CPU(s):     0-15

That’s it, as soon as half of the cores (or less) are enabled, you will get maximum speed.

Let’s confirm this on an ODA X9-2L with 2 Xeons:

lscpu | grep CPU
CPU op-mode(s):        32-bit, 64-bit
CPU(s):                64
On-line CPU(s) list:   0-63
CPU family:            6
Model name:            Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz
CPU MHz:               2899.691
CPU max MHz:           3400.0000
CPU min MHz:           800.0000
NUMA node0 CPU(s):     0-15,32-47
NUMA node1 CPU(s):     16-31,48-63

odacli update-cpucore -c 16

lscpu | grep CPU
CPU op-mode(s):        32-bit, 64-bit
CPU(s):                32
On-line CPU(s) list:   0-31
CPU family:            6
Model name:            Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz
CPU MHz:               3400.000
CPU max MHz:           3400.0000
CPU min MHz:           800.0000
NUMA node0 CPU(s):     0-7,16-23
NUMA node1 CPU(s):     8-15,24-31

As expected, this is the same behaviour. Maximum speed when enabling half the cores.

I didn’t try yet on an X9-2HA, but as it’s basically two X9-2L ODAs without any NVMe disk, it will probably be the same conclusion.


ODA X9-2 is pretty much the same hardware as its predecessor. But you should know that you will benefit from the maximum core speed as soon as you cut the enabled cores by half with odacli update-cpucore. I would recommend limiting your Enterprise licenses to a maximum of 4 CPUs (8 cores) on an X9-2S, 8 CPUs (16 cores) on an X9-2L and 16 CPUs (2x 16 cores) on an X9-2HA. If you use Standard Edition, consider enabling only 8 cores on an X9-2S, 16 cores on an X9-2L and 2x 16 cores on an X9-2HA to reach the best performance.

The article ODA X9-2 and CPU speed first appeared on the dbi Blog.

An introduction to OMrun – 2 – Working with parameters

Yann Neuhaus - Tue, 2022-11-22 06:14

In the last post we compared the list of databases in the same PostgreSQL cluster using OMrun. The setup for this was pretty simple: We’ve defined the environment using the PostgreSQL Data Adapter and created the connections to two databases. In a new “Test Data Object” we’ve written the SQL statement to list all databases and once the test was executed OMrun compared the results.

This is how it looked like at the end of the previous post:

The select statement we used did not use a where clause, it just lists all the databases. If we want to restrict the result, we can do it like this:

Very simple SQL (of course an in-list with just one value does not make much sense). A potential issue with this is that we hard-code the value(s) into the statement. This is where parameters come into play. In OMrun you can define up to four parameters which get passed into the statement; you can already see them in the lower-right section of the screenshot above. Instead of hard-coding the value(s), we can define a parameter and pass it into the statement by referencing it with “@param1”:

Our statement has become more dynamic. Passing multiple values by using one parameter is possible as well:

In the same way the remaining parameters can be used, e.g. like this:

Doing it like this will, of course, cause the test to fail. You can even go a step further and use nested parameters:

As you can see, this gives you great flexibility with passing values into your statements.
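OMrun’s substitution engine is a black box to us, but the mechanics of “@paramN” replacement can be pictured with a few lines of Python. This is a sketch of the idea only; the function name and behaviour are assumptions, not OMrun internals:

```python
def expand_params(sql_template, params):
    """Naive @paramN substitution, mimicking how values reach the statement."""
    for name, value in params.items():
        sql_template = sql_template.replace('@' + name, value)
    return sql_template

# One parameter carrying multiple in-list values, as in the screenshots above
stmt = expand_params(
    "select datname from pg_database where datname in (@param1)",
    {"param1": "'postgres','template1'"},
)
```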

The article An introduction to OMrun – 2 – Working with parameters first appeared on the dbi Blog.

Fedora 37 and Oracle

Tim Hall - Tue, 2022-11-22 03:34

Fedora 37 was released recently. Here comes the standard warning. Here are the usual things I do when a new version of Fedora comes out. Why do I do this? As mentioned in the first link, Fedora is a proving ground for future versions of RHEL, and therefore Oracle Linux. I like to see what … Continue reading "Fedora 37 and Oracle"

The post Fedora 37 and Oracle first appeared on The ORACLE-BASE Blog on November 22, 2022.

This is the end...

Scott Spendolini - Mon, 2022-11-21 21:43

 ...of my blog at this URL. 

Please follow me over on spendolini.blog for a fresh, updated continuation of this blog.

"Find Definition References" for Page and Field Configurator

Jim Marion - Mon, 2022-11-21 13:33

A student recently asked:

Is there an Edit | Find Definition References equivalent for Page and Field Configurator?

Great question! In Application Designer, we can open a field and choose Edit | Find Definition References to find all usages of that field. Unfortunately, Page and Field Configurator does not have an equivalent. The good news, however, is Page and Field Configurator is metadata driven. In other words, we can create our own "Find Definition References" equivalent by writing SQL. Here is a short example to get you started:
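The SQL screenshot did not survive here, so the sketch below shows the kind of query intended. EOCC_CONFIG_FLD comes from the text, but the exact column names (PNLGRPNAME, RECNAME, FIELDNAME) are assumptions you should verify against the record definition in Application Designer:

```sql
-- Hypothetical "Find Definition References" for a field used by
-- Page and Field Configurator (column names assumed; verify them first).
SELECT PNLGRPNAME   -- component
     , RECNAME      -- record
     , FIELDNAME    -- field
  FROM PS_EOCC_CONFIG_FLD
 WHERE FIELDNAME = :1;
```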


The EOCC_CONFIG_FLD record contains the Component name, Record name, and Field name, allowing us to effectively "Find Definition References" for any of those three items.

At JSMpros, we teach PeopleTools Tips like this every week. Be sure to check our website to see what we are offering next!

Row_number() sorts

Jonathan Lewis - Mon, 2022-11-21 11:47

An email on the Oracle-L list server a few days ago described a performance problem that had appeared after an upgrade from 11g to 19c (19.15). A long-running statement (insert as select, running parallel 16) that had run to completion in 11g using about 20GB of temporary space (with 50GB read and written) had failed after running for a couple of hours in 19c and consuming 2.5 TB of temporary space, even when the 11g execution plan was recreated through a SQL Profile.

When I took a look at the SQL Monitor report for 19c it turned out that a large fraction of the work done was in an operation called WINDOW CHILD PUSHED RANK which was there to deal with a predicate:

row_number() over(partition by t.ds_no, t.c_nbr order by c.cpcl_nbr desc) = 1

Checking the successful 11g execution, this operation had taken an input rowsource of 7 billion rows and produced an output rowsource of 70 million rows.

Checking the SQL Monitor report for the failed executions in 19c the “pure” 19c plan had reported 7 billion input rows, 6GB memory and 1TB temp space at the same point, the plan with the 11g profile had reported 10 billion rows, but the operation had not yet reported any output rows despite reporting 9GB as the maximum memory allocation and 1TB as the maximum temp space usage. (Differences in row counts were probably due to the report being run for different dates.)

So, the question to the list server was: “is this a bug in 19c?”


It’s a little unfortunate that I couldn’t model the problem in 19c at the time because my 19c VM kept crashing; but I built a very simple model to allow me to emulate the window sort and rank() predicate in an 11g instance, then re-played the model in an instance of 21c.

For the model data I took 50 copies of the first 50,000 rows from the view all_objects to produce a table of 2,500,000 rows covering 35,700 blocks and 279 MB (55,000 blocks / 430 MB in 21c); then I ran the query below and reported its execution plan with a basic call to dbms_xplan.display_cursor():

select
        /*+ dynamic_sampling(0) */
        owner, max(object_name)
from    (
        select
                /*+ no_merge */
                owner, object_name
        from    (
                select
                        owner, object_name,
                        row_number() over (partition by object_name order by object_type desc) orank
                from    t1
                )
        where   orank = 1
        )
group by
        owner
order by
        owner
;

| Id  | Operation                  | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT           |      |       |       |       | 29491 (100)|          |
|   1 |  SORT GROUP BY             |      |     8 |   184 |       | 29491   (9)| 00:02:28 |
|   2 |   VIEW                     |      |  2500K|    54M|       | 28532   (6)| 00:02:23 |
|*  3 |    VIEW                    |      |  2500K|   112M|       | 28532   (6)| 00:02:23 |
|*  4 |     WINDOW SORT PUSHED RANK|      |  2500K|    95M|   124M| 28532   (6)| 00:02:23 |
|   5 |      TABLE ACCESS FULL     | T1   |  2500K|    95M|       |  4821   (8)| 00:00:25 |

Predicate Information (identified by operation id):

   3 - filter("ORANK"=1)

Oracle 21c produced the same execution plan – though the row estimate for the VIEW operations (numbers 2 and 3) was a more realistic 46,236 (num_distinct recorded for object_name) compared to the unchanged 2,500,000 from 11g. (Of course it should have been operation 4 that showed the first drop in cardinality.)

With my first build, the timings weren’t what I expected: under 21c the query completed in 3.3 seconds, under 11g it took 11.7 seconds. Most of the difference was due to a large (55MB) spill to temp space that appeared in 11g but not in 21c. This would have been because 11g wasn’t allowed a large enough PGA, so I set the workarea_size_policy to manual and the sort_area_size to 100M, which looks as if it should have been enough to cover the 11g requirement – it wasn’t and I had to grow the sort_area_size to 190 MB before the 11g operation completed in memory, allocating roughly 155MB. By comparison 21c reported an increase of only 19MB of PGA to run the query, claiming that it needed only 4.7MB to handle the critical operation.

For comparison purposes here are the two run-time execution plans, with rowsource execution stats (which messed the timing up a little) and the column projection information; 11g first:

| Id  | Operation                  | Name | Starts | E-Rows |E-Bytes|E-Temp | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
|   0 | SELECT STATEMENT           |      |      1 |        |       |       | 29491 (100)|      8 |00:00:03.96 |   35513 |       |       |          |
|   1 |  SORT GROUP BY             |      |      1 |      8 |   184 |       | 29491   (9)|      8 |00:00:03.96 |   35513 |  3072 |  3072 | 2048  (0)|
|   2 |   VIEW                     |      |      1 |   2500K|    54M|       | 28532   (6)|  28575 |00:00:04.07 |   35513 |       |       |          |
|*  3 |    VIEW                    |      |      1 |   2500K|   112M|       | 28532   (6)|  28575 |00:00:03.93 |   35513 |       |       |          |
|*  4 |     WINDOW SORT PUSHED RANK|      |      1 |   2500K|    95M|   124M| 28532   (6)|   1454K|00:00:08.82 |   35513 |   189M|  4615K|  168M (0)|
|   5 |      TABLE ACCESS FULL     | T1   |      1 |   2500K|    95M|       |  4821   (8)|   2500K|00:00:10.85 |   35513 |       |       |          |

Predicate Information (identified by operation id):

   3 - filter("ORANK"=1)

Column Projection Information (identified by operation id):

   1 - (#keys=1) "OWNER"[VARCHAR2,30], MAX("OBJECT_NAME")[30]

It’s an interesting oddity, and possibly a clue about the excess memory and temp space, that the A-Rows column for the Window Sort operation reports 1,454K rows output when it surely ought to be the final 45,982 at that point. It’s possible to imagine a couple of strategies that Oracle might be following to do the window sort that would result in the excess volume appearing, and I’ll leave it to readers to use their imagination on that one.

And now 21c

| Id  | Operation                  | Name | Starts | E-Rows |E-Bytes|E-Temp | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
|   0 | SELECT STATEMENT           |      |      1 |        |       |       | 48864 (100)|     12 |00:00:02.98 |   54755 |  54750 |       |       |          |
|   1 |  SORT GROUP BY             |      |      1 |     12 |   852 |       | 48864   (1)|     12 |00:00:02.98 |   54755 |  54750 |  5120 |  5120 | 4096  (0)|
|   2 |   VIEW                     |      |      1 |  46236 |  3205K|       | 48859   (1)|  45982 |00:00:02.97 |   54755 |  54750 |       |       |          |
|*  3 |    VIEW                    |      |      1 |  46236 |  6547K|       | 48859   (1)|  45982 |00:00:02.97 |   54755 |  54750 |       |       |          |
|*  4 |     WINDOW SORT PUSHED RANK|      |      1 |   2500K|   131M|   162M| 48859   (1)|  45982 |00:00:02.97 |   54755 |  54750 |  5297K|   950K| 4708K (0)|
|   5 |      TABLE ACCESS FULL     | T1   |      1 |   2500K|   131M|       | 15028   (1)|   2500K|00:00:00.28 |   54755 |  54750 |       |       |          |

Predicate Information (identified by operation id):

   3 - filter("ORANK"=1)

Column Projection Information (identified by operation id):

   1 - (#keys=1; rowset=256) "OWNER"[VARCHAR2,128], MAX("OBJECT_NAME")[128]
   2 - (rowset=256) "OWNER"[VARCHAR2,128], "OBJECT_NAME"[VARCHAR2,128]
   3 - (rowset=256) "OWNER"[VARCHAR2,128], "OBJECT_NAME"[VARCHAR2,128], "ORANK"[NUMBER,22]
   4 - (#keys=2; rowset=256) "OBJECT_NAME"[VARCHAR2,128], "OBJECT_TYPE"[VARCHAR2,23], "OWNER"[VARCHAR2,128], ROW_NUMBER() OVER ( PARTITION BY
   5 - (rowset=256) "OWNER"[VARCHAR2,128], "OBJECT_NAME"[VARCHAR2,128], "OBJECT_TYPE"[VARCHAR2,23]

In this case we see the A-rows from the Window Sort meeting our expectations – but that may be a beneficial side effect of the operation completing in memory.

Optimisation (?)

Given the dramatically different demands for memory for a query that ought to do the same thing in both versions it looks as if 21c may be doing something clever that 11g doesn’t do, or maybe doesn’t do very well, or maybe tries to do but has a bug that isn’t dramatic enough to be obvious unless you’re looking closely.
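One plausible “clever” strategy: because the outer filter keeps only orank = 1, a window sort only needs to remember the current top row of each partition instead of sorting all 2.5M rows. Here is a toy illustration of that principle in Python; this is the shape of the saving, not Oracle’s actual algorithm:

```python
def top_row_per_partition(rows, part_key, order_key):
    """One-pass equivalent of row_number() over (partition by ... order by ... desc) = 1.

    Memory is proportional to the number of partitions, not the number of rows,
    which is the kind of saving a rank-aware ("pushed rank") sort can make.
    Ties are broken arbitrarily, just like row_number() without a unique key.
    """
    best = {}
    for row in rows:
        k = part_key(row)
        if k not in best or order_key(row) > order_key(best[k]):  # desc: larger wins
            best[k] = row
    return list(best.values())

rows = [
    {"object_name": "T1", "object_type": "TABLE",   "owner": "SCOTT"},
    {"object_name": "T1", "object_type": "SYNONYM", "owner": "PUBLIC"},
    {"object_name": "V1", "object_type": "VIEW",    "owner": "HR"},
]
kept = top_row_per_partition(rows,
                             part_key=lambda r: r["object_name"],
                             order_key=lambda r: r["object_type"])
# kept holds one row per object_name: the TABLE row for T1 and the VIEW row for V1
```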

Here’s a script that I used to build the test data, with scope for a few variations in testing. You’ll notice that the “create table” includes an “order by” clause that is close to the sorting requirement of the over() clause that appears in the query. The results I’ve shown so far were for data that didn’t have this clause in place.

rem     Script:         analytic_sort_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2022
rem     Last tested

create table t1 nologging
as
select
        ao.*
from
        (select * from all_objects where rownum <= 50000) ao,
        (select rownum from dual connect by rownum <= 50)
order by
        object_name, object_type -- desc
;

--      Stats collection to get histograms

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 254'
        );
end;
/

-- reconnect here to maximise visibility of PGA allocation

connect xxxxxxxx/xxxxxxxx

set linesize 180
set trimspool on
set tab off

-- alter session set workarea_size_policy = manual;
-- alter session set sort_area_size = 199229440;

alter session set events '10046 trace name context forever, level 8';
-- alter session set statistics_level = all;
-- alter session set "_rowsource_execution_statistics"= true;

spool analytic_sort_2

select
        /*  monitoring */
        owner, max(object_name)
from    (
        select
                /*+ no_merge */
                owner, object_name
        from    (
                select
                        owner, object_name,
                        row_number() over (partition by object_name order by object_type desc) orank
                from
                        t1
                )
        where   orank = 1
        )
group by
        owner
order by
        owner
/

select * from table(dbms_xplan.display_cursor(format=>'cost bytes allstats last projection'));

alter session set events '10046 trace name context off';
alter session set "_rowsource_execution_statistics"= false;
alter session set statistics_level = typical;
alter session set workarea_size_policy = auto;

spool off

The results I’m going to comment on now are the ones I got after running the script as above, then reconnecting and flushing the shared pool before repeating the second half of the script (i.e. without recreating the table).

In 11g, going back to the automatic workarea sizing the session used 37MB of memory and then spilled (only) 3MB to temp. The run time was approximately 3 seconds – which is a good match for the “unsorted” 21c run time. As with the original tests, the value reported in A-rows is larger than we would expect (in this case suspiciously close to twice the correct values – but that’s more likely to be a coincidence than a clue). Interestingly, when I switched to the manual workarea_size_policy and set the sort_area_size to 190MB Oracle said “that’s the optimum memory” and used nearly all of it to complete in memory – for any value less than that (even down to 5MB) Oracle spilled just 3 MB to disk in a one-pass operation. So it looks as if Oracle “knows” it doesn’t need to sort the whole data set, but still uses as much memory as is available to do something before it starts to get clever.

| Id  | Operation                  | Name | Starts | E-Rows |E-Bytes|E-Temp | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
|   0 | SELECT STATEMENT           |      |      1 |        |       |       | 29491 (100)|      8 |00:00:01.76 |   35523 |   2145 |    331 |       |       |          |         |
|   1 |  SORT GROUP BY             |      |      1 |      8 |   184 |       | 29491   (9)|      8 |00:00:01.76 |   35523 |   2145 |    331 |  2048 |  2048 | 2048  (0)|         |
|   2 |   VIEW                     |      |      1 |   2500K|    54M|       | 28532   (6)|  28575 |00:00:02.00 |   35523 |   2145 |    331 |       |       |          |         |
|*  3 |    VIEW                    |      |      1 |   2500K|   112M|       | 28532   (6)|  28575 |00:00:01.83 |   35523 |   2145 |    331 |       |       |          |         |
|*  4 |     WINDOW SORT PUSHED RANK|      |      1 |   2500K|    95M|   124M| 28532   (6)|  57171 |00:00:02.10 |   35523 |   2145 |    331 |  2979K|   768K|   37M (1)|    3072 |
|   5 |      TABLE ACCESS FULL     | T1   |      1 |   2500K|    95M|       |  4821   (8)|   2500K|00:00:11.84 |   35513 |   1814 |      0 |       |       |          |         |

In 21c there’s essentially no difference between the sorted and unsorted tests, which suggests that with my data the session had been able to apply its optimisation strategy at the earliest possible moment rather than waiting until it had no alternative but to spill to disc.

| Id  | Operation                  | Name | Starts | E-Rows |E-Bytes|E-Temp | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
|   0 | SELECT STATEMENT           |      |      1 |        |       |       | 48864 (100)|     12 |00:00:00.98 |   54753 |  54748 |       |       |          |
|   1 |  SORT GROUP BY             |      |      1 |     12 |   852 |       | 48864   (1)|     12 |00:00:00.98 |   54753 |  54748 |  4096 |  4096 | 4096  (0)|
|   2 |   VIEW                     |      |      1 |  46236 |  3205K|       | 48859   (1)|  45982 |00:00:00.97 |   54753 |  54748 |       |       |          |
|*  3 |    VIEW                    |      |      1 |  46236 |  6547K|       | 48859   (1)|  45982 |00:00:00.97 |   54753 |  54748 |       |       |          |
|*  4 |     WINDOW SORT PUSHED RANK|      |      1 |   2500K|   131M|   162M| 48859   (1)|  45982 |00:00:00.97 |   54753 |  54748 |  5155K|   940K| 4582K (0)|
|   5 |      TABLE ACCESS FULL     | T1   |      1 |   2500K|   131M|       | 15028   (1)|   2500K|00:00:00.42 |   54753 |  54748 |       |       |          |


Given that 11g reports a very small spill to disc (one that stays fairly constant in size no matter how large or small the available PGA allocation is) when the input data is sorted to help the over() clause, and given how large the spill to disc can become when the data is not sorted, I believe Oracle has an optimisation that discards input rows early in the analytic window sort. We also have some evidence of a flaw in the code in versions prior to 21c that means Oracle fails to re-use memory freed by rows that have been discarded.
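The early-discard idea is easy to sketch outside the database: to return only the orank = 1 row of each partition you never need to buffer more than the current “winning” row per partition key. Here is a plain-Python illustration of the principle (a sketch only, not Oracle’s actual window-sort code):

```python
def top_row_per_partition(rows, part_key, order_key):
    """Scan once, keeping only the current best row per partition.

    Mimics why a ranking predicate (orank = 1) lets a window sort
    discard losing rows early instead of sorting the full input.
    """
    best = {}
    for row in rows:
        k = part_key(row)
        # "order by object_type desc" means the best row has the
        # highest ordering key for its partition
        if k not in best or order_key(row) > order_key(best[k]):
            best[k] = row
    return list(best.values())
```

With 2.5M input rows but only about 46,000 distinct partition keys, the retained working set is tiny, which matches the small, near-constant spill observed when the input arrives in a helpful order.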


I’ve said in the past that if you’re using analytic functions you ought to minimise the size of the data you’re processing before you apply the analytic part. Another step that can help is to make sure you’ve got the data into a (fairly well) sorted order before you reach the analytic part.

In versions of Oracle prior to 21c, it also seems to make sense (if you can arrange it) to reduce the amount of memory the session is allowed to use for a sort operation, as this will reduce the CPU used by the session and avoid grabbing excess memory that could be used more effectively by other sessions.


Just before publishing I found a way of keeping my instance alive long enough to run the tests, then also ran them on an instance of 19c. Both versions showed the same pattern of doing a large allocation of memory and a large spill to disc when the data was not sorted, and a large allocation of memory but a small spill to disc when the data was sorted.

As a little sanity check I also exported the 19c data and imported it to 21c in case it was a simple variation in the data that allowed 21c to operate more efficiently than 19c. The change in data made no difference to the way in which 21c handled it: in both cases it called for a small allocation of memory with no spill to disc.

Using Git with PL/SQL in a Shared Development Database

Gerger Consulting - Mon, 2022-11-21 06:18


With Gitora 6, developers can work on the same code base (i.e. a package, procedure etc…) in the same database without blocking each other.

Many development teams use a single database for development. Many of them use the same database for testing, as well. They achieve this by using different schemas for different purposes. Gitora 6 enables these teams to create Git repos for these schemas and pull updates between them.

With Gitora 6, you can even create a different schema for every developer and have them merge their code using Git.

Gitora 6 enables you to implement any modern development workflow in a single database.

How does Gitora 6 work?

Gitora 6 introduces a new repo type called Single Schema Repo (SSR). As its name suggests, an SSR manages database objects from a single schema. The DDL scripts in SSRs don’t contain schema prefixes, so Gitora can execute them in other schemas in the same database.

This enables developers to work on the same package, procedure, function, view etc… (i.e. anything that can be created with the CREATE OR REPLACE command) at the same time, in the same database in different schemas.
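The mechanism rests on schema-neutral DDL: once the schema qualifier is stripped, the same script can be executed in any schema. A toy illustration of the idea (this is not Gitora’s code, just a sketch of the principle):

```python
import re

def strip_schema_prefix(ddl, schema):
    """Remove 'SCHEMA.' qualifiers so the DDL can run in any schema."""
    return re.sub(rf'\b{re.escape(schema)}\.', '', ddl, flags=re.IGNORECASE)

ddl = 'CREATE OR REPLACE PACKAGE LOGISTICS.PKG_SHIP AS ... END;'
print(strip_schema_prefix(ddl, 'LOGISTICS'))
# CREATE OR REPLACE PACKAGE PKG_SHIP AS ... END;
```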

An Example

Let’s go through an example: assume the team is working on logistics software and has a schema named LOGISTICS that stores all their database objects. The team can create a schema (or already have one) called LOGISTICS_TEST in the same database. Here are the steps the team needs to follow so that they can pull their changes to LOGISTICS_TEST.

  1. Create a single schema Gitora repo that manages the LOGISTICS schema. Let’s call it REPO_LOGISTICS.
  2. Add all relevant database objects in the LOGISTICS schema to REPO_LOGISTICS.
  3. Create another single schema Gitora repo that manages the LOGISTICS_TEST schema. Let’s call it REPO_LOGISTICS_TEST.

That’s it. That’s all there is to it. From this point on, any change you make to the code in the LOGISTICS schema can be pulled to the LOGISTICS_TEST schema using Gitora (and vice versa).

Single Schema Repos can also be used to create separate development environments in the same database for every developer in the team.

Multiple Developers, Same Database

Assuming we already have the LOGISTICS schema and the REPO_LOGISTICS repo from the previous example, here is how that would work:

  1. Create a schema for each developer: LOGISTICS_JOE, LOGISTICS_MARY, LOGISTICS_PAUL.
  2. Create a single schema Gitora repo for each schema. Let’s call them REPO_LOGISTICS_JOE, REPO_LOGISTICS_MARY, REPO_LOGISTICS_PAUL respectively.

From this point on, all three developers can work in their own schema and edit any package, procedure, view etc… freely, without overwriting each other’s changes or being affected by them. Using Gitora, they can create new branches in their own repo, for features or bugs they work on without affecting anyone else.

When the time comes to put it all together, they can use Gitora to merge their changes.

We wrote a few articles to get you started:

Creating Single Schema Repos.

How to implement agile development in a single database

How to manage changes to tables with Single Schema Repos

Before Gitora 6, implementing agile database development with Gitora required multiple databases. With Gitora 6 you can achieve it in a single database.

We are incredibly excited to release Gitora 6. We believe it will bring a massive productivity boost to Oracle database development teams worldwide. We can’t wait for you to try it.

Gitora 6 is available now. You can download it from this link.

Categories: Development

An introduction to OMrun – 1 – A simple data comparison with the PostgreSQL Adapter

Yann Neuhaus - Mon, 2022-11-21 05:12

With the integration of OMIS into dbi services we also got a new product: OMrun. OMrun is all about data, especially comparing, validating and anonymizing data from various sources against various targets. Most of that is done using plain SQL, so we get the full power of SQL for validating, comparing or anonymizing data. In this first introduction post we’ll look at a very simple case. In later posts we’ll go deeper and talk about scenarios, rules, parameters and much more.

When you start OMrun for the first time it looks like this:

There is a menu at the top and a tree view on the left. What we’ll do as the first step is remove all the sample definitions, that is: remove the predefined “Environments”, the predefined “Scenarios” and the predefined “Components / Objects”. The procedure is the usual one: right click to get the context menu, then delete (this works the same for all the entries):

The first piece we need is an environment. An environment can be thought of as a collection of data sources. For the scope of this post, we’ll create two definitions for PostgreSQL databases: One for the source and one for the target because in the end, we want to compare data. Creating a new environment is pretty simple:

This gives us an empty environment. The next step is to define our data sources. A data source can be pretty much everything. If you take a look at the default “Data Adapters” in the “Config” menu, you’ll notice that there are many already predefined. Two of those are for PostgreSQL (not counting the ODBC adapter):

“Trusted” means: use the user and password from the operating system OMrun runs on, which is Windows. If “Trusted” is unchecked, the credentials we’ll provide later on are used.

Next is defining the two connections to PostgreSQL:

We’re using the same instance here, but two different databases and we’ll use the PostgreSQL “Db Type”, of course.

Now that we have the connections it’s time to create the “Data-Object”:

Now is the time to define the queries we want to execute on the source and the target. In our case we do a simple select against pg_database which of course returns the same result for both sides:

Having done that, the business mapping needs to be generated:

Tip: If you move your mouse pointer over the “Query A” or “Query B” labels, you’ll see what statement is generated:

Ready to execute our first data comparison:

Not a big surprise: the test passed because we have the same result on both sides. Of course this is not really useful in practice, but it should already give you an idea of what OMrun can do for you. In the next posts we’ll go a bit more into the details and start to use parameters.
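Conceptually, the comparison OMrun just performed boils down to running query A and query B and diffing the two result sets. A minimal plain-Python sketch of that idea (this is not OMrun’s implementation, and the row values are made up):

```python
def compare_result_sets(rows_a, rows_b):
    """Diff two query results: rows only on side A, rows only on
    side B, and rows present on both sides."""
    a, b = set(rows_a), set(rows_b)
    return {
        "only_in_a": a - b,
        "only_in_b": b - a,
        "matched": a & b,
    }
```

A comparison “passes” when both only_in_* sets come back empty, which is exactly what happened with the identical pg_database results above.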

L’article An introduction to OMrun – 1 – A simple data comparison with the PostgreSQL Adapter est apparu en premier sur dbi Blog.

Access OCI cloud using session token

Yann Neuhaus - Mon, 2022-11-21 01:42

In the previous article, A step by step guide to install and configure OCI client, we walked through the oci client configuration.

OCI also provides token-based session authentication.

~ > oci session authenticate --profile-name MY_SESS_PROF --region eu-zurich-1 --tenancy-name <your tenancy name>
    Please switch to newly opened browser window to log in!
    You can also open the following URL in a web browser window to continue:
    Completed browser authentication process!
Config written to: /Users/mytest/.oci/config

    Try out your newly created session credentials with the following example command:

    oci iam region list --config-file /Users/mytest/.oci/config --profile MY_SESS_PROF --auth security_token

Note: The tenancy name can be found on the page “Tenancy details”.

Let’s try out the OCI output suggestion:

oci iam region list --config-file /Users/mytest/.oci/config --profile MY_SESS_PROF --auth security_token

  "data": [
      "key": "AMS",
      "name": "eu-amsterdam-1"
      "key": "ARN",
      "name": "eu-stockholm-1"

The $HOME/.oci/config file was updated with the token, and the session parameters are in the session directory as follows:

~ > cd $HOME/.oci
.oci > tree
├── config
├── oci_api_key.pem
├── oci_api_key_public.pem
├── rsa.private
├── rsa.public
└── sessions
    └── MY_SESS_PROF
        ├── oci_api_key.pem
        ├── oci_api_key_public.pem
        └── token

.oci > cat config
user = ocid1.user.oc1..aaaa*********wyqwklitkuu6owpzcj****
fingerprint = 01:**************:3c:9a:99:ff:08:******
key_file = ~/.oci/oci_api_key.pem
tenancy = ocid1.tenancy.oc1..aaaaaaaa**************iai4di356quiofira
region = eu-zurich-1

# The newly added session profile is appended here

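The ~/.oci/config file is a plain INI file, so its profiles can also be inspected with standard tooling if needed. A small sketch using Python’s stdlib configparser (the values below are placeholders, not real OCIDs):

```python
import configparser

sample = """\
[DEFAULT]
user = ocid1.user.oc1..exampleuser
fingerprint = 01:23:45:67:89
key_file = ~/.oci/oci_api_key.pem
tenancy = ocid1.tenancy.oc1..exampletenancy
region = eu-zurich-1
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

# Pick out individual settings of the DEFAULT profile.
print(cfg["DEFAULT"]["region"])   # eu-zurich-1
```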
For ease of use, these environment variables can be set to avoid passing the --profile and --auth parameters on every oci command line:

~ > export OCI_CLI_PROFILE=MY_SESS_PROF
~ > export OCI_CLI_AUTH=security_token

Token validation:

~ > oci session validate  --profile MY_SESS_PROF --auth security_token
Session is valid until 2022-10-06 17:59:00

The token can be refreshed before its expiration:

~ > oci session refresh --profile MY_SESS_PROF
Attempting to refresh token from https://auth.eu-zurich-1.oraclecloud.com/v1/authentication/refresh
Successfully refreshed token
~ > oci session validate  --profile MY_SESS_PROF --auth security_token
Session is valid until 2022-10-06 18:16:42

L’article Access OCI cloud using session token est apparu en premier sur dbi Blog.

Calling JavaScript from Streamlit/Python

Andrejus Baranovski - Sun, 2022-11-20 14:30
Streamlit is a Python framework that helps to build UIs with a low-code approach. Code runs on the server, and there is no built-in functionality for injecting client-side events with JavaScript. Luckily there is a third-party Streamlit component that allows JavaScript calls from server-side Python code. The response from such a JavaScript call is received within the same Streamlit execution, which makes it possible to use values returned from JavaScript in Python logic. In this example, I show a one-liner JavaScript call to get the UI screen width, so that layout structure adjustments can be calculated in Python based on the screen size.
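A minimal sketch of the pattern is below. The layout decision itself is plain Python and can live in a testable helper; the JavaScript bridge is shown commented out because it only runs inside a Streamlit app (assumptions: the third-party component is the streamlit-js-eval package with its streamlit_js_eval() helper, and the breakpoint value is arbitrary):

```python
def columns_for_width(width_px, breakpoint_px=992):
    """Decide how many Streamlit columns to render for a given
    client screen width; fall back to 1 while the width is unknown."""
    if width_px is None or width_px < breakpoint_px:
        return 1
    return 2

# Inside a Streamlit app the width would come from a JavaScript call,
# e.g. with the (third-party, assumed) streamlit-js-eval component:
#
#   import streamlit as st
#   from streamlit_js_eval import streamlit_js_eval
#
#   width = streamlit_js_eval(js_expressions="screen.width", key="SCR")
#   cols = st.columns(columns_for_width(width))
```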


VirtualBox 7.0.4 and Vagrant 2.3.3 – Another VirtualBox Fail (For Me)…

Tim Hall - Sun, 2022-11-20 04:12

VirtualBox 7.0.4 has been released. The downloads and changelog are in the normal places. From my previous posts on VirtualBox 7.0.x you will know I’ve been having problems with it. They all seem to come down to networking. I can often, but not always, start up an existing VM, but if I try to build a new VM … Continue reading "VirtualBox 7.0.4 and Vagrant 2.3.3 – Another VirtualBox Fail (For Me)…"

The post VirtualBox 7.0.4 and Vagrant 2.3.3 – Another VirtualBox Fail (For Me)… first appeared on The ORACLE-BASE Blog.


Subscribe to Oracle FAQ aggregator