Feed aggregator

How to request plumbing repairs in Shizuoka

The Feature - 12 hours 52 min ago

Plumbing work includes running supply pipes into new buildings, indoor piping, and sewage drainage work. Work on the road side requires permission from the local government and is carried out by plumbing contractors designated by the municipality. The same applies to work and repairs on private property: the property owner commissions a municipally designated contractor. The water supply and sewerage used by ordinary households are managed by each municipality's waterworks bureau.

Under the Water Supply Act, only contractors designated by the municipality may carry out work on the water and sewerage systems managed by that waterworks bureau. To have plumbing repair work done within Shizuoka City, you must hire a designated water supply equipment construction contractor approved by the Shizuoka City Waterworks and Sewerage Bureau. Having repairs done by anyone other than a contractor certified by the Shizuoka City waterworks bureau can lead to trouble later. In the worst case, your water supply could be shut off for violating the law, so be careful.

There is no problem with residents replacing parts themselves in the kitchen, bathroom, and other wet areas. Parts such as faucet packings and cartridges can be replaced by anyone, provided the correct size is ordered. However, plumbing repairs that involve piping can turn into large-scale work and are dangerous for amateurs. It is safer to ask a designated water supply equipment construction contractor approved by the local waterworks bureau.

Designated contractors have verified credentials, so trouble is less likely. Even if trouble does occur, you can consult the waterworks bureau, so you can request repairs with peace of mind.

Categories: APPS Blogs


Oracle 19c Automatic Indexing: CBO Incorrectly Using Auto Indexes Part II ( Sleepwalk)

Richard Foote - 16 hours 10 min ago
As I discussed in Part I of this series, problems and inconsistencies can appear between what the Automatic Indexing processing thinks will happen with newly created Automatic Indexing and what actually happens in other database sessions. This is because the Automatic Indexing process session uses a much higher degree of Dynamic Sampling (Level=11) than other […]
Categories: DBA Blogs

19c New Feature DGMGRL validate database?

Michael Dinh - Sun, 2020-09-20 09:35

Not too long ago, I had blogged about When To Use dgmgrl / vs dgmgrl sys@tns

I believe this is a new feature in 19c (though I am not 100% certain) that may resolve the question above.

DEMO:
Connect using OS authentication from the standby host.

ERROR:
ORA-01017: invalid username/password; logon denied

[oracle@ol7-112-dg2 ~]$ dgmgrl /
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Sun Sep 20 14:22:13 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected to "hawk_stby"
Connected as SYSDG.
DGMGRL> validate database hawk;

  Database Role:    Primary database

  Ready for Switchover:  Yes

  Managed by Clusterware:
    hawk:  NO
    Validating static connect identifier for the primary database hawk...

ORA-01017: invalid username/password; logon denied

    Warning: Ensure primary database's StaticConnectIdentifier property
    is configured properly so that the primary database can be restarted
    by DGMGRL after switchover

DGMGRL> validate database hawk_stby;

  Database Role:     Physical standby database
  Primary Database:  hawk

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

  Managed by Clusterware:
    hawk     :  NO
    hawk_stby:  NO
    Validating static connect identifier for the primary database hawk...

ORA-01017: invalid username/password; logon denied

    Warning: Ensure primary database's StaticConnectIdentifier property
    is configured properly so that the primary database can be restarted
    by DGMGRL after switchover

  Log Files Cleared:
    hawk Standby Redo Log Files:       Cleared
    hawk_stby Online Redo Log Files:   Not Cleared
    hawk_stby Standby Redo Log Files:  Available

DGMGRL>

DEMO:
Connect to the primary using TNS from the standby host.

DGMGRL> connect sys/oracle@hawk
Connected to "hawk"
Connected as SYSDBA.
DGMGRL> validate database hawk;

  Database Role:    Primary database

  Ready for Switchover:  Yes

  Managed by Clusterware:
    hawk:  NO
    Validating static connect identifier for the primary database hawk...
    The static connect identifier allows for a connection to database "hawk".

DGMGRL> validate database hawk_stby;

  Database Role:     Physical standby database
  Primary Database:  hawk

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

  Managed by Clusterware:
    hawk     :  NO
    hawk_stby:  NO
    Validating static connect identifier for the primary database hawk...
    The static connect identifier allows for a connection to database "hawk".

  Log Files Cleared:
    hawk Standby Redo Log Files:       Cleared
    hawk_stby Online Redo Log Files:   Not Cleared
    hawk_stby Standby Redo Log Files:  Available

DGMGRL>

This at least provides one example of when to use TNS authentication rather than OS authentication with DGMGRL.

Kubernetes Pods for Beginners

Online Apps DBA - Sun, 2020-09-20 07:29

Kubernetes Pods are the smallest deployable units created and managed by Kubernetes. A Pod is a group of one or more containers that share storage and network resources. Are you interested in learning more about Kubernetes Pods? Check out k21academy’s blog post – https://k21academy.com/kubernetes26 This post covers: – What are Kubernetes Pods? – […]

The post Kubernetes Pods for Beginners appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Monitoring LAG Using DGMGRL Is Nice And Needs Improvements

Michael Dinh - Sat, 2020-09-19 08:53

On November 2, 2017, I blogged about Monitoring Standby – SQLPlus or DGMGRL

Since that post, I have not used dgmgrl much for monitoring lag.

Almost three years later, and now on 19c, let's revisit the topic.

Here is what monitoring lag looks like from SQL*Plus.
Notice that BLOCK# increases between runs, which means redo transport is working.

SQL> r
  1  select PID,inst_id inst,thread#,client_process,process,status,sequence#,block#,DELAY_MINS
  2  from gv$managed_standby
  3  where BLOCK#>1
  4  and status not in ('CLOSING','IDLE')
  5  order by status desc, thread#, sequence#
  6*

                                        CLIENT                                                  DELAY
PID                       INST  THREAD# PROCESS      PROCESS   STATUS       SEQUENCE#   BLOCK#   MINS
------------------------ ----- -------- ------------ --------- ------------ --------- -------- ------
9589                         1        1 LGWR         RFS       RECEIVING          175     8540      0
9059                         1        1 N/A          MRP0      APPLYING_LOG       175     8540      0

SQL> r
  1  select PID,inst_id inst,thread#,client_process,process,status,sequence#,block#,DELAY_MINS
  2  from gv$managed_standby
  3  where BLOCK#>1
  4  and status not in ('CLOSING','IDLE')
  5  order by status desc, thread#, sequence#
  6*

                                        CLIENT                                                  DELAY
PID                       INST  THREAD# PROCESS      PROCESS   STATUS       SEQUENCE#   BLOCK#   MINS
------------------------ ----- -------- ------------ --------- ------------ --------- -------- ------
9589                         1        1 LGWR         RFS       RECEIVING          175     8554      0
9059                         1        1 N/A          MRP0      APPLYING_LOG       175     8554      0

SQL> 

In 19c, show configuration lag provides lag information, but is knowing that the lag is 0 seconds good enough?

[oracle@ol7-112-dg2 sql]$ dgmgrl /
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Sat Sep 19 12:50:58 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected to "hawk_stby"
Connected as SYSDG.
DGMGRL> show configuration lag

Configuration - my_dg_config

  Protection Mode: MaxPerformance
  Members:
  hawk      - Primary database
    hawk_stby - Physical standby database
                Transport Lag:      0 seconds (computed 9 seconds ago)
                Apply Lag:          0 seconds (computed 9 seconds ago)

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 20 seconds ago)

DGMGRL> show configuration lag verbose

Configuration - my_dg_config

  Protection Mode: MaxPerformance
  Members:
  hawk      - Primary database
    hawk_stby - Physical standby database
                Transport Lag:      0 seconds (computed 12 seconds ago)
                Apply Lag:          0 seconds (computed 12 seconds ago)

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    TraceLevel                      = 'USER'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'
    ExternalDestination1            = ''
    ExternalDestination2            = ''
    PrimaryLostWriteAction          = 'CONTINUE'
    ConfigurationWideServiceName    = ''

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS
DGMGRL>

Using SendQEntries shows LOG_SEQ but RecvQEntries does not.

DGMGRL> show database hawk_stby RecvQEntries
STANDBY_RECEIVE_QUEUE
              STATUS     RESETLOGS_ID           THREAD              LOG_SEQ       TIME_GENERATED       TIME_COMPLETED        FIRST_CHANGE#         NEXT_CHANGE#       SIZE (KBs)

DGMGRL> show database hawk SendQEntries
PRIMARY_SEND_QUEUE
        STANDBY_NAME       STATUS     RESETLOGS_ID           THREAD              LOG_SEQ       TIME_GENERATED       TIME_COMPLETED        FIRST_CHANGE#         NEXT_CHANGE#       SIZE (KBs)
                          CURRENT       1047346434                1                  175  09/19/2020 12:32:29                                   2984164                                 23586

DGMGRL> show database hawk_stby RecvQEntries
STANDBY_RECEIVE_QUEUE
              STATUS     RESETLOGS_ID           THREAD              LOG_SEQ       TIME_GENERATED       TIME_COMPLETED        FIRST_CHANGE#         NEXT_CHANGE#       SIZE (KBs)

DGMGRL>

Disable apply and compare the differences between SQL*Plus and DGMGRL.

DGMGRL> edit database hawk_stby set state=APPLY-OFF
> ;
Succeeded.
DGMGRL> show database hawk_stby

Database - hawk_stby

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-OFF
  Transport Lag:      0 seconds (computed 8 seconds ago)
  Apply Lag:          0 seconds (computed 8 seconds ago)
  Average Apply Rate: (unknown)
  Real Time Query:    OFF
  Instance(s):
    hawk

Database Status:
SUCCESS

DGMGRL>

### From Primary:

*** gv$managed_standby ***

                                        CLIENT                                                  DELAY
PID                       INST  THREAD# PROCESS      PROCESS   STATUS       SEQUENCE#   BLOCK#   MINS
------------------------ ----- -------- ------------ --------- ------------ --------- -------- ------
9030                         1        1 LNS          LNS       WRITING            180     1311      0

SQL> r
  1  select PID,inst_id inst,thread#,client_process,process,status,sequence#,block#,DELAY_MINS
  2  from gv$managed_standby
  3  where BLOCK#>1
  4  and status not in ('CLOSING','IDLE')
  5  order by status desc, thread#, sequence#
  6*

                                        CLIENT                                                  DELAY
PID                       INST  THREAD# PROCESS      PROCESS   STATUS       SEQUENCE#   BLOCK#   MINS
------------------------ ----- -------- ------------ --------- ------------ --------- -------- ------
9030                         1        1 LNS          LNS       WRITING            180     1314      0

SQL>

### From Standby:

*** gv$archived_log ***

 DEST_ID  THREAD# APPLIED    MAX_SEQ MAX_TIME             DELTA_SEQ DETA_MIN
-------- -------- --------- -------- -------------------- --------- --------
       1        1 NO             179 19-SEP-2020 13:12:25         5 39.93333
       1        1 YES            174 19-SEP-2020 12:32:29

SQL> r
  1  select PID,inst_id inst,thread#,client_process,process,status,sequence#,block#,DELAY_MINS
  2  from gv$managed_standby
  3  where BLOCK#>1
  4  and status not in ('CLOSING','IDLE')
  5  order by status desc, thread#, sequence#
  6*

                                        CLIENT                                                  DELAY
PID                       INST  THREAD# PROCESS      PROCESS   STATUS       SEQUENCE#   BLOCK#   MINS
------------------------ ----- -------- ------------ --------- ------------ --------- -------- ------
9589                         1        1 LGWR         RFS       RECEIVING          180      284      0

SQL> r
  1  select PID,inst_id inst,thread#,client_process,process,status,sequence#,block#,DELAY_MINS
  2  from gv$managed_standby
  3  where BLOCK#>1
  4  and status not in ('CLOSING','IDLE')
  5  order by status desc, thread#, sequence#
  6*

                                        CLIENT                                                  DELAY
PID                       INST  THREAD# PROCESS      PROCESS   STATUS       SEQUENCE#   BLOCK#   MINS
------------------------ ----- -------- ------------ --------- ------------ --------- -------- ------
9589                         1        1 LGWR         RFS       RECEIVING          180      298      0

SQL>

From DGMGRL:

DGMGRL> show configuration lag

Configuration - my_dg_config

  Protection Mode: MaxPerformance
  Members:
  hawk      - Primary database
    hawk_stby - Physical standby database
                Transport Lag:      0 seconds (computed 6 seconds ago)
                Apply Lag:          6 minutes 48 seconds (computed 6 seconds ago)

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 31 seconds ago)

DGMGRL> show database hawk SendQEntries
PRIMARY_SEND_QUEUE
        STANDBY_NAME       STATUS     RESETLOGS_ID           THREAD              LOG_SEQ       TIME_GENERATED       TIME_COMPLETED        FIRST_CHANGE#         NEXT_CHANGE#       SIZE (KBs)
                          CURRENT       1047346434                1                  180  09/19/2020 13:12:25                                   2992416                                   270

DGMGRL> show database hawk_stby RecvQEntries
STANDBY_RECEIVE_QUEUE
              STATUS     RESETLOGS_ID           THREAD              LOG_SEQ       TIME_GENERATED       TIME_COMPLETED        FIRST_CHANGE#         NEXT_CHANGE#       SIZE (KBs)
   PARTIALLY_APPLIED       1047346434                1                  175  09/19/2020 12:32:29  09/19/2020 13:12:16              2984164              2992388            24709
         NOT_APPLIED       1047346434                1                  176  09/19/2020 13:12:16  09/19/2020 13:12:17              2992388              2992393                1
         NOT_APPLIED       1047346434                1                  177  09/19/2020 13:12:17  09/19/2020 13:12:20              2992393              2992400                2
         NOT_APPLIED       1047346434                1                  178  09/19/2020 13:12:20  09/19/2020 13:12:20              2992400              2992403                1
         NOT_APPLIED       1047346434                1                  179  09/19/2020 13:12:20  09/19/2020 13:12:25              2992403              2992416                3

DGMGRL>

APPLY-ON

DGMGRL> edit database hawk_stby set state=APPLY-ON;
Succeeded.
DGMGRL> show configuration lag

Configuration - my_dg_config

  Protection Mode: MaxPerformance
  Members:
  hawk      - Primary database
    hawk_stby - Physical standby database
                Transport Lag:      0 seconds (computed 3 seconds ago)
                Apply Lag:          0 seconds (computed 3 seconds ago)

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 41 seconds ago)

DGMGRL> /

Configuration - my_dg_config

  Protection Mode: MaxPerformance
  Members:
  hawk      - Primary database
    hawk_stby - Physical standby database
                Transport Lag:      0 seconds (computed 4 seconds ago)
                Apply Lag:          0 seconds (computed 4 seconds ago)

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 56 seconds ago)

DGMGRL> show database hawk_stby

Database - hawk_stby

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 6 seconds ago)
  Apply Lag:          0 seconds (computed 6 seconds ago)
  Average Apply Rate: 1.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    hawk

Database Status:
SUCCESS

DGMGRL>

It would be nice if show configuration lag could provide the high-level information frequently requested by management.

What is the lag time? How many sequences is the standby behind? What is the apply rate? What are the current LOG_SEQ values at the primary and the standby?
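In the meantime, here is a sketch of one way to answer those questions from SQL*Plus on the standby, using the standard v$dataguard_stats, v$archived_log, and v$recovery_progress views (queries are illustrative, not taken from the post):

```sql
-- lag time, as the broker computes it
select name, value, time_computed
from   v$dataguard_stats
where  name in ('transport lag', 'apply lag');

-- how many sequences behind: last received vs last applied, per thread
select thread#,
       max(sequence#)                                     last_received,
       max(case when applied = 'YES' then sequence# end)  last_applied
from   v$archived_log
group  by thread#;

-- apply rate during managed recovery
select item, units, sofar
from   v$recovery_progress
where  item = 'Active Apply Rate';
```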

The use and misuse of %TYPE and %ROWTYPE attributes in PL/SQL APIs

Andrew Clarke - Fri, 2020-09-18 08:34
PL/SQL provides two attributes which allow us to declare a data structure with its datatype derived from a database table or a previously declared variable.

We can use the %type attribute to define a constant, a variable, a collection element, a record field, or a PL/SQL program parameter. While we can reference a previously declared variable, the most common use case is to tie the declaration to a table column. The following snippet declares a variable with the same datatype and characteristics (length, scale, precision) as the SAL column of the EMP table.


l_salary emp.sal%type;
We can use the %rowtype attribute to declare a record variable which matches the projection of a database table or view, or of a cursor variable. The following snippet declares a variable with the same projection as the preceding cursor.

cursor get_emp_dets is
select emp.empno
, emp.ename
, emp.sal
, dept.dname
from emp
inner join dept on dept.deptno = emp.deptno;
l_emp_dets get_emp_dets%rowtype;
Using these attributes is considered good practice. PL/SQL development standards will often mandate their use. They deliver these benefits:
  1. self-documenting code: if we see a variable with a definition which references emp.sal%type we can be reasonably confident this variable will be used to store data from the SAL column of the EMP table.
  2. datatype conformance: if we change the scale or precision of the SAL column of the EMP table, all variables which use the %type attribute will pick up the change automatically. If we add a new column to the EMP table, all variables defined with the %rowtype attribute will be able to handle that column without us needing to change those programs.
That last point comes with an amber warning: the automatic conformance only works when the %rowtype variable is populated by SELECT * FROM queries. If we are using an explicit projection with named columns then we have now broken our code and we need to fix it. More generally, this silent propagation of changes to our data structures means we need to pay more attention to impact analysis. Is it right that we can just change a column's datatype or amend a table's projection without changing the code which depends on them? Maybe it's okay, maybe not. By shielding us from the immediate impact of broken code, using these attributes also withholds the necessity to revisit our programs: so we have to remember to do it.
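That amber warning can be illustrated with a minimal sketch (assuming the standard EMP table): the SELECT * fetch adapts to schema changes automatically, while the explicit projection must be updated by hand whenever EMP gains a column.

```sql
declare
  l_emp emp%rowtype;
begin
  -- adapts automatically: after ALTER TABLE emp ADD (bonus number),
  -- this statement still compiles and runs unchanged
  select * into l_emp from emp where rownum = 1;

  -- explicit projection: after the same ALTER TABLE this no longer
  -- matches the record's fields and fails to compile until the
  -- column list is updated
  select empno, ename, job, mgr, hiredate, sal, comm, deptno
  into   l_emp
  from   emp
  where  rownum = 1;
end;
/
```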

Overall I think the benefits listed above outweigh the risks, and I think we should always use these attributes whenever it is appropriate, for the definition of local variables and constants. However, complications arise if we use them to declare PL/SQL program parameters, specifically for procedures in package specs and standalone program units. It's not so bad if we're writing an internal API but it becomes a proper headache when we are dealing with a public API, one which will be called by programs owned by another user, one whose developers are in another team or outside our organisation, or even using Java, dotNet or whatever. So why is the use of these attributes so bad for those people?

  1. obfuscated code: these attributes are only self-documenting when we have a familiarity with the underlying schema, or have easy access to it. This will frequently not be the case for developers in other teams (or outside the organisation) who need to call our API. They may be able to guess at the datatype of SALARY or HIREDATE, but they really shouldn't have to. And, of course, a reference to emp%rowtype is about as unhelpful as it could be. Particularly when we consider ...
  2. loss of encapsulation: one purpose of an API is to shield consumers of our application from the gnarly details of its implementation. However, the use of %type and %rowtype is actually exposing those details. Furthermore, a calling program cannot define their own variables using these attributes unless we grant them SELECT on the tables. Otherwise the declaration will hurl PLS-00201. This is particularly problematic for handling %rowtype, because we need to define a record variable which matches the row structure.
  3. breaking the contract: an interface is an agreement between the provider and the calling program. The API defines input criteria and in return guarantees outcomes. It forms a contract, which allows the consumer to write code against stable definitions. Automatically propagating changes in the underlying data structures to parameter definitions creates unstable dependencies. It is not simply that the use of %type and %rowtype attributes will cause the interface to change automatically, the issue is that there is no mechanism for signalling the change to an API's consumers. Interfaces demand stable dependencies: we must manage any changes to our schema in a way which ideally allows the consumers to continue to use the interface without needing to change their code, but at the very least tells them that the interface has changed.
Defining parameters for public APIs

The simplest solution is to use PL/SQL datatypes in procedural signatures. These seem straightforward. Anybody can look at this function and understand that the input parameter is numeric and the returned value is a string.

function get_dept_manager (p_deptno in number) return varchar2;
So clear, but not safe. How long is the returned string? The calling program needs to know, so it can define an appropriately sized variable to receive it. Likewise, in this call, how long can a message be?

procedure log_message (p_text in varchar2);
Notoriously, we cannot specify length, scale or precision for PL/SQL parameters. But the calling code and the called code will write values to concretely defined types. The interface needs to communicate those definitions. Fortunately PL/SQL offers a solution: subtypes. Here we have a subtype which explicitly defines the datatype to be used for passing messages:

subtype st_message_text is varchar2(256);

procedure log_message (p_text in st_message_text);
Now the calling program knows the maximum permitted length of a message and can trim its value accordingly. (Incidentally, the parameter is still not constrained in the called program so we can pass a larger value to the log_message() procedure: the declared length is only enforced when we assign the parameter to something concrete such as a local variable.)
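That parenthetical caveat can be demonstrated with a short anonymous block (a sketch, with the st_message_text and log_message declarations from above reproduced locally): the oversized argument reaches the procedure without complaint, and the error is raised only when the value is assigned to a concretely constrained variable.

```sql
declare
  subtype st_message_text is varchar2(256);

  procedure log_message (p_text in st_message_text) is
    l_text varchar2(256);   -- a concretely constrained local
  begin
    -- the parameter itself accepted the 300-character value;
    -- this assignment is where ORA-06502 (value error) is raised
    l_text := p_text;
  end log_message;
begin
  log_message(rpad('x', 300, 'x'));  -- 300 characters: longer than the subtype allows
end;
/
```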

We can replace %rowtype definitions with explicit RECORD definitions. So a function which retrieves the employee records for a department will look something like this:


subtype st_deptno is number(2,0);

type r_emp is record(
empno number(4,0),
ename varchar2(10),
job varchar2(9),
mgr number(4,0),
hiredate date,
sal number(7,2),
comm number(7,2),
deptno st_deptno
);

type t_emp is table of r_emp;

function get_dept_employees (p_deptno in st_deptno) return t_emp;
We do this for all our public functions.

subtype st_manager_name is varchar2(30);

function get_dept_manager (p_deptno in st_deptno) return st_manager_name;
Now the API clearly documents the datatypes which calling programs need to pass and which they will receive as output. Crucially, this approach offers stability: the datatype of a parameter cannot be changed invisibly, as any change must be implemented in a new version of the publicly available package specification. Inevitably this imposes a brake on our ability to change the API but we ought not to be changing public APIs frequently. Any such change should arise from either new knowledge about the requirements or a bug in the data model. Wherever possible we should try to handle bugs internally within the schema. But if we have to alter the signature of a procedure we need to communicate the change to our consumers as far ahead of time as possible. Ideally we should shield them from the need to change their code at all. One way to achieve that is Edition-Based Redefinition. Other ways would be to deploy the change with overloaded procedures or even using a different procedure name, and deprecate the old procedure. Occasionally we might have no choice but to apply the change and break the API: sometimes with public interfaces the best we can do is try to annoy the fewest number of people.

Transitioning from a private to a public interface

There is a difference between internal and public packages. When we have procedures which are intended for internal usage (i.e. only called by other programs in the same schema) we can define their parameters with %type and %rowtype attributes. We have access and - it is to be hoped! - familiarity with the schema's objects, so the datatype anchoring supports safer coding. But what happens when we have a package which we wrote as an internal package but now we need to expose its functionality to a wider audience? Should we re-write the spec to use subtypes instead?

No. The correct thing to do is to write a wrapper package which acts as a facade over the internal one, and grant EXECUTE privileges on the wrapper. The wrapper package will obviously have the requisite subtype definitions in the spec, and procedures declared with those subtypes. The package body will likely consist of nothing more than those procedures, which simply call their equivalents in the internal package. There may be some affordances for translating data structures, such as populating a table %rowtype variable from the public record type, but those will usually be necessary only for the purposes of documentation (this publicly defined subtype maps to this internally defined table column). There is an obvious overhead to writing another package, especially one which is really just a pass-through to the real functionality, but there are clear benefits which justify the overhead:

  • Stability. Not re-writing an existing package is always a good thing. Even if we are mechanically just replacing one set of datatype definitions with a different set which have the same characteristics we are still changing the core system, and that's a chunk of regression testing we've just added to the task.
  • Least privilege escalation. Even if the internal package has been written with a firm eye on the SOLID principles, the chances are it contains more functionality than we need to expose to other consumers. Writing a wrapper package gives us the opportunity to grant access to only the required procedures.
  • Composition. It is also likely that the internal package doesn't have the exact procedure the other team needs. Perhaps there are actually two procedures they need to call, or there's one procedure but it has some confusing internal flags in its signature. Instead of violating the Law of Demeter we can define one simple procedure in the wrapper package spec and handle the internal complexity in the body.
  • Future proofing. Writing a wrapper package gives us an affordance where we can handle subsequent changes in the internal data model or functionality without affecting other consumers. By definition a violation of YAGNI, but as it's not the main reason why we're doing this I'm allowing this as a benefit.
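As a sketch of the shape such a wrapper might take (all names here, emp_api and emp_internal, are hypothetical):

```sql
-- Hypothetical public facade over an internal package; illustrative names only.
create or replace package emp_api as
  subtype st_deptno is number(2,0);
  subtype st_manager_name is varchar2(30);

  function get_dept_manager (p_deptno in st_deptno) return st_manager_name;
end emp_api;
/

create or replace package body emp_api as
  function get_dept_manager (p_deptno in st_deptno) return st_manager_name
  is
  begin
    -- pure pass-through: the real work stays in the internal package
    return emp_internal.get_dept_manager(p_deptno => p_deptno);
  end get_dept_manager;
end emp_api;
/

-- consumers get EXECUTE on the wrapper only, never on the internal package:
-- grant execute on emp_api to consumer_schema;
```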
Design is always a trade-off

The use of these attributes is an example of the nuances which Coding Standards often lack. In many situations their use is good practice, and we should employ them in those cases. But we also need to know when their use is a bad practice, and why, so we can do something better instead.

Part of the Designing PL/SQL Programs series

Microsoft Azure AI Fundamentals [AI-900]: Step By Step Activity Guides (Hands-On Labs)

Online Apps DBA - Fri, 2020-09-18 06:05

The number-one technologies, Artificial Intelligence and Machine Learning, have joined hands with Azure, taking it to another level. Getting certified in this field will definitely help you land a highly paid job and gain an edge over others. Check out the blog at https://k21academy.com/ai90005 to find out all the Hands-On Lab […]

The post Microsoft Azure AI Fundamentals [AI-900]: Step By Step Activity Guides (Hands-On Labs) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Choosing a plumbing repair shop in Shizuoka and the advantages of hiring a professional

The Feature - Fri, 2020-09-18 05:56

Shizuoka has many plumbing repair shops, but precisely because you rarely need one, when the time comes you may wonder what criteria to use when choosing. Plumbing trouble does not happen often, but when it does and you feel you cannot fix it yourself, it is clear that you should have a specialist repair shop fix it. Even for problems that are commonly said to be easy for amateurs to fix, forcing a repair without any knowledge or experience can cause unexpected secondary damage, and you may feel uneasy about continuing to use the fixture afterwards. Precisely because Shizuoka has so many plumbing repair shops, you should have a professional do the work.

Safety, cost, and repair time vary considerably depending on whom you hire, so compare your options carefully. It is natural to want the price to be as low as possible, and nowadays you can easily get an estimate over the internet. However, online estimates are rough, and the actual cost often differs, so be careful. To learn the true cost, you need to have the contractor carry out an on-site survey.

When requesting an on-site survey, check whether call-out fees, estimate fees, or cancellation fees apply. By hiring a professional for plumbing repairs in Shizuoka, you can use your water supply with peace of mind into the future.

Categories: APPS Blogs


Oracle 19c Automatic Indexing: CBO Incorrectly Using Auto Indexes Part I (Neighborhood Threat)

Richard Foote - Fri, 2020-09-18 03:19
Following on from my previous few posts on “data skew”, I’m now going to look at it from a slightly different perspective, where there is an inherent relationship between columns. The CBO has difficulties in recognising (by default) that some combinations of column values are far more common than other combinations, resulting in incorrect cardinality […]
Categories: DBA Blogs

Service Accounts suck - why data futures require end to end authentication.

Steve Jones - Thu, 2020-09-17 10:33
 Can we all agree that "service" accounts suck from a security perspective. Those are the accounts you set up so that one system/service can talk to another. Often this will be a database connection, so the application uses one account (and thus one connection pool) to access the database. These service accounts are sometimes unique to a service or application, but often it's a standard
Categories: Fusion Middleware

Need to calculate Age as part of select

Tom Kyte - Thu, 2020-09-17 10:06
Hi, We just went live on Oracle a couple of weeks ago. I have a legacy process that includes running a script that was coded for Sybase. I have most of it converted to Oracle, but I'm having trouble with the Age field (it's the last piece I need to get working). I thought about just including the Age piece... then thought to include the entire script for context if nothing else. Thanks in advance for the assist! -Denise Current legacy code <code>SELECT DISTINCT meme.MEME_MEDCD_NO, meme.MEME_BIRTH_DT, AGE = CASE WHEN ( month(convert(datetime, meme.MEME_BIRTH_DT, 103))*100)+ day(convert(datetime, meme.MEME_BIRTH_DT, 103)) - ((month(getdate())*100)+day(getdate())) <= 0 THEN DATEDIFF(YEAR,convert(datetime, meme.MEME_BIRTH_DT, 103),getdate()) ELSE DATEDIFF(YEAR,convert(datetime, meme.MEME_BIRTH_DT, 103),getdate())-1 END, sbsb.SBSB_ID, mepe.MEPE_EFF_DT, mepe.MEPE_TERM_DT, mepe.MEPE_ELIG_IND, mepe.CSPI_ID, sbad.SBAD_COUNTY AS 'Member_County', pdpd.LOBD_ID FROM dbo.CMC_MEME_MEMBER meme INNER JOIN dbo.CMC_MEPE_PRCS_ELIG mepe ON mepe.MEME_CK =meme.MEME_CK INNER JOIN dbo.CMC_SBSB_SUBSC sbsb ON sbsb.SBSB_CK = meme.SBSB_CK INNER JOIN CMC_PDPD_PRODUCT pdpd ON mepe.PDPD_ID = pdpd.PDPD_ID INNER JOIN CMC_SBAD_ADDR sbad ON sbsb.SBSB_CK = sbad.SBSB_CK AND sbsb.SBAD_TYPE_MAIL = sbad.SBAD_TYPE WHERE mepe.GRGR_CK IN (1,3,8) AND mepe.MEPE_ELIG_IND = 'Y' AND mepe.MEPE_EFF_DT <= '09/01/2020' AND -- Match file date mepe.MEPE_TERM_DT >= '09/01/2020' AND -- Match file date meme.MEME_MEDCD_NO IN ( )</code>
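For reference, one possible Oracle translation of the AGE expression (a sketch only, assuming MEME_BIRTH_DT is a DATE column): FLOOR of MONTHS_BETWEEN divided by 12 reproduces the "subtract one year until the birthday has passed" logic of the Sybase CASE.

```sql
SELECT DISTINCT
       meme.MEME_MEDCD_NO,
       meme.MEME_BIRTH_DT,
       -- whole years elapsed since the birth date as of today
       FLOOR(MONTHS_BETWEEN(TRUNC(SYSDATE), meme.MEME_BIRTH_DT) / 12) AS age
FROM   CMC_MEME_MEMBER meme;
```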
Categories: DBA Blogs

Database Wallet

Tom Kyte - Thu, 2020-09-17 10:06
Hi Team, We have SSL certificates imported on the database server using ORAPKI after creating a wallet. We are using UTL_HTTP for external-system communication from the database, and UTL_HTTP.SET_WALLET to access the certificates. Now we are enqueuing messages to a database queue and writing logic in the middleware to read messages from the queue and send them to the external system. The problem is that the certificates are on the database server, while the communication with the external system is from the middleware. Can we read the SSL certificate from the database server and pass it to the middleware? Is there a way to pass the certificate from the DB to the middleware? Can you please advise. Thank you.
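One way to get a certificate out of a database wallet so the middleware can import it into its own trust store is orapki's export command; a sketch, where the wallet directory and certificate DN are placeholders for the actual environment:

```shell
# Export a trusted certificate from the database wallet to a file
# the middleware can import. The wallet path and DN are hypothetical.
orapki wallet export -wallet /u01/app/wallet \
  -dn "CN=external.example.com" \
  -cert /tmp/external_cert.crt
```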
Categories: DBA Blogs

Process in order to estimate how many DBAs are needed to support a project

Tom Kyte - Thu, 2020-09-17 10:06
GM, Do you guys know of a whitepaper or training that describes an approach to estimating how many Oracle DBA hours are needed to perform X, Y, Z? We are bidding on an effort, and I would prefer not to have to reinvent the wheel if something already exists. For instance, if a project has these requirements:
- DBA shall set up a three-node 19c RAC cluster with ASM
- DBA shall tune the system
- DBA shall be able to restore the database within a day with minimal data loss
- DBA shall set up a disaster recovery site using Data Guard
- DBA shall set up security to meet NIST-3029
I have started to break down all of the 50+ major tasks needed to satisfy the above requirements, and threw in rough daily estimates for each step:
Database Security - 10 days:
o Database instance security hardening setup - 3
o Database server security hardening implementation - 2
o Security scanner software setup and troubleshooting - 1
o Troubleshooting false-positive security findings and waivers - 2
Oracle install and DB creation with RAC - 5 days:
o Clusterware setup - 3 days
o RAC database creation - 1
o Licensing - 0.5
o Database shutdown and startup setup - 1
Backup and Recovery Setup - 2
etc. Thanks, John
Categories: DBA Blogs

Choice State in AWS Step Functions

Pakistan's First Oracle Blog - Thu, 2020-09-17 02:47

Rich, asynchronous, serverless applications can be built using AWS Step Functions. The enhanced Choice state in AWS Step Functions is the newest, long-awaited feature.

In simple words, we define steps and their transitions and call the whole thing a state machine. To define this state machine, we use the Amazon States Language (ASL). ASL is a JSON-based structured language that defines state machines: collections of states that can perform work (Task states), determine which state to transition to next (Choice state), and stop execution on error (Fail state).

So if the requirement is to add branching logic like an if-then-else or case statement to our state transitions, the Choice state comes in handy. The Choice state introduces various new operators into ASL, and the sky is now the limit on the possibilities. Operators for the Choice state include comparison operators like IsNull and IsString, existence operators like IsPresent, glob wildcards for matching strings, and comparison of one variable against another.
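A minimal sketch of a Choice state using these operators (the state names and the $.type input field are made up for illustration):

```json
{
  "CheckInput": {
    "Type": "Choice",
    "Choices": [
      { "Variable": "$.type", "IsPresent": false, "Next": "MissingType" },
      { "Variable": "$.type", "IsNull": true, "Next": "NullType" },
      { "Variable": "$.type", "StringMatches": "order-*", "Next": "HandleOrder" }
    ],
    "Default": "HandleOther"
  }
}
```

The rules are evaluated top to bottom, and the first match decides the next state; Default catches anything the rules miss.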

Choice State enables developers to simplify existing definitions or add dynamic behavior within state machine definitions. This makes it easier to orchestrate multiple AWS services to accomplish tasks. Modelling complex workflows with extended logic is now possible with this new feature.

Now one hopes that AWS introduces a way to do it all graphically instead of dabbling in ASL.

Categories: DBA Blogs

Table vs Index Fragmentation

Tom Kyte - Wed, 2020-09-16 15:46
Hello, This is more of a fundamental question; sorry, I don't have any test cases. Does table fragmentation also imply index fragmentation for the same table?
Categories: DBA Blogs

PARALLEL HINT and DML ERROR logging

Tom Kyte - Wed, 2020-09-16 15:46
HI, <code>
CREATE TABLE TEMP_TEST
(
  ID NUMBER(10)
);

ALTER TABLE TEMP_TEST ADD
(
  CONSTRAINT temp_test_pk UNIQUE (ID)
);
</code> Scenario 1: <code>
TRUNCATE TABLE TEMP_TEST;

ALTER SESSION ENABLE PARALLEL DML;

INSERT INTO /*+ NOAPPEND PARALLEL(5) */ TEMP_TEST
SELECT /*+ PARALLEL */ DISTINCT BUCKET
FROM source
LOG ERRORS INTO ERR$_TEMP_TEST ('insert failed')
REJECT LIMIT UNLIMITED;
</code> Scenario 2: <code>
TRUNCATE TABLE TEMP_TEST;

ALTER SESSION ENABLE PARALLEL DML;

INSERT INTO /*+ NOAPPEND PARALLEL(5) */ TEMP_TEST
SELECT DISTINCT BUCKET
FROM source
LOG ERRORS INTO ERR$_TEMP_TEST ('insert failed')
REJECT LIMIT UNLIMITED;
</code> Scenario 1 fails with a unique constraint error instead of the error records being inserted into the error table, but in Scenario 2 the error records are inserted into ERR$_TEMP_TEST. The only difference between the two is the PARALLEL hint in the SELECT statement.
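For completeness, the ERR$_TEMP_TEST table both scenarios log into is typically created beforehand with DBMS_ERRLOG; a sketch, relying on the default error-table naming:

```sql
-- Create the DML error logging table for TEMP_TEST.
-- With no second argument, the table is named ERR$_TEMP_TEST by default.
BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'TEMP_TEST');
END;
/
```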
Categories: DBA Blogs

getting ora 01017, invalid username/password when configuring oracle mobile server to my repository db

Tom Kyte - Wed, 2020-09-16 15:46
My local DB is a 19c. I downloaded the latest version of the mobile server, and while going through the installation I came to this error. I have checked my sqlnet.ora file; the TNS configuration is good because I am able to connect with Toad and SQL Developer. This is the sqlnet.ora: <code>
#SQLNET.AUTHENTICATION_SERVICES= (NTS)
SQLNET.AUTHENTICATION_SERVICES = (NONE)
NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
#SQLNET.ALLOWED_LOGON_VERSION=12
SQLNET.ALLOWED_LOGON_VERSION=9
WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY=C:\Users\TEKYI\Documents\wallet\oracle)))
</code>
Categories: DBA Blogs

Anomaly detection

Tom Kyte - Wed, 2020-09-16 15:46
Hello, We have an application that monitors applications, detects anomalies, does correlation between metrics, and performs root cause analysis based on a few machine learning algorithms. We are planning to onboard Oracle monitoring for this application with a few metrics like those below. Could you please suggest where we could get some baseline monitoring SQLs to plug into our application, especially the SQLs that are used to generate ASH/AWR reports? We want to start small and expand over a period of time.
Redo (MB per second)
Transactions per second
Latency: log file sync, log file parallel write, single block read, all in avg ms
IO MB per sec
Physical reads MB/sec
Physical writes MB/sec
DB CPU % usage
Network MB/sec
Logons per sec
Logical reads MB/sec
File sync (avg/ms)
RMAN IO MB/ms
Waits
Locks
Top SQLs
Stale statistics on objects
Top objects by size, growth, avg growth per day/month
Space growth (total vs used), avg per day/month
Thanks, Ravi B
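Several of the rate metrics in the list above can be sampled straight from V$SYSMETRIC; a sketch (the metric names shown are the common ones, but verify them against V$METRICNAME on your version):

```sql
-- Sample a few system-level rate metrics from the most recent interval.
SELECT metric_name,
       value,
       metric_unit
FROM   v$sysmetric
WHERE  metric_name IN ('Redo Generated Per Sec',
                       'User Transactions Per Sec',
                       'Physical Reads Per Sec',
                       'Logons Per Sec')
AND    group_id = 2;  -- the 60-second interval group
```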
Categories: DBA Blogs
