Tuesday, September 15, 2009

Oracle Connection Manager (CMAN)


Installation

This document is based on Oracle release 10gR2 (10.2.0.4.0). To test it on a single server it is better to create two Oracle homes: one Oracle database server home that hosts the test database and one small Oracle client home that hosts the Connection Manager executables.
On Unix the control utility is called cmctl.
In the following, the Oracle client is installed under /ora_client10g/software with the oraclient account, and the Oracle database is installed under /ora_10g/software with the ora10g account.
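As a minimal sketch (paths as above, the variable names are just the usual Oracle conventions), the oraclient environment could be set like this so that cmctl and the network files are taken from the client home:

# oraclient account environment (sketch, adapt to your own setup)
ORACLE_HOME=/ora_client10g/software
PATH=$ORACLE_HOME/bin:$PATH
TNS_ADMIN=$ORACLE_HOME/network/admin
export ORACLE_HOME PATH TNS_ADMIN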

Configuration

Connection Manager

To create the first Connection Manager configuration, start from the sample cman.ora file in /ora_client10g/software/network/admin/samples, then modify it to match your target configuration. In the following cman.ora example the Connection Manager listens on port 1551 and redirects connections for the 10g.world service to a database running on the same server, for all clients whatever their IP.
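For example, a simple way to start is to copy the sample file into the client network/admin directory and edit it from there:

cd /ora_client10g/software/network/admin
cp samples/cman.ora .
vi cman.ora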

cman.ora:
cman =
(configuration=
  (address=(protocol=tcp)(host=eult93.sgp.st.com)(port=1551))
  (parameter_list =
    (aso_authentication_filter=off)
    (connection_statistics=on)
    (log_directory=/ora_client10g/software/network/log)
    (log_level=SUPPORT)
    (max_connections=256)
    (idle_timeout=0)
    (inbound_connect_timeout=0)
    (session_timeout=0)
    (outbound_connect_timeout=0)
    (max_gateway_processes=16)
    (min_gateway_processes=2)
    (remote_admin=on)
    (trace_directory=/ora_client10g/software/network/trace)
    (trace_level=SUPPORT)
    (trace_timestamp=on)
    (trace_filelen=1000)
    (trace_fileno=1)
    (max_cmctl_sessions=4)
    (event_group=init_and_term,memory_ops)
  )

  (rule_list=
    (rule=
       (src=*)(dst=*)(srv=10g.world)(act=accept)
       (action_list=(aut=off)(moct=0)(mct=0)(mit=0)(conn_stats=on))
    )
    (rule=
       (src=eult93)(dst=127.0.0.1)(srv=cmon)(act=accept)
       (action_list=(aut=off)(moct=0)(mct=0)(mit=0)(conn_stats=on))
    )
  )
)

tnsnames.ora:
10g.world=
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = eult93.sgp.st.com) (PORT = 1541))
)
(CONNECT_DATA = (SID = 10g))
)

sqlnet.ora:
NAMES.DEFAULT_DOMAIN = world
NAMES.DIRECTORY_PATH= (TNSNAMES)

USE_CMAN = true

SQLNET.EXPIRE_TIME = 1

Then start it with the cmctl command. Launch it interactively so you can tell it the name of the Connection Manager instance you plan to administer (the name defined in the cman.ora file).
[oraclient@eult93 ~]$ cmctl

CMCTL for Linux: Version 10.2.0.4.0 - Production on 14-SEP-2009 14:04:31

Copyright (c) 1996, 2007, Oracle.  All rights reserved.

Welcome to CMCTL, type "help" for information.

CMCTL> administer cman
Current instance cman is not yet started
Connections refer to (address=(protocol=tcp)(host=eult93.sgp.st.com)(port=1551)).
The command completed successfully.
CMCTL:cman> startup
Starting Oracle Connection Manager instance cman. Please wait...
TNS-04077: WARNING: No password set for the Oracle Connection Manager instance.
CMAN for Linux: Version 10.2.0.4.0 - Production
Status of the Instance
----------------------
Instance name             cman
Version                   CMAN for Linux: Version 10.2.0.4.0 - Production
Start date                14-SEP-2009 14:04:44
Uptime                    0 days 0 hr. 0 min. 9 sec
Num of gateways started   2
Average Load level        0
Log Level                 SUPPORT
Trace Level               SUPPORT
Instance Config file      /ora_client10g/software/network/admin/cman.ora
Instance Log directory    /ora_client10g/software/network/log
Instance Trace directory  /ora_client10g/software/network/trace
The command completed successfully.
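Note that cmctl also accepts the command and the instance name directly on the command line, which is more convenient in startup scripts; something like the following should work (syntax as documented for 10g, not shown in the session above):

[oraclient@eult93 ~]$ cmctl startup -c cman
[oraclient@eult93 ~]$ cmctl shutdown -c cman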

You can get some statistics (rules, uptime, connections, ...) using the show command:
CMCTL:cman> show all
listener_address          | (address=(protocol=tcp)(host=eult93.sgp.st.com)(port=1551))
aso_authentication_filter |   OFF
connection_statistics     |    ON
event_group               | (init_and_term, memory_ops)
log_directory             | /ora_client10g/software/network/log
log_level                 | SUPPORT
max_connections           |   256
idle_timeout              |     0
inbound_connect_timeout   |     0
session_timeout           |     0
outbound_connect_timeout  |     0
max_gateway_processes     |    16
min_gateway_processes     |     2
max_cmctl_sessions        |     4
password                  |   OFF
remote_admin              |    ON
trace_directory           | /ora_client10g/software/network/trace
trace_level               | SUPPORT
trace_timestamp           |    ON
trace_filelen             |  1000
trace_fileno              |     1
Number of filtering rules currently in effect: 2
(rule_list=
(rule=
(src=*)
(dst=*)
(srv=10g.world)
(act=accept)
(action_list=(aut=off)(moct=0)(mct=0)(mit=0)(conn_stats=on))
)
(rule=
(src=eult93)
(dst=127.0.0.1)
(srv=cmon)
(act=accept)
(action_list=(aut=off)(moct=0)(mct=0)(mit=0)(conn_stats=on))
)
)
The command completed successfully.
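Other show subcommands become useful once clients actually connect through the Connection Manager, for example (output not reproduced here):

CMCTL:cman> show status
CMCTL:cman> show services
CMCTL:cman> show gateways
CMCTL:cman> show connections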

Database Server

listener.ora:
LISTENER_10g =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = eult93.sgp.st.com)(PORT = 1541))
)
)
)

SID_LIST_LISTENER_10g =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = 10g.world)
(SID_NAME = 10g)
(ORACLE_HOME = /ora_10g/software)
)
)

sqlnet.ora:
NAMES.DEFAULT_DOMAIN = world
NAMES.DIRECTORY_PATH= (TNSNAMES)

USE_CMAN = true

TCP.VALIDNODE_CHECKING = yes
TCP.INVITED_NODES = (localhost,eult93.sgp.st.com)
#TCP.EXCLUDED_NODES = (localhost,164.129.228.51,eult93.sgp.st.com)
TCP.EXCLUDED_NODES= (164.129.228.51)

SQLNET.EXPIRE_TIME = 1

The configuration forbids direct access to the database for client IP 164.129.228.51 (the desktop used for testing) and allows connections to the database listener only from the server itself (or from the Connection Manager hostname/IP if it differs in your environment). This is a good security rule: it prevents end users from editing the tnsnames.ora file on their desktop to remove the Connection Manager entry and connect directly...
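Keep in mind that TCP.VALIDNODE_CHECKING and the invited/excluded node lists are only taken into account when the listener starts, so after editing sqlnet.ora bounce the listener, for example:

[ora10g@eult93 ~]$ lsnrctl stop listener_10g
[ora10g@eult93 ~]$ lsnrctl start listener_10g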

Client

Classic connection:
EULT93_10G.WORLD =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = eult93.sgp.st.com)(PORT = 1541))
)
(CONNECT_DATA =
(SID = 10g)
(SERVER = DEDICATED)
)
)

Using Connection Manager:
EULT93_CMAN.WORLD =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = eult93.sgp.st.com)(PORT = 1551))
(ADDRESS = (PROTOCOL = TCP)(HOST = eult93.sgp.st.com)(PORT = 1541))
)
(CONNECT_DATA =
(SERVICE_NAME = 10g.world)
(SERVER = DEDICATED)
)
(SOURCE_ROUTE=YES)
)

Testing

With classic connection



C:\>tnsping eult93_10g

TNS Ping Utility for 32-bit Windows: Version 10.2.0.1.0 - Production on 14-SEP-2009 17:32:01

Copyright (c) 1997, 2005, Oracle.  All rights reserved.

Used parameter files:
C:\oem10g\network\admin\sqlnet.ora


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = eult93.sgp.st.com)(PORT = 1541))) (CONNECT_DATA = (SID = 10g) (SERVER = DEDICATED)))
TNS-12547: TNS:lost contact
C:\>sqlplus yjaquier@eult93_10g

SQL*Plus: Release 10.2.0.1.0 - Production on Mon Sep 14 17:51:16 2009

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Enter password:
ERROR:
ORA-12537: TNS:connection closed

With Connection Manager

C:\>tnsping eult93_cman

TNS Ping Utility for 32-bit Windows: Version 10.2.0.1.0 - Production on 14-SEP-2009 17:32:02

Copyright (c) 1997, 2005, Oracle.  All rights reserved.

Used parameter files:
C:\oem10g\network\admin\sqlnet.ora


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = eult93.sgp.st.com)(PORT = 1551)) (ADDRESS = (PROTOCOL = TCP)(HOST = eult93.sgp.st.com)(PORT = 1541))) (CONNECT_DATA = (SERVICE_NAME = 10g.world) (SERVER = DEDICATED)) (SOURCE_ROUTE=YES))
OK (20 msec)
C:\>sqlplus yjaquier@eult93_cman

SQL*Plus: Release 10.2.0.1.0 - Production on Mon Sep 14 17:32:05 2009

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Data Mining and Real Application Testing options

SQL>

Friday, April 3, 2009

SUN Solaris

Solaris Boot Sequence :-

Solaris 10 :-

The Solaris 10 boot process is divided into 5 phases:

1) Boot PROM phase:
  • PROM runs POST;
  • PROM identifies the local boot-device;
  • the boot PROM reads the boot block;
  • the boot block is loaded

2) Boot Program phase:

  • the boot block loads the secondary boot program (ufsboot);
  • ufsboot loads the kernel

3) Kernel Initialization phase:

  • the kernel reads /etc/system;
  • the kernel loads modules

4) Init phase:

  • the kernel starts /etc/init;
  • init starts the svc.startd process

5) svc.startd phase:

  • svc.startd starts system processes
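Once svc.startd is up, the state of the services it manages can be checked with the SMF commands, for example:

svcs -a      # list all services and their current state
svcs -xv     # explain services that are not running properly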



Solaris SPARC Boot Sequence :-


The following represents a summary of the boot process for a Solaris x.x system on Sparc hardware.

Power On: Depending on the system involved, you may see some output on a serial terminal immediately after power on. This may take the form of a Hardware Power ON message on a large Enterprise server, or a "'" or "," in the case of an older Ultra system. These indications will not be present on a monitor connected directly to the server.

POST: If the PROM diag-switch? parameter is set to true, output from the POST (Power On Self Test) will be viewable on a serial terminal. The PROM diag-level parameter determines the extent of the POST tests. If a serial terminal is not connected, a prtdiag -v will show the results of the POST once the system has booted. If a keyboard is connected, it will beep and the keyboard lights will flash during POST. If the POST fails, an error indication may be displayed following the failure.

Init System: The "Init System" process can be broken down into several discrete parts:

  • OBP: If diag-switch? is set, an Entering OBP message will be seen on a serial terminal. The MMU (memory management unit) is enabled.
  • NVRAM: If use-nvramrc? is set to true, read the NVRAMRC. This may contain information about boot devices, especially where the boot disk has been encapsulated with VxVM or DiskSuite.
  • Probe All: This includes checking for SCSI or other disk drives and devices.
  • Install Console: At this point, a directly connected monitor and keyboard will become active, or the serial port will become the system console access. If a keyboard is connected to the system, the lights will flash again during this step.
  • Banner: The PROM banner will be displayed. This banner includes a logo, system type, PROM revision level, the ethernet address, and the hostid.
  • Create Device Tree: The hardware device tree will be built. This device tree can be explored using PROM monitor commands at the ok> prompt, or by using prtconf once the system has been booted.
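For illustration, the device tree can be browsed either from the PROM monitor or from the booted system, for example:

ok> show-devs        (at the PROM monitor prompt)
prtconf -v           (from the booted system)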

Extended Diagnostics: If diag-switch? and diag-level are set, additional diagnostics will appear on the system console.
auto-boot?: If the auto-boot? PROM parameter is set, the boot process will begin. Otherwise, the system will drop to the ok> PROM monitor prompt, or (if sunmon-compat? and security-mode are set) the > security prompt.


The boot process will use the boot-device and boot-file PROM parameters unless diag-switch? is set. In this case, the boot process will use the diag-device and diag-file.


bootblk: The OBP (Open Boot PROM) program loads the bootblk primary boot program from the boot-device (or diag-device, if diag-switch? is set). If the bootblk is not present or needs to be regenerated, it can be installed by running the installboot command after booting from a CDROM or the network. A copy of the bootblk is available at /usr/platform/`arch -k`/lib/fs/ufs/bootblk
ufsboot: The secondary boot program, /platform/`arch -k`/ufsboot is run. This program loads the kernel core image files. If this file is corrupted or missing, a bootblk: can't find the boot program or similar error message will be returned.

kernel: The kernel is loaded and run.

For 32-bit Solaris systems, the relevant files are:
/platform/`arch -k`/kernel/unix
/kernel/genunix

For 64-bit Solaris systems, the files are:
/platform/`arch -k`/kernel/sparcV9/unix
/kernel/genunix
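To check which of the two kernels a running system actually uses, isainfo can be queried; it typically answers something like:

# isainfo -kv
64-bit sparcv9 kernel modules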

  • As part of the kernel loading process, the kernel banner is displayed to the screen.
  • The kernel initializes itself and begins loading modules, reading the files with the ufsboot program until it has loaded enough modules to mount the root filesystem itself.
  • At that point, ufsboot is unmapped and the kernel uses its own drivers. If the system complains about not being able to write to the root filesystem, it is stuck in this part of the boot process.

The boot -a command single-steps through this portion of the boot process. This can be a useful diagnostic procedure if the kernel is not loading properly.


/etc/system: The /etc/system file is read by the kernel, and the system parameters are set.
The following types of customization are available in the /etc/system file:

  • moddir: Changes path of kernel modules.
  • forceload: Forces loading of a kernel module.
  • exclude: Excludes a particular kernel module.
  • rootfs: Specify the system type for the root file system. (ufs is the default.)
  • rootdev: Specify the physical device path for root.
  • set: Set the value of a tuneable system parameter.
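A small, purely illustrative /etc/system fragment using some of these entry types (module names and values are examples only, not recommendations):

* comments in /etc/system start with an asterisk
forceload: drv/vxdmp
exclude: drv/st
set maxusers = 512
set shmsys:shminfo_shmmax = 4294967295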


If the /etc/system file is edited, it is strongly recommended that a copy of the working file be made to a well-known location. In the event that the new /etc/system file renders the system unbootable, it might be possible to bring the system up with a boot -a command that specifies the old file. If this has not been done, the system may need to be booted from CD or network so that the file can be mounted and edited.


kernel initialized: The kernel creates PID 0 (sched). The sched process is sometimes called the "swapper."
init: The kernel starts PID 1 (init).
init: The init process reads the /etc/inittab and /etc/default/init and follows the instructions in those files. Some of the entries in the /etc/inittab are:

  • fs: sysinit (usually /etc/rcS)
  • is: default init level (usually 3, sometimes 2)
  • s#: script associated with a run level (usually /sbin/rc#)
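A few (abbreviated) lines from a typical /etc/inittab illustrating these entries; the real file also redirects output to /dev/msglog:

fs::sysinit:/sbin/rcS sysinit
is:3:initdefault:
s3:3:wait:/sbin/rc3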


rc scripts: The rc scripts execute the files in the /etc/rc#.d directories. They are run by the /sbin/rc# scripts, each of which corresponds to a run level.
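For example, to see what will be run when entering run level 3:

ls /etc/rc3.d        # S* scripts are run with "start", K* scripts with "stop"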

This completes the boot-up process.

SUN Clusters

Thursday, April 2, 2009

HP-UX

HP-UX Boot Sequence


Boot Sequence: Quick Reference :-

On a server without vPars, a simplified boot sequence is:

1. ISL (Initial System Loader)
2. hpux (secondary system loader)
3. /stand/vmunix (kernel)

Adding vPars adds the monitor layer, so now hpux loads the monitor and then the monitor boots the kernels of the virtual partitions. The boot sequence becomes

1. ISL
2. hpux
3. /stand/vpmon (vPars monitor and partition database)
4. /stand/vmunix (kernels of the virtual partitions)


Boot Sequence: The Details :-

With or without vPars, the firmware loads and launches ISL.

ISL>
In a server without vPars, at the ISL prompt, the secondary system loader hpux loads the kernel /stand/vmunix:

ISL> hpux /stand/vmunix

However, in a server with vPars, at the ISL prompt, the secondary system loader hpux loads the vPars monitor /stand/vpmon:

ISL> hpux /stand/vpmon

The monitor loads the partition database (the default is /stand/vpdb) from the same disk from which /stand/vpmon was booted. The monitor internally creates (but does not boot) each virtual partition according to the resource assignments in the partition database.

Next, the vPars monitor runs in interactive mode (when no options to /stand/vpmon are given) with a command line interface.

MON>

To boot a kernel in a virtual partition (that is, to launch a virtual partition), use the monitor command vparload. For example, to launch the virtual partition named ABCxyz:

MON> vparload -p ABCxyz

In this example, the vPars monitor would load the virtual partition ABCxyz and launch the kernel from the boot device specified for ABCxyz.

(The boot device is assigned when the virtual partition is created and is recorded in the monitor database.)

HP-UX is now booted on the virtual partition ABCxyz.

Once a virtual partition is running, you will be at the virtual console of a virtual partition. Subsequent virtual partitions can be booted using the vPars command vparboot at the UNIX shell prompt of ABCxyz.
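For example, from the shell of ABCxyz, the state of all partitions can be checked and another one started; the partition name vpar2 below is just a placeholder:

vparstatus                # show the state of all virtual partitions
vparboot -p vpar2         # boot the partition named vpar2 (placeholder name)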


Boot Sequence Difference between PA-RISC and Integrity Architectures :-


On a server without vPars, a simplified boot sequence is as follows.

PA-RISC Architecture :-

1. ISL (Initial System Loader)
2. hpux (secondary system loader)
3. /stand/vmunix (kernel)

Integrity Architecture (IA64) :-

1. EFI(Extensible Firmware Interface)
2. hpux.efi (HP-UX boot loader)
3. /stand/vmunix

Adding vPars adds the Monitor layer, so now hpux (for Integrity, hpux.efi) loads the Monitor. Then the Monitor boots the kernels of the virtual partitions. The boot sequence becomes the following.

1. ISL or EFI (firmware)
2. hpux or hpux.efi
3. /stand/vpmon (vPars Monitor and partition database)
4. /stand/vmunix (kernels of the virtual partitions)

M/C Service Guard

Veritas Cluster

Overview

Veritas Cluster enables one system to fail over to the other system. All related software processes are simply moved from one system to the other with minimal downtime.

Cluster Startup :-

Here is what the cluster does at startup:-

-The node checks if the other node is already started; if so, it stays OFFLINE.
-If no other machine is running, it checks communication (gabconfig). This may need system admin intervention if the cluster requires both nodes to be available (/sbin/gabconfig -c -x).
-Once communication between the machines is open -- or gabconfig has been started -- it sets up the network (NIC and IP address) and starts the cluster server.

It also brings up the volume manager, the file system, and then the applications.
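A quick way to check that the communication layers are actually up before digging further (both are standard VCS utilities):

/sbin/gabconfig -a     # GAB port memberships; port a = GAB itself, port h = the HAD cluster engine
/sbin/lltstat -nvv     # LLT heartbeat link status for each node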

File Locations (Logs, Conf, Executables):-

Log location: /var/VRTSvcs/log

There are several logs in this directory:-

engine.log_A: primary log, usually what you will be reading for debugging.

Conf files:-

LLT conf: /etc/llttab [you should NOT need to touch this]
GAB conf: /etc/gabtab
Cluster conf: /etc/VRTSvcs/conf/config/main.cf (has the exact details of what the cluster contains)

Most executables are in: /opt/VRTSvcs/bin or /sbin

Changing Configurations :-

ALWAYS be very careful when changing the cluster configurations.

There are two ways of changing the configurations.
Method one is used if the system is up (the cluster is running on at least one node, preferably on both):
  1. haconf -makerw
  2. run the needed commands (e.g. hasys, hares, ...)
  3. haconf -dump -makero
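For example, changing a resource attribute online would look like this (the resource name app_ip and the value are purely hypothetical):

haconf -makerw
hares -modify app_ip Address "10.1.2.3"    # hypothetical resource and value
haconf -dump -makero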


If both systems are down:

hastop -all (shouldn't need this as cluster is down)

cp main.cf main.cf.%date%

vi main.cf

hacf -verify /etc/VRTSvcs/conf/config

hacf -generate /etc/VRTSvcs/conf/config

hastart

Veritas Cluster Debugging Tips :-


The normal debugging steps include: checking the status, restarting if there are no faults, checking the licenses, clearing faults if needed, and checking the logs.


To find out the current status:
/opt/VRTSvcs/bin/hastatus -summary   (gives the general status of each machine and its processes)
/opt/VRTSvcs/bin/hares -display      (gives much more detail, down to the resource level)
If hastatus fails on both machines (it reports that the cluster is not up, or returns nothing), try to start the cluster:
/opt/VRTSvcs/bin/hastart
/opt/VRTSvcs/bin/hastatus -summary will then tell you if the processes started properly. It will NOT start processes on a FAULTED system.

To check licenses:

vxlicense -p

Make sure all licenses are current - and NOT expired! If they are expired, that is your problem. Call VERITAS to get temporary licenses.

There is a BUG with Veritas licenses: Veritas will not run if there are ANY expired licenses -- even if you have the valid ones you need. To get Veritas to run, you will need to MOVE the expired licenses out of the way.

vxlicense -p

Note the NUMBER after the license (ie: Feature name: DATABASE_EDITION [100])


cd /etc/vx/elm

mkdir old
mv lic.<number> old   [do this for every expired license]

vxlicense -p   [make sure there are no expired licenses AND your good licenses are there]

hastart
If it still fails, call Veritas for temporary licenses. Otherwise, be sure to do the same on your second machine.


To clear FAULTS:


hares -display
For each resource that is faulted run:

hares -clear resource-name -sys faulted-system

If all of these clear, then run hastatus -summary and make sure they stay clear. If some don't clear, you MAY be able to clear them at the group level. Only do this as a last resort:

hagrp -disableresources groupname
hagrp -flush group -sys sysname
hagrp -enableresources groupname

To get a group to go online:

hagrp -online group -sys desired-system
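To move a group that is already ONLINE to the other node (a controlled failover), there is also:

hagrp -switch group -to desired-system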

Unix Flavours :-

HP unveils HP-UX 11i v3 server OS


New architecture supports 100 million zettabytes of storage.

HP has announced the launch of HP-UX 11i v3, the latest version of its Unix-based server operating system.

Despite the increase in Linux servers, HP is committed to the HP-UX operating system, and plans releases every two to three years in line with its publicly available roadmap.

The vendor is placing a heavy focus on total cost of ownership, virtualisation and dynamic resource management.

HP claims an average 30 per cent performance increase in applications running HP-UX 11i v3 simply by upgrading and without any recompiling.

This is down to tweaks in threading and a complete rewrite of the storage stack. The biggest performance gains can be seen in Java applications.

HP director Nick van der Zweep said at a press conference in San Francisco: "There really is no need to recompile your applications to experience an improvement in operational performance."

"The new version is completely code-compatible with anything that currently runs properly on version 2."

The complete overhaul of the mass storage stack means that the architecture now supports the addressing of up to 100 million zettabytes of highly available, secure storage.

One zettabyte equals one billion terabytes, so the capacity represents effectively limitless data storage.

HP-UX 11i v3 also promises many new virtualisation features, allowing administrators to dynamically move memory and resources among distributed virtual partitions on the fly with no disruption to users.

Similarly, in the event of planned or unplanned downtime, instances can be moved to another available server or cluster to help maximise uptime and server utilisation.
Uptime and stability are also enhanced by the ability to hot-swap memory, processors and I/O cards.

A new Software Assistant simplifies patch and security bulletin management and patch deployment, while a Dynamic Root Disk system administration toolset enables online patching by running an image of the system while patches are installed.
