Given the IP addresses defined in /etc/hosts, let's find the same information using the various commands available from Grid Infrastructure and the OS, and verify that Grid (the cluster) picked up the correct IPs.
NODE-1
=========
mgracsolsrv64bit1:/export/home/grid: cat /etc/hosts
#
# Internet host table
#
::1 localhost
127.0.0.1 localhost
192.168.56.99   mgsrv-dns mgsrv-dns.mgdom.com loghost
#PUBLIC
192.168.56.20   mgracsolsrv64bit1 mgracsolsrv64bit1.mgdom.com
192.168.56.21   mgracsolsrv64bit2 mgracsolsrv64bit2.mgdom.com
#PRIVATE
192.168.05.01   mgracsolsrv64bit1-priv mgracsolsrv64bit1-priv.mgdom.com
192.168.05.02   mgracsolsrv64bit2-priv mgracsolsrv64bit2-priv.mgdom.com
#VIRTUAL
192.168.56.30   mgracsolsrv64bit1-vip mgracsolsrv64bit1-vip.mgdom.com
192.168.56.31   mgracsolsrv64bit2-vip mgracsolsrv64bit2-vip.mgdom.com
###########################################################################################################
1) To Find Cluster Name
###########################################################################################################
$ /u01/app/11.2.0.1/grid/bin/olsnodes -c
mgrac-cluster
###########################################################################################################
2) Find the PUBLIC and VIP node names
###########################################################################################################
$ /u01/app/11.2.0.1/grid/bin/olsnodes -n -i
mgracsolsrv64bit1   1   mgracsolsrv64bit1-vip
mgracsolsrv64bit2   2   mgracsolsrv64bit2-vip
$ /u01/app/11.2.0.1/grid/bin/srvctl config nodeapps -a
VIP exists.:mgracsolsrv64bit1
VIP exists.: /mgracsolsrv64bit1-vip/192.168.56.30/255.255.255.0/e1000g0
VIP exists.:mgracsolsrv64bit2
VIP exists.: /mgracsolsrv64bit2-vip/192.168.56.31/255.255.255.0/e1000g0
$ /u01/app/11.2.0.1/grid/bin/crsctl stat res ora.mgracsolsrv64bit1.vip -p | egrep '^NAME|TYPE|USR_ORA_VIP|START_DEPENDENCIES|SCAN_NAME|VERSION'
NAME=ora.mgracsolsrv64bit1.vip
TYPE=ora.cluster_vip_net1.type
START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network)
USR_ORA_VIP=mgracsolsrv64bit1-vip
VERSION=11.2.0.1.0
###########################################################################################################
3) To find Private IP Details
###########################################################################################################
$ /u01/app/11.2.0.1/grid/bin/olsnodes -n -i -l -p
mgracsolsrv64bit1   1   192.168.5.1   mgracsolsrv64bit1-vip   => (the third field is the private IP - 192.168.5.1)
The Oracle Interface Configuration Tool (oifcfg) is used to define and administer network interfaces such as the public and private interfaces.
---------------------------------------------------------------------
Oracle stores only the network interface name and subnet ID in the OCR, not the netmask. The oifcfg command can be used to make such a change (a sketch of such a change follows the note below); oifcfg commands need to be run on only one node of the cluster, not on all of them. When the netmask is changed, the associated subnet ID changes as well.
The interfaces used here are e1000g0 and e1000g1. oifcfg does not show the exact IPs; instead it shows the subnet ID stored in the OCR.
Subnet info in Oracle Clusterware (OCR) - to find out what's in the OCR:
Both public and private network information are stored in the OCR.
$ /u01/app/11.2.0.1/grid/bin/oifcfg getif
e1000g0 192.168.56.0 global public
e1000g1 192.168.0.0 global cluster_interconnect ----> Private interface
Note:
The first column is the network adapter name.
The second column is the subnet ID.
The third column is always "global" and should not be changed.
The last column indicates whether the interface is public or cluster_interconnect (private) in Oracle Clusterware.
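For illustration, here is a minimal sketch of how such a netmask/subnet change is typically made with oifcfg (run as the Grid owner on one node only; the new subnet 192.168.4.0 with netmask 255.255.252.0 below is a hypothetical example, and Clusterware must be restarted afterwards - see Doc ID 283684.1, referenced later in this post):
$ oifcfg setif -global e1000g1/192.168.4.0:cluster_interconnect   # register the new subnet ID
$ oifcfg delif -global e1000g1/192.168.0.0                        # drop the old entry
$ oifcfg getif                                                    # verify before restarting CRS
The subnet ID stored in the OCR is simply the IP address ANDed with the netmask, e.g. 192.168.5.1 AND 255.255.0.0 = 192.168.0.0.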
To include the subnet mask, append the -n option to the -p option. The iflist command queries the operating system to find which network interfaces are present on this node, i.e. it shows the available interfaces that you can configure with setif. The -p parameter displays the type of interface, which can be PRIVATE, PUBLIC or UNKNOWN.
————————————————————————————————————–
$ ifconfig -a
...
e1000g1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3 ==> Private IP
        inet 192.168.5.1 netmask ffff0000 broadcast 192.168.255.255
...
$ /u01/app/11.2.0.1/grid/bin/oifcfg iflist -p -n
e1000g0  192.168.56.0  PUBLIC  255.255.255.0
e1000g1  192.168.0.0   PUBLIC  255.255.0.0   -----> Private interface, on a different subnet (255.255.0.0)
e1000g2  192.168.56.0  PUBLIC  255.255.255.0
Note:
oifcfg getif => Does not work if Oracle Clusterware (CRS) is not running.
$ ps -ef | grep d.bin | wc -l
0
$ oifcfg getif
PRIF-10: failed to initialize the cluster registry
oifcfg iflist => Works even if Oracle Clusterware (CRS) is not running.
$ oifcfg iflist -p -n
e1000g0 192.168.56.0 PUBLIC 255.255.255.0
e1000g1 192.168.0.0 PUBLIC 255.255.0.0
$ cat /etc/hosts | grep 192.168.5.1
192.168.5.1   mgracsolsrv64bit1-priv mgracsolsrv64bit1-priv.mgdom.com
Note:
The first column is the network adapter name.
The second column is the subnet ID.
The third column indicates whether the interface is private, public or unknown according to the RFC standard; it has NOTHING to do with whether it is used as a private or public network in Oracle Clusterware.
The last column is the netmask.
$ netstat -i
Name     Mtu   Net/Dest                Address                 Ipkts   Ierrs Opkts  Oerrs Collis Queue
lo0      8232  loopback                localhost               51795   0     51795  0     0      0
e1000g0  1500  mgracsolsrv64bit1       mgracsolsrv64bit1       6288    0     4673   0     0      0
e1000g1  1500  mgracsolsrv64bit1-priv  mgracsolsrv64bit1-priv  121009  0     83308  0     0      0
e1000g2  1500  mgsrv-dns               mgsrv-dns               4493    0     201    0     0      0
$ netstat -in
Name     Mtu   Net/Dest      Address        Ipkts   Ierrs Opkts  Oerrs Collis Queue
lo0      8232  127.0.0.0     127.0.0.1      52598   0     52598  0     0      0
e1000g0  1500  192.168.56.0  192.168.56.20  6356    0     4734   0     0      0
e1000g1  1500  192.168.0.0   192.168.5.1    123476  0     85456  0     0      0
e1000g2  1500  192.168.56.0  192.168.56.99  4562    0     202    0     0      0
mgracsolsrv64bit1:/export/home/grid: ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2 ==> Public IP
inet 192.168.56.20 netmask ffffff00 broadcast 192.168.56.255
e1000g0:1: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 2 ==> SCAN IP 1
inet 192.168.56.81 netmask ffffff00 broadcast 192.168.56.255
e1000g0:3: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 2 ==> SCAN IP 2
inet 192.168.56.82 netmask ffffff00 broadcast 192.168.56.255
e1000g0:5: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 2 ==> VIP
inet 192.168.56.30 netmask ffffff00 broadcast 192.168.56.255
e1000g1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3 ==> Private IP
inet 192.168.5.1 netmask ffff0000 broadcast 192.168.255.255
e1000g2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4 ==> DNS Server IP
inet 192.168.56.99 netmask ffffff00 broadcast 192.168.56.255
$ netstat -r
Routing Table: IPv4
  Destination          Gateway                 Flags  Ref   Use     Interface
-------------------- ----------------------- ----- ----- --------- ---------
default              mgsrv-dns               UG     1     0
192.168.0.0          mgracsolsrv64bit1-priv  U      1     97        e1000g1
192.168.56.0         mgracsolsrv64bit1       U      1     90        e1000g0
192.168.56.0         mgsrv-dns               U      1     0         e1000g2
192.168.56.0         mgracsolsrv64bit1       U      1     0         e1000g0:1 -> SCAN IP 1 - 192.168.56.81
192.168.56.0         mgracsolsrv64bit1       U      1     0         e1000g0:3 -> SCAN IP 2 - 192.168.56.82
192.168.56.0         mgracsolsrv64bit1       U      1     0         e1000g0:5 -> VIP - 192.168.56.30
224.0.0.0            mgracsolsrv64bit1       U      1     0         e1000g0
localhost            localhost               UH     59    21132     lo0
$ netstat -rnv
IRE Table: IPv4
  Destination          Mask             Gateway        Device     Mxfrg  Rtt  Ref  Flg  Out    In/Fwd
-------------------- ---------------- --------------- ---------- ------ ---- ---- ---- ------ ------
default              0.0.0.0          192.168.56.99              1500*  0    1    UG   0      0
192.168.0.0          255.255.0.0      192.168.5.1     e1000g1    1500*  0    1    U    100    0
192.168.56.0         255.255.255.0    192.168.56.20   e1000g0    1500*  0    1    U    93     0
192.168.56.0         255.255.255.0    192.168.56.99   e1000g2    1500*  0    1    U    0      0
192.168.56.0         255.255.255.0    192.168.56.20   e1000g0:1  1500*  0    1    U    0      0
192.168.56.0         255.255.255.0    192.168.56.20   e1000g0:3  1500*  0    1    U    0      0
192.168.56.0         255.255.255.0    192.168.56.20   e1000g0:5  1500*  0    1    U    0      0
224.0.0.0            240.0.0.0        192.168.56.20   e1000g0    1500*  0    1    U    0      0
127.0.0.1            255.255.255.255  127.0.0.1       lo0        8232*  0    59   UH   21725  0
###########################################################################################################
4) To Find SCAN IP
###########################################################################################################
$ nslookup mgrac-scan
Server:   192.168.56.99
Address:  192.168.56.99#53
Name: mgrac-scan.mgdom.com
Address: 192.168.56.82
Name: mgrac-scan.mgdom.com
Address: 192.168.56.83
Name: mgrac-scan.mgdom.com
Address: 192.168.56.81
$ dig mgrac-scan.mgdom.com +noall +answer
; <<>> DiG 9.6-ESV-R8 <<>> mgrac-scan.mgdom.com +noall +answer
;; global options: +cmd
mgrac-scan.mgdom.com.   600   IN   A   192.168.56.81
mgrac-scan.mgdom.com.   600   IN   A   192.168.56.82
mgrac-scan.mgdom.com.   600   IN   A   192.168.56.83
$ /u01/app/11.2.0.1/grid/bin/srvctl config scan
SCAN name: mgrac-scan, Network: 1/192.168.56.0/255.255.255.0/e1000g0
SCAN VIP name: scan1, IP: /192.168.56.83/192.168.56.83
SCAN VIP name: scan2, IP: /192.168.56.81/192.168.56.81
SCAN VIP name: scan3, IP: /192.168.56.82/192.168.56.82
When /etc/hosts is configured with SCAN names:
$ cat /etc/hosts |grep -i scan
# SCAN
192.168.56.81 mgracscan1
192.168.56.82 mgracscan2
192.168.56.83 mgracscan3
$ /u01/app/11.2.0.1/grid/bin/srvctl config scan
SCAN name: mgrac-scan, Network: 1/192.168.56.0/255.255.255.0/e1000g0
SCAN VIP name: scan1, IP: /mgracscan3/192.168.56.83
SCAN VIP name: scan2, IP: /mgracscan1/192.168.56.81
SCAN VIP name: scan3, IP: /mgracscan2/192.168.56.82
$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521
##################################################################################################################
Before You Start Grid Installation
Please follow the steps below to configure the required IP addresses.
##################################################################################################################
Even if you are using DNS, Oracle recommends that you add lines to the /etc/hosts file on each node, specifying the public IP, VIP and private addresses. Configure the /etc/hosts file so that it is similar to the following example:
NODE-1
=========
mgracsolsrv64bit1:/export/home/grid: cat /etc/hosts
#
# Internet host table
#
::1 localhost
127.0.0.1 localhost
192.168.56.99   mgsrv-dns mgsrv-dns.mgdom.com loghost
#PUBLIC
192.168.56.20   mgracsolsrv64bit1 mgracsolsrv64bit1.mgdom.com
192.168.56.21   mgracsolsrv64bit2 mgracsolsrv64bit2.mgdom.com
#PRIVATE
192.168.05.01   mgracsolsrv64bit1-priv mgracsolsrv64bit1-priv.mgdom.com
192.168.05.02   mgracsolsrv64bit2-priv mgracsolsrv64bit2-priv.mgdom.com
#VIRTUAL
192.168.56.30   mgracsolsrv64bit1-vip mgracsolsrv64bit1-vip.mgdom.com
192.168.56.31   mgracsolsrv64bit2-vip mgracsolsrv64bit2-vip.mgdom.com
=================================================================================
=================================================================================
NODE-2
=========
mgracsolsrv64bit2:/export/home/grid: cat /etc/hosts
#
# Internet host table
#
::1 localhost
127.0.0.1 localhost
#PUBLIC
192.168.56.20   mgracsolsrv64bit1 mgracsolsrv64bit1.mgdom.com
192.168.56.21   mgracsolsrv64bit2 mgracsolsrv64bit2.mgdom.com
#PRIVATE
192.168.05.01   mgracsolsrv64bit1-priv mgracsolsrv64bit1-priv.mgdom.com
192.168.05.02   mgracsolsrv64bit2-priv mgracsolsrv64bit2-priv.mgdom.com
#VIRTUAL
192.168.56.30   mgracsolsrv64bit1-vip mgracsolsrv64bit1-vip.mgdom.com
192.168.56.31   mgracsolsrv64bit2-vip mgracsolsrv64bit2-vip.mgdom.com
=================================================================================
Public - Requires one interface or network adapter/card
=================================================================================
Determine the public host name for each node in the cluster.
1) For the public host name, use the primary host name of each node.
2) In other words, use the name displayed by the hostname command, for example: mgracsolsrv64bit1
3) Static IP address.
4) Configured before installation for each node, and resolvable to that node before installation.
5) It should be on the same subnet as all other public IP addresses, VIP addresses, and SCAN addresses.
6) A public IP is registered in DNS.
7) Assign one IP address with an associated host name (or network name) registered in the DNS for the public interface.
8) Record the host name and IP address in the system hosts file as well, /etc/hosts.
9) Each node needs a public IP address. This can be an interface bonded using IPMP. (A quick sanity check follows this list.)
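As a quick sanity check of the points above (a sketch using this document's names and addresses; adjust for your environment), the public name should match the hostname command and resolve consistently from DNS and /etc/hosts:
$ hostname                               # expect: mgracsolsrv64bit1
$ nslookup mgracsolsrv64bit1.mgdom.com   # DNS should return the static public IP
$ getent hosts mgracsolsrv64bit1         # expect: 192.168.56.20 (from /etc/hosts)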
=================================================================================
Private - Requires one interface or network adapter/card
=================================================================================
Determine the private host name for each node in the cluster.
1) This private host name does not need to be resolvable through DNS and should be entered in the /etc/hosts file.
2) The private IP should NOT be accessible to servers not participating in the local cluster. (Only the RAC nodes should be able to ping each other using the private IPs.)
3) The private network should be on standalone, dedicated switch(es).
4) The private network should NOT be part of a larger overall network topology.
5) The private network should be deployed on Gigabit Ethernet or better.
6) It is recommended that redundant NICs are configured. For Solaris, use either Sun Trunking (OS based) or Sun IPMP (OS based). More information: <<Note: 283107.1>>
7) IPMP in general. When IPMP is used for the interconnect: <<Note: 368464.1>>
NOTE: If IPMP is used for the public and/or cluster interconnect, critical merge patch 9729439 should be applied to both the Grid Infrastructure and RDBMS Oracle homes.
8) Static IP address.
9) Configured before installation, but on a separate, private network, with its own subnet, that is not resolvable except by other cluster member nodes.
10) A private IP known only to the servers in the RAC configuration, to be used by the interconnect.
11) Configure the private interconnect network interface cards to have a private node name and a private IP address.
12) This can be an interface bonded using IPMP.
13) Choose a private IP address that is in the address range 10.*.*.* or 192.168.*.*, for example:
Private   10.0.0.1   Hosts file
Private   10.0.0.2   Hosts file
14) The private IP address should be on a separate subnet from the public IP address.
15) Oracle strongly recommends using a physically separate, private network (you can use a VLAN on a shared switch if you must).
16) You should ensure that the private IP addresses are reachable only by the cluster member nodes. (A reachability sketch follows this list.)
17) Use non-routable network addresses for the private interconnect:
Class A: 10.0.0.0 to 10.255.255.255
Class B: 172.16.0.0 to 172.31.255.255
Class C: 192.168.0.0 to 192.168.255.255
Or do a Google search for "subnet calculator".
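A minimal reachability sketch for points 2) and 16), assuming this document's addressing: from each node the private name should answer directly over the interconnect, with no routers in between, and should not answer from outside the cluster:
$ ping mgracsolsrv64bit2-priv        # from node 1: should succeed
$ traceroute mgracsolsrv64bit2-priv  # should complete in a single hop
# The same ping attempted from a host outside the cluster should fail.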
=======================================================================================
VIP - Does NOT require a separate network adapter/card; it uses the existing public interface.
=======================================================================================
1) The virtual IP address and the network name must not be currently in use (i.e., they should NOT be reachable by a ping command).
2) The virtual IP address must be on the same subnet as your public IP address.
3) The virtual host name for each node should be registered with your DNS.
4) Static IP address.
5) Configured before installation for each node; the IP address and host name are currently unused (they can be registered in DNS, but should not respond to a ping command).
6) It should be on the same subnet as all other public IP addresses, VIP addresses, and SCAN addresses.
7) A virtual IP is registered in DNS but NOT defined on the servers; it will be defined later during the Oracle Clusterware installation.
8) Assign one virtual IP (VIP) address with an associated host name registered in DNS.
9) Record the host name and VIP address in the system hosts file, /etc/hosts. (A pre-install check follows this list.)
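A simple pre-install check of points 1) and 7), using this setup's names: the VIP name should already resolve in DNS but must not answer yet, because Clusterware brings it up only after installation:
$ nslookup mgracsolsrv64bit1-vip.mgdom.com   # should return 192.168.56.30
$ ping mgracsolsrv64bit1-vip                 # should time out - the address is still unused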
===========================================================================================
SCAN - VIP - Define in the corporate DNS (Domain Name Service); this again uses the existing public interface.
===========================================================================================
1) Determine your cluster name. The cluster name should satisfy the following conditions:
2) The cluster name is globally unique throughout your host domain.
3) The cluster name is at least 1 character and less than 15 characters long.
4) The cluster name must consist of the same character set used for host names: single-byte alphanumeric characters (a to z, A to Z, and 0 to 9) and hyphens (-).
5) Define the SCAN in your corporate DNS (Domain Name Service). You must ask your network administrator to create a single name that resolves to three IP addresses using a round-robin algorithm.
6) The IP addresses must be on the same subnet as your public network in the cluster.
7) SCAN VIPs must NOT be in the /etc/hosts file; they must be resolved by DNS.
$ /u01/app/11.2.0.1/grid/bin/olsnodes -c
mgrac-cluster
$ nslookup mgrac-scan
Server:   192.168.56.99
Address:  192.168.56.99#53
Name: mgrac-scan.mgdom.com
Address: 192.168.56.81
Name: mgrac-scan.mgdom.com
Address: 192.168.56.82
Name: mgrac-scan.mgdom.com
Address: 192.168.56.83
$ dig mgrac-scan.mgdom.com +noall +answer
; <<>> DiG 9.6-ESV-R8 <<>> mgrac-scan.mgdom.com +noall +answer
;; global options: +cmd
mgrac-scan.mgdom.com.   600   IN   A   192.168.56.83
mgrac-scan.mgdom.com.   600   IN   A   192.168.56.81
mgrac-scan.mgdom.com.   600   IN   A   192.168.56.82
$ /u01/app/11.2.0.1/grid/bin/srvctl config scan
SCAN name: mgrac-scan, Network: 1/192.168.56.0/255.255.255.0/e1000g0
SCAN VIP name: scan1, IP: /mgracscan3/192.168.56.83
SCAN VIP name: scan2, IP: /mgracscan1/192.168.56.81
SCAN VIP name: scan3, IP: /mgracscan2/192.168.56.82
$ /u01/app/11.2.0.1/grid/bin/srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521
Verify the SCAN IPs registered with the resource
===========================================
$ /u01/app/11.2.0.1/grid/bin/crsctl stat res ora.scan1.vip -p | egrep '^NAME|TYPE|USR_ORA_VIP|START_DEPENDENCIES|SCAN_NAME|VERSION'
NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
SCAN_NAME=mgrac-scan
START_DEPENDENCIES=hard(ora.net1.network) dispersion:active(type:ora.scan_vip.type) pullup(ora.net1.network)
USR_ORA_VIP=192.168.56.83
VERSION=11.2.0.1.0
Refer to these notes:
=================
How to Modify Private Network Information in Oracle Clusterware (Doc ID 283684.1)
How to Modify Public Network Information including VIP in Oracle Clusterware (Doc ID 276434.1)
How to Modify SCAN Setting or SCAN Listener Port after Installation (Doc ID 972500.1)
===================================================================================
11g R2 RAC: NIC BONDING
Oracle uses the interconnect for both cache fusion and Oracle Clusterware messaging. Depending on the number of nodes in the configuration, the interconnect can be a crossover cable (when only two nodes participate in the cluster) or connected via a switch. Both public and private networks can be single points of failure; such failures can disrupt the operation of the cluster and reduce availability. To avoid them, redundant networks should be configured, meaning dual network adapters for both the public and private networks. To enable dual network connections and to load-balance network traffic across the dual network adapters, features such as network interface card (NIC) bonding or NIC pairing should be used whenever possible.
NIC bonding is a method of pairing multiple physical network connections into a single logical interface. This logical interface is used to establish a connection with the database server. By allowing all network connections that are part of the logical interface to be used during communication, bonding provides load-balancing capabilities that would not otherwise be available. In addition, when one of the network connections fails, the other connection continues to receive and transmit data, making the setup fault tolerant.
In a RAC configuration, a minimum of two network connections is required. One connection is for the private interface between the nodes in the cluster, and the other, called the public interface, is for users or application servers to connect and transmit data to the database server.
Private IP addresses are required by Oracle RAC to provide communication between the cluster nodes. Depending on your private network configuration, you may need one or more IP addresses.
Let's implement and test NIC bonding in an 11gR2 RAC setup.
Current configuration is as follows:
- 2 node VM setup: Node1, Node2
- Public network (eth0):
  Node1 : 192.9.201.183
  Node2 : 192.9.201.187
- Private interconnect (eth1):
  Node1 : 10.0.0.1
  Node2 : 10.0.0.2
- On both the nodes, before powering them on, create two network interfaces, eth2 and eth3, which will be bonded and will replace the current private interconnect eth1.
- Power on both machines.
- Check that the cluster services are running:
Node1# crsctl stat res -t
Node1# oifcfg getif
eth0  192.9.201.0  global  public
eth1  10.0.0.0     global  cluster_interconnect
---------------------
- On both the nodes:
---------------------
Step: Using neat (the network administration tool),
. deactivate both eth2 and eth3 (they have been assigned IP addresses by DHCP);
. edit eth1 and remove the entries for IP address and netmask, because we will use the same IP addresses (10.0.0.1/2) for bond0 on both the nodes so that we don't have to make a new entry in DNS.
Step: Create a logical interface (bond0):
Create the bond0 configuration file in the /etc/sysconfig/network-scripts directory on both machines. The IP address used is the same as the one that was used for eth1 on both the nodes.
Node1# vi /etc/sysconfig/network-scripts/ifcfg-bond0
Add the following lines:
DEVICE=bond0
IPADDR=10.0.0.1
NETWORK=10.0.0.0
NETMASK=255.255.255.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
Node2# vi /etc/sysconfig/network-scripts/ifcfg-bond0
Add the following lines:
DEVICE=bond0
IPADDR=10.0.0.2
NETWORK=10.0.0.0
NETMASK=255.255.255.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
Step: Modify the individual network interface configuration files to reflect the bonding details. The MASTER clause indicates which logical interface (bond0) this specific NIC belongs to, and the SLAVE clause indicates that it is one among the NICs that are bonded to, and slaves of, that master.
Edit the eth2 and eth3 configuration files on both machines:
# vi /etc/sysconfig/network-scripts/ifcfg-eth2
Modify/append as follows:
DEVICE=eth2
BOOTPROTO=none
USERCTL=no
MASTER=bond0
SLAVE=yes
# vi /etc/sysconfig/network-scripts/ifcfg-eth3
Make sure the file reads as follows for the eth3 interface:
DEVICE=eth3
BOOTPROTO=none
USERCTL=no
MASTER=bond0
SLAVE=yes
Step: Configure the bond driver/module.
The configuration consists of two lines per logical interface. miimon (the media independent interface monitor) is configured in milliseconds and represents the link monitoring frequency. Mode indicates the type of configuration that will be deployed between the interfaces that are bonded or paired together. Mode balance-alb uses adaptive load balancing: transmit and receive traffic is distributed across all the slaves. Mode active-backup means only one slave in the bond device is active at a time; if the active slave goes down, another slave becomes active and all traffic then flows through the newly active slave.
Modify the kernel modules configuration file on all the nodes:
# vi /etc/modprobe.conf
Append the following two lines:
alias bond0 bonding
options bond0 mode=balance-alb miimon=100
Step: Test the configuration
1. On all the nodes, load the bonding module:
# modprobe bonding
2. On all the nodes, restart the networking service in order to bring up the bond0 interface:
# service network restart
3. Disable automatic restart of CRS on both nodes:
node1# crsctl disable crs
node2# crsctl disable crs
4. The current status of the bond device bond0 is available in /proc/net/bonding/bond0. Type the following cat command to query the current status of the Linux kernel bonding driver:
# cat /proc/net/bonding/bond0
Sample output:
Ethernet Channel Bonding Driver: v3.4.0 (October 7, 2008)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:2f:ee:13

Slave Interface: eth3
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:2f:ee:1d
5. Display the bonded interface using the ifconfig command, which shows bond0 running as the master and both eth2 and eth3 running as slaves. Also note that the hardware addresses of bond0 and its underlying devices eth2 and eth3 are the same.
# ifconfig
Sample output:
bond0   Link encap:Ethernet  HWaddr 00:0C:29:2F:EE:13
        inet addr:10.0.0.1  Bcast:10.0.0.255  Mask:255.255.255.0
        inet6 addr: fe80::20c:29ff:fe2f:ee13/64 Scope:Link
        UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
        RX packets:4211 errors:0 dropped:0 overruns:0 frame:0
        TX packets:332 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:1431791 (1.3 MiB)  TX bytes:68068 (66.4 KiB)

eth0    Link encap:Ethernet  HWaddr 00:0C:29:2F:EE:FF
        inet addr:192.9.201.183  Bcast:192.9.201.255  Mask:255.255.255.0
        inet6 addr: fe80::20c:29ff:fe2f:eeff/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:40757 errors:0 dropped:0 overruns:0 frame:0
        TX packets:44923 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:25161328 (23.9 MiB)  TX bytes:14121868 (13.4 MiB)
        Interrupt:67 Base address:0x2024

eth1    Link encap:Ethernet  HWaddr 00:0C:29:2F:EE:09
        inet6 addr: fe80::20c:29ff:fe2f:ee09/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:5839 errors:0 dropped:0 overruns:0 frame:0
        TX packets:5396 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:3141765 (2.9 MiB)  TX bytes:2291293 (2.1 MiB)
        Interrupt:75 Base address:0x20a4

eth2    Link encap:Ethernet  HWaddr 00:0C:29:2F:EE:13
        UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
        RX packets:3236 errors:0 dropped:0 overruns:0 frame:0
        TX packets:174 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:1239253 (1.1 MiB)  TX bytes:34959 (34.1 KiB)
        Interrupt:75 Base address:0x2424

eth3    Link encap:Ethernet  HWaddr 00:0C:29:2F:EE:13
        UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
        RX packets:975 errors:0 dropped:0 overruns:0 frame:0
        TX packets:158 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:192538 (188.0 KiB)  TX bytes:33109 (32.3 KiB)
        Interrupt:59 Base address:0x24a4

lo      Link encap:Local Loopback
        inet addr:127.0.0.1  Mask:255.0.0.0
        inet6 addr: ::1/128 Scope:Host
        UP LOOPBACK RUNNING  MTU:16436  Metric:1
        RX packets:10252 errors:0 dropped:0 overruns:0 frame:0
        TX packets:10252 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:7863150 (7.4 MiB)  TX bytes:7863150 (7.4 MiB)
6. Get the current private network interface configuration being used by the cluster:
grid Node1$ oifcfg getif
eth0  192.9.201.0  global  public
eth1  10.0.0.0     global  cluster_interconnect
7. Set the new private interconnect to bond0 - this updates the OCR:
grid Node1$ oifcfg setif -global bond0/10.0.0.0:cluster_interconnect
8. Restart CRS:
Node1# crsctl stop crs
Node1# crsctl start crs
9. Check that CRS started on both nodes:
Node1# crsctl stat res -t
10. Get the current private interconnect info - it will display:
eth0  - public interconnect
eth1  - earlier private interconnect
bond0 - private interconnect
Node1# oifcfg getif
11. Delete the earlier private interconnect (eth1):
Node1# oifcfg delif -global eth1/10.0.0.0
12. Get the current private interconnect info - it will now display:
eth0  - public interconnect
bond0 - private interconnect
Node1# oifcfg getif
13. On node2, remove network adapter eth3:
- Click VM > Settings
- Click on the last network adapter (eth3) > Remove > OK
- Check that eth3 has been removed - it will not be listed:
Node2# ifconfig
- Check that cluster services are still running on Node2, as eth2 is providing the private interconnect service:
Node1# crsctl stat res -t
14. On node1, remove network adapter eth3:
- Click VM > Settings
- Click on the last network adapter (eth3) > Remove > OK
- Check that eth3 has been removed - it will not be listed:
Node1# ifconfig
- Check that cluster services are still running on Node2, as eth2 on node1 is providing the private interconnect service:
Node1# crsctl stat res -t
15. On node2, remove network adapter eth2 (the only adapter left for the private interconnect):
- Click VM > Settings
- Click on the last network adapter (eth2) > Remove > OK
- Node2 immediately gets rebooted, as it cannot communicate with node1.
- Check that node2 is not a part of the cluster any more:
Node1# crsctl stat res -t
11g R2 RAC: Highly Available IP (HAIP)
========================
In earlier releases, to minimize node evictions due to frequent private NIC down events, bonding, trunking, teaming, or similar technology was required to make use of redundant network connections between the nodes. Oracle Clusterware now provides an integrated solution, "Redundant Interconnect Usage", which supports IP failover.
Multiple private network adapters can be defined either during the installation phase or afterwards using oifcfg. The ora.cluster_interconnect.haip resource picks up a highly available virtual IP (the HAIP) from the "link-local" (Linux/Unix) IP range (169.254.0.0) and assigns one to each private network. With HAIP, by default, interconnect traffic is load balanced across all active interconnect interfaces. If a private interconnect interface fails or becomes non-communicative, Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces.
Grid Infrastructure can activate a maximum of four private network adapters at a time, even if more are defined. The number of HAIP addresses is decided by how many private network adapters are active when Grid comes up on the first node in the cluster. If there is only one active private network, Grid will create one; if two, Grid will create two, and so on. The number of HAIPs will not increase beyond four even if more private network adapters are activated, and a restart of Clusterware on all nodes is required for new adapters to become effective.
Oracle RAC databases, Oracle Automatic Storage Management (clustered ASM), and Oracle Clusterware components such as CSS, OCR, CRS, CTSS, and EVM employ Redundant Interconnect Usage. Non-Oracle software, and Oracle software not listed above, will not be able to benefit from this feature.
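Once an instance is up, one way to confirm which interconnect addresses are actually in use is the gv$cluster_interconnects view (a sketch; with HAIP active you should see 169.254.*.* addresses rather than the physical private IPs):
$ sqlplus -s / as sysdba <<'EOF'
SELECT inst_id, name, ip_address, source FROM gv$cluster_interconnects;
EOF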
Let's demonstrate. Current configuration:
Cluster name : cluster01
Nodes : host01, host02
Overview:
– Check the current network configuration
– Check that a link-local HAIP (eth1:1) has been started for the only private interconnect, eth1, on both the nodes
– Add another network adapter, eth2, to both the nodes
– Assign an IP address to eth2 on both the nodes
– Restart the network service on both the nodes
– Check that eth2 has been activated on both the nodes
– Add eth2 as another private interconnect on one of the nodes
– Check that eth2 has been added to the cluster as another private interconnect
– Check that the HAIP has not been activated yet (Clusterware needs to be restarted)
– Restart CRS on both the nodes
– Check that the resource ora.cluster_interconnect.haip has been restarted on both the nodes
– Check that link-local HAIPs (eth1:1 and eth2:1) have been started for both the private interconnects, eth1 and eth2, on both the nodes, from the subnet 169.254.*.* reserved for HAIP
– Stop the private interconnect eth1 on node1
– Check that eth1 is not active and the corresponding HAIP has failed over to eth2
– Check that CRS is still up on host01
Implementation
- Check the current network configuration (eth0 is the public interconnect, eth1 is the private interconnect):
[root@host01 ~]# oifcfg getif
eth0  192.9.201.0  global  public
eth1  10.0.0.0     global  cluster_interconnect
- Check that a link-local HAIP (eth1:1) has been started for the only private interconnect, eth1, on both the nodes:
[root@host01 ~]# ifconfig -a
(output trimmed to show only the private interconnects)
eth1    Link encap:Ethernet  HWaddr 00:0C:29:69:3E:AA
        inet addr:10.0.0.1  Bcast:10.0.0.255  Mask:255.255.255.0
        inet6 addr: fe80::20c:29ff:fe69:3eaa/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:134731 errors:0 dropped:0 overruns:0 frame:0
        TX packets:116938 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:75265764 (71.7 MiB)  TX bytes:55228739 (52.6 MiB)
        Interrupt:75 Base address:0x20a4

eth1:1  Link encap:Ethernet  HWaddr 00:0C:29:69:3E:AA
        inet addr:169.254.4.103  Bcast:169.254.127.255  Mask:255.255.128.0
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        Interrupt:75 Base address:0x20a4
[root@host02 network-scripts]# ifconfig -a
(output trimmed to show only the private interconnects)
eth1    Link encap:Ethernet  HWaddr 00:0C:29:44:67:25
        inet addr:10.0.0.2  Bcast:10.0.0.255  Mask:255.255.255.0
        inet6 addr: fe80::20c:29ff:fe44:6725/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:31596 errors:0 dropped:0 overruns:0 frame:0
        TX packets:32994 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:16550162 (15.7 MiB)  TX bytes:17683576 (16.8 MiB)
        Interrupt:75 Base address:0x20a4

eth1:1  Link encap:Ethernet  HWaddr 00:0C:29:44:67:25
        inet addr:169.254.91.243  Bcast:169.254.127.255  Mask:255.255.128.0
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        Interrupt:75 Base address:0x20a4
- Add another network adapter, eth2, to both the nodes.
- Assign an IP address to eth2 on both the nodes:
host01 : 10.0.0.11, subnet mask : 255.255.255.0
host02 : 10.0.0.22, subnet mask : 255.255.255.0
- Restart the network service on both the nodes:
# service network restart
- Check that eth2 has been activated on both the nodes:
[root@host01 ~]# ifconfig -a | grep eth2
eth2    Link encap:Ethernet  HWaddr 00:0C:29:69:3E:B4
[root@host02 network-scripts]# ifconfig -a | grep eth2
eth2    Link encap:Ethernet  HWaddr 00:0C:29:44:67:2F
- Add eth2 as another private interconnect on one of the nodes:
[root@host01 ~]# oifcfg setif -global eth2/10.0.0.0:cluster_interconnect
- Check that eth2 has been added to the cluster as another private interconnect:
[root@host01 ~]# oifcfg getif
eth0  192.9.201.0  global  public
eth1  10.0.0.0     global  cluster_interconnect
eth2  10.0.0.0     global  cluster_interconnect
- Check that the HAIP has not been activated yet (Clusterware needs to be restarted):
[root@host01 ~]# ifconfig -a | grep eth2
eth2    Link encap:Ethernet  HWaddr 00:0C:29:69:3E:B4
[root@host02 network-scripts]# ifconfig -a | grep eth2
eth2    Link encap:Ethernet  HWaddr 00:0C:29:44:67:2F
- Restart CRS on both the nodes:
[root@host01 ~]# crsctl stop crs
[root@host01 ~]# crsctl start crs
[root@host02 network-scripts]# crsctl stop crs
[root@host02 network-scripts]# crsctl start crs
- Check that the resource ora.cluster_interconnect.haip has been restarted on both the nodes (since it is a resource of the lower stack, the -init option has been used):
[root@host01 ~]# crsctl stat res ora.cluster_interconnect.haip -init
NAME=ora.cluster_interconnect.haip
TYPE=ora.haip.type
TARGET=ONLINE
STATE=ONLINE on host01
- Check that link-local HAIPs (eth1:1 and eth2:1) have been started for both the private interconnects, eth1 and eth2, on both the nodes, from the subnet 169.254.*.* reserved for HAIP:
[root@host01 ~]# ifconfig -a
(output trimmed to show only the private interconnects)
eth1    Link encap:Ethernet  HWaddr 00:0C:29:69:3E:AA
        inet addr:10.0.0.1  Bcast:10.0.0.255  Mask:255.255.255.0
        inet6 addr: fe80::20c:29ff:fe69:3eaa/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:134731 errors:0 dropped:0 overruns:0 frame:0
        TX packets:116938 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:75265764 (71.7 MiB)  TX bytes:55228739 (52.6 MiB)
        Interrupt:75 Base address:0x20a4

eth1:1  Link encap:Ethernet  HWaddr 00:0C:29:69:3E:AA
        inet addr:169.254.4.103  Bcast:169.254.127.255  Mask:255.255.128.0
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        Interrupt:75 Base address:0x20a4

eth2    Link encap:Ethernet  HWaddr 00:0C:29:69:3E:B4
        inet addr:10.0.0.11  Bcast:10.0.0.255  Mask:255.255.255.0
        inet6 addr: fe80::20c:29ff:fe69:3eb4/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:4358 errors:0 dropped:0 overruns:0 frame:0
        TX packets:404 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:1487549 (1.4 MiB)  TX bytes:76461 (74.6 KiB)
        Interrupt:75 Base address:0x2424

eth2:1  Link encap:Ethernet  HWaddr 00:0C:29:69:3E:B4
        inet addr:169.254.196.216  Bcast:169.254.255.255  Mask:255.255.128.0
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        Interrupt:75 Base address:0x2424
[root@host02 network-scripts]# ifconfig -a
(output trimmed to show only the private interconnects)
eth1    Link encap:Ethernet  HWaddr 00:0C:29:44:67:25
        inet addr:10.0.0.2  Bcast:10.0.0.255  Mask:255.255.255.0
        inet6 addr: fe80::20c:29ff:fe44:6725/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:31596 errors:0 dropped:0 overruns:0 frame:0
        TX packets:32994 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:16550162 (15.7 MiB)  TX bytes:17683576 (16.8 MiB)
        Interrupt:75 Base address:0x20a4

eth1:1  Link encap:Ethernet  HWaddr 00:0C:29:44:67:25
        inet addr:169.254.91.243  Bcast:169.254.127.255  Mask:255.255.128.0
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        Interrupt:75 Base address:0x20a4

eth2    Link encap:Ethernet  HWaddr 00:0C:29:44:67:2F
        inet addr:10.0.0.22  Bcast:10.0.0.255  Mask:255.255.255.0
        inet6 addr: fe80::20c:29ff:fe44:672f/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:7229 errors:0 dropped:0 overruns:0 frame:0
        TX packets:2368 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:4288301 (4.0 MiB)  TX bytes:1163296 (1.1 MiB)
        Interrupt:75 Base address:0x2424

eth2:1  Link encap:Ethernet  HWaddr 00:0C:29:44:67:2F
        inet addr:169.254.174.223  Bcast:169.254.255.255  Mask:255.255.128.0
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        Interrupt:75 Base address:0x2424
- Stop the private interconnect eth1 on node1:
[root@host01 ~]# ifdown eth1
- Check that eth1 is not active and the corresponding HAIP (169.254.4.103) has failed over to eth2:
[root@host01 ~]# ifconfig -a
(output trimmed to show only the private interconnects)
eth1    Link encap:Ethernet  HWaddr 00:0C:29:69:3E:AA
        BROADCAST MULTICAST  MTU:1500  Metric:1
        RX packets:163401 errors:0 dropped:0 overruns:0 frame:0
        TX packets:145495 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:89098576 (84.9 MiB)  TX bytes:69881778 (66.6 MiB)
        Interrupt:75 Base address:0x20a4

eth2    Link encap:Ethernet  HWaddr 00:0C:29:69:3E:B4
        inet addr:10.0.0.11  Bcast:10.0.0.255  Mask:255.255.255.0
        inet6 addr: fe80::20c:29ff:fe69:3eb4/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:11649 errors:0 dropped:0 overruns:0 frame:0
        TX packets:4738 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:6370975 (6.0 MiB)  TX bytes:2033237 (1.9 MiB)
        Interrupt:75 Base address:0x2424

eth2:1  Link encap:Ethernet  HWaddr 00:0C:29:69:3E:B4
        inet addr:169.254.196.216  Bcast:169.254.255.255  Mask:255.255.128.0
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        Interrupt:75 Base address:0x2424

eth2:2  Link encap:Ethernet  HWaddr 00:0C:29:69:3E:B4
        inet addr:169.254.4.103  Bcast:169.254.127.255  Mask:255.255.128.0
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        Interrupt:75 Base address:0x2424
- Check that CRS is still up on host01:
[root@host01 ~]# crsctl stat res -t