Requirements:
fuse, fuse-libs, and kernel-devel packages, and a repository database with a bigfile tablespace
Install the fuse, fuse-libs, and kernel-devel packages
Run all the steps below, through the mount directory creation, on both database nodes.
# cd /etc/yum.repos.d
Check that the Oracle public yum repository file is present. If it is not, fetch it with wget; without it the install command below will not find the packages.
# /usr/openv/pdde/pdopensource/bin/wget http://public-yum.oracle.com/public-yum-ol6.repo
After the above is added run the below to install fuse and kernel-devel
# yum install fuse fuse-libs kernel-devel
If the packages are already installed and current, yum reports:
Setting up Install Process
Package fuse-2.9.4-1.0.1.el6.x86_64 already installed and latest version
Package fuse-libs-2.9.4-1.0.1.el6.x86_64 already installed and latest version
Package kernel-devel-2.6.32-642.11.1.el6.x86_64 already installed and latest version
Nothing to do
Optional - Dos2Unix
# sudo yum install dos2unix
Create the mount directory
# mkdir /dbfs_mnt
# chown oracle:dba /dbfs_mnt
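Before mounting, it is worth confirming the mount point is an existing, empty directory; older FUSE versions refuse to mount over a non-empty directory without an extra option, and mounting over one hides its existing contents. A local sketch (a temp directory stands in for /dbfs_mnt):

```shell
# Sketch: check that a mount point is an existing, empty directory before
# mounting over it (a temp dir stands in for /dbfs_mnt here).
mnt=$(mktemp -d)
if [ -d "$mnt" ] && [ -z "$(ls -A "$mnt")" ]; then
    echo "mount point ok: $mnt"
    ok=1
else
    echo "mount point not usable: $mnt"
    ok=0
fi
```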
*****************************************************************
Create Repository Database testdbfs
Create tablespace and dbfs_user
CREATE BIGFILE TABLESPACE DBFS_DATA DATAFILE '+DATA_DG' SIZE 1T AUTOEXTEND OFF NOLOGGING
ONLINE
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT AUTO
FLASHBACK ON;
CREATE USER DBFS_USER IDENTIFIED BY "Password" DEFAULT TABLESPACE DBFS_DATA TEMPORARY TABLESPACE TEMP;
GRANT CREATE SESSION, RESOURCE, CREATE VIEW, DBFS_ROLE TO DBFS_USER;
$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus dbfs_user/Password
-- Here dbfs_data is the tablespace name and oracle is the name of the file system that will appear as a folder inside /dbfs_mnt
@$ORACLE_HOME/rdbms/admin/dbfs_create_filesystem dbfs_data oracle
*****************************************************************
Run on Both nodes
Configure Shared Libraries
# echo "/usr/local/lib" >> /etc/ld.so.conf.d/usr_local_lib.conf
# cd /usr/local/lib
# ln -s $ORACLE_HOME/lib/libclntsh.so.11.1
# ln -s $ORACLE_HOME/lib/libnnz11.so
# locate libfuse.so
# ln -s /lib64/libfuse.so.2 libfuse.so
# ldconfig
# ldconfig -p | grep fuse
# ln -s $ORACLE_HOME/bin/dbfs_client /sbin/mount.dbfs
# echo user_allow_other > /etc/fuse.conf
# chmod 644 /etc/fuse.conf
*****************************************************************
Download mount-dbfs.zip from My Oracle Support (Configuring DBFS on Oracle Exadata Database Machine, Doc ID 1054431.1) and modify it as described below.
Copy the zip to the /tmp folder on one of the servers.
# unzip mount-dbfs-20160215.zip
Archive: mount-dbfs-20160215.zip
inflating: mount-dbfs.conf
inflating: mount-dbfs.sh
# dos2unix mount-dbfs.conf
dos2unix: converting file mount-dbfs.conf to UNIX format ...
# dos2unix mount-dbfs.sh
dos2unix: converting file mount-dbfs.sh to UNIX format ...
You will have two files:
1) mount-dbfs.sh
2) mount-dbfs.conf
Modify the following values in the conf file and leave the others as is:
DBNAME=testdbfs
MOUNT_POINT=/dbfs_mnt
DBFS_USER=dbfs_user
ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1
GRID_HOME=/u01/app/11.2.0.4/grid
DBFS_PASSWD=Password
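A missing key in mount-dbfs.conf only surfaces later, when the resource fails to start, so a quick sanity check that all six edited keys are present can save a round trip. A local sketch against a sample conf (values are the examples from above):

```shell
# Write a sample conf and verify the keys this guide edits are all present.
conf=$(mktemp)
cat > "$conf" <<'EOF'
DBNAME=testdbfs
MOUNT_POINT=/dbfs_mnt
DBFS_USER=dbfs_user
ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1
GRID_HOME=/u01/app/11.2.0.4/grid
DBFS_PASSWD=Password
EOF

missing=0
for key in DBNAME MOUNT_POINT DBFS_USER ORACLE_HOME GRID_HOME DBFS_PASSWD; do
    grep -q "^${key}=" "$conf" || { echo "missing: $key"; missing=1; }
done
echo "missing=$missing"     # prints missing=0 when all keys are set
```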
Once modified, copy these files to their respective locations on both nodes.
Using dcli here copies to both nodes in one step, so you do not need to repeat the commands on each node.
Create a file called dbs_group in root's home directory listing the two nodes:
# cat dbs_group
10.111.11.111
10.111.11.222
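dcli fans the same command out to every host listed in the group file. Conceptually it behaves like the loop below (a local sketch only; real dcli runs each command over ssh as the given user):

```shell
# Build a sample group file and walk it the way dcli walks -g <file>.
workdir=$(mktemp -d)
printf '10.111.11.111\n10.111.11.222\n' > "$workdir/dbs_group"

hosts=""
while read -r host; do
    # dcli would run here: ssh root@$host "<command>"; we only collect names
    hosts="$hosts $host"
done < "$workdir/dbs_group"
echo "would copy mount-dbfs.sh to:$hosts"
```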
# dcli -g ~/dbs_group -l root -d /u01/app/11.2.0.4/grid/crs/script/ -f /tmp/mount-dbfs.sh
# dcli -g ~/dbs_group -l root chown oracle:dba /u01/app/11.2.0.4/grid/crs/script/mount-dbfs.sh
# dcli -g ~/dbs_group -l root chmod 750 /u01/app/11.2.0.4/grid/crs/script/mount-dbfs.sh
# dcli -g ~/dbs_group -l root -d /etc/oracle -f /tmp/mount-dbfs.conf
# dcli -g ~/dbs_group -l root chmod 640 /etc/oracle/mount-dbfs.conf
Once copied verify all the locations and check permissions.
Now test that the mount starts, reports status, and stops properly:
$ cd $GRID_HOME/crs/script/
$ ./mount-dbfs.sh start
$ ./mount-dbfs.sh status
$ ./mount-dbfs.sh stop
After starting, verify the mount is up with df -kh; the new mount point should be listed.
Once verified, it is time to add the mount as a Clusterware resource.
*****************************************************************
Create a shell script on one of the nodes and run it
$ vi add-dbfs-resource.sh
##### start script add-dbfs-resource.sh
#!/bin/bash
ACTION_SCRIPT=/u01/app/11.2.0.4/grid/crs/script/mount-dbfs.sh
RESNAME=dbfs_mnt
DBNAME=testdbfs
DBNAMEL=`echo $DBNAME | tr A-Z a-z`
ORACLE_HOME=/u01/app/11.2.0.4/grid
PATH=$ORACLE_HOME/bin:$PATH
export PATH ORACLE_HOME
crsctl add resource $RESNAME \
-type local_resource \
-attr "ACTION_SCRIPT=$ACTION_SCRIPT, \
CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
START_DEPENDENCIES='hard(ora.$DBNAMEL.db)pullup(ora.$DBNAMEL.db)',\
STOP_DEPENDENCIES='hard(ora.$DBNAMEL.db)',\
SCRIPT_TIMEOUT=300"
##### end script add-dbfs-resource.sh
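The script lowercases DBNAME because Clusterware registers database resources as ora.&lt;dbname in lowercase&gt;.db, and the dependency strings must match that name exactly. The conversion can be checked on its own:

```shell
# Reproduce the DBNAMEL conversion from add-dbfs-resource.sh.
DBNAME=TestDBFS                        # mixed case on purpose
DBNAMEL=$(echo "$DBNAME" | tr A-Z a-z)
echo "ora.${DBNAMEL}.db"               # prints ora.testdbfs.db
```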
Make the script executable and run it as the oracle user:
$ chmod +x add-dbfs-resource.sh
$ ./add-dbfs-resource.sh
Verify that the mount shows up in the cluster resources:
crsctl status res -t
To start the mount resource:
crsctl start resource dbfs_mnt
To stop the mount resource:
crsctl stop resource dbfs_mnt -f
*****************************************************************
Note: When stopping, make sure no session is open inside the mount point. If your current directory is inside the mount and you try to unmount it, you will get a "Device or resource busy" error.
If you try to stop the database while the mount is online, you will get an error because of the resource dependencies. To shut it down anyway, specify the force option:
srvctl stop database -d testdbfs -f
After everything is mounted, the mount point's ownership shows as root:root.
You can also unmount manually:
fusermount -u /dbfs_mnt
To force an unmount:
umount -l /dbfs_mnt
fusermount -uz /dbfs_mnt
*****************************************************************
To drop the filesystem (oracle is the file system name created earlier):
@dbfs_drop_filesystem.sql oracle
*****************************************************************
Testing whether the mounts work properly:
$ # Connecting interactively prompts for the password and holds the session.
$ dbfs_client dbfs_user@testdbfs /dbfs_mnt
$ # Reading the password from a file releases the session.
$ echo Password > pw.f
$ nohup dbfs_client dbfs_user@testdbfs /dbfs_mnt < pw.f &
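The pw.f file holds the DBFS password in plain text, so at minimum it should be readable only by its owner before being fed to dbfs_client. A sketch (run in a scratch directory; the filename pw.f matches the example above):

```shell
# Create the password file with owner-only permissions.
umask 077                 # new files default to owner-only access
echo "Password" > pw.f
chmod 600 pw.f            # belt and braces: rw for owner, nothing for others
perms=$(stat -c '%a' pw.f 2>/dev/null || stat -f '%Lp' pw.f)
echo "pw.f mode: $perms"
```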
*****************************************************************
For debugging
Please refer to
http://www.hhutzler.de/blog/debugging-prolbems-when-mounting-a-dbfs/
$ means oracle user
# means root user