In general, once an ASM instance completes the ASM disk discovery operation, you can use the discovered ASM disks either to build a new disk group or to add to a pre-existing disk group. You need at least one eligible ASM disk (with a HEADER_STATUS of CANDIDATE, FORMER, or PROVISIONED) to build a disk group or to add to a pre-existing disk group. There are various methods to create a disk group, such as DBCA (pre 11g R2), ASMCA (from 11g R2), Grid Control, and the CREATE DISKGROUP SQL statement at the SQL*Plus prompt. Although the easiest and most convenient way to build and manage a disk group is to use GUI tools such as DBCA, ASMCA, or Grid Control, we are going to demonstrate how to create and manage a disk group using a set of SQL statements.
In order to create a disk group on the local ASM instance, you first need to identify the eligible ASM disks discovered by the instance. The following SQL statements are useful to list the eligible ASM disks:
export ORACLE_SID=+ASM1
export ORACLE_HOME=$ASM_HOME (in 11gR1)
export ORACLE_HOME=$ORA_GRID_HOME (in 11gR2)
SQL> SELECT path,header_status,mode_status,total_mb
FROM v$asm_disk WHERE header_status IN ('CANDIDATE','FORMER','PROVISIONED');
The query lists each eligible disk's path, header status, mode status, and size. Once you have listed the disks, you can use the following SQL statements to create a new disk group with different mirroring options.
The following SQL statement creates a new disk group named DATA with one ASM disk, sde1, located under /dev/. The EXTERNAL REDUNDANCY clause in the SQL statement indicates that you are relying on the storage-level mirroring (protection) option rather than Oracle-provided mirroring:
CREATE DISKGROUP data EXTERNAL REDUNDANCY DISK '/dev/sde1';
The following SQL statement creates a new disk group named DATA that consists of two failure groups with one disk to each failure group, using the Oracle-provided NORMAL redundancy (a two-way mirroring) level:
CREATE DISKGROUP data NORMAL REDUNDANCY FAILGROUP fgp1 DISK '/dev/sde1', FAILGROUP fgp2 DISK '/dev/sdf1';
The following SQL statement creates a new disk group named DATA with three failure groups with one disk to each failure group, using the Oracle-provided highest level of redundancy, a three-way mirroring-level option:
CREATE DISKGROUP data HIGH REDUNDANCY
FAILGROUP fgp1 DISK '/dev/sde1', FAILGROUP fgp2 DISK '/dev/sdf1', FAILGROUP fgp3 DISK '/dev/sdg1';
Note: Failure groups hold the redundant copies of every extent. When the mirroring option is omitted, ASM applies the NORMAL REDUNDANCY (two-way mirroring) level by default.
After a disk group is successfully built, it is automatically mounted on the local instance. If a server parameter file (SPFILE) is being used, the name of the disk group is automatically added to the ASM_DISKGROUPS initialization parameter so that the disk group is mounted automatically on ASM instance restarts. To make the disk group available on other ASM instances in a cluster, you simply need to mount the disk group by running the ALTER DISKGROUP data MOUNT statement (ensure the same set of disks is accessible on the other nodes).
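For example, once the DATA disk group exists and the same disks are visible on a second node, you can mount it from that node's ASM instance and then confirm the mount state:
ALTER DISKGROUP data MOUNT;
SELECT name,state FROM v$asm_diskgroup;
The second query should report the DATA disk group with a MOUNTED state on that instance.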
My Journey as Oracle DBA+DevOps Engineer
Wednesday, December 31, 2014
ASM disk group administration-Creating a disk group
Tuesday, December 30, 2014
ASM instance startup/shutdown
Managing an ASM instance is no different from managing a typical RDBMS database instance. The ASM instance can be managed either with a set of SQL*Plus commands or with the cluster-aware SRVCTL utility. Nevertheless, it is strongly recommended that you use the SRVCTL utility to manage (start/stop) the ASM instance in a RAC environment.
The ASM instance can be started in NOMOUNT, MOUNT, or RESTRICTED mode with the STARTUP command at the SQL*Plus prompt. During planned maintenance on an ASM instance, you can open it in RESTRICTED mode to prevent any connections from the database instances. When the ASM instance starts normally, it first discovers the disks and then mounts all the existing disk groups on the local instance.
To shut down the local ASM instance, you can use the various options available with the SHUTDOWN command at the SQL*Plus prompt. The supported options are NORMAL, IMMEDIATE, TRANSACTIONAL, and ABORT. As mentioned previously, you can also use the SRVCTL utility to bring down the ASM instance. The following examples demonstrate how to start up and shut down an ASM instance using the SQL*Plus and SRVCTL utilities:
export ORACLE_SID=+ASM1
export ORACLE_HOME=$ASM_HOME (in 11gR1)
export ORACLE_HOME=$ORA_GRID_HOME (in 11gR2)
sqlplus / as sysasm
SQL> SHUTDOWN [NORMAL|IMMEDIATE|TRANSACTIONAL|ABORT]
srvctl stop asm -n raclinux1 -o [normal|immediate|transactional|abort]
The srvctl command stops the ASM instance on the raclinux1 node; you can use any one of the listed shutdown options.
export ORACLE_SID=+ASM1
export ORACLE_HOME=$ASM_HOME (in 11gR1)
export ORACLE_HOME=$ORA_GRID_HOME (in 11gR2)
sqlplus / as sysasm
SQL> STARTUP [NOMOUNT|MOUNT|RESTRICT]
srvctl start asm -n raclinux1 -o [nomount|mount|restrict]
The srvctl command starts the ASM instance on the raclinux1 node with the chosen option. When you start the instance in NOMOUNT mode, the ASM instance starts without mounting the existing disk groups. When the instance is started in RESTRICT mode, no database instance can attach to the ASM instance, and all the disk groups are mounted in restricted mode too.
Ensure that the database instances currently associated with the local ASM instance are stopped before shutting down the local ASM instance; otherwise the shutdown fails with an error and the ASM instance remains open. Alternatively, you can use the ABORT option to forcefully shut down the currently connected database instances before shutting down the ASM instance. It is also strongly recommended to dismount any existing Oracle ASM Cluster File System (ACFS) filesystems first to avoid application I/O errors.
Note: If the voting disk and OCR files are placed in a disk group, you will not be able to stop the ASM instance on its own; to stop the ASM instance, you need to stop the entire cluster.
Friday, December 26, 2014
ASM dynamic views
To manage and monitor the ASM instance and its primary components (disks and disk groups) effectively, you need to know the ASM-specific dynamic views and how to query them to extract useful information about disks, disk groups, and so on. This section will help you understand the use of the ASM-specific dynamic views.
There are about 19 dynamic views available as of 11g R2, and each provides different helpful information. All the ASM dynamic view names are prefixed with V$ASM_. In the following section, we are going to focus on a handful of these views.
V$ASM_DISK
When an ASM instance completes the disk discovery operation by reading the disk header information, all disks (used and usable) are listed in the V$ASM_DISK view. Each individual disk has a row in the V$ASM_DISK dynamic view containing very useful information. The PATH column specifies the disk's name and location. The HEADER_STATUS column, in most cases, contains one of the following three values:
• CANDIDATE: Indicates that the unused disks are ready for use.
• FORMER: Indicates that the disk was formerly part of a disk group and is now ready for use.
• MEMBER: Indicates that the disk is currently part of an active disk group.
Apart from the preceding values, the view also contains other useful information about the disks, such as total size, free size, physical reads, redundancy level, and so on. On the flip side, every time you run a query against this view, ASM initiates a disk discovery operation for new disks, reading all disk header information. Querying this view can therefore be an expensive operation at times and can impact performance.
The following SQL statements demonstrate some useful queries against the view. Before running them, ensure you are connected to an ASM instance through SQL*Plus with the SYSASM privilege:
SELECT path,header_status,total_mb FROM v$asm_disk WHERE header_status in ('CANDIDATE','FORMER','PROVISIONED');
The preceding query displays information about the disks that are eligible for use.
V$ASM_DISKGROUP
After a disk group is successfully created in the local ASM instance, the disk group summary becomes visible in the V$ASM_DISKGROUP view. Each disk group maintains a row in the view with important information, such as disk group number and name, total disk group size, used space, free space, redundancy type, compatibility, and mount state. Running a query against this view is likely to have a similar impact to querying the V$ASM_DISK view. The following SQL command extracts each mounted disk group's name, state, total size, and usable free space:
SELECT name,state,total_mb,usable_file_mb FROM v$asm_diskgroup;
V$ASM_OPERATION
V$ASM_OPERATION is one of the most useful views; it displays a row for each long-running operation in the ASM instance. For example, when a disk is being dropped from or added to an existing disk group, ASM must initiate and complete a rebalance operation before the disk is released. The view presents useful details, such as the amount of work already completed and the estimated time (in minutes) required to complete the operation, which helps you understand how long the operation will take.
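For instance, while a rebalance is in flight you can track its progress with a query along the following lines (SOFAR and EST_WORK are expressed in allocation units):
SELECT group_number,operation,state,power,sofar,est_work,est_minutes FROM v$asm_operation;
An empty result set simply means no long-running operation is currently active.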
V$ASM_DISK_STAT
Although the V$ASM_DISK and V$ASM_DISK_STAT views display nearly identical information, querying the V$ASM_DISK_STAT view is less expensive than querying V$ASM_DISK. On the flip side, this view doesn't display details about new disks on the system that the ASM instance has yet to discover. Because querying the view is cheaper, it is strongly recommended that you use it to display information and read/write statistics about the existing disks.
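For example, the following query returns cached read/write statistics for the existing disks without triggering a fresh disk discovery:
SELECT path,reads,writes,read_errs,write_errs FROM v$asm_disk_stat;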
V$ASM_DISKGROUP_STAT
The V$ASM_DISKGROUP_STAT view displays statistical information about the mounted disk groups in the ASM instance. Unlike the V$ASM_DISKGROUP view, a query against this view doesn't trigger new disk discovery operations and is therefore less expensive in terms of performance. It is recommended that you use this view to display existing disk group information and statistics about the disk groups.
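A typical space check against the cached disk group statistics might look like this:
SELECT name,state,total_mb,free_mb,usable_file_mb FROM v$asm_diskgroup_stat;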
V$ASM_CLIENT
When the V$ASM_CLIENT view is queried in the ASM instance, it displays the information about the database instances that are using the disk groups mounted and managed by the ASM instance.
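For example, to see which database instances are currently served by the local ASM instance and which disk groups they use:
SELECT group_number,instance_name,db_name,status FROM v$asm_client;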
ASM background processes
In addition to the typical set of instance background processes, an ASM instance comes with a few additional background processes that assist it in performing its work. Understanding the role played by each of these ASM-specific background processes, and how they help the ASM instance manage and carry out its functionality, is therefore helpful to DBAs. In this section, we are going to cover the most useful ASM-specific background processes and the part each of them plays:
ASM instance configuration and management
Starting with 10g R1, Oracle supports two types of instances: RDBMS and ASM. An ASM instance is a special kind of instance with only a System Global Area (SGA), which typically consists of memory components such as the Shared Pool, Large Pool, Free Memory, and ASM Cache, plus the set of default background processes along with some additional ASM-specific background processes. The ASM instance doesn't hold any physical database structure, and the instance name typically starts with +ASM; in a cluster environment, the instances are generally named +ASMn (where n represents the instance number). The INSTANCE_TYPE initialization parameter controls the role played by these two instance types. The ASM instance configuration requires only a handful of initialization parameters. These parameters are discussed in the following section.
ASM instances efficiently support both non-RAC and RAC databases. Only one ASM instance is required per node, irrespective of the number of databases or instances running on that node. In a RAC environment, you configure one ASM instance per node, and the ASM instances across the cluster communicate with each other over the interconnect network. The database instances interact with the ASM instance to manage the database datafiles. ASM instances typically listen on port 1521 by default.
ASM initialization parameters
In this section, we are going to summarize the essential ASM instance initialization parameters, their usage, and recommended values:
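As an illustration only, a minimal set of ASM initialization parameters in a parameter file might look like the following; the disk path and disk group names here are assumptions, not recommendations:
INSTANCE_TYPE=ASM
ASM_DISKSTRING='/dev/sd*'
ASM_DISKGROUPS='DATA','FRA'
ASM_POWER_LIMIT=1
INSTANCE_TYPE identifies the instance as an ASM instance, ASM_DISKSTRING restricts the disk discovery paths, ASM_DISKGROUPS lists the disk groups to mount automatically at startup, and ASM_POWER_LIMIT sets the default rebalance power.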
ASM disk group
A disk group is a logical container for one or more ASM disks and is the highest-level data structure in ASM. When a database is configured to use a disk group, the disk group becomes the default location for its datafiles. A disk group can hold various database file types, such as datafiles, online redo logs, archived logs, RMAN backup sets, OCR and voting files (in 11g R2), and more. ASM also provides the flexibility of a single disk group being used by multiple ASM instances and databases across a cluster.
After a disk group is successfully created and mounted for the first time in the ASM instance, the name of the disk group is automatically added to the ASM_DISKGROUPS initialization parameter so that the disk group is mounted at ASM instance restarts.
In general, when a datafile is created in a disk group, its extents are striped and distributed evenly across the available disks of the disk group. Optionally, you can also set one of the following mirroring levels on the disk group to protect data integrity by storing redundant copies of extents in separate failure groups, which protects against disk outages:
• External redundancy: Relies on the STORAGE (RAID)-level mirroring redundancy option to protect the data
• Normal redundancy: Provides a default two-way mirroring option
• High redundancy: Provides a three-way mirroring redundancy option of ASM files
As of 11g R2, the following limits have been imposed on the ASM instance:
• A maximum of 63 disk groups in a storage system
• 1 million files per disk group
The following diagram illustrates the structure of a disk group with three disks assigned to it:
ASM disk
A disk is the primary element of an ASM instance. A disk can be built either from a Logical Unit Number (LUN) presented by a storage array, a disk partition, an entire disk, a logical volume, or a file on a Network File System (NFS). An ASM instance discovers the disks within the paths specified by the ASM_DISKSTRING initialization parameter. Whenever a new disk is discovered by the ASM instance, the header status of the disk is set to CANDIDATE and the disk becomes available to the instance.
After the disks are successfully discovered by the local ASM instance, they appear in its V$ASM_DISK dynamic view and are ready to be used to build a new ASM disk group, or to be added to any pre-existing ASM disk group. However, when no disk paths are specified with the ASM_DISKSTRING initialization parameter, ASM by default looks in the following OS-specific paths to discover the disks:
In a cluster environment, ensure that all ASM disks are visible across all nodes and that each node has the same permissions (660) and ownership (oracle:dba) on them to avoid problems. Oracle also strongly advises using disks of the same size in a disk group to maintain the disk group balance. On the other hand, you have the flexibility to define a different naming convention for a disk across the nodes in a cluster.
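To check the disk sizes within each group on a node, a query along these lines against V$ASM_DISK can help; uneven TOTAL_MB values within a group indicate an unbalanced disk group:
SELECT group_number,name,path,total_mb FROM v$asm_disk ORDER BY group_number,name;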
The following limits, as of 11g R2, have been imposed on ASM disks:
• A maximum of 10,000 disks
• A maximum of 2 TB of storage per ASM disk (4 PB per ASM disk with Exadata)