How to add new multipath disks to ASM diskgroup on RHEL7/CentOS7, Oracle 11.2
Install and start multipath tools:

yum install device-mapper-multipath
mpathconf --enable
systemctl enable multipathd
systemctl start multipathd

If the system does not see the newly attached FC disks, you need to rescan the SCSI bus:

ls /sys/class/fc_host/
host1
host2

Instruct the driver to rediscover the remote ports on each host:

echo 1 >/sys/class/fc_host/host1/issue_lip
echo 1 >/sys/class/fc_host/host2/issue_lip

Wait some 10-15 seconds, then rescan and reload:

echo - - - >/sys/class/scsi_host/host1/scan
echo - - - >/sys/class/scsi_host/host2/scan
systemctl reload multipathd
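With more than two HBAs it is easier to loop over sysfs than to hardcode host1/host2. A minimal sketch, assuming the standard /sys/class/fc_host layout shown above; it only prints the rescan commands so they can be reviewed before piping the output to sh:

```shell
# Emit the LIP + SCSI rescan commands for every FC host found under
# the given sysfs root (pass /sys on a real system).
emit_rescan() {
    sys=$1
    for h in "$sys"/class/fc_host/host*; do
        [ -e "$h" ] || continue          # no FC HBAs present
        name=${h##*/}
        echo "echo 1 >/sys/class/fc_host/$name/issue_lip"
        echo "echo - - - >/sys/class/scsi_host/$name/scan"
    done
}
emit_rescan /sys
```

Remember to wait those 10-15 seconds between the issue_lip and scan steps when you run the printed commands.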

The new disk should now be visible:

multipath -ll
---snip---
mpath18 (360060160085928001e0ec4e356a2e011) dm-12 DGC,RAID 10
[size=6.8T][features=1 queue_if_no_path][hwhandler=1 emc][rw]
\_ round-robin 0 [prio=2][active]
 \_ 1:0:1:17 sdak 66:64  [active][ready]
 \_ 2:0:0:17 sdbc 67:96  [active][ready]
\_ round-robin 0 [prio=0][enabled]
 \_ 2:0:1:17 sdbu 68:128 [active][ready]
 \_ 1:0:0:17 sds  65:32  [active][ready]
---snip---

To rename this newly created LUN mpath18 (wwid 360060160085928001e0ec4e356a2e011) to something more appropriate, e.g. data2, edit /etc/multipath.conf and add the following lines to the multipaths section:

# 6.8T RAID10 DATA2
multipath {
    wwid  360060160085928001e0ec4e356a2e011
    alias data2
}
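If /etc/multipath.conf does not yet have a multipaths section, the surrounding stanza looks like this (the wwid is the one from the multipath -ll output above):

```
multipaths {
    # 6.8T RAID10 DATA2
    multipath {
        wwid  360060160085928001e0ec4e356a2e011
        alias data2
    }
}
```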

Reload multipathd again and the LUN should be visible under the name data2:

systemctl reload multipathd
multipath -l data2
data2 (360060160085928001e0ec4e356a2e011) dm-12 DGC,RAID 10
[size=6.8T][features=1 queue_if_no_path][hwhandler=1 emc][rw]
\_ round-robin 0 [prio=0][active]
 \_ 1:0:1:17 sdak 66:64  [active][undef]
 \_ 2:0:0:17 sdbc 67:96  [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 2:0:1:17 sdbu 68:128 [active][undef]
 \_ 1:0:0:17 sds  65:32  [active][undef]

ASM on Oracle Database 11.2 supports disks up to 2 TB, so we need to create smaller partitions:

parted /dev/mapper/data2
(parted) mklabel gpt
(parted) mkpart primary 0 2000G
(parted) mkpart primary 2000G 4000G
(parted) mkpart primary 4000G 6000G
(parted) mkpart primary 6000G 7516G
(parted) print
Model: Linux device-mapper (dm)
Disk /dev/mapper/data2: 7516GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size    File system  Name     Flags
 1      17.4kB  2000GB  2000GB               primary
 2      2000GB  4000GB  2000GB               primary
 3      4000GB  6000GB  2000GB               primary
 4      6000GB  7516GB  1516GB               primary
(parted) quit
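The interactive session above can also be driven non-interactively with parted -s. Since the command is destructive, this sketch only builds and prints the one-liner so the device path and ranges can be double-checked before running it as root:

```shell
# Build the non-interactive parted command for the partition layout above.
disk=/dev/mapper/data2
cmd="parted -s $disk mklabel gpt"
for range in "0 2000G" "2000G 4000G" "4000G 6000G" "6000G 7516G"; do
    cmd="$cmd mkpart primary $range"
done
echo "$cmd"
```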

Reload multipathd and the four new partitions are visible as

/dev/mapper/data2p1
/dev/mapper/data2p2
/dev/mapper/data2p3
/dev/mapper/data2p4

To let udev handle device ownership for ASM, edit /etc/udev/rules.d/90-oracle-permissions.rules and add the lines

ACTION=="add|change", ENV{DM_NAME}=="data2p1", OWNER="oracle", GROUP="dba", MODE="0660"
ACTION=="add|change", ENV{DM_NAME}=="data2p2", OWNER="oracle", GROUP="dba", MODE="0660"
ACTION=="add|change", ENV{DM_NAME}=="data2p3", OWNER="oracle", GROUP="dba", MODE="0660"
ACTION=="add|change", ENV{DM_NAME}=="data2p4", OWNER="oracle", GROUP="dba", MODE="0660"
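If more partitions may be added later, a single wildcard rule can cover them all, assuming the device-mapper names keep following the data2pN pattern:

```
ACTION=="add|change", ENV{DM_NAME}=="data2p*", OWNER="oracle", GROUP="dba", MODE="0660"
```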

Reload the udev rules and retrigger the device events to apply the ownership change (RHEL7 no longer ships start_udev), and now we're ready to mark these disks as ASM disks:

udevadm control --reload-rules
udevadm trigger
/etc/init.d/oracleasm createdisk DATA21 /dev/mapper/data2p1
Marking disk "DATA21" as an ASM disk: [ OK ]
/etc/init.d/oracleasm createdisk DATA22 /dev/mapper/data2p2
Marking disk "DATA22" as an ASM disk: [ OK ]
/etc/init.d/oracleasm createdisk DATA23 /dev/mapper/data2p3
Marking disk "DATA23" as an ASM disk: [ OK ]
/etc/init.d/oracleasm createdisk DATA24 /dev/mapper/data2p4
Marking disk "DATA24" as an ASM disk: [ OK ]
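The four createdisk calls can be generated in a loop. A sketch that prints the commands (with the DATA21..DATA24 names used above) for review before running them as root:

```shell
# Print the oracleasm createdisk command for each of the four partitions.
gen_createdisk() {
    for i in 1 2 3 4; do
        printf '/etc/init.d/oracleasm createdisk DATA2%d /dev/mapper/data2p%d\n' "$i" "$i"
    done
}
gen_createdisk
```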

If this is a RAC configuration, let the other cluster nodes rescan the disks:

/etc/init.d/oracleasm scandisks

Finally, add the new ASM disks to the existing disk group 'datadg':

alter diskgroup datadg add disk 'ORCL:DATA21' name DATA21;
alter diskgroup datadg add disk 'ORCL:DATA22' name DATA22;
alter diskgroup datadg add disk 'ORCL:DATA23' name DATA23;
alter diskgroup datadg add disk 'ORCL:DATA24' name DATA24;

As soon as you add a new disk to the diskgroup, an automatic rebalance starts. If you have a very large database and/or an undersized SGA on the ASM instance, watch out for ORA-04031 errors (unable to allocate x bytes of shared memory): the rebalance process is quite hungry for shared pool memory on the ASM instance.
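The rebalance can be watched from the ASM instance via v$asm_operation, and its speed adjusted with the rebalance power clause (1-11 on 11.2 with default diskgroup compatibility):

```sql
-- Monitor rebalance progress on the ASM instance
select group_number, operation, state, power, sofar, est_work, est_minutes
  from v$asm_operation;

-- Optionally raise the rebalance power if the maintenance window allows
alter diskgroup datadg rebalance power 4;
```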