iSCSI – the WWW (Wild Wild West) of SAN


Let’s go over iSCSI on Pure Storage //FlashArray and Linux; I’ll be using Ubuntu.

Let’s go over a few things so you have them out of the gate:

  • iSCSI is an acronym for Internet Small Computer Systems Interface.
  • Configuring a Linux host for iSCSI – Pure’s official documentation – link.
  • Linux (Ubuntu) documentation – iSCSI initiator – link.
  • Want a quick lesson on iSCSI? Pick one up from Eye on Tech – link.

Let’s start with the basics on the Ubuntu server. First off, install the open-iscsi and multipath-tools packages.

sudo apt install open-iscsi multipath-tools
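Both packages ship systemd services, so a quick way to confirm they installed cleanly is to check their status (service names as found on recent Ubuntu releases; iscsid may show inactive until the first session starts, which is fine):

systemctl status multipathd iscsid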

Get the IQN of your Linux host – note: you’ll need this later, so copy it.

sudo cat /etc/iscsi/initiatorname.iscsi

root@a02-30:~# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2005-03.org.open-iscsi:3e42db5b719
root@a02-30:~#

Now let’s jump over to the FlashArray. I’ll provide the CLI and show the GUI for each step.

FlashArray

CLI

  1. Create a host. Note the name of my host is a02-30; you can name it anything you want.
    • purehost create a02-30

GUI

Name your host and choose your host’s personality. For Linux, use None – more information here.

CLI

Set the IQN of the host – remember, we saved this above (ours is iqn.2005-03.org.open-iscsi:3e42db5b719).

purehost setattr --addiqnlist iqn.2005-03.org.open-iscsi:3e42db5b719 a02-30
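To confirm the IQN landed on the host object, list the host – the IQN column should now show your initiator name:

purehost list a02-30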

GUI

This is important: now you must attach a volume to the host on the FlashArray. You can use an existing volume or create a new one. For this test, I’m going to create a 3TB volume called jh_test.
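For reference, creating that volume from the CLI would look something like this (a sketch – check purevol create -h on your array for the exact syntax):

purevol create jh_test --size 3T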

CLI

Add volume to the host on the FlashArray.

purevol connect jh_test --host a02-30

GUI

Last step on the FlashArray – let’s find the ports and IP addresses we will use on the Linux host to connect to the FlashArray.

CLI

pureport list

GUI

Navigate to Settings > Network, scroll down to Ethernet, and find all the ports with an IP address; the Services column should show iSCSI.

Linux Host

  • Configure multipath
  • Discover the storage
  • Log in to the storage
  • Configure multipath and iSCSI to start on boot
  • Provision a file system
  • Mount the volume on the Linux host

At a minimum you will want to add the following to /etc/multipath.conf (you may have to create the file).

devices {
    device {
        vendor               "PURE"
        product              "FlashArray"
        fast_io_fail_tmo     10
        path_grouping_policy "group_by_prio"
        failback             "immediate"
        prio                 "alua"
        hardware_handler     "1 alua"
        max_sectors_kb       4096
    }
}

Restart multipathd using

systemctl restart multipathd.service
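If you want to verify multipathd picked up the new device stanza, you can dump its running configuration (this assumes a multipath-tools build with the show config subcommand, present on modern releases):

sudo multipathd show config | grep -A 10 '"PURE"'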

Now it’s time to discover the iSCSI storage. You’ll need the IPs you gathered from the array using pureport list. Make sure you include the port, e.g. 10.5.17.50:3260.

sudo iscsiadm -m discovery -t st -p 10.5.17.50:3260

root@a02-30:~# sudo iscsiadm -m discovery -t st -p 10.5.17.50:3260
10.6.17.50:3260,1 iqn.2010-06.com.purestorage:flasharray.630b4cac2e95652f
10.6.17.51:3260,1 iqn.2010-06.com.purestorage:flasharray.630b4cac2e95652f
10.5.17.51:3260,1 iqn.2010-06.com.purestorage:flasharray.630b4cac2e95652f
10.5.17.50:3260,1 iqn.2010-06.com.purestorage:flasharray.630b4cac2e95652f

Now that they show as discovered, let’s log in to them.

sudo iscsiadm -m node -p 10.5.17.50 --login

root@a02-30:~# sudo iscsiadm -m node -p 10.5.17.50 --login
Logging in to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.630b4cac2e95652f, portal: 10.5.17.50,3260] (multiple)
Login to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.630b4cac2e95652f, portal: 10.5.17.50,3260] successful.
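A quick sanity check that the session is actually up – list the active iSCSI sessions:

sudo iscsiadm -m session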

Use the following commands to configure the iSCSI service to start automatically on boot.

sudo iscsiadm -m node --op=update -n node.conn[0].startup -v automatic
sudo iscsiadm -m node --op=update -n node.startup -v automatic
systemctl enable open-iscsi

root@a02-30:~# sudo iscsiadm -m node --op=update -n node.conn[0].startup -v automatic
root@a02-30:~# sudo iscsiadm -m node --op=update -n node.startup -v automatic
root@a02-30:~# systemctl enable open-iscsi
Synchronizing state of open-iscsi.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable open-iscsi

Restart iSCSI services

systemctl restart iscsid.service

Now we can log in to the remaining discovered portals.

sudo iscsiadm -m node --loginall=automatic

root@a02-30:~# sudo iscsiadm -m node --loginall=automatic
Logging in to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.630b4cac2e95652f, portal: 10.6.17.50,3260] (multiple)
iscsiadm: default: 1 session requested, but 1 already present.
Logging in to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.630b4cac2e95652f, portal: 10.5.17.51,3260] (multiple)
Logging in to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.630b4cac2e95652f, portal: 10.6.17.51,3260] (multiple)
Login to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.630b4cac2e95652f, portal: 10.6.17.50,3260] successful.
Login to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.630b4cac2e95652f, portal: 10.5.17.51,3260] successful.
Login to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.630b4cac2e95652f, portal: 10.6.17.51,3260] successful.
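At this point the volume shows up once per path – sdb through sde in my case – and lsblk will list all four block devices before multipath folds them into a single dm device:

lsblk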

Let’s provision the storage on the host…

You will need to create a mount point. I’ll use /mnt/jh_test

root@a02-30:~# sudo mkdir /mnt/jh_test

Let’s get the FlashArray volume ID (the multipath WWID).

sudo multipath -ll

root@a02-30:~# sudo multipath -ll
3624a93702b211dab33424da800011454 dm-1 PURE,FlashArray
size=3.0T features='0' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  |- 11:0:0:1 sdb 8:16 active ready running
  |- 13:0:0:1 sdd 8:48 active ready running
  |- 12:0:0:1 sdc 8:32 active ready running
  `- 14:0:0:1 sde 8:64 active ready running

We’ll need to create a partition and filesystem. Let’s use the volume ID from above: 3624a93702b211dab33424da800011454

sudo fdisk /dev/mapper/3624a93702b211dab33424da800011454
n        # new partition
p        # primary
1        # partition number 1
<enter>  # accept the default first sector
<enter>  # accept the default last sector
w        # write the table and exit

root@a02-30:~# sudo fdisk /dev/mapper/3624a93702b211dab33424da800011454

Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
The size of this disk is 3 TiB (3298534883328 bytes). DOS partition table format cannot be used on drives for volumes larger than 2199023255040 bytes for 512-byte sectors. Use GUID partition table format (GPT).

Created a new DOS disklabel with disk identifier 0x7428904f.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (8192-4294967295, default 8192):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (8192-4294967295, default 4294967295):

Created a new partition 1 of type 'Linux' and of size 2 TiB.

Command (m for help): w
The partition table has been altered.
Failed to add partition 1 to system: Invalid argument

The kernel still uses the old partitions. The new table will be used at the next reboot.
Syncing disks.
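Note the warnings in that transcript: a DOS label tops out at 2 TiB, so fdisk only carved a 2 TiB partition out of the 3 TB volume, and the kernel couldn’t re-read the table without a reboot. If you want the full capacity, a GPT label is the better route. A sketch using parted, with kpartx refreshing the partition mappings on the multipath device in case udev doesn’t (-p -part keeps the -part1 naming used below):

sudo parted -s /dev/mapper/3624a93702b211dab33424da800011454 mklabel gpt mkpart primary 0% 100%
sudo kpartx -a -p -part /dev/mapper/3624a93702b211dab33424da800011454

Either way, you end up with a -part1 device under /dev/mapper.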

Now that we have a partition, let’s create the file system.

sudo mkfs.ext4 /dev/mapper/3624a93702b211dab33424da800011454-part1

root@a02-30:~# sudo mkfs.ext4 /dev/mapper/3624a93702b211dab33424da800011454-part1
mke2fs 1.45.5 (07-Jan-2020)
Discarding device blocks: done
Creating filesystem with 536869888 4k blocks and 134217728 inodes
Filesystem UUID: fed34e2e-3df0-419f-8e30-8cbcb4bd8c07
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
	102400000, 214990848, 512000000

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

Mount the filesystem at our mount point, /mnt/jh_test

sudo mount /dev/mapper/3624a93702b211dab33424da800011454-part1 /mnt/jh_test/
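That mount won’t survive a reboot on its own. For a persistent mount, an /etc/fstab entry along these lines works, using the filesystem UUID reported by mkfs above; _netdev keeps the mount from being attempted before networking and iSCSI are up:

UUID=fed34e2e-3df0-419f-8e30-8cbcb4bd8c07  /mnt/jh_test  ext4  _netdev,defaults  0  0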

That’s it.

root@a02-30:~# cd /mnt/jh_test/
root@a02-30:/mnt/jh_test# ls
lost+found
root@a02-30:/mnt/jh_test# multipath -ll
3624a93702b211dab33424da800011454 dm-1 PURE,FlashArray
size=3.0T features='0' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  |- 11:0:0:1 sdb 8:16 active ready running
  |- 13:0:0:1 sdd 8:48 active ready running
  |- 12:0:0:1 sdc 8:32 active ready running
  `- 14:0:0:1 sde 8:64 active ready running
root@a02-30:/mnt/jh_test#