
Setup iSCSI Target & Initiator on CentOS 6 with CHAP Authentication


What is iSCSI?

iSCSI is a network protocol that carries SCSI commands over the TCP/IP stack, allowing hosts to perform block I/O operations on a remote device as if it were locally attached storage. With iSCSI we have two different basic concepts:

iSCSI initiator: we can think of it as the SCSI "client", and it can connect to the server in two ways:

Software initiator: normally implemented as a kernel module that uses the network interface and emulates SCSI devices. This is the most common implementation.

Hardware initiator: uses dedicated hardware to implement iSCSI; the workload of iSCSI processing is offloaded onto this hardware.

iSCSI target: a reference to a storage resource located on an iSCSI server.

The servers (targets) expose logical units, or LUNs; a LUN is a number that identifies a logical storage unit on the server, and the client (initiator) negotiates access to a specific LUN with the target. In an iSCSI network, each iSCSI element has a unique and permanent iSCSI name and can be assigned a particular address for access. iSCSI elements are normally named following the IQN (iSCSI Qualified Name) convention.

IQN structure:

* the literal string iqn.
* a date (yyyy-mm): year and month.
* the reversed domain name of the naming authority (com.linux-links, com.solutions-koenig, com.example).
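
For example, the target name used later in this post, iqn.2014-01.com.linux-links.target01, combines the literal iqn, the date 2014-01, the reversed domain com.linux-links, and target01 as a string identifying this particular target.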

iSCSI is the most commonly used protocol for SANs (storage area networks) because it is cheaper than other network storage protocols such as FCoE (Fibre Channel over Ethernet). We will talk about FCoE in the next post.


SANs are widely used to make storage devices accessible to servers so that the devices appear to the OS like locally attached devices.

In my scenario I'll configure an iSCSI target on CentOS 6.5 that shares a disk array device, and another server as an iSCSI initiator that will map this device as local storage.


Configuring the iSCSI Target:

Install the software package:

# yum -y install scsi-target-utils

Edit the iSCSI target configuration. The directives must live inside a <target> block named with the target's IQN: backing-store defines the device to share, initiator-address restricts which initiator may connect, and incominguser sets the CHAP username and password:

# vim /etc/tgt/targets.conf


<target iqn.2014-01.com.linux-links.target01>
    backing-store /dev/sdb1
    initiator-address 192.168.50.133
    incominguser vishvendra vishvendra@123
</target>
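
Later, once tgtd is running, any change to targets.conf can be applied without restarting the daemon by using the tgt-admin helper from the same package:

# tgt-admin --update ALL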

Start the iSCSI target daemon and configure it to start at boot:

# /etc/init.d/tgtd start

# chkconfig tgtd on

If iptables is running on the server, we need to allow iSCSI traffic through it by adding the following rule (iSCSI uses TCP port 3260):

# iptables -A INPUT -p tcp --dport 3260 -j ACCEPT

# service iptables save
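
To confirm that the rule is active, list the INPUT chain and look for port 3260:

# iptables -L INPUT -n | grep 3260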

Check the iSCSI target configuration:

# tgtadm --mode target --op show

Target 1: iqn.2014-01.com.linux-links.target01
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 5363 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/sdb1
            Backing store flags:
    Account information:
        vishvendra
    ACL information:
        192.168.50.133
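
Note how the Account and ACL sections reflect the incominguser and initiator-address directives from targets.conf: only the initiator at 192.168.50.133 presenting the CHAP user vishvendra will be allowed to log in to this target.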


Configuring the iSCSI Initiator

Install the software package:

# yum -y install iscsi-initiator-utils

Configure the IQN name for the initiator:

# vim /etc/iscsi/initiatorname.iscsi

InitiatorName=iqn.2012-10.net.cpd:san.initiator01
InitiatorAlias=initiator01
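
If you prefer a generated name, the iscsi-initiator-utils package also ships the iscsi-iname utility, which prints a random well-formed IQN that can be pasted into the file above:

# iscsi-iname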


Edit the iSCSI initiator configuration:

# vim /etc/iscsi/iscsid.conf

# To manually startup the session set to "manual". The default is automatic.
node.startup = automatic

# To enable CHAP authentication
node.session.auth.authmethod = CHAP

# To set a CHAP username and password for initiator
node.session.auth.username = vishvendra
node.session.auth.password = vishvendra@123
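
Note that the username and password here must match the incominguser line we configured in targets.conf on the target; otherwise the CHAP authentication will fail and the login will be rejected.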

Start iSCSI initiator daemon:

# /etc/init.d/iscsid start
# chkconfig iscsid on
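
Now discover the targets available on the iSCSI server (master.example.com, which resolves to 192.168.50.132 in this setup) and log in to the discovered target: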

[root@agent ~]# iscsiadm -m discovery -t st -p master.example.com
192.168.50.132:3260,1 iqn.2014-01.com.linux-links.target01

[root@agent ~]# iscsiadm -m node -T iqn.2014-01.com.linux-links.target01 -p master.example.com -l
Logging in to [iface: default, target: iqn.2014-01.com.linux-links.target01, portal: 192.168.50.132,3260] (multiple)
Login to [iface: default, target: iqn.2014-01.com.linux-links.target01, portal: 192.168.50.132,3260] successful.
[root@agent ~]#
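
The kernel messages confirm that the target's LUN has been attached to the initiator as a new local SCSI disk (/dev/sdb):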

[root@agent ~]# tailf /var/log/messages

Feb 23 05:09:38 agent kernel: scsi 36:0:0:0: RAID              IET      Controller       0001 PQ: 0 ANSI: 5
Feb 23 05:09:38 agent kernel: scsi 36:0:0:0: Attached scsi generic sg3 type 12
Feb 23 05:09:38 agent kernel: scsi 36:0:0:1: Direct-Access     IET      VIRTUAL-DISK     0001 PQ: 0 ANSI: 5
Feb 23 05:09:38 agent kernel: sd 36:0:0:1: Attached scsi generic sg4 type 0
Feb 23 05:09:38 agent kernel: sd 36:0:0:1: [sdb] 10474317 512-byte logical blocks: (5.36 GB/4.99 GiB)
Feb 23 05:09:38 agent iscsid: Connection4:0 to [target: iqn.2014-01.com.linux-links.target01, portal: 192.168.50.132,3260] through [iface: default] is operational now
Feb 23 05:09:38 agent kernel: sd 36:0:0:1: [sdb] Write Protect is off
Feb 23 05:09:38 agent kernel: sd 36:0:0:1: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 23 05:09:38 agent kernel: sdb: unknown partition table
Feb 23 05:09:38 agent kernel: sd 36:0:0:1: [sdb] Attached SCSI disk
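
Now partition the new disk with fdisk (create a primary partition and write the partition table), build an ext4 filesystem on the new partition, and use blkid to obtain the filesystem UUID that we will reference from /etc/fstab: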


[root@agent ~]# fdisk /dev/sdb
[root@agent ~]# mkfs.ext4 /dev/sdb1

[root@agent ~]# blkid /dev/sdb1

[root@agent ~]# vim /etc/fstab

UUID="c8e7a801-d23b-40de-a5fb-a480c80c7bfd"  /disk      ext4 _netdev    0 0
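
If the mount point does not exist yet, create it before mounting (the fstab entry above mounts the filesystem on /disk):

# mkdir /disk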

With the mount option "_netdev", the "netfs" init script is responsible for mounting this device. Without this option, Linux would try to mount the device before network support is loaded, and the mount would fail. We have to check that netfs is enabled in the appropriate runlevels:

[root@agent ~]# chkconfig --list netfs
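
If netfs is not enabled for the current runlevels, enable it:

# chkconfig netfs on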
[root@agent ~]# mount -a
[root@agent ~]# mount


This post is also available at our website: www.linux-links.com
