Linux category archive

Author: human

Parted: working with disks in Linux

How to create a new MBR partition table and a FAT32 partition that takes up all the available space.

<pre>parted -s /dev/sdd mklabel msdos mkpart primary fat32 0% 100%</pre>
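
Note that mkpart only creates the partition and flags its type; the FAT32 filesystem itself still has to be created. A minimal follow-up sketch, assuming the new partition shows up as /dev/sdd1:

<pre>
mkfs.fat -F 32 /dev/sdd1    # or mkfs.vfat -F 32 /dev/sdd1
</pre>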

Author: human

vSAN cluster in VMware

Like Microsoft, VMware has a Software-Defined Storage solution, called vSAN, which is currently at version 6.2. This solution aggregates local storage devices such as mechanical disks or SSDs into a highly available datastore. There are two deployment models: the hybrid solution and the all-flash solution.

In the hybrid solution, flash devices are used for the cache tier and mechanical disks (SAS or SATA) for the capacity tier. In the all-flash solution, only flash devices are used for both cache and capacity. The disks, whether cache or capacity, are aggregated into disk groups. Each disk group can contain 1 cache device and up to 7 capacity devices, and each host can handle at most 5 disk groups (35 capacity devices per host).

In this topic I will describe how to implement a hybrid solution in a three-node cluster. For the demonstration, the ESXi nodes are virtual machines hosted by VMware Workstation. Unfortunately, Hyper-V on Windows Server 2016 does not handle ESXi very well: only IDE controllers and legacy network adapters are supported, so I can't use my Storage Spaces Direct lab to host a vSAN cluster 🙂.

VMware vSAN lab overview

To run this lab, I have installed VMware Workstation 12.x Pro on a traditional machine (a gaming computer) running Windows 10 version 1607. Each ESXi virtual machine is configured as follows:

  • ESXi 6.0 update 2
  • 2x CPU with 2x Cores each
  • 16 GB of memory (6 GB required, more than 8 GB recommended)
  • 1x OS disk (40GB)
  • 15x hard disks (10GB each)

Then I deployed vCenter Server 6.0 Update 2 on a single Windows Server 2012 R2 virtual machine.

I have deployed the following networks:

  • Management: 10.10.0.0/24 (VLAN ID: 10) – Native VLAN
  • vSAN traffic: 10.10.101.0/24 (VLAN ID: 12)
  • vMotion traffic: 10.10.102.0/24 (VLAN ID: 13)

In this topic, I assume that you have already installed your ESXi hosts and vCenter Server. I also assume that each server is reachable on the network and that you have created at least one datacenter in the inventory. All the screenshots have been taken from the vSphere Web Client.

Add ESXi host to the inventory

First of all, connect to your vSphere Web Client and navigate to Hosts and Clusters. As you can see in the following screenshot, I have already created several datacenters and folders. To add the host to the inventory, right click on a folder and select Add Host.

Next specify the host name or IP address of the ESXi node.

Then specify the credentials used to connect to the host. Once the connection is established, a permanent account is created and used for management instead of the specified account.

Then select the license to assign to the ESXi node.

On the next screen, choose whether or not you want to prevent users from logging in directly to this host.

To finish, choose the VM location.

Repeat these steps to add more ESXi nodes to the inventory. For vSAN, I will add two additional nodes.

Create and configure the distributed switch

When you buy a vSAN license, support for a single distributed switch is included. To carry the vSAN, vMotion and management traffic, I'm going to create a distributed switch with three VMKernel adapters. To create the distributed switch, navigate to Networking, right-click on VM Network in a datacenter and choose New Distributed Switch as below.

Specify a distributed switch name and click on Next.

Choose a distributed switch version. Because I have only ESXi version 6.0, I choose the latest version of the distributed switch.

Next change the number of uplinks as needed and specify the name of the port group. This port group will contain VMKernel adapters for vMotion, vSAN and management traffic.

Once the distributed switch is created, click on it and navigate to Manage and Topology. Click the button circled in red in the screenshot below to add physical NICs to the uplink port group and to create the VMKernel adapters.

In the first screen of the wizard, select Add hosts.

Specify each host name and click on Next.

Leave the default selection and click on Next. The selected tasks will add the physical adapters to the uplink port group and create the VMKernel adapters.

In the next screen, assign the physical adapter (vmnic0) to the uplink port group of the distributed switch which has just been created. Once you have assigned all physical adapters, click on Next.

On the next screen, I’ll create the VMKernel adapters. To create them, just click on New adapter.

Select the port group associated to the distributed switch and click on Next.

Then select the purpose of the VMKernel adapter. For this one I choose Virtual SAN traffic.

Then specify an IP address for this virtual adapter. Click on Next to finish the creation of VMKernel adapter.

I then create another VMKernel adapter for the vMotion traffic.

Repeat the creation of VMKernel adapters for each ESXi host. At the end, you should have something like below:

Before applying the configuration, the wizard analyzes the impact. Once everything is OK, click on Next.

When the distributed switch is configured, it looks like this:
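
You can also verify the result from an ESXi shell. A quick sketch, assuming the vSAN VMKernel adapter ended up as vmk1 and that another node's vSAN interface uses 10.10.101.12 (both the vmk number and the peer IP are assumptions based on the lab addressing above):

<pre>
esxcli network ip interface ipv4 get      # list the VMKernel adapters and their IP addresses
vmkping -I vmk1 10.10.101.12              # test vSAN connectivity towards another node
</pre>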

Create the cluster

Now that the distributed switch and the VMKernel adapters are set, we can create the cluster. Go back to Hosts and Clusters in the navigator, right-click on your folder and select New Cluster.

Give a name to your cluster and, for the moment, just turn on Virtual SAN. I choose manual disk claiming because I have to set manually which disks are flash and which are HDD. This is because the ESXi nodes run in VMs and all their hard disks are detected as flash.

Next, move the nodes into the cluster (drag and drop). Once all nodes are in the cluster, you should see an alert saying that there is no capacity. This is because we have selected manual claiming and no disks have been claimed for vSAN yet.
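
If you want to double-check from a node that vSAN is enabled, you can query it from an ESXi shell:

<pre>
esxcli vsan cluster get     # shows the vSAN cluster state, UUID and member count
</pre>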

Claim storage devices into vSAN

To claim disks, select the cluster where vSAN is enabled and navigate to Disk Management. Click the button circled in red in the screenshot below:

As you can see in the screenshot, all the disks are marked as flash. In this topic I want to implement a hybrid solution. The vSphere Web Client offers the option to manually mark a disk as HDD. This exists because in production some hardware is not detected correctly; in that case you can set it manually. For this lab, I leave three disks as flash and mark 12 disks as HDD on each node. With this configuration, I will create three disk groups, each composed of one cache device and four capacity devices.
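
The same change can also be made from the command line. The sketch below marks a device as non-SSD with a SATP claim rule; the device identifier is a hypothetical example, so look up yours first with the list command:

<pre>
esxcli storage core device list                              # find the device identifier
esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device mpx.vmhba1:C0:T1:L0 --option "disable_ssd"
esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T1:L0  # reapply the claim rules to the device
</pre>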

Then you have to claim the disks. For each node, select the three flash disks and claim them for the cache tier. All the disks that you have marked as HDD can be claimed for the capacity tier.

Once the claiming wizard is finished, you should have three disk groups per node.
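
To verify the disk groups from a node itself, you can list the claimed devices from an ESXi shell:

<pre>
esxcli vsan storage list    # lists the claimed devices and the disk group they belong to
</pre>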

If you want to assign the license to your vSAN, navigate to Licensing and select the license.

Final configuration

Now that vSAN is enabled, you can turn on vSphere HA and vSphere DRS to distribute virtual machines across the nodes.

Some vSphere HA settings must be changed in a vSAN environment. You can read these recommendations in this post.

VM Storage policy

vSAN relies on VM Storage Policies to configure the storage capabilities. The configuration is applied on a per-VM basis through the VM Storage Policy. We will discuss VM Storage Policies in another topic, but for the moment just verify that the Virtual SAN Default Storage Policy exists in the VM Storage Policies store.
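
As a quick check from an ESXi shell, the default policy values applied by vSAN can also be displayed; a sketch, assuming the vsan policy namespace is available on your ESXi build:

<pre>
esxcli vsan policy getdefault   # shows the default vSAN policy values per object class
</pre>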

Conclusion

In this topic we have seen how to create a vSAN cluster. There is no real challenge in this, but it is just the beginning. To use vSAN you have to create VM Storage Policies, and some of the capability concepts are not easy. We will discuss VM Storage Policies later. If you are interested in the equivalent Microsoft solution, you can read this whitepaper.

Author: human

Granting read permissions to a user in PostgreSQL. ERROR: permission denied for relation

-- create the role with login and a password
CREATE ROLE newuser2 WITH LOGIN ENCRYPTED PASSWORD 'newpass';
-- make tables created later in schema public readable by the role
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO newuser2;
-- allow the role to use the schema and read the existing sequences and tables
GRANT USAGE ON SCHEMA public TO newuser2;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO newuser2;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO newuser2;
-- optional: superuser bypasses all permission checks (far broader than read-only access)
ALTER ROLE newuser2 WITH SUPERUSER;
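
To verify that the grants work, you can connect as the new role and run a simple query; a sketch where mydb and mytable are hypothetical names:

<pre>
psql -h localhost -U newuser2 -d mydb -c "SELECT * FROM public.mytable LIMIT 1;"
</pre>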

Author: human

Ceph storage on 3 nodes

Ceph is an open source storage platform which is designed for modern storage needs. Ceph is scalable to the exabyte level and designed to have no single point of failure, making it ideal for applications which require highly available, flexible storage.

The below diagram shows the layout of an example 3 node cluster with Ceph storage. Two network interfaces can be used to increase bandwidth and redundancy. This can help to maintain sufficient bandwidth for storage requirements without affecting client applications.

[Diagram: ceph-infrastructure (example 3 node Ceph cluster layout)]

This example will create a 3 node Ceph cluster with no single point of failure to provide highly redundant storage. I will refer to three host names which are all resolvable via my LAN DNS server: ceph1, ceph2 and ceph3, all on the jamescoyle.net domain. Each of these nodes has two disks configured: one which runs the Linux OS and one which is going to be used for the Ceph storage. The storage layout is exactly the same on each host: /dev/vda is the root partition containing the OS install and /dev/vdb is an untouched disk which will be used for Ceph.
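
You can confirm the layout on each node by listing the disks, for example as root:

<pre>
fdisk -l
</pre>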

Before getting started with setting up the Ceph cluster, you need to do some preparation work. Make sure the following prerequisites are met before continuing the tutorial.

You will need to perform the following steps on all nodes in your Ceph cluster. First you will add the Ceph repository and download its key so that the latest Ceph packages are available. Create a new file under /etc/apt/sources.list.d/ (for example ceph.list).

Add the repository entry below, then save and close the file.
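
The exact line depends on the Ceph release and your distribution; as a sketch, assuming a Debian/Ubuntu node, the Ceph download server, the luminous release and the xenial codename (adapt the release and codename to your setup):

<pre>
# /etc/apt/sources.list.d/ceph.list
deb https://download.ceph.com/debian-luminous/ xenial main
</pre>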

Download the key from Ceph’s git page and install it.
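
The key used to live on Ceph's git web interface; a sketch using the current key location:

<pre>
wget -q -O- https://download.ceph.com/keys/release.asc | sudo apt-key add -
</pre>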

Update the local repository cache.
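
On a Debian/Ubuntu node this is simply:

<pre>
sudo apt-get update
</pre>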

Note: if apt-get update reports a GPG or NO_PUBKEY error, then the above wget command has failed; it could be because the URL of the Ceph key has changed.

Run the following commands on just one of your Ceph nodes. I’ll use ceph1 for this example. Update your local package cache and install the ceph-deploy command.
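
Assuming a Debian/Ubuntu node with the repository added above, that is:

<pre>
sudo apt-get update
sudo apt-get install ceph-deploy
</pre>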

Create the first Ceph storage cluster and specify the first server in your cluster by either hostname or IP address for [SERVER].

For example
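
As a sketch, using the classic ceph-deploy syntax and this lab's first node (newer ceph-deploy releases may differ slightly):

<pre>
ceph-deploy new [SERVER]    # generic form
ceph-deploy new ceph1       # this lab's first node
</pre>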

Now deploy Ceph onto all nodes which will be used for the Ceph storage.  Replace the below [SERVER] tags with the host name or IP address of your Ceph cluster including the host you are running the command on. See this post if you get a key error here.

For example
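
Under the same assumptions about the ceph-deploy syntax, and using this lab's host names:

<pre>
ceph-deploy install [SERVER1] [SERVER2] [SERVER3]   # generic form
ceph-deploy install ceph1 ceph2 ceph3               # the three lab nodes
</pre>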

Install the Ceph monitor and accept the key warning as keys are generated. So that you don’t have a single point of failure, you will need at least 3 monitors. You must also have an uneven number of monitors – 3, 5, 7, etc. Again, you will need to replace the [SERVER] tags with your server names or IP addresses.

Example
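
A sketch with the classic syntax; newer ceph-deploy releases also offer 'mon create-initial' to create all monitors defined in the initial configuration:

<pre>
ceph-deploy mon create [SERVER1] [SERVER2] [SERVER3]   # generic form
ceph-deploy mon create ceph1 ceph2 ceph3               # one monitor per lab node
</pre>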

Now gather the keys of the deployed installation, just on your primary server.

Example
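
A sketch, gathering the keys from the first node:

<pre>
ceph-deploy gatherkeys [SERVER]   # generic form
ceph-deploy gatherkeys ceph1      # this lab's primary node
</pre>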

It’s now time to start adding storage to the Ceph cluster. The fdisk output at the top of this page shows that the disk I’m going to use for Ceph is /dev/vdb, which is the same for all the nodes in my cluster. Using Ceph terminology, we will create an OSD based on each disk in the cluster. We could have used a file system location instead of a whole disk but, for this example, we will use a whole disk. Use the below command, changing [SERVER] to the name of the Ceph server which houses the disk and [DISK] to the disk representation in /dev/.

For example
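
A sketch with the older host:disk syntax; recent ceph-deploy releases use 'ceph-deploy osd create --data /dev/vdb <host>' instead:

<pre>
ceph-deploy osd create [SERVER]:[DISK]   # generic form
ceph-deploy osd create ceph1:vdb         # first node, whole /dev/vdb disk
</pre>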

If the command fails, it’s likely because you have partitions on your disk. Run the fdisk command on the disk and press d to delete the partitions and w to save the changes. For example:
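
On this lab's disk that would be:

<pre>
fdisk /dev/vdb
# at the fdisk prompt:
#   d    delete a partition (repeat for every partition)
#   w    write the changes and exit
</pre>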

Run the osd create command for all the remaining nodes in your Ceph cluster.
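
Assuming the disk is named the same on every node:

<pre>
ceph-deploy osd create ceph2:vdb
ceph-deploy osd create ceph3:vdb
</pre>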

We now have to calculate the number of placement groups (PG) for our storage pool. A storage pool is a collection of OSDs, 3 in our case, which should each contain around 100 placement groups. Each placement group will hold your client data and map it to an OSD whilst providing redundancy to protect against disk failure.

To calculate your placement group count, multiply the number of OSDs you have by 100 and divide it by the number of times each piece of data is stored. The default is to store each piece of data twice, which means that if a disk fails you won't lose the data because it is stored twice.

For our example,

3 OSDs * 100 = 300
Divided by replicas, 300 / 2 = 150

Now let's create the storage pool! Use the below command and substitute [NAME] with the name to give this storage pool and [PG] with the number we just calculated.

For example
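
A sketch of the pool creation; 'datastore' is a hypothetical pool name and 150 is the placement group count calculated above:

<pre>
ceph osd pool create [NAME] [PG]     # generic form
ceph osd pool create datastore 150
</pre>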
