Sometimes you need to get hardware information on your Linux desktop or server using the command line only. Of course, you can do everything via the CLI in Linux. Here are just some reminders for myself on how to do it.

Most of the information you can get using the following three commands:

  • lspci - list all PCI devices
  • lshw - list hardware
  • dmidecode - DMI table decoder

Using various options with these CLI tools, you can get everything you need.

It's not a full guide, just a list of commands to quickly identify what hardware you have. I recommend executing all commands with root privileges to get more information.

1. What CPU do I have?

1.1. I think almost everybody knows the command to get information about your CPU. Here is just a friendly reminder:

# cat /proc/cpuinfo
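The raw output is long; if you only need the model name and the number of logical CPUs, you can filter it. A small sketch, run here against a saved two-core sample so it's self-contained (point the same pipeline at the real /proc/cpuinfo on your machine):

```shell
# A trimmed, made-up sample of /proc/cpuinfo; on a real system use /proc/cpuinfo directly.
cat > /tmp/cpuinfo_sample << 'EOF'
processor : 0
model name : Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz
processor : 1
model name : Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz
EOF

# Unique CPU model name(s):
grep 'model name' /tmp/cpuinfo_sample | sort -u | cut -d ':' -f 2

# Number of logical CPUs = number of "processor" entries:
grep -c '^processor' /tmp/cpuinfo_sample   # -> 2
```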

2. How to get your GPU information?

2.1. A brief summary:

# lspci | grep -i --color 'vga\|3d\|2d'

01:00.0 VGA compatible controller: ***

2.2. Need more details about your GPU?

# lspci -v -s 01:00.0

2.3. In the case of Nvidia GPUs, you can use the `nvidia-smi` tool to get some information:


$ nvidia-smi

3. Motherboard

3.1. If you need just a model name:

# dmidecode -s baseboard-product-name

3.2. If you’re too lazy to google your motherboard’s specs, you can get them via the command below:

# dmidecode -t baseboard

4. RAM

Everybody knows the `free` utility. It’s one of the easiest ways to see how much memory is free or used. To determine what RAM you actually have, you can use the `dmidecode` or `lshw` utilities.

4.1. dmidecode:

# dmidecode --type 17
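`dmidecode --type 17` prints one “Memory Device” record per DIMM slot, which is verbose; here is a sketch filtering a record down to the interesting fields, shown on a made-up saved sample (run the same grep against the real `dmidecode --type 17` output):

```shell
# Made-up sample of one "Memory Device" record from `dmidecode --type 17`.
cat > /tmp/dmi_sample << 'EOF'
Memory Device
        Size: 8192 MB
        Type: DDR4
        Speed: 2400 MT/s
        Manufacturer: Samsung
EOF

# Keep only size, type and speed per module:
grep -E '^[[:space:]]*(Size|Type|Speed):' /tmp/dmi_sample
```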

4.2. lshw:

# lshw -short -C memory

5. How to get your HDD or SSD hardware info?

For Linux, it doesn’t matter whether you’ve got an SSD or HDD. To get their hardware information, such as serial or model number, configuration and capabilities, you can use the same software.

5.1. hdparm - get/set SATA/IDE device parameters

# hdparm -I /dev/sda

5.2. If you need to know more about your storage controllers and storage devices, just type the following command:

# lshw -class disk -class storage

6. Network Controllers Info (NIC)

Last but not least, there are two commands to get the information about your NICs.

6.1. lspci:

# lspci | egrep -i --color 'network|ethernet'

6.2.  lshw:

# lshw -class network

 


How to run CKAN tests

Published 1/4/2018 by e0ne in Python

CKAN is an open-source DMS (data management system) for powering data hubs and data portals. CKAN makes it easy to publish, share and use data. It powers datahub.io, catalog.data.gov and europeandataportal.eu/data/en/dataset among many other sites. http://ckan.org/

The open source world is interesting and challenging. Sometimes it’s easy and cheap; sometimes it’s hard to contribute and costs a lot. Looking into CKAN, I was surprised that it’s used by government portals. That’s why I tried to use it a bit. Here is my short manual, extending the official one, on how to run functional and unit tests.

Unfortunately, I don’t have enough time to make a pull request (maybe you can do it instead of me :) ), so I’m just writing a blog post now.

The main issues with running the tests are:


  • Documentation doesn’t cover all steps
  • CKAN uses outdated versions of Solr and Node.JS
  • Some bugs in tests which will be described later


All these things were found (or even reverse-engineered) in the sources and CKAN’s CI results. You can find all the needed data in the manual, on GitHub (https://github.com/ckan/ckan/blob/master/circle.yml and https://github.com/ckan/ckan/blob/master/.circleci-matrix.yml) and in the CI report for any pull request (https://github.com/ckan/ckan/pulls). I use the same versions as CI does.

 

I use the Ubuntu 16.04 LTS distro in my environment. I strongly recommend doing this inside a virtual machine or container, so it won’t break anything on your desktop or laptop.

1. Getting sources

Let’s go! First of all, you need to clone sources:

git clone https://github.com/ckan/ckan.git

 

2. Node.JS installation

For UI integration tests you need to install Node.JS v0.10.33. It won’t work on the latest version for sure.

curl -O https://nodejs.org/dist/v0.10.33/node-v0.10.33.tar.gz

tar pxzf  node-v0.10.33.tar.gz

cd node-v0.10.33

./configure && make

sudo make install

 

Now you can install the npm packages required to run the tests:

npm install -g mocha-phantomjs@3.5.0 phantomjs@~1.9.1

 

3. PostgreSQL installation

I used the version of PostgreSQL which is available in my Linux distro:

apt install postgresql

apt install postgresql-server-dev-9.5

4. Redis

It should be simple, just run:

apt install redis-server

 

5. Python dependencies

I use virtualenv wherever possible:

cd ~/ckan

virtualenv .venv && . .venv/bin/activate

Once virtualenv is ready and activated, it’s time to install python packages:

pip install -r requirement-setuptools.txt

pip install -r requirements.txt

pip install -r dev-requirements.txt

python setup.py develop 

 

6. Database configuration

Configure some environment variables to get everything working. I used the same values as in the Circle CI configuration:

export CKAN_POSTGRES_DB=ckan_test

export CKAN_POSTGRES_USER=ckan_default

export CKAN_POSTGRES_PWD=pass

export CKAN_DATASTORE_POSTGRES_DB=datastore_test

export CKAN_DATASTORE_POSTGRES_WRITE_USER=ckan_default

export CKAN_DATASTORE_POSTGRES_READ_USER=datastore_default

export CKAN_DATASTORE_POSTGRES_READ_PWD=pass

Create required databases and grant permissions:

sudo -E -u postgres ./bin/postgres_init/1_create_ckan_db.sh

sudo -E -u postgres ./bin/postgres_init/2_create_ckan_datastore_db.sh

sed -i -e 's/.*datastore.read_url.*/ckan.datastore.read_url = postgresql:\/\/datastore_default:pass@\/datastore_test/' test-core.ini

paster datastore -c test-core.ini set-permissions | sudo -u postgres psql
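The sed call above simply rewrites the ckan.datastore.read_url line in test-core.ini so the tests connect as datastore_default with the password 'pass'. A self-contained demonstration of the same substitution, run on a made-up two-line stand-in for the file:

```shell
# Minimal made-up stand-in for test-core.ini:
cat > /tmp/test-core-demo.ini << 'EOF'
ckan.datastore.write_url = postgresql://ckan_default:pass@/datastore_test
ckan.datastore.read_url = postgresql://datastore_default:wrong@/datastore_test
EOF

# The same pattern as above: replace the whole read_url line.
sed -i -e 's/.*datastore.read_url.*/ckan.datastore.read_url = postgresql:\/\/datastore_default:pass@\/datastore_test/' /tmp/test-core-demo.ini

grep read_url /tmp/test-core-demo.ini
# -> ckan.datastore.read_url = postgresql://datastore_default:pass@/datastore_test
```

Only the read_url line matches the pattern, so the write_url line is left untouched.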

 

7. Solr installation and configuration

To get the tests passing I use Solr v4.3.1. There is a bug filed about the Solr version; CKAN tests don’t work with Solr 6.x yet:

curl -O http://archive.apache.org/dist/lucene/solr/4.3.1/solr-4.3.1.tgz

tar zxvf solr-4.3.1.tgz

Now you have to start Solr. You can run it as a daemon or run it in a separate terminal:

cd solr-4.3.1/example/

java -jar start.jar

 

Solr initialization is required too:

export SOLR_HOME=~/solr-4.3.1

cd ~/ckan

./bin/solr_init/create_core.sh

 

8. Initialize test data

paster db init -c test-core.ini

 

9. Run test CKAN server

paster serve test-core.ini

10. Finally, run UI tests

mocha-phantomjs http://localhost:5000/base/test/index.html

11. Run unit and functional tests

To run all tests you need to execute the following command:

nosetests --ckan --reset-db --with-pylons=test-core.ini --nologcapture ckan ckanext

Unfortunately, you’ll have some failed tests due to https://github.com/ckan/ckan/issues/3675 :(. To successfully run all tests, you should use segments, e.g.:

nosetests --ckan --reset-db --with-pylons=test-core.ini --nologcapture --segments=abc ckan ckanext

 


Remote console via SSH with own bashrc

Published 8/9/2017 by e0ne in Linux

I think most of us have a customized bash or zsh environment. I'm too lazy to switch from bash to zsh, so I use bash on my laptop. There are some benefits: bash is still the more popular shell, so it exists on most Linux-based servers.

That's why I try to use my `.bashrc` wherever possible. But on a remote server, sometimes you don't have your own user account. That's why I've added a very simple alias for the `ssh` command to my `.bashrc` file:

function ssh() {
  # Encode the local .bashrc so it survives shell quoting on the remote side
  BASH_RC=$(base64 < "${HOME}/.bashrc")
  # `command ssh` calls the real binary (avoiding recursion); "$@" keeps arguments intact
  command ssh -t "$@" "echo \"${BASH_RC}\" | base64 --decode > /tmp/${USER}_bashrc; bash --rcfile /tmp/${USER}_bashrc; rm /tmp/${USER}_bashrc"
}

It creates a temporary file with your `.bashrc` content on the remote host and removes it once the session ends. The downside of this solution is that your ssh connection starts with some delay while your `.bashrc` is copied to the remote host. It's not an issue for me now and I'm happy with this.
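The whole trick is a base64 round-trip: encoding the rc file turns it into a single quoting-safe string that survives being embedded in the remote command. A self-contained demonstration with a throwaway file:

```shell
# Encode a small "bashrc" into a quoting-safe string, then decode it back.
printf 'alias ll="ls -la"\nexport EDITOR=vim\n' > /tmp/demo_bashrc
ENCODED=$(base64 < /tmp/demo_bashrc)

# On the remote side the alias effectively runs:
echo "${ENCODED}" | base64 --decode > /tmp/demo_bashrc_copy

diff /tmp/demo_bashrc /tmp/demo_bashrc_copy && echo 'round-trip OK'   # prints: round-trip OK
```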

The next step is to make the same with `.vimrc` with all needed plugins.


This test is too slow

Published 7/28/2017 by e0ne in Python

Sometimes we need to understand why a unit test is so slow. Sometimes I’m too lazy to dig deep to understand why.

That’s why I’ve created a very simple profiler class to make unit-test profiling fast and simple. I used only cProfile, so it will work on any Python project. It’s so simple that I can’t say much more about it. You can install it via `pip install ProfiledTest` and use it like:

class SampleTest(ProfiledTest, unittest.TestCase):
    def test_sample(self):
        self.assertTrue(True)

 

GitHub url: https://github.com/e0ne/profiled_test


What is LIO target? LIO (Linux-IO) Target is a Linux SCSI target introduced in kernel v2.6.38 that supports different fabric modules like FibreChannel, iSCSI, iSER, etc. It works in kernel space, so it’s faster than tgtd, which is used in Cinder by default. Why do we still use tgtd instead of the faster LIO in Cinder by default? It’s only because we have to support rolling upgrades, and we don’t know how to migrate from tgtd to LIO in such a way that Grenade passes successfully.

We’ve had a non-voting gate-tempest-dsvm-full-lio-ubuntu-xenial job for a while. According to some of my performance test results, it’s really faster than tgtd. So, how can you use it?

It’s pretty easy with LVM + Devstack. All you need is to add 'CINDER_ISCSI_HELPER=lioadm' to your localrc/local.conf.

If you have already configured Cinder+LVM, it’s easy to switch to the new target driver too. I assume that you don’t have any in-use volumes now, but you have Cinder with LVM configured and running. Just follow these steps:

1) First of all, you have to install the ‘rtslib-fb’ package using pip:

# pip install rtslib-fb

or using OS package manager:

# apt-get install python-rtslib-fb 

2) stop tgt:

# sudo service tgt stop

3) change /etc/cinder/cinder.conf to use LIO driver:

Set ‘iscsi_helper = lioadm’ instead of ‘iscsi_helper = tgtadm’
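If you’d rather script the change than edit the file by hand, a sed one-liner does the same edit; demonstrated here on a made-up minimal cinder.conf so it’s safe to run anywhere:

```shell
# Made-up minimal cinder.conf for the demo:
cat > /tmp/cinder-demo.conf << 'EOF'
[DEFAULT]
iscsi_helper = tgtadm
EOF

sed -i 's/^iscsi_helper = tgtadm/iscsi_helper = lioadm/' /tmp/cinder-demo.conf

grep iscsi_helper /tmp/cinder-demo.conf   # -> iscsi_helper = lioadm
```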

 

4) restart cinder and enjoy it!

 


I don’t like to post reviews, but this app is amazing! I’ve used it for more than 3 years and just want to share a list of my favorite features:

 

  • Split panes - I started using iTerm2 because of this. Just press Cmd+d to split the current pane, and Cmd+Alt+Arrow to move between panes
  • Search - Cmd+f to search through commands and their output; it works across the whole session history
  • Inline images - seriously, you can view images in your terminal. You just need to download the ‘imgcat’ script and enjoy it
  • Download a file from a remote host - no need to use scp anymore; just ‘it2dl filename’ and it will download a file to the ~/Downloads directory
  • Open a list of the most popular directories - Cmd+Alt+/ and select what you need. It’s really fast and useful
  • Restore the last session - no comments needed here

P.S. Don’t forget to enable the ‘Show Tip of the Day’ feature to enjoy it even more.

Useful links:


  • https://iterm2.com/documentation-utilities.html
  • https://www.iterm2.com/features.html


    OpenStack Cinder provides an API to attach and detach volumes to Nova instances. This is a public, but not documented, API which is currently used only by Nova. In the scope of the “Attach/detach volumes without Nova” [1] blueprint we introduced a new python-cinderclient extension called python-brick-cinderclient-ext [2] to provide an attach/detach API not only for Nova. Before the Mitaka release, everybody who wanted to use Cinder volumes outside of Nova instances had to create their own scripts based on the python-cinderclient and os-brick [3] projects to get it done.

    Since Mitaka, Cinder opens the attach/detach API to any user. This allows you to:

     

    • Attach a volume to an Ironic instance
    • Attach a volume to any virtual/bare-metal host which is not provisioned by Nova or Ironic

     

    It means Cinder becomes a stand-alone project that can be used outside an OpenStack cloud, with one limitation: Keystone is still required.

    For now, python-brick-cinderclient-ext has only a ‘get-connector’ API. The attach/detach features are under development, and any feedback is welcome so they get implemented in the best way. I hope they will be implemented and documented within the Mitaka release cycle.

    I will show you how it works in the current proof-of-concept code [4]. Anybody is welcome to review and test it :).

    To demonstrate this feature I will use virtual Devstack environment with Ironic+Cinder. Here is my local.conf [5].

    Current limitations are:

     

    • The Ironic instance must have access to the API and storage networks (it works on Devstack with the default configuration)
    • Users inside the instance must have root permissions and be able to install the required software

     

    A detailed manual on how to set up Ironic using Devstack can be found here [6]. Since volume attach/detach operations require python, open-iscsi, udev and other packages, I will use an Ubuntu-based image for Ironic instances. You can use an Ubuntu cloud image [7] or build your own using the ‘disk-image-builder’ tool [8]. I’ve built my Ubuntu image with disk-image-builder:

    $  disk-image-create ubuntu vm dhcp-all-interfaces grub2 -o ubuntu-image
    $  glance image-create --name ubuntu-image --visibility public \
    --disk-format qcow2 \
    --container-format bare < ubuntu-image.qcow2

    After that we need to run an Ironic instance:

    #  query the image id of the ubuntu image
    $  image=$(nova image-list | egrep "ubuntu" | awk '{ print $2 }')
    #  create keypair
    $  ssh-keygen
    $  nova keypair-add default --pub-key ~/.ssh/id_rsa.pub
    #  spawn instance
    $  prv_net_id=$(neutron net-list | egrep "$PRIVATE_NETWORK_NAME"'[^-]' | awk '{ print $2 }')
    $  nova boot --flavor baremetal --nic net-id=$prv_net_id --image $image --key-name default testing

    Wait until the instance is booted and ready [9]:

    $  nova list
    $ ironic node-list

    Now you can connect to the instance using SSH:

    $  ssh ubuntu@10.1.0.13

    By default, in Devstack both Nova and Ironic instances have access to OpenStack APIs.

    To attach a volume you need to install the required packages inside your instance:

    $  sudo apt-get install -y open-iscsi udev python-dev python-pip git

    NOTE: if you can't access the Internet inside your instance, try the following command on the Devstack host:

    $  sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    Clone and install the latest python-cinderclient (the latest version from PyPI will also work, but you'll need to pass --os-volume-api-version explicitly):

    $  git clone https://github.com/openstack/python-cinderclient.git
    $  cd python-cinderclient
    $  sudo pip install .

    Clone the python-brick-cinderclient-ext and apply the patch:

    $  git clone https://github.com/openstack/python-brick-cinderclient-ext.git
    $  cd python-brick-cinderclient-ext
    $  git fetch https://review.openstack.org/openstack/python-brick-cinderclient-ext refs/changes/44/263744/8 && git checkout FETCH_HEAD
    $  sudo pip install .

    That’s all! Now you can attach/detach volumes inside your instance. Because it is still a PoC implementation, you need a few additional steps:

    $  PATH=$PATH:/lib/udev
    $  export PATH

    The steps above are needed until python-brick-cinderclient-ext uses the oslo.rootwrap or privsep libraries.

    Verify that python-brick-cinderclient-ext works well [10] (you need to set up your own credentials and auth_url):

    $  cat << EOF >> ~/openrc
     #!/usr/bin/env bash
    export OS_AUTH_URL="http://10.12.0.26:5000/v2.0"
    export OS_IDENTITY_API_VERSION="2.0"
    export OS_NO_CACHE="1"
    export OS_PASSWORD="password"
    export OS_REGION_NAME="RegionOne"
    export OS_TENANT_NAME="admin"
    export OS_USERNAME="admin"
    export OS_VOLUME_API_VERSION="2"
     EOF
    $  source ~/openrc
    $  sudo -E cinder get-connector

    You should get something like this: [11].

    Finally, create and attach volume to your Ironic instance:

    $ cinder create 1
    $ sudo -E PATH=$PATH cinder local-attach 0a946c67-2d5c-4413-b8ec-350240e967d2

    You should get something like this: [12].

    Now you can verify that the volume is attached via the iSCSI protocol [13]:

    $  sudo iscsiadm -m session
    $  ls -al /dev/disk/by-path/ip-192.168.122.32:3260-iscsi-iqn.2010-10.org.openstack:volume-625e9acc-d5d8-4e7b-84c6-3b55ed98e3f3-lun-1
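    The by-path name above encodes the portal address, the target IQN and the LUN, so it can be picked apart with plain shell parameter expansion; a small sketch using the sample name from the output above (no iSCSI session required):

```shell
# Device name copied from the `ls -al /dev/disk/by-path/...` output above:
DEV='ip-192.168.122.32:3260-iscsi-iqn.2010-10.org.openstack:volume-625e9acc-d5d8-4e7b-84c6-3b55ed98e3f3-lun-1'

PORTAL=${DEV#ip-}            # strip the leading "ip-"
PORTAL=${PORTAL%%-iscsi-*}   # keep everything before "-iscsi-"
IQN=${DEV#*-iscsi-}          # keep everything after "-iscsi-"
IQN=${IQN%-lun-*}            # drop the trailing "-lun-N"
LUN=${DEV##*-lun-}           # keep only the LUN number

echo "portal=${PORTAL} lun=${LUN}"   # -> portal=192.168.122.32:3260 lun=1
echo "iqn=${IQN}"
```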

    Detach is also easy:

    $  sudo -E PATH=$PATH cinder local-detach 0a946c67-2d5c-4413-b8ec-350240e967d2

    That’s all! You’ve attached your Cinder volume to an Ironic instance without Nova! You can do the same steps to attach volumes inside a Nova instance or on your desktop. It will work too. I will show you a demo with a Nova instance and cloud-config scripts in the next post.

     

    [1] https://github.com/openstack/cinder-specs/blob/master/specs/mitaka/use-cinder-without-nova.rst
    [2] https://github.com/openstack/python-brick-cinderclient-ext
    [3] https://github.com/openstack/os-brick
    [4] https://review.openstack.org/263744
    [5] https://gist.github.com/e0ne/2579921aba839322decc
    [6] http://docs.openstack.org/developer/ironic/dev/dev-quickstart.html#deploying-ironic-with-devstack
    [7] https://cloud-images.ubuntu.com/
    [8] http://docs.openstack.org/developer/ironic/deploy/install-guide.html#image-requirements
    [9] http://paste.openstack.org/show/483734/
    [11] http://paste.openstack.org/show/483742/
    [12] http://paste.openstack.org/show/483743/



    Couldn't google an answer to "how to run horizon integration tests" in 10 seconds, so I'm making a note on how to do it.

    My development environment usually looks like: a MacBook + VM with Ubuntu Server or CentOS without a GUI. I try to run all tests inside VMs. In the case of Selenium tests, I need some preparation:

    1. $ sudo apt-get install firefox
      this command will install Firefox. Selenium has a WebDriver for it out of the box
    2. $ sudo apt-get install xvfb
      install the X Virtual Framebuffer (https://en.wikipedia.org/wiki/Xvfb)
    3. Run tests:
      • Simple way for OpenStack Horizon:
        ./run_tests.sh --integration --selenium-headless
      • Hard way for any project:
        • Start xvfb:
          $ sudo Xvfb :10 -ac
        • Start headless FireFox:
          DISPLAY=:10 firefox
        • run tests

    It’s my first try at blogging in English. Feel free to comment on any typos, grammar errors, etc.

    There is nothing new or innovative below, just a step-by-step guide so I don’t forget and don’t have to google it each time I need it.

    Usually, in my dev environment, I’ve got KVM instances with disk images in the QCOW format. So from time to time I need to extend my virtual disks to get more free space.

    1. Shutdown VM:
      • `sudo shutdown -p now` inside VM
      • `sudo virsh shutdown <vm_name>` on my host
    2. Find the QCOW file to change:
      By default, it’s located at `/var/lib/libvirt/images`
      `virsh dumpxml dsvm1 | grep file`
      Find something like:
      <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/devstack.img'/>
    3. Create a backup of your virtual drive (e.g. `cp /var/lib/libvirt/images/devstack.img /var/lib/libvirt/images/devstack.img.bak`)!
    4. Change the QCOW image size: `sudo qemu-img resize /var/lib/libvirt/images/devstack.img +10G` - this command increases the size by 10 GB
      • If the image has snapshots, you need to delete them first:
        sudo qemu-img snapshot -l /var/lib/libvirt/images/devstack.img
        sudo qemu-img snapshot -d <snapshot_id> /var/lib/libvirt/images/devstack.img
    5. Boot VM: `sudo virsh start <vm_name>`
    6. NOTE: I don’t care about disk data in this example. But I have a backup (see #3) and can restore all needed data.
      Create a new partition table with fdisk. fdisk can’t change a partition’s size, so we need to delete it and create a new one:
      sudo fdisk /dev/sdb
      ‘d’ - delete partition(s)
      ‘n’ - create new partition(s)
      ‘w’ - write changes
    7. Create a filesystem on the new partition (this destroys the data; see the note in #6):
      sudo mkfs.ext3 /dev/sdb1
    8. Mount the drive in your VM:
      sudo mount /dev/sdb1 /mnt/data
    9. If you grew the existing partition instead (e.g. with parted) and don’t want to lose data, just extend the filesystem:
      sudo resize2fs /dev/sdb1

     


    Everything below is the author’s fiction. Any resemblance to reality is purely coincidental. And, as they say, every joke contains a grain of a joke.

    11:00. Arrived at work. Need to make some coffee.

    11:10. Good, now I can write some code.

    11:11. What is this [IMPORTANT] email that just came in?

    12:30. Cleared the inbox, replied to management, support and colleagues. Now I can write code.

    12:45. Sync-up.

    13:00. Need to write the weekly report on what I’ve been doing.

    13:30. Lunch - not today, I’d rather write some code.

    13:35. Tests failed on CI again.

    15:30. Found a problem in another component and two in my own. Fixing bugs.

    18:00. A cup of coffee with a cookie wouldn’t hurt.

    18:15. Need to do planning for the next iteration.

    18:30. What do you mean nothing works for the customer???

    20:00. Now I’ll finally get to that feature everybody has been waiting for.

    20:30. What do you mean we have to fix all the bugs by Monday?

    21:00. Bug fixed. Tomorrow morning I’ll write some code and finish the feature. It’s only 2 hours of work plus an hour to write the tests...