openSUSE.Asia Summit 2017 – Tokyo, Japan

This event in Tokyo was the fourth openSUSE.Asia Summit. I was excited to take part as soon as I heard that Japan had been chosen as the host, and one of the ways to take part is by submitting a paper.

Not long after, I got the news that my paper had been accepted. Yay, finally I could go to Japan! Together with my friends whose papers were also accepted, we bought and prepared everything we needed.

Three days before the main event, three friends from GNU/Linux Bogor and I flew from Jakarta to Tokyo with ANA (All Nippon Airways). The journey took around 7 hours (3,596 miles).

On our first day in Tokyo, I was surprised by the weather: it was around 10°C. My friend Kukuh and I only had time to walk around Shibuya and see the Hachiko statue before going straight to our accommodation.

On the second day, we went out around noon and decided to visit the Fujiko F. Fujio Museum. That evening we attended the welcome party held by the committee at UEC (the University of Electro-Communications).

Day 1: I attended several sessions and got to meet lots of new people. So exciting!

Day 2: the day I presented my paper on OpenStack LBaaS with openSUSE Leap. Everything went well, thank God 🙂

The day after the event finished, the committee had planned a one-day trip for us to Hinode Pier, the Tokyo Skytree, and Akihabara.
Thanks a lot for the sightseeing trip 🙂

Since I still had two more days left in Tokyo, my friend Dhenandi and I moved to a hostel near Sensō-ji temple in Asakusa and planned to visit some places nearby.

On my last day in Japan, before going back home, I took the time to visit Odaiba to see the Gundam and Statue of Liberty replicas. They are huge! 😀
The trip to openSUSE.Asia Summit 2017 was a truly pleasant experience; being part of the summit in Tokyo was unforgettable.

You can see more pictures on Flickr.

Arigatō gozaimashita. Thank you, and see you next year!

Another Way to Extend a Windows Partition on an OpenStack Instance

### Environment
– OpenStack Newton (KVM)
– Instance OS: Windows Server 2012 R2
– Disk: 1 TB ([C:] 200 GB, [D:] 800 GB)
– Goal: add 1 TB to the [D:] partition
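
For reference, the 2040G size passed to qemu-img below is simply the current virtual size plus the planned growth of D:. A quick sanity check with this post's numbers (824G and 1840G are /dev/sda2's before/after sizes as virt-resize reports them later):

```shell
# Where 2040G comes from: the untouched C: partition plus the grown D: partition.
c_gb=200        # /dev/sda1 (C:), left alone
d_old_gb=824    # /dev/sda2 (D:) before the resize
d_new_gb=1840   # /dev/sda2 (D:) after the resize
echo $(( c_gb + d_new_gb ))   # prints 2040, the size given to qemu-img create
```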

### On the OpenStack compute node

cd /var/lib/nova/instances/a1b2c3e4f5gxxxxxx
qemu-img create -f qcow2 disk-new.qcow2 2040G
virt-resize disk disk-new.qcow2 --expand /dev/sda2
[   0.0] Examining disk
**********

Summary of changes:

/dev/sda1: This partition will be left alone.

/dev/sda2: This partition will be resized from 824.0G to 1840.0G.

**********
[   3.8] Setting up initial partition table on disk-new.qcow2
[   4.0] Copying /dev/sda1
 100% ⟦▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒⟧ 00:00
[ 865.4] Copying /dev/sda2
100% ⟦▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒⟧ 00:00
Resize operation completed with no errors.  Before deleting the old disk,
carefully check that the resized disk boots and works correctly.
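
virt-resize writes to a new file, so before booting the instance again the resized image has to take the original's place. A minimal sketch of that swap (the helper name and backup convention are mine, not part of nova or libguestfs):

```shell
# Hypothetical helper: swap the resized image in for the original, keeping a backup.
swap_resized_disk() {
  local dir=$1
  mv "$dir/disk" "$dir/disk.old"        # keep the original until the instance boots cleanly
  mv "$dir/disk-new.qcow2" "$dir/disk"  # the resized image takes the original's name
}

# Usage: swap_resized_disk <the instance directory from the cd step above>
# Only delete disk.old once Windows boots and chkdsk comes back clean.
```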

### Windows PowerShell

C:\>chkdsk D:
C:\>diskpart
DISKPART> select volume 1 
DISKPART> extend filesystem

DiskPart successfully extended the file system on the volume.
DISKPART> list volume

  Volume ###  Ltr  Label        Fs     Type        Size     Status     Info
  ----------  ---  -----------  -----  ----------  -------  ---------  --------
  Volume 0     C   WINSRV2012R  NTFS   Partition    200 GB  Healthy    System
  Volume 1     D   DATA         NTFS   Partition   1839 GB  Healthy
DISKPART> exit

Reference: https://support.microsoft.com/en-us/help/832316/the-partition-size-is-extended-but-the-file-system-remains-the-origina

Install a Ceph Storage Cluster on an All-in-One Node

Let's get straight to it: how to build a Ceph storage cluster on a single node with openSUSE Leap.

Preparation

  • Hostname: ceph-aio
  • OS: openSUSE Leap 42.2
  • 3 additional disks for the OSDs
  • Internet access

Install the ceph and ceph-deploy packages

zypper -n install ceph
zypper -n install ceph-deploy

Create a new user for ceph-deploy

useradd -m -s /bin/bash ary
passwd ary

Configure passwordless root privileges for that ceph-deploy user

echo "ary ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ary
chmod 0440 /etc/sudoers.d/ary

Log in as the newly created user, then generate a key pair for passwordless SSH

sudo su - ary
ssh-keygen
ssh-copy-id ary@ceph-aio

Create a new directory for ceph-deploy

cd ~
mkdir my-cluster
cd my-cluster

Create the cluster configuration in this directory

ceph-deploy new ceph-aio
ls -lh

After ceph-deploy new finishes, a new configuration file will appear in this directory. Set the default replication size to 2 and the CRUSH chooseleaf type to 0, since we are using an all-in-one node:

echo "osd pool default size = 2" >> ceph.conf
echo "osd crush chooseleaf type = 0" >> ceph.conf
echo "public network = 10.7.7.0/24" >> ceph.conf
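
After these appends, ceph.conf should look roughly like this. The fsid and monitor address are generated by ceph-deploy new and will differ; the bracketed values are placeholders:

```ini
[global]
fsid = <generated-uuid>
mon_initial_members = ceph-aio
mon_host = <monitor-address>
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
osd crush chooseleaf type = 0
public network = 10.7.7.0/24
```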

Install Ceph

ceph-deploy install ceph-aio
sudo chown -R ceph:ceph /var/lib/ceph/mon

Create the initial monitor

ceph-deploy mon create ceph-aio
ceph-deploy mon create-initial

Gather the keys for authentication

ceph-deploy gatherkeys ceph-aio

Once that completes, the directory should contain the following keyrings:

{cluster-name}.client.admin.keyring
{cluster-name}.bootstrap-osd.keyring
{cluster-name}.bootstrap-mds.keyring
{cluster-name}.bootstrap-rgw.keyring

Create the storage partitions

Create one partition of type 83 (Linux) on each additional disk:

sudo fdisk /dev/vdX

Then re-read the partition tables and verify:

sudo partprobe
sudo fdisk -l

Format and mount the disks

sudo mkfs.xfs /dev/vdb1
sudo mkdir -p /var/local/osd-vdb1
sudo mkfs.xfs /dev/vdc1
sudo mkdir -p /var/local/osd-vdc1
sudo mkfs.xfs /dev/vdd1
sudo mkdir -p /var/local/osd-vdd1
exit  # back to root, since appending to /etc/fstab needs root privileges
echo "/dev/vdb1 /var/local/osd-vdb1 xfs defaults 0 0" >>/etc/fstab
echo "/dev/vdc1 /var/local/osd-vdc1 xfs defaults 0 0" >>/etc/fstab
echo "/dev/vdd1 /var/local/osd-vdd1 xfs defaults 0 0" >>/etc/fstab
mount -a
mount | grep vd
df -h
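
The mkfs/mkdir/fstab steps above follow the same pattern for each disk, so they can be generated in a loop. A sketch with a hypothetical helper (osd_fstab_line is my own name, not a Ceph tool):

```shell
# Emit the fstab entry for one OSD partition; loop it over the three disks.
osd_fstab_line() {
  printf '/dev/%s /var/local/osd-%s xfs defaults 0 0\n' "$1" "$1"
}

for part in vdb1 vdc1 vdd1; do
  osd_fstab_line "$part"   # append the output to /etc/fstab as root, as above
done
```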

Change the ownership of the directories created earlier

sudo su - ary
cd ~/my-cluster
sudo chown ceph:ceph /var/local/osd*

Run prepare and activate for the OSD disks

ceph-deploy osd prepare ceph-aio:/var/local/osd-vdb1 ceph-aio:/var/local/osd-vdc1 ceph-aio:/var/local/osd-vdd1
ceph-deploy osd activate ceph-aio:/var/local/osd-vdb1 ceph-aio:/var/local/osd-vdc1 ceph-aio:/var/local/osd-vdd1

Copy the configuration and admin key

ceph-deploy admin ceph-aio

Set permissions

sudo chmod +r /etc/ceph/ceph.client.admin.keyring

Check the cluster and OSD status

ceph -s
ceph osd tree

Test creating a pool and performing some object data operations

ceph osd pool create pool-01 128
echo coba > coba.txt
echo test > test.txt
echo okay > okay.txt
rados put object-01 coba.txt --pool=pool-01
rados put object-02 test.txt --pool=pool-01
rados put object-03 okay.txt --pool=pool-01
rados ls --pool=pool-01
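
The 128 placement groups given to pool-01 follow the common rule of thumb of roughly (OSDs × 100) / replicas, rounded down to a power of two. With 3 OSDs and a replication size of 2, that lands on 128:

```shell
# Rule-of-thumb PG count: (OSDs * 100) / replicas, rounded down to a power of two.
osds=3
replicas=2
target=$(( osds * 100 / replicas ))   # 150
pg=1
while [ $(( pg * 2 )) -le "$target" ]; do pg=$(( pg * 2 )); done
echo "$pg"   # prints 128
```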

Verify that the objects landed on the OSDs

sudo find /var/local/osd* -name '*object-01*'
sudo find /var/local/osd* -name '*object-02*'
sudo find /var/local/osd* -name '*object-03*'

Proxmox Bind Mount – Mounting Storage from the Host into an LXC Container

Before starting, make sure both directories already exist: the source on the host and the target location inside the container. As an example, I want to mount the directory
/mnt/share-storage/ from the host into the container at /mnt/data/.
The Proxmox machine I am using is Proxmox Virtual Environment 4.3.

1. Check the container ID

root@host:~# lxc-ls --fancy
NAME STATE   AUTOSTART GROUPS IPV4                 
311  RUNNING 0         -      10.0.0.211, -    
312  RUNNING 0         -      10.0.0.212, -    
313  RUNNING 0         -      10.0.0.213, -

2. Edit the LXC container's configuration file

root@host:~# vim /etc/pve/lxc/313.conf

3. Add the following line

mp0:[SOURCE-ON-HOST],mp=[TARGET-IN-CONTAINER]
mp0:/mnt/share-storage/,mp=/mnt/data/
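
As an alternative to editing the file by hand, Proxmox can write the same mpN line for you via the pct CLI; I believe this form works on the 4.x series as well (container ID from this example):

```shell
pct set 313 -mp0 /mnt/share-storage,mp=/mnt/data
```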

4. Then restart the container

root@host:~# lxc-stop -n 313 
root@host:~# lxc-start -n 313

5. Verify the result of the bind mount inside the container

root@host:~# lxc-attach -n 313 

root@ct313:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop2      148G  867M  140G   1% /
/dev/sdb1       1.1T   71M  1.1T   1% /mnt/data
none            492K     0  492K   0% /dev
tmpfs            32G     0   32G   0% /dev/shm
tmpfs            32G  8.2M   32G   1% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            32G     0   32G   0% /sys/fs/cgroup

6. If you have several paths you want to mount, just add them below the mp0 line as mp1, mp2, and so on

mp0:/mnt/share-storage/,mp=/mnt/data/
mp1:/mnt/pve/share-storage1,mp=/mnt/data1
mp2:/mnt/pve/share-storage2,mp=/mnt/data2
...

~That's all, I hope it's useful. 😀

Swarm Cluster Configuration on openSUSE Leap (Docker 1.12)

Last time, at openSUSE.Asia Summit 2016, I used the old way of building a Docker Swarm cluster 😀

That is why I made this video, which explains how to create a Docker Swarm cluster, join nodes to the swarm manager, create a service, scale a service, and handle failover the new way. Enjoy!
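
For reference, the new way shown in the video boils down to the swarm mode built into Docker Engine 1.12. A rough sketch of the commands involved (the addresses, token, and nginx image are placeholders of mine, not from the video):

```shell
docker swarm init --advertise-addr <manager-ip>                 # create the swarm on the manager
docker swarm join --token <worker-token> <manager-ip>:2377      # run on each worker node
docker service create --name web --replicas 3 nginx             # create a replicated service
docker service scale web=5                                      # scale the service
docker node ls                                                  # on failover, the swarm reschedules tasks
```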

~That's all, and thank you!


Reference: Max Huang | Docker Engine 1.12 comes with built-in Distribution & Orchestration System