GoodCommand documentation!

os

CentOS上编译安装内核

获取内核源码

ISO版本内核源码包获取路径 [http://vault.centos.org/centos/7/os/Source/SPackages/]
一般使用ISO安装包安装系统后,想要获取和当前系统默认内核相匹配的源码,使用这个链接

update版本内核源码包获取路径 [http://vault.centos.org/centos/7/updates/Source/SPackages/] 针对每个内核版本,发行版会定期发布update版本,内核大版本一样,小版本不同,想要获取更新版本的时候使用这个链接

使用wget命令下载安装包

wget http://vault.centos.org/centos/7/os/Source/SPackages/kernel-alt-4.14.0-115.el7a.0.1.src.rpm

源码包安装解压

rpm -iv kernel-alt-4.14.0-115.el7a.0.1.src.rpm

源码包会安装在~/rpmbuild目录下
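
安装后~/rpmbuild下的目录结构大致如下(仅为示意,以实际安装结果为准):

ls ~/rpmbuild
# SOURCES  SPECS
# SOURCES 存放内核源码压缩包和patch文件,SPECS 存放 kernel-alt.spec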

源码打patch

打patch主要是因为CentOS的源码其实和Redhat一致,Redhat的源码应用patch后就变成CentOS

打patch之前需要安装一些工具,主要是rpmbuild和git

yum install -y rpm-build git

注意rpm-build的小横杠,rpmbuild是一个命令,但是安装包是rpm-build

首先打源码包里面的patch,可以在当前用户的任意路径执行

rpmbuild -bp --nodeps ~/rpmbuild/SPECS/kernel-alt.spec

打自己想打的patch,譬如 0001.patch

cd ~/rpmbuild/BUILD/kernel-alt-4.14.0-115.7.1.el7a/linux-4.14.0-115.7.1.el7a.aarch64
git am 0001.patch

安装必要依赖

编译内核rpm包。

这一步,编译内核,并且打包成rpm安装包,这样就可以把安装包拷贝到目标机器上执行安装了。

安装编译工具

yum groupinstall -y "Development Tools"
yum install -y ncurses-devel make gcc bc bison flex elfutils-libelf-devel openssl-devel rpm-build redhat-rpm-config

配置.config,主要是对内核编译选项进行设置。先把当前用来启动系统的config拷贝过来是最保险的。注意文件名是.config

cp /boot/config-4.14.0-115.el7a.0.1.aarch64 ./.config

修改.config中的CONFIG_SYSTEM_TRUSTED_KEYS="certs/centos.pem"为空。

CONFIG_SYSTEM_TRUSTED_KEYS=""
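
可以手工编辑,也可以用类似下面的sed命令完成(示意写法,以实际的.config内容为准):

sed -i 's/^CONFIG_SYSTEM_TRUSTED_KEYS=.*/CONFIG_SYSTEM_TRUSTED_KEYS=""/' .config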

执行编译, 下载编译脚本:

wget https://raw.githubusercontent.com/xin3liang/home-bin/master/build-kernel-natively.sh

或者从这里下载build-kernel-natively,然后在源码目录执行

/home/build-kernel-natively.sh

安装内核rpm包

编译好的内核安装包在当前用户主目录下的rpmbuild/RPMS下。

编译驱动

由于重新编译了内核,如果不是默认包含的驱动,就需要重新进行编译。此时当前内核版本的/usr/lib/modules/$(uname -r)下可能没有build目录,需要创建一个符号链接,指向当前内核版本源码所在目录,例如:

/usr/lib/modules/4.14.0-4k-2019-07-03/build -> /root/rpmbuild/BUILD/kernel-alt-4.14.0-115.el7a/linux-4.14.0-115.el7.0.1.aarch64
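
创建符号链接的命令大致如下(内核版本号和源码路径请按上面实际的目录替换):

ln -s /root/rpmbuild/BUILD/kernel-alt-4.14.0-115.el7a/linux-4.14.0-115.el7.0.1.aarch64 \
      /usr/lib/modules/4.14.0-4k-2019-07-03/build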

CentOS软件包管理,设置软件源

设置CentOS软件源

本地软件源

设置和redhat相同请参考:RedHat软件包管理,设置软件源

在线源
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.huaweicloud.com/repository/conf/CentOS-7-anon.repo
epel源

方法1:

yum install https://mirrors.huaweicloud.com/epel/epel-release-latest-7.noarch.rpm
rpm --import https://mirrors.huaweicloud.com/epel/RPM-GPG-KEY-EPEL-7

方法2:

yum install epel-release
CentOS8

CentOS-Base.repo 文件中,启用baseurl并且把网址替换为https://mirrors.huaweicloud.com。CentOS 8的altarch软件源已经和主源合一了

[BaseOS]
name=CentOS-$releasever - Base huawei
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=BaseOS&infra=$infra
baseurl=https://mirrors.huaweicloud.com/$contentdir/$releasever/BaseOS/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
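
也可以用sed批量处理,下面是一个示意写法(假设原始repo里baseurl被注释且指向mirror.centos.org,执行前建议先备份):

cp /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
sed -i 's/^mirrorlist=/#mirrorlist=/' /etc/yum.repos.d/CentOS-Base.repo
sed -i 's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.huaweicloud.com|' /etc/yum.repos.d/CentOS-Base.repo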
Key的问题

一般一个repo有两个key。 repo key 和 package key

我们通过yum-config-manager添加repo,通过 rpm --import 引入key,以kubernetes为例

yum-config-manager --add-repo
curl -OL https://packages.cloud.google.com/yum/doc/yum-key.gpg
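
完整一点的流程大致如下(repo地址仅为示意,请以kubernetes官方文档给出的地址为准):

yum-config-manager --add-repo https://packages.cloud.google.com/yum/repos/kubernetes-el7-aarch64   # 添加repo文件
curl -OL https://packages.cloud.google.com/yum/doc/yum-key.gpg
rpm --import yum-key.gpg        # 导入package key,安装软件包时用于校验签名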
问题解决

如果出现repodata/repomd.xml Error 404

Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * epel: fedora.cs.nctu.edu.tw
https://mirrors.huaweicloud.com/centos/7/os/aarch64/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
To address this issue please refer to the below wiki article

https://wiki.centos.org/yum-errors

If above article doesn't help to resolve this issue please use https://bugs.centos.org/.

https://mirrors.huaweicloud.com/centos/7/extras/aarch64/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://mirrors.huaweicloud.com/centos/7/updates/aarch64/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found

解决办法:修改CentOS-Base.repo,把baseurl中的 https://mirrors.huaweicloud.com/centos/ 替换为 https://mirrors.huaweicloud.com/centos-altarch/(aarch64的软件包在centos-altarch目录下),例如updates源改为 baseurl=https://mirrors.huaweicloud.com/centos-altarch/7/updates/
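
可以手工编辑,也可以用类似下面的sed命令批量替换(示意写法):

sed -i 's|https://mirrors.huaweicloud.com/centos/|https://mirrors.huaweicloud.com/centos-altarch/|g' /etc/yum.repos.d/CentOS-Base.repo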

[user1@kunpeng920 ~]$ sudo dnf config-manager
No such command: config-manager. Please use /bin/dnf --help
It could be a DNF plugin command, try: "dnf install 'dnf-command(config-manager)'"

解决办法:

sudo dnf install -y dnf-plugins-core
CentOS 常见依赖包
yum install bash-completion bash-completion-extras # 命令行补全

yum install ncurses-devel zlib-devel texinfo gtk+-devel gtk2-devel \
    qt-devel tcl-devel tk-devel libX11-devel kernel-headers kernel-devel
yum install https://mirrors.huaweicloud.com/epel/epel-release-latest-7.noarch.rpm
rpm --import https://mirrors.huaweicloud.com/epel/RPM-GPG-KEY-EPEL-7
CentOS 软件包常用命令
yum install iperf3
yum -y install firefox
yum remove firefox
yum -y remove firefox
yum update mysql
yum list openssh
yum list openssh-4.3p2
yum list installed | less #查询已安装软件包
yum list installed | grep kernel    #查看已安装内核
yum search snappy
yum info snappy
yum update
yum repolist        #查询已经启用的软件源
yum repolist all    #查询所有软件源
yum config-manager --disable ovirt-4.1   #禁用软件源
dnf config-manager --disable ovirt-4.1   #禁用软件源
yum grouplist
yum groupinstall "Development Tools"

yum provides htop   #查看哪个软件包提供htop命令
yum provides /usr/include/mysql/mysql.h     #查看哪个软件包提供mysql.h
yum --enablerepo=epel install phpmyadmin #指定软件源安装软件包
yum clean all       #清除缓存
yum history         #查看安装历史
yum list <package_name> --showduplicates    #显示所有版本软件
yum install <package_name>-<version_info>   #安装指定版本软件包
yum downgrade <package_name>-<version_info> #强制降级软件包
sudo dnf config-manager --add-repo https://mirrors.huaweicloud.com/ceph/rpm-luminous/el7/aarch64/
sudo yum config-manager --add-repo https://mirrors.huaweicloud.com/ceph/rpm-luminous/el7/aarch64/

yumdownloader --urls nload  #获取nload的url下载地址

rpm -ivh [package_name]     #安装软件包
rpm -Uvh [package_name]     #升级软件包
rpm -e   [package_name]     #卸载软件包
rpm -qa                     #查询已安装软件包
rpm -q   [package_name]     #查询软件包是否已经安装
rpm -qi  [package_name]     #查看软件包信息
rpm -ql  [package_name]     #列出软件包安装的文件,也就是把哪些可执行文件复制到了系统目录
rpm -qf  [绝对路径    ]     #列出可执行文件/命令是由哪个安装包安装的
rpm -e kernel-debuginfo-4.14.0-115.el7a.aarch64 kernel-debuginfo-common-aarch64-4.14.0-115.el7a.aarch64 kernel-4.14.0-115.el7a.aarch64 kernel-devel-4.14.0-115.el7a.aarch64 #卸载内核
查找RPM包的网站

https://www.rpmfind.net/

NeoKylin软件包管理,设置本地源

插入iso

在BMC界面插入操作系统镜像,可以观察到多了一个设备sr0

[root@kylin ~]# lsblk
NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                   8:0    0  7.3T  0 disk
├─sda2                8:2    0    1G  0 part /boot
├─sda3                8:3    0  7.3T  0 part
│ ├─nlas-swap       253:1    0    4G  0 lvm  [SWAP]
│ ├─nlas-root       253:0    0   50G  0 lvm  /
│ └─nlas-home       253:5    0  7.2T  0 lvm  /home
└─sda1                8:1    0  200M  0 part /boot/efi
nvme0n1             259:0    0  2.9T  0 disk
├─nvme0n1p1         259:1    0    1G  0 part
└─nvme0n1p2         259:2    0  2.9T  0 part
  ├─nlas_kylin-root 253:4    0   50G  0 lvm
  ├─nlas_kylin-swap 253:2    0    4G  0 lvm
  └─nlas_kylin-home 253:3    0  2.9T  0 lvm
[root@kylin ~]#
[root@kylin ~]#
[root@kylin ~]# lsblk
NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                  11:0    1  2.9G  0 rom
sda                   8:0    0  7.3T  0 disk
├─sda2                8:2    0    1G  0 part /boot
├─sda3                8:3    0  7.3T  0 part
│ ├─nlas-swap       253:1    0    4G  0 lvm  [SWAP]
│ ├─nlas-root       253:0    0   50G  0 lvm  /
│ └─nlas-home       253:5    0  7.2T  0 lvm  /home
└─sda1                8:1    0  200M  0 part /boot/efi
nvme0n1             259:0    0  2.9T  0 disk
├─nvme0n1p1         259:1    0    1G  0 part
└─nvme0n1p2         259:2    0  2.9T  0 part
  ├─nlas_kylin-root 253:4    0   50G  0 lvm
  ├─nlas_kylin-swap 253:2    0    4G  0 lvm
  └─nlas_kylin-home 253:3    0  2.9T  0 lvm
[root@kylin ~]#

挂载iso

[root@kylin dev]# mkdir /mnt/cdrom
[root@kylin dev]#
[root@kylin dev]# mount /dev/sr0 /mnt/cdrom
mount: /dev/sr0 写保护,将以只读方式挂载
[root@kylin dev]#
[root@kylin dev]# df
文件系统                   1K-块    已用       可用 已用% 挂载点
devtmpfs               133625152       0  133625152    0% /dev
tmpfs                  133636288       0  133636288    0% /dev/shm
tmpfs                  133636288   58560  133577728    1% /run
tmpfs                  133636288       0  133636288    0% /sys/fs/cgroup
/dev/mapper/nlas-root   52403200 1052084   51351116    3% /
/dev/sda2                1038336  127132     911204   13% /boot
/dev/sda1                 204580    7760     196820    4% /boot/efi
/dev/mapper/nlas-home 7752529920 2665096 7749864824    1% /home
tmpfs                   26727296       0   26727296    0% /run/user/0
/dev/sr0                 3003034 3003034          0  100% /mnt/cdrom

添加本地源

[root@kylin yum.repos.d]# touch media.repo
[root@kylin yum.repos.d]# ls
media.repo  ns7-adv.repo
[root@kylin yum.repos.d]# vim media.repo

文件/etc/yum.repos.d/media.repo的内容:

[local_media_from_iso]
baseurl=file:///mnt/cdrom
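
稍完整一点的写法大致如下(字段含义以yum文档为准):

[local_media_from_iso]
name=local_media_from_iso
baseurl=file:///mnt/cdrom
enabled=1
gpgcheck=0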

修改好后可以查看到已经添加的源

[root@kylin yum.repos.d]# yum repolist
源标识                                                                   源名称                                                                                       状态
local_media_from_iso                                                     local_media_from_iso                                                                         3,645
ns7-adv-os/aarch64                                                       NeoKylin Linux Advanced Server 7 - Os                                                            0
ns7-adv-updates/aarch64                                                  NeoKylin Linux Advanced Server 7 - Updates                                                       0
repolist: 3,645
[root@kylin yum.repos.d]#

尝试安装软件

[root@kylin cdrom]# yum install vim
http://update.cs2c.com.cn:8080/NS/V7/V7Update5/os/adv/lic/base/aarch64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found
正在尝试其它镜像。
To address this issue please refer to the below knowledge base article

http://www.cs2c.com.cn

If above article doesn't help to resolve this issue please contact with CS2C Support.



 One of the configured repositories failed (NeoKylin Linux Advanced Server 7 - Os),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Run the command with the repository temporarily disabled
            yum --disablerepo=ns7-adv-os ...

     4. Disable the repository permanently, so yum won't use it by default. Yum
        will then just ignore the repository until you permanently enable it
        again or use --enablerepo for temporary usage:

            yum-config-manager --disable ns7-adv-os
        or
            subscription-manager repos --disable=ns7-adv-os

     5. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=ns7-adv-os.skip_if_unavailable=true

failure: repodata/repomd.xml from ns7-adv-os: [Errno 256] No more mirrors to try.
http://update.cs2c.com.cn:8080/NS/V7/V7Update5/os/adv/lic/base/aarch64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found
[root@kylin cdrom]#

其他源有问题,禁用掉。编辑/etc/yum.repos.d/ns7-adv.repo,改成enabled=0

[ns7-adv-os]
name=NeoKylin Linux Advanced Server 7 - Os
baseurl=http://update.cs2c.com.cn:8080/NS/V7/V7Update5/os/adv/lic/base/$basearch/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-neokylin-release
enabled=0

[ns7-adv-updates]
name=NeoKylin Linux Advanced Server 7 - Updates
baseurl=http://update.cs2c.com.cn:8080/NS/V7/V7Update5/os/adv/lic/updates/$basearch/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-neokylin-release
enabled=0

[ns7-adv-addons]
name=NeoKylin Linux Advanced Server 7 - Addons
baseurl=http://update.cs2c.com.cn:8080/NS/V7/V7Update5/os/adv/lic/addons/$basearch/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-neokylin-release
enabled=0

这时安装正常

[root@kylin cdrom]# yum install vim
local_media_from_iso                                                                                                                                | 3.7 kB  00:00:00
local_media_from_iso/group_gz                                                                                                                       | 136 kB  00:00:00
正在解决依赖关系
--> 正在检查事务
---> 软件包 vim-enhanced.aarch64.2.7.4.160-4.el7 将被 安装
--> 正在处理依赖关系 vim-common = 2:7.4.160-4.el7,它被软件包 2:vim-enhanced-7.4.160-4.el7.aarch64 需要
--> 正在处理依赖关系 perl(:MODULE_COMPAT_5.16.3),它被软件包 2:vim-enhanced-7.4.160-4.el7.aarch64 需要
--> 正在处理依赖关系 libperl.so()(64bit),它被软件包 2:vim-enhanced-7.4.160-4.el7.aarch64 需要
--> 正在处理依赖关系 libgpm.so.2()(64bit),它被软件包 2:vim-enhanced-7.4.160-4.el7.aarch64 需要
--> 正在检查事务

alpine

轻量级linux发行版,类似于 busybox

获取kernel代码

有时我们经常需要获取当前内核版本的代码,查看代码确认问题,或者重新编译内核

CentOS

相关源码包都可以到这里下载 http://vault.centos.org/centos

在/usr/src/kernels/可以看到安装好的内核源码

yum install kernel-devel-$(uname -r)    #安装与当前内核版本一致的源码,例如kernel-devel-4.14.0-115.el7a.0.1.aarch64
yum install kernel-devel                #安装当前内核的update版本,例如kernel-devel-4.14.0-115.8.1.el7a.aarch64
或者到网址下载
[默认内核版本源码] kernel-alt-4.14.0-115.el7a.0.1.src.rpm 2018-11-27 06:00 101M
[更新版本源码] kernel-alt-4.14.0-115.2.2.el7a.src.rpm 2018-11-29 15:26 101M
[更新版本源码] kernel-alt-4.14.0-115.5.1.el7a.src.rpm 2019-02-07 15:56 101M
[更新版本源码] kernel-alt-4.14.0-115.6.1.el7a.src.rpm 2019-03-18 16:01 101M
[更新版本源码] kernel-alt-4.14.0-115.7.1.el7a.src.rpm 2019-05-24 16:26 101M

RedHat

命令行方式和CentOS一样,网址需要订阅才能访问

Ubuntu

apt-get source linux-image-$(uname -r)
git clone git://kernel.ubuntu.com/ubuntu/ubuntu-<release codename>.git

RedHat编译安装内核(英文)

Get source code

You should be a RedHat customer/partner to have source code access rights.

wget URL

There will be a kernel-alt-4.14.0-115.el7a.src.rpm in the current directory when the download succeeds.

Extract archive

rpm2cpio kernel-alt-4.14.0-115.el7a.src.rpm | cpio -idmv
xz -d linux-4.14.0-115.el7a.tar.xz
tar -xf linux-4.14.0-115.el7a.tar
cd linux-4.14.0-115.el7a/

Apply patch

Since there are bugs to fix, we need to make some changes based on RedHat's release. You may skip this step if you just want to build the kernel. Assuming the patches are at ~/patch/, run the following commands under linux-4.14.0-115.el7a/ in order.

git apply ~/patch/0001-net-hns3-remove-hns3_fill_desc_tso.patch
git apply ~/patch/0002-net-hns3-move-DMA-map-into-hns3_fill_desc.patch
git apply ~/patch/0003-net-hns3-add-handling-for-big-TX-fragment.patch
git apply ~/patch/0004-net-hns3-rename-hns_nic_dma_unmap.patch
git apply ~/patch/0005-net-hns3-fix-for-multiple-unmapping-DMA-problem.patch
git apply ~/patch/0006-net-hns3-Fix-for-packet-buffer-setting-bug.patch
git apply ~/patch/0007-net-hns3-getting-tx-and-dv-buffer-size-through-firmw.patch
git apply ~/patch/0008-net-hns3-aligning-buffer-size-in-SSU-to-256-bytes.patch
git apply ~/patch/0009-net-hns3-fix-a-SSU-buffer-checking-bug.patch
git apply ~/patch/0010-net-hns3-add-8-BD-limit-for-tx-flow.patch

Create a .config file

Assuming you are building the kernel on the current ARM64 system that already has RedHat installed, simply copying the .config from /boot/config-xxx is OK.

cp /boot/config-4.14.0-115.el7a.aarch64 ./.config

Set CONFIG_SYSTEM_TRUSTED_KEYS empty at .config

CONFIG_SYSTEM_TRUSTED_KEYS=""

Get build script to build kernel

wget https://raw.githubusercontent.com/xin3liang/home-bin/master/build-kernel-natively.sh

Set the rpm name as you like by assigning a value to LOCALVERSION

export LOCALVERSION="-liuxl-test-`date +%F`"

Install dependencies

yum install -y ncurses-devel make gcc bc bison flex elfutils-libelf-devel openssl-devel

Run script

chmod +x build-kernel-natively.sh
./build-kernel-natively.sh

After the script finishes, there will be two files at ~/rpmbuild/RPMS/aarch64 that look like:

kernel-4.14.0_liuxl_test_2019_02_27-1.aarch64.rpm
kernel-headers-4.14.0_liuxl_test_2019_02_27-1.aarch64.rpm

Install new kernel

yum install kernel-4.14.0_liuxl_test_2019_02_27-1.aarch64.rpm

Reboot and choose the new kernel to start up

RedHat编译安装内核(中文)

向redhat获取源码URL

wget URL

下载成功会出现kernel-alt-4.14.0-115.el7a.src.rpm

解压源码包

rpm2cpio kernel-alt-4.14.0-115.el7a.src.rpm | cpio -idmv
xz -d linux-4.14.0-115.el7a.tar.xz
tar -xf linux-4.14.0-115.el7a.tar
cd linux-4.14.0-115.el7a/

打上patch(没有可忽略)

假设patch文件放在 ~/patch/ 在源码目录下顺序执行以下命令

git apply ~/patch/0001-net-hns3-remove-hns3_fill_desc_tso.patch
git apply ~/patch/0002-net-hns3-move-DMA-map-into-hns3_fill_desc.patch
git apply ~/patch/0003-net-hns3-add-handling-for-big-TX-fragment.patch
git apply ~/patch/0004-net-hns3-rename-hns_nic_dma_unmap.patch
git apply ~/patch/0005-net-hns3-fix-for-multiple-unmapping-DMA-problem.patch
git apply ~/patch/0006-net-hns3-Fix-for-packet-buffer-setting-bug.patch
git apply ~/patch/0007-net-hns3-getting-tx-and-dv-buffer-size-through-firmw.patch
git apply ~/patch/0008-net-hns3-aligning-buffer-size-in-SSU-to-256-bytes.patch
git apply ~/patch/0009-net-hns3-fix-a-SSU-buffer-checking-bug.patch
git apply ~/patch/0010-net-hns3-add-8-BD-limit-for-tx-flow.patch

创建内核配置文件

如果是在已经安装了RedHat的ARM64服务器上本机编译,只需要把 /boot/config-xxx 复制过来即可。

cp /boot/config-4.14.0-115.el7a.aarch64 ./.config

把.config中的CONFIG_SYSTEM_TRUSTED_KEYS变量置为空串

CONFIG_SYSTEM_TRUSTED_KEYS=""

获取编译脚本

wget https://raw.githubusercontent.com/xin3liang/home-bin/master/build-kernel-natively.sh

可以使用LOCALVERSION设置内核名字

export LOCALVERSION="-liuxl-test-`date +%F`"

安装编译依赖

yum install -y ncurses-devel make gcc bc bison flex elfutils-libelf-devel openssl-devel

执行脚本

chmod +x build-kernel-natively.sh
./build-kernel-natively.sh

~/rpmbuild/RPMS/aarch64下会生成以下文件

kernel-4.14.0_liuxl_test_2019_02_27-1.aarch64.rpm
kernel-headers-4.14.0_liuxl_test_2019_02_27-1.aarch64.rpm

安装内核

yum install kernel-4.14.0_liuxl_test_2019_02_27-1.aarch64.rpm

重启选择新内核启动

编译问题解决

1、缺少openssl库:

scripts/extract-cert.c:21:25: fatal error: openssl/bio.h: No such file or directory
 #include <openssl/bio.h>
                         ^
compilation terminated.
scripts/sign-file.c:25:30: fatal error: openssl/opensslv.h: No such file or directory
 #include <openssl/opensslv.h>
                              ^
compilation terminated.
  CHK     scripts/mod/devicetable-offsets.h
make[1]: *** [scripts/extract-cert] Error 1
make[1]: *** Waiting for unfinished jobs....
make[1]: *** [scripts/sign-file] Error 1
make: *** [scripts] Error 2
make: *** Waiting for unfinished jobs....

解决办法:

yum install openssl-devel

注意openssl-devel在redhat的软件源中有,但是在epel中是没有的。[点击查看详细]


RedHat软件包管理,设置软件源

一般redhat安装有3个源需要我们考虑。一是官方源,也就是将服务器注册到redhat官方,由官方源提供更新,这里不作介绍。二是使用ISO本地安装,安装redhat时使用的ISO包含了大量常用软件,这个时候挂载到本地系统,也可以实现安装。另外可以考虑epel源,也就是额外的rpm包软件源。

一、ISO本地软件源

从me@192.168.1.201复制到本机

[root@readhat76 ~]# scp me@192.168.1.201:~/RHEL-ALT-7.6-20181011.n.0-Server-aarch64-dvd1.iso ./

挂载镜像

[root@readhat76 ~]# mkdir /mnt/cd_redhat7.6
[root@readhat76 ~]# mount -o loop RHEL-ALT-7.6-20181011.n.0-Server-aarch64-dvd1.iso /mnt/cd_redhat7.6
[root@readhat76 ~]# lsblk
loop0                     7:0    0    3G  0 loop /mnt/cd_redhat7.6

添加本地源

redhat7.6及以下软件源配置文件如下:

cat /etc/yum.repos.d/local_iso.repo
[localiso]
name=redhatapp
baseurl=file:///mnt/cd_redhat/
enabled=1
gpgcheck=0

baseurl=file:///mnt/cd_redhat/ 是刚才创建的挂载目录(注意和实际挂载的目录保持一致)。下载配置文件[local_iso_RHEL7.6.repo]

redhat8.0及以上软件源配置文件如下:

cat /etc/yum.repos.d/local_iso.repo
[base]
name=baseos
baseurl=file:///mnt/cd_redhat/BaseOS
enabled=1
gpgcheck=0

[app]
name=app
baseurl=file:///mnt/cd_redhat/AppStream
enabled=1
gpgcheck=0

baseurl=file:///mnt/cd_redhat是刚才创建的挂载目录.下载配置文件[local_iso_RHEL8.0.repo]

确认添加成功

yum repolist

可以看到添加好的源

[root@readhat76 ~]# yum repolist
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
repo id                                            repo name                                            status
localiso                                           redhatapp                                            3,713
repolist: 3,713

安装软件

这个时候就可以使用命令安装软件了:

yum install gcc
二、添加epel软件源。
添加epel软件源最简单的办法就是到镜像站下载一个epel源安装包进行安装就可以了。 随便一个镜像站,打开镜像站网址。找到epel-release-latest-7.noarch.rpm文件下载安装。
以华为镜像站为例:
浏览器打开https://mirrors.huaweicloud.com/epel/ 找到epel-release-latest-7
yum install https://mirrors.huaweicloud.com/epel/epel-release-latest-7.noarch.rpm
rpm --import https://mirrors.huaweicloud.com/epel/RPM-GPG-KEY-EPEL-7

同时引入key。KEY是官方发布软件包的验证机制,这里把官方的公钥安装到本地,下载软件包时可以验证包的安全性。如果是RHEL6,请安装epel-release-latest-6.noarch.rpm并导入RPM-GPG-KEY-EPEL-6

这个时候会在/etc/yum.repos.d/下面多出一个epel.repo文件。

yum clean all
yum update
yum install htop

这样就可以安装htop了

如果之前已经安装过了epel软件包,其实可以直接替换epel.repo中的url

sudo sed -i "s/#baseurl/baseurl/g" /etc/yum.repos.d/epel.repo
sudo sed -i "s/mirrorlist/#mirrorlist/g" /etc/yum.repos.d/epel.repo
sudo sed -i "s@http://download.fedoraproject.org/pub@https://mirrors.huaweicloud.com@g" /etc/yum.repos.d/epel.repo

[epel 官方文档]

下载二进制软件包
yum install --downloadonly [package_name]      #只下载软件包不安装。这个用法有时候并不工作,在8.0上测试过

yum install yum-utils                           #或者使用yum install dnf-utils
yumdownloader [package_name]                    #只下载软件包
三、常用命令
yum install iperf3
yum -y install firefox
yum remove firefox
yum -y remove firefox
yum update mysql
yum list openssh
yum list openssh-4.3p2
yum list installed | less #查询已安装软件包
yum search snappy
yum info snappy
yum update
yum repolist        #查询已经启用的软件源
yum repolist all    #查询所有软件源
yum --enablerepo=epel install phpmyadmin #指定软件源安装软件包
yum clean all       #清除缓存
yum history         #查看安装历史
yum list <package_name> --showduplicates    #显示所有版本软件
yum install <package_name>-<version_info>   #安装指定版本软件包
yum downgrade <package_name>-<version_info> #强制降级软件包

yum list installed | grep kernel    #查看已安装内核

rpm -ivh [package_name]     #安装软件包
rpm -Uvh [package_name]     #升级软件包
rpm -e   [package_name]     #卸载软件包
rpm -qa                     #查询已安装软件包
rpm -q   [package_name]     #查询软件包是否已经安装
rpm -qi  [package_name]     #查看软件包信息
rpm -ql  [package_name]     #列出软件包安装的文件,也就是把哪些可执行文件复制到了系统目录
rpm -qf  [绝对路径    ]     #列出可执行文件/命令是由哪个安装包安装的

Suse配置本地软件源

配置过程和redhat差不多。

mount SLE-15-SP1-Packages-aarch64-Beta4-DVD1.iso ./sdk
zypper ar ./sdk  local_repo
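
添加之后可以刷新并确认(示意):

zypper ref          # 刷新软件源元数据
zypper lr           # 列出已添加的软件源,确认local_repo已存在
zypper in gcc       # 示例:从本地源安装软件包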

ubuntu常用设置

设置root密码

sudo passwd

允许root用户登陆

vim /etc/ssh/sshd_config
#有一行 PermitRootLogin prohibit-password,改成或者添加 PermitRootLogin yes 即可
PermitRootLogin yes
#重启服务
systemctl restart ssh

设置.bash_history

在命令行使用history可以查看当前用户执行过的命令,可以很方便地帮助我们回忆做了什么事情。history命令的输出其实是在~/.bash_history文件中保存的。 默认情况下保存命令的条数是有限的。可以通过修改一些参数来进行定制。

以下是在ubuntu18.04中的设置,其他linux应该大同小异。 配置文件是:

  • /etc/profile 计算机全局生效,所有用户都有影响
  • ~/.bashrc 当前用户生效
查看当前配置

可以打开上述文件,或者使用echo查看当前的设置:

#查看history命令记录的最大条数
echo $HISTSIZE
#查看.bash_history文件最大记录
echo $HISTFILESIZE
#查看历史记录时间格式
echo $HISTTIMEFORMAT
#查看历史记录保存文件
echo $HISTFILE
添加配置

这里修改/etc/profile追加以下内容:

#设置文件最大记录
HISTFILESIZE=20000
#设置时间格式,使用history命令时会输出时间
HISTTIMEFORMAT="%F %T "
#用户多个终端时,共享history
shopt -s histappend
#实时追加history,默认是用户退出时才刷新history
PROMPT_COMMAND="history -a"

退出终端,重新登录生效 或者:

source /etc/profile

配置结果

me@ubuntu:~$ #查看hisotry命令每次输出最大记录
me@ubuntu:~$ echo $HISTSIZE
10000
me@ubuntu:~$ #查看.bash_history文件最大记录
me@ubuntu:~$ echo $HISTFILESIZE
20000
me@ubuntu:~$ #查看历史记录时间格式
me@ubuntu:~$ echo $HISTTIMEFORMAT
%F %T
me@ubuntu:~$ #查看历史记录保存文件
me@ubuntu:~$ echo $HISTFILE
/home/me/.bash_history
me@ubuntu:~$history
 3943  2019-02-18 16:18:21 echo $HISTSIZE
 3944  2019-02-18 16:18:21 #查看.bash_history文件最大记录
 3945  2019-02-18 16:18:21 echo $HISTFILESIZE
 3946  2019-02-18 16:18:21 #查看历史记录时间格式
 3947  2019-02-18 16:18:21 echo $HISTTIMEFORMAT
 3948  2019-02-18 16:18:21 #查看历史记录保存文件
 3949  2019-02-18 16:18:23 echo $HISTFILE
 3950  2019-02-18 16:18:32 history --help
 3951  2019-02-18 16:19:45 history
 3952  2019-02-18 16:19:48 history
cat正常 vim中文乱码

在.vimrc中添加

set fileencodings=utf-8,ucs-bom,gb18030,gbk,gb2312,cp936
set termencoding=utf-8
set encoding=utf-8

ubuntu 远程桌面

通常服务器安装的server版是没有桌面系统的,如果想要给服务器安装桌面环境怎么办?

ubuntu-desktop 桌面环境

Ubuntu的桌面环境很多,GNOME、Unity等,其实我也记不住,也不想用😅,所以装默认的吧

sudo apt-get install ubuntu-desktop

如果发现无法下载某些包怎么办,特别是使用国内软件源的时候,可能会出现下面的错误。

Unable to correct missing packages.
E: Failed to fetch https://mirrors.huaweicloud.com/ubuntu-ports/pool/main/g/ghostscript/libgs9-common_9.26~dfsg+0-0ubuntu0.18.04.7_all.deb  Undetermined Error [IP: 117.78.24.36 443]
E: Failed to fetch https://mirrors.huaweicloud.com/ubuntu-ports/pool/main/g/ghostscript/libgs9_9.26~dfsg+0-0ubuntu0.18.04.7_arm64.deb  Undetermined Error [IP: 117.78.24.36 443]
E: Failed to fetch https://mirrors.huaweicloud.com/ubuntu-ports/pool/main/g/ghostscript/ghostscript_9.26~dfsg+0-0ubuntu0.18.04.7_arm64.deb  Undetermined Error [IP: 117.78.24.36 443]
E: Failed to fetch https://mirrors.huaweicloud.com/ubuntu-ports/pool/main/libg/libgd2/libgd3_2.2.5-4ubuntu0.3_arm64.deb  Undetermined Error [IP: 117.78.24.36 443]
E: Failed to fetch https://mirrors.huaweicloud.com/ubuntu-ports/pool/main/f/firefox/firefox_65.0.1+build2-0ubuntu0.18.04.1_arm64.deb  Undetermined Error [IP: 117.78.24.36 443]
E: Failed to fetch https://mirrors.huaweicloud.com/ubuntu-ports/pool/main/g/ghostscript/ghostscript-x_9.26~dfsg+0-0ubuntu0.18.04.7_arm64.deb  Undetermined Error [IP: 117.78.24.36 443]
E: Failed to fetch https://mirrors.huaweicloud.com/ubuntu-ports/pool/main/t/thunderbird/thunderbird_60.5.1+build2-0ubuntu0.18.04.1_arm64.deb  File has unexpected size (3145728 != 33795760). Mirror sync in progress? [IP: 117.78.24.36 443]

这个时候可以

sudo apt-get install ubuntu-desktop --fix-missing
如果还是不行怎么办?很可能是国内的源没有完全同步软件包。这个时候前面ubuntu软件源配置教程里备份的sources.list就起作用了。
把备份的文件复制一份到sources.list.d目录下,并且文件名需要以.list结尾。
sudo cp /etc/apt/sources.list.backup /etc/apt/sources.list.d/sources.list
sudo apt-get update
sudo apt-get install ubuntu-desktop

这个时候就可以了。

另外所有的安装包加起来很大,下载需要很久,我就遇到了firefox从美国地址下载的情况,这个时候ctrl+c停止,在https://launchpad.net查找对应软件包并下载。
例如firefox的下载地址是https://launchpad.net/ubuntu/bionic/arm64/firefox/65.0.1+build2-0ubuntu0.18.04.1 下载后使用dpkg命令安装
sudo dpkg -i firefox_65.0.1+build2-0ubuntu0.18.04.1_arm64.deb

出现依赖问题安装停止时执行

sudo apt-get -f install

远程桌面

有了桌面环境,但是服务器其实不在我们身边,无法插上显示器查看桌面,这个时候可以配置远程桌面登录,方法有很多,VNC、TeamViewer等,但是我还是喜欢windows自带的远程桌面。为了让windows的远程桌面能连接到服务器,需要配置服务端环境。

sudo apt-get install xrdp

安装成功后,可以看到xrdp的监听端口。

root@ubuntu:~# netstat -antup | grep xrdp
tcp6       0      0 ::1:3350                :::*                    LISTEN      54713/xrdp-sesman
tcp6       0      0 :::3389                 :::*                    LISTEN      54735/xrdp
tcp6       0      0 127.0.0.1:3389          127.0.0.1:37756         ESTABLISHED 58139/xrdp #这一条是我已经连上之后才出现的
root@ubuntu:~#

请注意需要在防火墙上放行相应端口,或者直接禁用防火墙。

sudo ufw disable

如果服务器处于NAT之内,可以考虑在网关上做端口映射,把3389暴露出去。
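
如果网关是一台linux机器,端口映射大致可以这样做(仅为示意,网卡名和内网IP需按实际环境替换):

# 假设网关外网口为eth0,服务器内网地址为192.168.1.10
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 3389 -j DNAT --to-destination 192.168.1.10:3389
iptables -t nat -A POSTROUTING -p tcp -d 192.168.1.10 --dport 3389 -j MASQUERADE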

ubuntu软件包管理

经常需要知道该安装什么软件,linux下安装软件不像windows那么傻瓜化。软件装上之后,一段时间不用,你甚至会把命令忘了,就好像软件不存在了一样。当前装的软件是什么版本,某个包提供了哪些命令,这些问题时常困扰着我们。

查询某个可用软件包。

例如现在想要安装nfs,但是不知道该下载什么包,也不知道什么包相关,可以使用命令查询

#列出所有可用软件包
apt-cache pkgnames
#查询nfs相关的软件包
root@ubuntu:~# apt-cache pkgnames | grep nfs
daemonfs
nfs-ganesha-nullfs
argonaut-fai-nfsroot
nfs-ganesha
nfs-ganesha-doc
libfile-nfslock-perl
nfs-kernel-server #一般来说我们只需要安装这个就可以了
nfs-ganesha-proxy
unionfs-fuse
libyanfs-java
nfswatch
python-nfs-ganesha
nfs-ganesha-mem
nfstrace-doc
libnfs-dev
nfs-ganesha-mount-9p
nfstrace
nfs-ganesha-gluster
nfs-ganesha-vfs
nfs-ganesha-xfs
libnfs11
libnfsidmap-dev
nfs4-acl-tools
fai-nfsroot
nfs-ganesha-gpfs
nfs-common
libnfsidmap2

# 也可以使用 apt search nfs 来查询

具体安装教程可以参考nfs

查看软件包信息

查看软件的大小,版本,依赖,项目主页,功能信息等

apt search iperf3     #按命令或者名称搜索适用于当前版本的软件包
apt show iperf3       #查询已安装或者未安装的软件包信息
dpkg -s coreutils     #查询已安装的软件包信息
root@ubuntu:~# apt show iperf3
Package: iperf3
Version: 3.1.3-1
Priority: optional
Section: universe/net
Origin: Ubuntu
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Original-Maintainer: Raoul Gunnar Borenius <borenius@dfn.de>
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Installed-Size: 41.0 kB
Depends: libc6 (>= 2.17), libiperf0
Homepage: http://software.es.net/iperf/
Download-Size: 8,788 B
APT-Sources: https://mirrors.huaweicloud.com/ubuntu-ports bionic/universe arm64 Pack                                                                                                         ages
Description: Internet Protocol bandwidth measuring tool
 Iperf3 is a tool for performing network throughput measurements. It can
 test either TCP or UDP throughput.
 .
 This is a new implementation that shares no code with the original
 iperf from NLANR/DAST and also is not backwards compatible.
 .
 This package contains the command line utility.

查看dep包信息

dpkg --info ceph_12.2.11-0ubuntu0.18.04.1_arm64.deb

查看命令对应的软件包

有时候我们知道一个命令,想要知道哪个软件包提供这个命令。

dpkg -S vim
dpkg -S /usr/bin/vim
dpkg --search vim
dpkg-query --search vim
#以上几种写法等价,搜索本地已安装的软件包,给出包含vim关键字的软件包名称和文件路径

apt-file search kvm-ok              #根据命令名字搜索
apt-file search '/usr/bin/rsync'    #根据命令路径搜索
#apt-file可以查找已安装或者未安装的包,给出包含kvm-ok命令的软件包和路径

apt search virsh

升级系统中的所有软件

这会把系统中所有已安装的软件升级到最新版本,不会卸载任何已装软件;apt-get upgrade不会为升级安装新的软件包,apt upgrade则会在需要满足依赖时安装新包,总体上以升级为主。

sudo apt upgrade

升级指定软件

其实和安装命令一样,如果有版本更新会自动安装。

sudo apt install iperf3

安装指定版本的软件

但是感觉没有什么用,一般一个发行代号只提供一个版本。

sudo apt install vsftpd=2.3.5-3ubuntu1

卸载软件

sudo apt remove iperf3 #卸载软件,但是不会删除配置
sudo apt purge iperf3 #purge会卸载软件的同时删除所有配置

下载源码

sudo apt --download-only source iperf3  #下载不解压
sudo apt source iperf3                  #下载并解压
apt --compile source iperf3             #下载并编译

如果没有在sources.list中设置源码包(deb-src)的url会出现:

Reading package lists... Done
E: You must put some 'source' URIs in your sources.list

在软件源文件中取消deb-src行前的注释,然后执行apt update。软件源的更多配置,请参考ubuntu软件源

下载二进制包

apt download iperf3
apt download --print-uris  iperf3 #显示软件包下载地址,获取url

sudo apt install --download-only python-pecan #下载所有二进制包,包含依赖,不安装。下载位置是/var/cache/apt/archives

搜索并编译软件依赖

apt build-dep iperf3

例子

我们知道系统中有ssh命令,但是不知道是哪个软件包提供的。
首先用which命令确认执行的是哪一个ssh命令
root@ubuntu:~# which ssh
/usr/bin/ssh

查找提供命令的软件包

root@ubuntu:~# dpkg -S /usr/bin/ssh
openssh-client: /usr/bin/ssh

再查询软件包openssh-client的信息

root@ubuntu:~# dpkg -s openssh-client
Package: openssh-client
Status: install ok installed
Priority: standard
Section: net
Installed-Size: 3732
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Architecture: arm64
Multi-Arch: foreign
Source: openssh
Version: 1:7.6p1-4
Replaces: ssh, ssh-krb5
Provides: rsh-client, ssh-client

更新内核

下载deb包 https://kernel.ubuntu.com/~kernel-ppa/mainline/

安装

sudo dpkg -i linux-*.deb
sudo update-grub
sudo reboot now

ubuntu软件源配置

这里以18.04.1 LTS (Bionic Beaver)为例介绍软件源的配置。配置国内软件源,可以在安装/更新软件的时候获得更快的速度。

备份软件源

自带的软件源一般是美国的地址,但是书写规范,在实在没有办法的时候可以恢复成默认的源,慢一点但是可用。

sudo cp /etc/apt/sources.list /etc/apt/sources.list.backup

使用华为镜像站。

镜像站地址是 https://mirrors.huaweicloud.com/ ,上面有各种开源软件的下载地址。
把sources.list当中的url替换为华为镜像站的url。本人使用ARM平台,所以使用ubuntu-ports的镜像地址。
例如:
deb http://us.ports.ubuntu.com/ubuntu-ports/ bionic main restricted
deb http://ports.ubuntu.com/ubuntu-ports bionic-security main restricted

替换为

deb https://mirrors.huaweicloud.com/ubuntu-ports/ bionic main restricted
deb https://mirrors.huaweicloud.com/ubuntu-ports/ bionic-security main restricted

可以使用命令行替换

sed -i "s@http://us.ports.ubuntu.com/ubuntu-ports/@https://mirrors.huaweicloud.com/ubuntu-ports/@g" /etc/apt/sources.list
sed -i "s@http://ports.ubuntu.com/ubuntu-ports@https://mirrors.huaweicloud.com/ubuntu-ports/@g" /etc/apt/sources.list

这里有一份完整文件:sources.list

执行更新

sudo apt update

出现类似输出证明软件源配置成功

root@ubuntu:~# apt update
Get:1 https://mirrors.huaweicloud.com/ubuntu-ports bionic InRelease [242 kB]
Get:2 https://mirrors.huaweicloud.com/ubuntu-ports bionic-updates InRelease [88.7 kB]
Get:3 https://mirrors.huaweicloud.com/ubuntu-ports bionic-backports InRelease [74.6 kB]
Get:4 https://mirrors.huaweicloud.com/ubuntu-ports bionic-security InRelease [88.7 kB]
Get:5 https://mirrors.huaweicloud.com/ubuntu-ports bionic/main arm64 Packages [975 kB]
Get:6 https://mirrors.huaweicloud.com/ubuntu-ports bionic/main Translation-en [516 kB]
Get:7 https://mirrors.huaweicloud.com/ubuntu-ports bionic/restricted arm64 Packages [664 B]
Get:8 https://mirrors.huaweicloud.com/ubuntu-ports bionic/restricted Translation-en [3,584 B]
Get:9 https://mirrors.huaweicloud.com/ubuntu-ports bionic/universe arm64 Packages [8,316 kB]

设置apt命令行代理proxy [1]

apt -o Acquire::https::proxy="socks5h://127.0.0.1:1080" \
   -o Acquire::http::proxy="socks5h://127.0.0.1:1080"  \
   update

或者创建并写到/etc/apt/apt.conf.d/12proxy

Acquire::http::proxy "socks5h://127.0.0.1:1080";
Acquire::https::proxy "socks5h://127.0.0.1:1080";

问题记录

Certificate verification failed

  root@d54cd5b61fde:/host# apt update
  Ign:1 https://mirrors.huaweicloud.com/ubuntu-ports bionic InRelease
  Ign:2 https://mirrors.huaweicloud.com/ubuntu-ports bionic-security InRelease
  Ign:3 https://mirrors.huaweicloud.com/ubuntu-ports bionic-updates InRelease
  Ign:4 https://mirrors.huaweicloud.com/ubuntu-ports bionic-backports InRelease
  Get:5 http://ports.ubuntu.com/ubuntu-ports bionic InRelease [242 kB]
  Err:6 https://mirrors.huaweicloud.com/ubuntu-ports bionic Release
    Certificate verification failed: The certificate is NOT trusted. The certificate issuer is unknown.  Could not handshake: Error in the certificate verification. [IP: 117.78.24.32 443]
  Err:7 https://mirrors.huaweicloud.com/ubuntu-ports bionic-security Release
    Certificate verification failed: The certificate is NOT trusted. The certificate issuer is unknown.  Could not handshake: Error in the certificate verification. [IP: 117.78.24.32 443]
  Err:8 https://mirrors.huaweicloud.com/ubuntu-ports bionic-updates Release
Certificate verification failed: The certificate is NOT trusted. The certificate issuer is unknown.  Could not handshake: Error in the certificate verification. [IP: 117.78.24.32 443]

解决办法:

把https替换成http. 或者apt
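
替换可以用类似下面的命令完成(示意写法,文件名按实际使用的sources.list或.list文件替换):

sed -i 's@https://mirrors.huaweicloud.com@http://mirrors.huaweicloud.com@g' /etc/apt/sources.list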

[1]https://www.jianshu.com/p/bc4d7b758503

UOS

UOS是 unity operating system 的简称,即统一操作系统,由国内多家公司联合筹备。

UOS目前可以在TaiShan服务器(鲲鹏920)上运行。

UOS 软件源配置

UOS基于debian, 其实也可以使用ubuntu的软件源。

下载软件源文件到指定目录,再添加公钥。公钥3B4FE6ACC0B21F32也可以先不添加,apt update时会报错提示。

sudo wget -O /etc/apt/sources.list.d/Ubuntu-Ports-bionic.list https://mirrors.huaweicloud.com/repository/conf/Ubuntu-Ports-bionic.list
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 3B4FE6ACC0B21F32
sudo apt update

设置之后可以看到更新结果

uos@uos-PC:/etc/apt/sources.list.d$
uos@uos-PC:/etc/apt/sources.list.d$ sudo apt update
获取:1 https://mirrors.huaweicloud.com/ubuntu-ports bionic InRelease [242 kB]
获取:2 https://mirrors.huaweicloud.com/ubuntu-ports bionic-security InRelease [88.7 kB]
获取:3 https://mirrors.huaweicloud.com/ubuntu-ports bionic-updates InRelease [88.7 kB]
获取:4 https://mirrors.huaweicloud.com/ubuntu-ports bionic-backports InRelease [74.6 kB]
获取:5 https://mirrors.huaweicloud.com/ubuntu-ports bionic/multiverse Sources [181 kB]
获取:6 https://mirrors.huaweicloud.com/ubuntu-ports bionic/main Sources [829 kB]
获取:7 https://mirrors.huaweicloud.com/ubuntu-ports bionic/universe Sources [9,051 kB]

问题记录

uos@uos-PC:/etc/apt/sources.list.d$ sudo apt update
获取:1 https://mirrors.huaweicloud.com/ubuntu-ports bionic InRelease [242 kB]
获取:2 https://mirrors.huaweicloud.com/ubuntu-ports bionic-security InRelease [88.7 kB]
错误:1 https://mirrors.huaweicloud.com/ubuntu-ports bionic InRelease
  由于没有公钥,无法验证下列签名: NO_PUBKEY 3B4FE6ACC0B21F32
获取:3 https://mirrors.huaweicloud.com/ubuntu-ports bionic-updates InRelease [88.7 kB]
获取:4 https://mirrors.huaweicloud.com/ubuntu-ports bionic-backports InRelease [74.6 kB]
错误:2 https://mirrors.huaweicloud.com/ubuntu-ports bionic-security InRelease
  由于没有公钥,无法验证下列签名: NO_PUBKEY 3B4FE6ACC0B21F32
错误:3 https://mirrors.huaweicloud.com/ubuntu-ports bionic-updates InRelease
  由于没有公钥,无法验证下列签名: NO_PUBKEY 3B4FE6ACC0B21F32
错误:4 https://mirrors.huaweicloud.com/ubuntu-ports bionic-backports InRelease
  由于没有公钥,无法验证下列签名: NO_PUBKEY 3B4FE6ACC0B21F32
正在读取软件包列表... 完成
N: 忽略‘ubuntu-archive-keyring.gpg’(于目录‘/etc/apt/sources.list.d/’),鉴于它的文件扩展名无效
W: GPG 错误:https://mirrors.huaweicloud.com/ubuntu-ports bionic InRelease: 由于没有公钥,无法验证下列签名: NO_PUBKEY 3B4FE6ACC0B21F32
E: 仓库 “https://mirrors.huaweicloud.com/ubuntu-ports bionic InRelease” 没有数字签名。
N: 无法安全地用该源进行更新,所以默认禁用该源。
N: 参见 apt-secure(8) 手册以了解仓库创建和用户配置方面的细节。
W: GPG 错误:https://mirrors.huaweicloud.com/ubuntu-ports bionic-security InRelease: 由于没有公钥,无法验证下列签名: NO_PUBKEY 3B4FE6ACC0B21F32
E: 仓库 “https://mirrors.huaweicloud.com/ubuntu-ports bionic-security InRelease” 没有数字签名。
N: 无法安全地用该源进行更新,所以默认禁用该源。
N: 参见 apt-secure(8) 手册以了解仓库创建和用户配置方面的细节。
W: GPG 错误:https://mirrors.huaweicloud.com/ubuntu-ports bionic-updates InRelease: 由于没有公钥,无法验证下列签名: NO_PUBKEY 3B4FE6ACC0B21F32
E: 仓库 “https://mirrors.huaweicloud.com/ubuntu-ports bionic-updates InRelease” 没有数字签名。
N: 无法安全地用

command

BMC

Baseboard Management Controller,用于管理服务器的子系统,有独立的CPU和内存,可以读取主板上各硬件设施的状态。

以下命令在BMC提示符下执行。

iBMC:/->
#查询固件版本BMC、CPLD、BIOS信息
ipmcget -d v

#查询健康事件(普通,严重,告警)
ipmcget -t fru0 -d healthevents

#查询iBMC管理网口的IP信息。
ipmcget -d ipinfo

#ipaddr命令用于设置iBMC网口的IPv4地址和掩码。
ipmcset -d ipaddr -v <ipaddr> <mask>
ipmcset -d ipaddr -v 192.168.0.25 255.255.255.0

#ipmode命令用于设置iBMC网口的IPv4 DHCP模式。
ipmcset -d ipmode -v dhcp

#gateway命令用来设置iBMC网口的IPv4网关地址。
ipmcset -d gateway -v <gateway>
ipmcset -d gateway -v 192.168.0.1

#reset命令用来重启iBMC管理系统。
ipmcset -d reset

#查询和设置BMC服务状态
ipmcget -t service -d list
ipmcset -t service -d state -v <option> <enabled | disabled>
ipmcset -t service -d state -v http enabled

#查询和设置启动设备
ipmcget -d bootdevice
ipmcset -d bootdevice -v <option>
ipmcset -d bootdevice -v 0 #取消强制启动
ipmcset -d bootdevice -v 1 #从PXE启动
ipmcset -d bootdevice -v 2 #从默认硬盘启动
ipmcset -d bootdevice -v 5 #从默认CD/DVD启动
ipmcset -d bootdevice -v 6 #启动后进入BIOS菜单

#重启BMC
ipmcset -d reset

#重启服务器设备。
ipmcget -d powerstate      #查询上电状态
ipmcset -d frucontrol -v 0 #强制重启
ipmcset -d powerstate -v 0 #正常下电
ipmcset -d powerstate -v 1 #上电
ipmcset -d powerstate -v 2 #强制下电

查询系统健康事件

iBMC:/->ipmcget -d healthevents
Event Num  | Event Time           | Alarm Level  | Event Code   | Event Description
1          | 2019-11-04 07:07:39  | Major        | 0x10000015   | Abnormal mainboard CPLD 3 self-check result.
2          | 2019-11-04 07:07:36  | Major        | 0x28000001   | The SAS or PCIe cable to front disk backplane is incorrectly connected.
3          | 2019-11-04 07:07:40  | Major        | 0x28000001   | The SAS or PCIe cable to front disk backplane PORTB is incorrectly connected.
4          | 2019-11-04 07:07:40  | Major        | 0x28000001   | The SAS or PCIe cable to front disk backplane PORTA is incorrectly connected.
iBMC:/->

命令行升级BMC

上传文件到BMC

scp TaiShan_2280_V2_5280_V2-BIOS_V105.hpm Administrator@192.168.2.53:/tmp/
scp TS200-2280-iBMC-V366.hpm Administrator@192.168.2.53:/tmp/

升级命令

iBMC:/->
iBMC:/->ipmcset -d upgrade -v /tmp/TS200-2280-iBMC-V366.hpm
Please make sure the iBMC is working while upgrading.
Updating...
100%
Upgrade successfully.
iBMC:/->

升级成功,可以看到 Active iBMC    Version:           (U68)3.66

iBMC:/->ipmcget -d v
------------------- iBMC INFO -------------------
IPMC               CPU:           Hi1710
IPMI           Version:           2.0
CPLD           Version:           (U6076)1.00
Active iBMC    Version:           (U68)3.66
Active iBMC      Build:           003
Active iBMC      Built:           18:21:27 Nov  2 2019
Backup iBMC    Version:           3.55
SDK            Version:           3.33
SDK              Built:           20:39:29 Jul 18 2019
Active Uboot   Version:           2.1.13 (Dec 24 2018 - 20:23:20)
Backup Uboot   Version:           2.1.13 (Dec 24 2018 - 20:23:20)
----------------- Product INFO -----------------
Product             ID:           0x0001
Product           Name:           TaiShan 2280 V2
BIOS           Version:           (U75)0.88
-------------- Mother Board INFO ---------------
Mainboard      BoardID:           0x00b9
Mainboard          PCB:           .A
------------------- NIC INFO -------------------
NIC 1 (TM280)  BoardID:           0x0067
NIC 1 (TM280)      PCB:           .A
NIC 2 (TM210)  BoardID:           0x0068
NIC 2 (TM210)      PCB:           .A
--------------- Riser Card INFO ----------------
Riser1       BoardName:           BC82PRNE
Riser1         BoardID:           0x0032
Riser1             PCB:           .A
Riser2       BoardName:           BC82PRUA
Riser2         BoardID:           0x0094
Riser2             PCB:           .A
-------------- HDD Backplane INFO --------------
Disk BP1      BoardName:          BC11THBQ
Disk BP1       BoardID:           0x0073
Disk BP1           PCB:           .A
Disk BP1     CPLD Version:        (U3)1.11
-------------------- PS INFO -------------------
PS1            Version:           DC:107 PFC:107
iBMC:/->

命令行升级BIOS

复制文件到BMC的/tmp/目录下,下电,使用命令升级

iBMC:/->ipmcset -t maintenance -d upgradebios -v /tmp/TaiShan_2280_V2_5280_V2-BIOS_V105.hpm
Please power off OS first, and then upgrade BIOS again.
iBMC:/->ipmcset -d powerstate -v 0
WARNING: The operation may have many adverse effects.
Do you want to continue?[Y/N]:Y
Control fru0 normal power off successfully.
iBMC:/->ipmcset -t maintenance -d upgradebios -v /tmp/TaiShan_2280_V2_5280_V2-BIOS_V105.hpm
Please make sure the iBMC is working while upgrading.
Updating...
100%
Upgrade successfully.
iBMC:/->

重新开机

iBMC:/->ipmcget -d powerstate
Power state   : Off
Hotswap state : M1
iBMC:/->ipmcset -d powerstate -v 1
WARNING: The operation may have many adverse effects.
Do you want to continue?[Y/N]:Y
Control fru0 power on successfully.
iBMC:/->

这个时候可以看到成功了 BIOS           Version:           (U75)1.05

iBMC:/->ipmcget -d v
------------------- iBMC INFO -------------------
IPMC               CPU:           Hi1710
IPMI           Version:           2.0
CPLD           Version:           (U6076)1.00
Active iBMC    Version:           (U68)3.66
Active iBMC      Build:           003
Active iBMC      Built:           18:21:27 Nov  2 2019
Backup iBMC    Version:           3.55
SDK            Version:           3.33
SDK              Built:           20:39:29 Jul 18 2019
Active Uboot   Version:           2.1.13 (Dec 24 2018 - 20:23:20)
Backup Uboot   Version:           2.1.13 (Dec 24 2018 - 20:23:20)
----------------- Product INFO -----------------
Product             ID:           0x0001
Product           Name:           TaiShan 2280 V2
BIOS           Version:           (U75)1.05
-------------- Mother Board INFO ---------------
Mainboard      BoardID:           0x00b9
Mainboard          PCB:           .A
------------------- NIC INFO -------------------
NIC 1 (TM280)  BoardID:           0x0067
NIC 1 (TM280)      PCB:           .A
NIC 2 (TM210)  BoardID:           0x0068
NIC 2 (TM210)      PCB:           .A
--------------- Riser Card INFO ----------------
Riser1       BoardName:           BC82PRNE
Riser1         BoardID:           0x0032
Riser1             PCB:           .A
Riser2       BoardName:           BC82PRUA
Riser2         BoardID:           0x0094
Riser2             PCB:           .A
-------------- HDD Backplane INFO --------------
Disk BP1      BoardName:          BC11THBQ
Disk BP1       BoardID:           0x0073
Disk BP1           PCB:           .A
Disk BP1     CPLD Version:        (U3)1.11
-------------------- PS INFO -------------------
PS1            Version:           DC:107 PFC:107

在OS内获取BMC IP地址

[root@localhost ~]# ipmitool lan print 1
Set in Progress         : Set Complete
IP Address Source       : Static Address
IP Address              : 192.168.2.63
Subnet Mask             : 255.255.255.0
MAC Address             : e0:00:84:2b:44:dd
SNMP Community String   : TrapAdmin12#$
IP Header               : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP      : 192.168.2.1
802.1q VLAN ID          : Disabled
RMCP+ Cipher Suites     : 0,1,2,3,17
Cipher Suite Priv Max   : XuuaXXXXXXXXXXX
                        :     X=Cipher Suite Unused
                        :     c=CALLBACK
                        :     u=USER
                        :     o=OPERATOR
                        :     a=ADMIN
                        :     O=OEM
Bad Password Threshold  : Not Available
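
在OS内也可以用ipmitool修改BMC的网络配置,大致如下(通道号和地址仅为示意,按实际环境替换):

ipmitool lan set 1 ipsrc static               #设置为静态IP
ipmitool lan set 1 ipaddr 192.168.2.63        #设置BMC IP地址
ipmitool lan set 1 netmask 255.255.255.0      #设置子网掩码
ipmitool lan set 1 defgw ipaddr 192.168.2.1   #设置默认网关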

BMC一键收集信息格式说明

主要关注

AppDump/card_manage/card_info 设备上的PCIE卡,硬盘,Raid卡,Riser卡,主板信息
AppDump/cooling_app/fan_info.txt 风扇的数量,转速
AppDump/cooling_app/sensor_alarm/current_event.txt 传感器告警
AppDump/StorageMgnt/RAID_Controller_Info.txt  Raid卡信息和Raid卡上的硬盘
LogDump/linux_kern_log BMC linux kernel内核日志
OSDump/systemcom.tar 串口日志(SOL串口信息)

下面按 目录 / 子目录 / 文件名 / 文件内容说明 的顺序列出一键收集结果中的内容:

-

-

dump_app_log

iBMC收集结果列表

dump_log

一键收集结果列表

3rdDump

-

error_log

Apache错误日志

access_log

Apache访问日志

error_log.1

Apache错误日志备份文件

access_log.1

Apache访问日志备份文件

httpd.conf

Apache http配置文件

httpd-port.conf

Apache http端口配置文件

httpd-ssl.conf

Apache https配置文件

httpd-ssl-port.conf

Apache https端口配置文件

httpd-ssl-protocol.conf

Apache https协议版本配置文件

httpd-ssl-ciphersuite.conf

Apache https协议加密套件配置文件

说明:

iBMC V350及以上版本支持收集此信息。

AppDump

User

User_dfl.log

User模块管理对象的信息

card_manage

card_manage_dfl.log

Card_Manage模块管理对象的信息

card_info

服务器上配置的扣卡信息

sdi_card_cpld_info

SDI V3卡的CPLD寄存器信息

说明:

只有适配且已正确安装了SDI V3卡的产品支持收集此信息。

BMC

BMC_dfl.log

iBMC模块管理对象的信息

fruinfo.txt

FRU电子标签信息

nandflash_info.txt

NAND flash信息

说明:

iBMC V361及以上版本支持收集此信息。

net_info.txt

网口配置信息

psu_info.txt

服务器上配置的电源信息

PowerMgnt

PowerMgnt_dfl.log

PowerMgnt模块管理对象的信息

power_statistics.csv

功率统计信息

power_bbu_info.log

BBU模块日志

说明:

TaiShan 200服务器支持收集此日志。

UPGRADE

UPGRADE_dfl.log

Upgrade模块管理对象的信息

upgrade_info

iBMC相关器件的版本信息

BIOS

BIOS_dfl.log

BIOS模块管理对象的信息

bios_info

BIOS配置信息

options0.ini

BIOS配置信息对照表

changed0.ini

BIOS配置变更项列表

display0.ini

BIOS显示信息对照表

discovery

discovery_dfl.log

Discovery模块管理对象的信息

agentless

agentless_dfl.log

Agentless模块管理对象的信息

说明:

iBMC V360及以上版本支持收集此信息。

diagnose

diagnose_dfl.log

Diagnose模块管理对象的信息

diagnose_info

Port 80的故障诊断信息

Snmp

Snmp_dfl.log

Snmp模块管理对象的信息

cooling_app

cooling_app_dfl.log

Cooling模块管理对象的信息

fan_info.txt

风扇型号、转速等详细信息

CpuMem

CpuMem_dfl.log

CpuMem模块管理对象的信息

cpu_info

服务器配置的CPU参数的详细信息

mem_info

服务器配置的内存参数的详细信息

kvm_vmm

kvm_vmm_dfl.log

KVM_VMM模块管理对象的信息

ipmi_app

ipmi_app_dfl.log

IPMI模块管理对象的信息

Dft

Dft_dfl.log

DFT模块管理对象的信息

net_nat

net_nat_dfl.log

Net_NAT模块管理对象的信息

PcieSwitch

PcieSwitch_dfl.log

PCIeSwitch模块管理对象的信息

sensor_alarm

sensor_alarm_dfl.log

Sensor_Alarm模块管理对象的信息

sensor_info.txt

服务器所有传感器信息列表

current_event.txt

服务器当前健康状态和告警事件

sel.tar

当前sel信息和历史sel信息打包文件

sensor_alarm_sel.bin.md5

sel原始记录文件完整性校验码

sensor_alarm_sel.bin.bak.md5

sel原始记录备份文件完整性校验码

sensor_alarm_sel.bin.sha256

sel原始记录文件完整性校验码

sensor_alarm_sel.bin.bak.sha256

sel原始记录备份文件完整性校验码

sensor_alarm_sel.bin.bak

sel原始记录备份文件

sensor_alarm_sel.bin

sel原始记录文件

sel.db

sel数据库文件

LedInfo

服务器当前LED灯的显示状态

sensor_alarm_sel.bin.tar.gz

sel历史记录打包文件

MaintDebug

MaintDebug_dfl.log

MaintDebug模块管理对象的信息

FileManage

FileManage_dfl.log

FileManage模块管理对象的信息

switch_card

switch_card_dfl.log

Switch_Card模块管理对象的信息

phy_register_info

后插板phy寄存器信息

port_adapter_info

后插板接口器件信息

StorageMgnt

StorageMgnt_dfl.log

StorageMgnt模块管理对象的信息

RAID_Controller_Info.txt

当前RAID控制器/逻辑盘/硬盘的信息

rimm

rimm_dfl.log

StorageMgnt模块管理对象的信息

redfish

redfish_dfl.log

Redfish模块管理对象的信息

component_uri.json

部件URI列表

dfm

dfm.log

DFM模块管理对象的信息

dfm_debug_log

dfm_debug_log.1

PME框架调试日志

BMALogDump

-

bma_debug_log

bma_debug_log.1.gz

bma_debug_log.2.gz

bma_debug_log.3.gz

iBMA日志

CoreDump

-

core-*(以“core-”开头的文件)

内存转储文件,根据系统运行情况可能产生一个或者多个文件,为应用程序core dump文件。

RTOSDump

sysinfo

cmdline

iBMC内核的命令行参数

cpuinfo

iBMC内核的CPU芯片信息

devices

iBMC系统的设备信息

df_info

iBMC分区空间的使用信息

diskstats

iBMC的磁盘信息

filesystems

iBMC的文件系统信息

free_info

iBMC的内存使用概况

interrupts

iBMC的中断信息

ipcs_q

iBMC的进程队列信息

ipcs_q_detail

iBMC的进程队列详细信息

ipcs_s

iBMC的进程信号量信息

ipcs_s_detail

iBMC的进程信号量详细信息

loadavg

iBMC系统运行负载情况

locks

iBMC内核锁住的文件列表

meminfo

iBMC的内存占用详细信息

modules

iBMC的模块加载列表

mtd

iBMC的配置分区信息

partitions

iBMC所有设备分区信息

ps_info

ps -elf

iBMC进程详细信息

slabinfo

iBMC内核内存管理slab信息

stat

iBMC的CPU利用率

top_info

top -bn 1

显示当前iBMC进程运行情况

uname_info

uname -a

显示当前iBMC内核版本

uptime

iBMC系统运行时间

version

iBMC当前的ROTS版本

vmstat

iBMC虚拟内存统计信息

versioninfo

ibmc_revision.txt

iBMC版本编译节点信息

app_revision.txt

iBMC版本信息

build_date.txt

iBMC版本构建时间

fruinfo.txt

FRU电子标签信息

RTOS-Release

RTOS版本信息

RTOS-Revision

RTOS版本标记号

server_config.txt

服务器当前的配置信息

networkinfo

ifconfig_info

网络信息,执行ifconfig的结果

ipinfo_info

iBMC配置的网络信息

_data_var_dhcp_dhclient.leases

DHCP租约文件

dhclient.leases

DHCP租约文件

dhclient6.leases

DHCP租约文件

dhclient6_eth0.leases

DHCP租约文件

dhclient6_eth1.leases

DHCP租约文件

dhclient6_eth2.leases

DHCP租约文件

dhclient.conf

DHCP配置文件

dhclient_ip.conf

DHCP配置文件

dhclient6.conf

DHCP配置文件

dhclient6_ip.conf

DHCP配置文件

resolv.conf

DNS配置文件

ipinfo.sh

iBMC网络配置脚本

netstat_info

netstat -a

显示当前网络端口、连接使用情况

route_info

route

显示当前路由信息

services

服务端口信息

other_info

extern.conf

BMC日志文件配置

remotelog.conf

syslog定制配置文件

ssh

SSH服务配置

sshd_config

SSHD服务配置文件

logrotate.status

logrotate状态记录文件

login

login pam登录规则

sshd

SSH pam登录规则

sfcb

CIM pam登录规则

datafs_log

data检测日志

ntp.conf

NTP服务配置

vsftpd

FTP pam登录规则

driver_info

dmesg_info

系统启动信息,执行dmesg的结果

lsmod_info

当前加载驱动模块信息

kbox_info

kbox信息

edma_drv_info

edma驱动信息

cdev_drv_info

字符设备驱动信息

veth_drv_info

虚拟网卡驱动信息

SpLogDump

说明:
  • iBMC V363及以上版本支持收集此日志。
  • TaiShan 200服务器支持收集此日志。

-

config

配置导出备份文件

说明:
  • SP运行过程中无法收集此日志。
  • SP运行配置导出功能后可收集该日志。

deviceinfo.json

服务器资产信息

说明:

SP运行过程中无法收集此日志。

diagnose

硬件诊断日志

说明:
  • SP运行过程中无法收集此日志。
  • SP运行硬件诊断功能后可收集该日志。

dmesg.log

小系统dmesg日志

说明:

SP运行过程中无法收集此日志。

filepatchup_debug.log

极速部署文件打包日志

说明:
  • SP运行过程中无法收集此日志。
  • SP运行极速部署功能后可收集该日志。

images.log

极速部署克隆日志

说明:
  • SP运行过程中无法收集此日志。
  • SP运行极速部署功能后可收集该日志。

images_restore.log

极速部署还原日志

说明:
  • SP运行过程中无法收集此日志。
  • SP运行极速部署还原功能后可收集该日志。

maintainlog.csv

SP维护日志。带时间戳的maintainlog文件为之前收集的日志。

说明:

SP运行过程中无法收集此日志。

operatelog.csv

SP运行日志。带时间戳的operatinglog文件为之前收集的日志。

说明:

SP运行过程中无法收集此日志。

ping6.log

网络通信日志

说明:

SP运行过程中无法收集此日志。

quickdeploy_debug.log

极速部署日志

说明:
  • SP运行过程中无法收集此日志。
  • SP运行极速部署功能后可收集该日志。

varmesg.log

小系统信息日志

说明:

SP运行过程中无法收集此日志。

sp_upgrade_info.log

SP自升级日志

说明:
  • SP运行过程中无法收集此日志。
  • SP运行自升级功能后可收集该日志。

upgrade

SP固件升级日志

说明:

SP运行过程中无法收集此日志。

version.json

SP版本配置文件

说明:

SP运行过程中无法收集此日志。

version.json.*.sha

SP版本配置文件的校验文件

说明:

SP运行过程中无法收集此日志。

LogDump

-

arm_fdm_log

arm_fdm_log.tar.gz

TaiShan系列服务器的FDM日志

LSI_RAID_Controller_Log

LSI_RAID_Controller_Log.1.gz

LSI_RAID_Controller_Log.2.gz

LSI RAID控制器的日志

PD_SMART_INFO_C*

硬盘的SMART日志,*为RAID控制器的编号

linux_kernel_log

linux_kernel_log.1

Linux内核日志

operate_log

operate_log.tar.gz

用户操作日志

remote_log

remote_log.1.gz

syslog test操作日志、sel日志

security_log

security_log.1

安全日志

strategy_log

strategy_log.tar.gz

运行日志

fdm.bin

fdm.bin.tar.gz

FDM原始故障日志

fdm_me_log

fdm_me_log.tar.gz

ME故障日志

fdm_pfae_log

FDM预告警日志

fdm_mmio_log

fdm_mmio_log.tar.gz

FDM板卡配置日志

maintenance_log

maintenance_log.tar.gz

维护日志

ipmi_debug_log

ipmi_debug_log.tar.gz

IPMI模块日志

ipmi_mass_operation_log

ipmi_mass_operation_log.tar.gz

IPMI模块运行日志

app_debug_log_all

app_debug_log_all.1.gz

app_debug_log_all.2.gz

app_debug_log_all.3.gz

所有应用模块调试日志

agentless_driver_log

agentless_driver_log.1.gz

agentless_driver_log.2.gz

agentless_driver_log.3.gz

agentless驱动的日志文件

kvm_vmm_debug_log

kvm_vmm_debug_log.tar.gz

KVM模块日志

ps_black_box.log

电源黑匣子日志

third_party_file_bak.log

第三方文件备份日志记录

OSDump

-

systemcom.tar

SOL串口信息

img*.jpeg

业务侧最后一屏图像

*.rep

业务侧屏幕自动录像文件

video_caterror_rep_is_deleted.info

删除过大的caterror录像的提示

DeviceDump

i2c_info

*_info

I2C设备的寄存器/存储区信息

Register

-

cpld_reg_info

CPLD寄存器信息

OptPme

pram

说明:

本文件夹的文件来源于/opt/pme/pram目录,如果出现没有记录在此的文件,为程序运行过程中产生的中间文件,不存在信息安全问题。

filelist

“/opt/pme/pram”目录下文件列表

BIOS_FileName

SMBIOS信息

BIOS_OptionFileName

BIOS配置信息

BMC_dhclient.conf

DHCP配置文件

BMC_dhclient.conf.md5

完整性校验码

BMC_dhclient.conf.sha256

完整性校验码

BMC_dhclient6.conf

DHCP配置文件

BMC_dhclient6.conf.md5

完整性校验码

BMC_dhclient6.conf.sha256

完整性校验码

BMC_dhclient6_ip.conf

DHCP配置文件

BMC_dhclient6_ip.conf.md5

完整性校验码

BMC_dhclient6_ip.conf.sha256

完整性校验码

BMC_dhclient_ip.conf

DHCP配置文件

BMC_dhclient_ip.conf.md5

完整性校验码

BMC_dhclient_ip.conf.sha256

完整性校验码

BMC_HOSTNAME

iBMC主机名

BMC_HOSTNAME.md5

完整性校验码

BMC_HOSTNAME.sha256

完整性校验码

CpuMem_cpu_utilise

服务器CPU利用率

CpuMem_mem_utilise

服务器内存利用率

cpu_utilise_webview.dat

CPU利用率曲线数据

env_web_view.dat

环境温度曲线数据

fsync_reg.ini

文件同步配置文件

lost+found

文件夹

md_so_maintenance_log

维护日志

md_so_maintenance_log.tar.gz

维护日志打包

md_so_operate_log

操作日志

md_so_operate_log.md5

完整性校验码

md_so_operate_log.sha256

完整性校验码

md_so_operate_log.tar.gz

操作日志打包

md_so_strategy_log

策略日志

md_so_strategy_log.md5

完整性校验码

md_so_strategy_log.sha256

完整性校验码

md_so_strategy_log.tar.gz

策略日志打包

memory_webview.dat

管理对象运行信息

per_config.ini

iBMC配置持久化文件

per_config.ini.md5

完整性校验码

per_config.ini.sha256

完整性校验码

per_config_permanent.ini

iBMC配置持久化文件

per_config_permanent.ini.md5

完整性校验码

per_config_permanent.ini.sha256

完整性校验码

per_config_reset.ini

iBMC配置持久化文件

per_config_reset.ini.bak

iBMC配置持久化文件

per_config_reset.ini.bak.md5

完整性校验码

per_config_reset.ini.bak.sha256

完整性校验码

per_config_reset.ini.md5

完整性校验码

per_config_reset.ini.sha256

完整性校验码

per_def_config.ini

iBMC配置持久化文件

per_def_config.ini.md5

完整性校验码

per_def_config.ini.sha256

完整性校验码

per_def_config_permanent.ini

iBMC配置持久化文件

per_def_config_permanent.ini.md5

完整性校验码

per_def_config_permanent.ini.sha256

完整性校验码

per_def_config_reset.ini

iBMC配置持久化文件

per_def_config_reset.ini.bak

iBMC配置持久化文件

per_def_config_reset.ini.bak.md5

完整性校验码

per_def_config_reset.ini.bak.sha256

完整性校验码

per_def_config_reset.ini.md5

完整性校验码

per_def_config_reset.ini.sha256

完整性校验码

per_power_off.ini

iBMC配置持久化文件

per_power_off.ini.md5

完整性校验码

per_power_off.ini.sha256

完整性校验码

per_reset.ini

iBMC配置持久化文件

per_reset.ini.bak

iBMC配置持久化文件

per_reset.ini.bak.md5

完整性校验码

per_reset.ini.bak.sha256

完整性校验码

per_reset.ini.md5

完整性校验码

per_reset.ini.sha256

完整性校验码

pflash_lock

flash文件锁

PowerMgnt_record

管理对象运行信息

powerview.txt

功率统计文件

proc_queue

进程队列id文件夹

ps_web_view.dat

管理对象运行信息

sel.db

SEL数据库

sel_db_sync

SEL数据库同步锁

semid

进程信号量id文件夹

sensor_alarm_sel.bin

SEL原始记录文件

sensor_alarm_sel.bin.md5

完整性校验码

sensor_alarm_sel.bin.sha256

完整性校验码

sensor_alarm_sel.bin.tar.gz

SEL历史记录打包文件

Snmp_snmpd.conf

Snmp配置文件

Snmp_snmpd.conf.md5

完整性校验码

Snmp_snmpd.conf.sha256

完整性校验码

Snmp_http_configure

HTTP配置文件

Snmp_http_configure.md5

完整性校验码

Snmp_http_configure.sha256

完整性校验码

Snmp_https_configure

HTTPS配置文件

Snmp_https_configure.md5

完整性校验码

Snmp_https_configure.sha256

完整性校验码

Snmp_https_tsl

HTTPS TLS配置文件

Snmp_https_tsl.md5

完整性校验码

Snmp_https_tsl.sha256

完整性校验码

up_cfg

升级配置文件夹

User_login

login pam登录规则

User_login.md5

完整性校验码

User_login.sha256

完整性校验码

User_sshd

SSH pam登录规则

User_sshd.md5

完整性校验码

User_sshd.sha256

完整性校验码

User_sshd_config

SSH配置文件

User_sshd_config.md5

完整性校验码

User_sshd_config.sha256

完整性校验码

User_vsftp

FTP pam登录规则

User_vsftp.md5

完整性校验码

User_vsftp.sha256

完整性校验码

eo.db

SEL数据库

save

说明:

本文件夹的文件来源于/opt/pme/save目录,*.md5文件为完整性校验码,*.sha256文件为完整性校验码,*.bak文件为备份文件,*.tar.gz为打包保存文件,per_*.ini为配置持久化文件,*sel.*为系统事件记录文件(如果出现没有记录在此的文件,为程序运行过程中产生的中间文件,不存在信息安全问题。)

filelist

“/opt/pme/pram”目录下文件列表

BIOS_FileName

SMBIOS信息

BMC_dhclient.conf.bak

DHCP配置备份文件

BMC_dhclient.conf.bak.md5

完整性校验码

BMC_dhclient.conf.bak.sha256

完整性校验码

BMC_dhclient.conf.md5

完整性校验码

BMC_dhclient.conf.sha256

完整性校验码

BMC_dhclient6.conf.bak

DHCP配置备份文件

BMC_dhclient6.conf.bak.md5

完整性校验码

BMC_dhclient6.conf.bak.sha256

完整性校验码

BMC_dhclient6.conf.md5

完整性校验码

BMC_dhclient6.conf.sha256

完整性校验码

BMC_dhclient6_ip.conf.bak

DHCP配置备份文件

BMC_dhclient6_ip.conf.bak.md5

完整性校验码

BMC_dhclient6_ip.conf.bak.sha256

完整性校验码

BMC_dhclient6_ip.conf.md5

完整性校验码

BMC_dhclient6_ip.conf.sha256

完整性校验码

BMC_dhclient_ip.conf.bak

DHCP配置备份文件

BMC_dhclient_ip.conf.bak.md5

完整性校验码

BMC_dhclient_ip.conf.bak.sha256

完整性校验码

BMC_dhclient_ip.conf.md5

完整性校验码

BMC_dhclient_ip.conf.sha256

完整性校验码

BMC_HOSTNAME.bak

主机名配置备份文件

BMC_HOSTNAME.bak.md5

完整性校验码

BMC_HOSTNAME.bak.sha256

完整性校验码

BMC_HOSTNAME.md5

完整性校验码

BMC_HOSTNAME.sha256

完整性校验码

CpuMem_cpu_utilise

管理对象运行信息

CpuMem_mem_utilise

管理对象运行信息

md_so_operate_log.bak

操作日志

md_so_operate_log.bak.md5

完整性校验码

md_so_operate_log.md5

完整性校验码

md_so_operate_log.bak.sha256

完整性校验码

md_so_strategy_log.bak

策略日志

md_so_operate_log.sha256

完整性校验码

md_so_strategy_log.bak.md5

完整性校验码

md_so_strategy_log.bak.sha256

完整性校验码

md_so_strategy_log.md5

完整性校验码

md_so_strategy_log.sha256

完整性校验码

per_config.ini

iBMC配置持久化文件

per_config.ini.bak

iBMC配置持久化文件

per_config.ini.bak.md5

完整性校验码

per_config.ini.bak.sha256

完整性校验码

per_config.ini.md5

完整性校验码

per_config.ini.sha256

完整性校验码

per_def_config.ini

iBMC配置持久化文件

per_def_config.ini.bak

iBMC配置持久化文件

per_def_config.ini.bak.md5

完整性校验码

per_def_config.ini.bak.sha256

完整性校验码

per_def_config.ini.md5

完整性校验码

per_def_config.ini.sha256

完整性校验码

per_power_off.ini

iBMC配置持久化文件

per_power_off.ini.bak

iBMC配置持久化文件

per_power_off.ini.bak.md5

完整性校验码

per_power_off.ini.bak.sha256

完整性校验码

per_power_off.ini.md5

完整性校验码

per_power_off.ini.sha256

完整性校验码

PowerMgnt_record

管理对象运行信息

sensor_alarm_sel.bin

SEL原始记录文件

sensor_alarm_sel.bin.bak

SEL原始记录文件

sensor_alarm_sel.bin.bak.md5

完整性校验码

sensor_alarm_sel.bin.bak.sha256

完整性校验码

sensor_alarm_sel.bin.md5

完整性校验码

sensor_alarm_sel.bin.sha256

完整性校验码

sensor_alarm_sel.bin.tar.gz

SEL历史记录打包文件

Snmp_http_configure.bak

HTTP配置备份文件

Snmp_http_configure.bak.md5

完整性校验码

Snmp_http_configure.bak.sha256

完整性校验码

Snmp_http_configure.md5

完整性校验码

Snmp_http_configure.sha256

完整性校验码

Snmp_https_configure.bak

HTTPS配置备份文件

Snmp_https_configure.bak.md5

完整性校验码

Snmp_https_configure.bak.sha256

完整性校验码

Snmp_https_configure.md5

完整性校验码

Snmp_https_configure.sha256

完整性校验码

Snmp_https_tsl.bak

HTTPS TLS配置备份文件

Snmp_https_tsl.bak.md5

完整性校验码

Snmp_https_tsl.bak.sha256

完整性校验码

Snmp_https_tsl.md5

完整性校验码

Snmp_https_tsl.sha256

完整性校验码

Snmp_snmpd.conf.bak

Snmp配置备份文件

Snmp_snmpd.conf.bak.md5

完整性校验码

Snmp_snmpd.conf.bak.sha256

完整性校验码

Snmp_snmpd.conf.md5

完整性校验码

Snmp_snmpd.conf.sha256

完整性校验码

User_login.bak

login pam登录规则

User_login.bak.md5

完整性校验码

User_login.bak.sha256

完整性校验码

User_login.md5

完整性校验码

User_login.sha256

完整性校验码

User_sshd.bak

SSH pam登录规则

User_sshd.bak.md5

完整性校验码

User_sshd.bak.sha256

完整性校验码

User_sshd.md5

完整性校验码

User_sshd.sha256

完整性校验码

User_sshd_config.bak

SSH配置文件

User_sshd_config.bak.md5

完整性校验码

User_sshd_config.bak.sha256

完整性校验码

User_sshd_config.md5

完整性校验码

User_sshd_config.sha256

完整性校验码

User_vsftp.bak

FTP pam登录规则

User_vsftp.bak.md5

完整性校验码

User_vsftp.bak.sha256

完整性校验码

User_vsftp.md5

完整性校验码

User_vsftp.sha256

完整性校验码

eo.db

SEL数据库

eo.db.md5

完整性校验码

eo.db_backup

SEL数据库

eo.db.md5_backup

完整性校验码

BMC R730

When you are not in the equipment room, you can SSH into the BMC and run the following commands on the BMC command line:

racadm serveraction powerup                                         # power on the server
racadm serveraction powerdown                                       # power off the server
racadm serveraction powercycle                                      # power the server off and back on
racadm serveraction powerstatus                                     # query the server power status

dpdk

DPDK使用了轮询(polling)而不是中断来处理数据包。在收到数据包时,经DPDK重载的网卡驱动不会通过中断通知CPU, 而是直接将数据包存入内存,交付应用层软件通过DPDK提供的接口来直接处理,这样节省了大量的CPU中断时间和内存拷贝时间。

下载地址

http://core.dpdk.org/download/

编译安装

yum install -y make gcc-c++ patch kernel-devel numactl
cd dpdk-stable-17.11.6
usertools/dpdk-setup.sh
2

A success message is printed when the build completes.

[screenshot: dpdk_build_success.PNG]

巨型页配置

usertools/dpdk-setup.sh
[21] Setup hugepage mappings for NUMA systems
[28] List hugepage info from /proc/meminfo
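
The same can be done without the interactive menu by reserving hugepages through sysfs; a sketch for a two-node NUMA system (the page counts are examples, and the hugepage size differs on 64K-page ARM kernels):

echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge          # hugetlbfs mount used by DPDK
grep Huge /proc/meminfo                     # equivalent of menu item [28]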

绑定dpdk

usertools/dpdk-setup.sh
[17] Insert IGB UIO module
[23] Bind Ethernet/Crypto device to IGB UIO module
[22] Display current Ethernet/Crypto device settings


Network devices using DPDK-compatible driver
============================================
0002:e9:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=igb_uio unused=ixgbe

Network devices using kernel driver
===================================
<none>
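
Menu items [17]/[22]/[23] correspond to plain commands from the same usertools directory; a sketch assuming the 82599ES port shown above (PCI address 0002:e9:00.0) and the RTE_SDK/RTE_TARGET layout produced by the build:

modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko              # [17] insert the IGB UIO module
usertools/dpdk-devbind.py --status                       # [22] display current device bindings
usertools/dpdk-devbind.py --bind=igb_uio 0002:e9:00.0    # [23] bind the NIC to igb_uio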

Run the helloworld test (documentation: https://doc.dpdk.org/guides/linux_gsg/build_sample_apps.html).

cd examples/helloworld/
export RTE_SDK=$HOME/DPDK
export RTE_TARGET=x86_64-native-linux-gcc

make
    CC main.o
    LD helloworld
    INSTALL-APP helloworld
    INSTALL-MAP helloworld.map

ls build/app
    helloworld helloworld.map

./helloworld -l 0-3 -n 4

Traffic generation test

/home/PF_RING-6.0.2/userland/examples/pfsend_dir -i dna0 -r10 -n0

watch -d -n 1 IPNetStat 0

Tuning notes:

Dedicate one core to receiving and one core to sending.
Pin the IPnet process to one core with numactl, check which NUMA node (socket) the NIC is attached to, and keep the DPDK cores on the other socket.
Increase memory (256 in the original note; unit not recorded).
Increase the number of worker processes.

问题记录

问题1: 缺少numa.h,
/home/lixianfa/dpdk/dpdk-stable-17.11.6/lib/librte_eal/linuxapp/eal/eal_memory.c:56:18: fatal error: numa.h: No such file or directory

解决办法

sudo yum install numactl-devel
问题2: could not split insn
/home/me/dpdk-stable-18.11.2/drivers/event/octeontx/timvf_worker.c: In function ‘timvf_timer_arm_burst_sp’:
/home/me/dpdk-stable-18.11.2/drivers/event/octeontx/timvf_worker.c:88:1: error: could not split insn
 }
 ^
(insn 95 98 99 (parallel [
            (set (reg:DI 0 x0 [orig:98 D.8599 ] [98])
                (mem/v:DI (reg/f:DI 21 x21 [orig:88 D.8605 ] [88]) [-1  S8 A64]))
            (set (mem/v:DI (reg/f:DI 21 x21 [orig:88 D.8605 ] [88]) [-1  S8 A64])
                (unspec_volatile:DI [
                        (plus:DI (mem/v:DI (reg/f:DI 21 x21 [orig:88 D.8605 ] [88]) [-1  S8 A64])
                            (const_int -281474976710656 [0xffff000000000000]))
                        (const_int 0 [0])
                    ] UNSPECV_ATOMIC_OP))
            (clobber (reg:CC 66 cc))
            (clobber (reg:DI 1 x1))
            (clobber (reg:SI 2 x2))
        ]) /home/me/dpdk-stable-18.11.2/drivers/event/octeontx/timvf_worker.h:95 1832 {atomic_fetch_adddi}
     (expr_list:REG_UNUSED (reg:CC 66 cc)
        (expr_list:REG_UNUSED (reg:SI 2 x2)
            (expr_list:REG_UNUSED (reg:DI 1 x1)
                (nil)))))
/home/me/dpdk-stable-18.11.2/drivers/event/octeontx/timvf_worker.c:88:1: internal compiler error: in final_scan_insn, at final.c:2897
Please submit a full bug report,
with preprocessed source if appropriate.
See <http://bugzilla.redhat.com/bugzilla> for instructions.
Preprocessed source stored into /tmp/ccDIw6Il.out file, please attach this to your bugreport.
make[6]: *** [timvf_worker.o] Error 1
make[5]: *** [octeontx] Error 2
make[4]: *** [event] Error 2
make[3]: *** [drivers] Error 2
make[2]: *** [all] Error 2
make[1]: *** [pre_install] Error 2
make: *** [install] Error 2
------------------------------------------------------------------------------
 RTE_TARGET exported as arm64-armv8a-linuxapp-gcc
------------------------------------------------------------------------------

Press enter to continue ...

还没有解决办法 https://www.mail-archive.com/dev@dpdk.org/msg121218.html

厂家测试数据

ARM-131# show traffic
-----------------------------------------------------------
Interface pps                      Mbps
-----------------------------------------------------------
0         0                        0
1         398106                   1327
ARM-131# show traffic
-----------------------------------------------------------
Interface pps                      Mbps
-----------------------------------------------------------
0         0                        0
1         398106                   1327
ARM-131# show traffic
-----------------------------------------------------------
Interface pps                      Mbps
-----------------------------------------------------------
0         0                        0
1         396911                   1323
ARM-131# show traffic
-----------------------------------------------------------
Interface pps                      Mbps
-----------------------------------------------------------
0         0                        0
1         396527                   1322
ARM-131# show traffic
-----------------------------------------------------------
Interface pps                      Mbps
-----------------------------------------------------------
0         0                        0
1         394882                   1316
ARM-131# show traffic
-----------------------------------------------------------
Interface pps                      Mbps
-----------------------------------------------------------
0         0                        0
1         394882                   1316
ARM-131# show traffic
-----------------------------------------------------------
Interface pps                      Mbps
-----------------------------------------------------------
0         0                        0
1         424770                   1416
ARM-131# show traffic
-----------------------------------------------------------
Interface pps                      Mbps
-----------------------------------------------------------
0         0                        0
1         424770                   1416
ARM-131# show traffic
-----------------------------------------------------------
Interface pps                      Mbps
-----------------------------------------------------------
0         0                        0
1         423611                   1412
ARM-131# show traffic
-----------------------------------------------------------



Tasks: 785 total,   6 running, 427 sleeping,   0 stopped,   0 zombie
%Cpu(s):  2.7 us,  7.3 sy,  0.0 ni, 89.8 id,  0.0 wa,  0.0 hi,  0.2 si,  0.0 st
KiB Mem : 66271616 total, 29970176 free,  6529280 used, 29772160 buff/cache
KiB Swap:  4194240 total,  4194240 free,        0 used. 47201280 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
22649 root      20   0 2633984   2.0g   2.0g S 215.5  3.1  54:05.93 exam
23455 root      20   0    8512   8000   2112 R  95.7  0.0   2:54.89 tcpreplay
23457 root      20   0    8512   8000   2112 R  95.7  0.0   2:55.43 tcpreplay
23456 root      20   0    8448   7936   2048 R  95.4  0.0   2:54.99 tcpreplay
23459 root      20   0    8512   8000   2048 R  95.1  0.0   2:50.93 tcpreplay
23458 root      20   0    8512   8064   2112 R  94.7  0.0   2:49.99 tcpreplay
23416 root      20   0  113280   5440   2880 S   2.6  0.0   0:11.37 htop
23472 root      20   0  118528   8576   3840 R   1.0  0.0   0:02.30 top
  301 root      20   0       0      0      0 S   0.3  0.0   0:13.58 ksoftirqd/48
16824 root      20   0  498112  16576  10752 S   0.3  0.0   0:00.97 gsd-smartcard
    1 root      20   0  164672  16512   6016 S   0.0  0.0   0:03.82 systemd
    2 root      20   0       0      0      0 S   0.0  0.0   0:00.06 kthreadd
    4 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 kworker/0:0H
    5 root      20   0       0      0      0 I   0.0  0.0   0:00.14 kworker/u128:0
    7 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 mm_percpu_wq
    8 root      20   0       0      0      0 S   0.0  0.0   0:00.29 ksoftirqd/0
    9 root      20   0       0      0      0 I   0.0  0.0   0:05.67 rcu_sched
   10 root      20   0       0      0      0 I   0.0  0.0   0:00.00 rcu_bh
   11 root      rt   0       0      0      0 S   0.0  0.0   0:00.06 migration/0



tcpreplay -i enahisic2i3 -M 10000 -l 0 link.pcap
tcpreplay -i enahisic2i3 -M 10000 -l 0 link.pcap
tcpreplay -i enahisic2i3 -M 10000 -l 0 link.pcap
Interface pps                      Mbps
-----------------------------------------------------------
0         0                        0
1         423611                   1412
ARM-131# show traffic
-----------------------------------------------------------
Interface pps                      Mbps
-----------------------------------------------------------
0         0                        0
1         424017                   1413
ARM-131# show traffic
-----------------------------------------------------------
Interface pps                      Mbps
-----------------------------------------------------------
0         0                        0
1         423236                   1411
ARM-131# show traffic

DPDK documentation: https://doc.dpdk.org/guides/

Regex

正则表达式

.       - Any Character Except New Line
\d      - Digit (0-9)
\D      - Not a Digit (0-9)
\w      - Word Character (a-z, A-Z, 0-9, _)
\W      - Not a Word Character
\s      - Whitespace (space, tab, newline)
\S      - Not Whitespace (space, tab, newline)

\b      - Word Boundary
\B      - Not a Word Boundary
^       - Beginning of a String
$       - End of a String

[]      - Matches Characters in brackets
[^ ]    - Matches Characters NOT in brackets
|       - Either Or
( )     - Group

Quantifiers:
*       - 0 or More
+       - 1 or More
?       - 0 or One
{3}     - Exact Number
{3,4}   - Range of Numbers (Minimum, Maximum)


#### Sample Regexs ####

[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]

from https://raw.githubusercontent.com/CoreyMSchafer/code_snippets/master/Regular-Expressions/snippets.txt

youtube: https://www.youtube.com/watch?v=sa-TUpSx1JA&list=WL&index=2&ab_channel=CoreySchafer

use in notepad++ http://shouce.jb51.net/notepad_book/npp_func_regex_replace.html
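
These patterns can also be tested straight from the shell; for example, the e-mail regex above with grep -E (a quick sketch):

printf 'alice@example.com\nnot-an-email\n' | grep -E '[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]'   # prints only the first line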

SELinux

SELinux is the security feature of CentOS and Red Hat. A problem encountered: when using httpd or nginx to serve an online package repository, symbolic links cannot be followed.
nginx reports:
403 Forbidden

httpd会报

403 Forbidden : You don't have permission to access / on this server

在文件/var/log/audit/audit.log可以查看到denied请求:

type=AVC msg=audit(1560175301.653:7102): avc:  denied  { read } for  pid=30807 comm="nginx" name="hisi" dev="dm-0" ino=101710253 scontext=system_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:home_root_t:s0 tclass=dir permissive=0
type=SYSCALL msg=audit(1560175301.653:7102): arch=c00000b7 syscall=56 success=no exit=-13 a0=ffffffffffffff9c a1=aaaafff50290 a2=84800 a3=0 items=0 ppid=30804 pid=30807 auid=4294967295 uid=995 gid=991 euid=995 suid=995 fsuid=995 egid=991 sgid=991 fsgid=991 tty=(none) ses=4294967295 comm="nginx" exe="/usr/sbin/nginx" subj=system_u:system_r:httpd_t:s0 key=(null)
type=PROCTITLE msg=audit(1560175301.653:7102): proctitle=6E67696E783A20776F726B65722070726F63657373

If SELinux is not disabled (or the proper contexts are not set), options such as autoindex and FollowSymLinks have no effect: the directory cannot be listed when no index page is found, and directories copied into the document root are not accessible either.

/usr/sbin/sestatus -v       #查看SElinux状态,如果SELinux status参数为enabled即为开启状态
SELinux status:                 enabled
getenforce                  #查看SElinux状态
setenforce 0                #临时关闭SElinux,设置SELinux 成为permissive模式
setenforce 1                #启用SELinux,设置SELinux 成为enforcing模式

临时关闭SElinux

setenforce 0                #临时关闭SElinux,设置SELinux 成为permissive模式

永久关闭SELinux

vim /etc/selinux/config

SELINUX=enforcing 修改为
SELINUX=disabled
#重启机器即可

安装必要的SELinux策略管理工具

yum install -y policycoreutils-python setroubleshoot

重置SELinux策略到默认状态

$> touch /.autorelabel
$> reboot

This action will return the SELinux policies to their default. After reboot we're ready to continue configuring our policies.

View the built-in SELinux policies

To see a list of all built-in policy booleans you can use:

getsebool -a

启用内置规则

setsebool -P httpd_can_network_connect=1

查看失败的认证请求

cat /var/log/audit/audit.log | grep fail

查看失败原因

grep nginx /var/log/audit/audit.log | audit2why

创建SELinux策略

grep nginx audit.log | audit2allow -M nginxpolicy
semodule -i nginxpolicy.pp
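
As an alternative to generating a custom module with audit2allow, the 403 on a repository directory can often be fixed just by giving the directory a web-readable SELinux label. A minimal sketch, assuming the repository lives under the hypothetical path /srv/repo (semanage comes from the policycoreutils-python package installed above):

semanage fcontext -a -t httpd_sys_content_t "/srv/repo(/.*)?"   # allow httpd_t (httpd/nginx) to read it
restorecon -Rv /srv/repo                                        # apply the new context recursively
ls -Zd /srv/repo                                                # verify the label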

ServerStatus

用来监测服务器状态的组件。

TDengine

gmake[2]: Entering directory `/home/me/TDengine/build'
/usr/bin/cmake -E cmake_progress_report /home/me/TDengine/build/CMakeFiles
[  0%] Building C object deps/zlib-1.2.11/CMakeFiles/z.dir/src/adler32.c.o
cd /home/me/TDengine/build/deps/zlib-1.2.11 && /usr/bin/cc  -DLINUX -D_LIBC_REENTRANT -D_M_X64 -D_REENTRANT -D__USE_POSIX -std=gnu99 -Wall -fPIC -malign-double -g -Wno-char-subscripts -malign-stringops -msse4.2 -D_FILE_OFFSET_BITS=64 -D_LARGE_FILE -O0 -DDEBUG -I/home/me/TDengine/deps/zlib-1.2.11/inc    -o CMakeFiles/z.dir/src/adler32.c.o   -c /home/me/TDengine/deps/zlib-1.2.11/src/adler32.c
cc: error: unrecognized command line option ‘-malign-double’
cc: error: unrecognized command line option ‘-malign-stringops’
cc: error: unrecognized command line option ‘-msse4.2’
gmake[2]: *** [deps/zlib-1.2.11/CMakeFiles/z.dir/src/adler32.c.o] Error 1
gmake[2]: Leaving directory `/home/me/TDengine/build'
gmake[1]: *** [deps/zlib-1.2.11/CMakeFiles/z.dir/all] Error 2
gmake[1]: Leaving directory `/home/me/TDengine/build'
gmake: *** [all] Error 2
[me@vm-1 build]$
[me@vm-1 build]$

解决办法:

Edit CMakeLists.txt and remove the x86-only compiler flags -malign-double, -malign-stringops and -msse4.2.
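
A quick way to strip those flags in place, assuming they appear verbatim in the top-level CMakeLists.txt (a sketch; back up the file first):

cp CMakeLists.txt CMakeLists.txt.orig
sed -i 's/-malign-double//g; s/-malign-stringops//g; s/-msse4\.2//g' CMakeLists.txt
grep -nE 'malign|msse4' CMakeLists.txt || echo "x86-only flags removed"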

[ 16%] Building C object src/util/CMakeFiles/tutil.dir/src/tcrc32c.c.o
cd /home/me/TDengine/build/src/util && /usr/bin/cc  -DLINUX -DUSE_LIBICONV -D_LIBC_REENTRANT -D_M_X64 -D_REENTRANT -D__USE_POSIX -std=gnu99 -Wall -fPIC -g -Wno-char-subscripts -D_FILE_OFFSET_BITS=64 -D_LARGE_FILE -O0 -DDEBUG -I/home/me/TDengine/src/inc -I/home/me/TDengine/src/os/linux/inc    -o CMakeFiles/tutil.dir/src/tcrc32c.c.o   -c /home/me/TDengine/src/util/src/tcrc32c.c
/home/me/TDengine/src/util/src/tcrc32c.c:20:23: fatal error: nmmintrin.h: No such file or directory
 #include <nmmintrin.h>
                       ^
compilation terminated.
gmake[2]: *** [src/util/CMakeFiles/tutil.dir/src/tcrc32c.c.o] Error 1
gmake[2]: Leaving directory `/home/me/TDengine/build'
gmake[1]: *** [src/util/CMakeFiles/tutil.dir/all] Error 2
gmake[1]: Leaving directory `/home/me/TDengine/build'
gmake: *** [all] Error 2

解决办法:

nmmintrin.h 修改为 arm_neon.h

/usr/bin/cmake -E cmake_progress_report /home/me/TDengine/build/CMakeFiles 90
[ 28%] Building C object src/util/CMakeFiles/tutil.dir/src/tcrc32c.c.o
cd /home/me/TDengine/build/src/util && /usr/bin/cc  -DLINUX -DUSE_LIBICONV -D_LIBC_REENTRANT -D_M_X64 -D_REENTRANT -D__USE_POSIX -std=gnu99 -Wall -fPIC -g -Wno-char-subscripts -D_FILE_OFFSET_BITS=64 -D_LARGE_FILE -O0 -DDEBUG -I/home/me/TDengine/src/inc -I/home/me/TDengine/src/os/linux/inc    -o CMakeFiles/tutil.dir/src/tcrc32c.c.o   -c /home/me/TDengine/src/util/src/tcrc32c.c
/home/me/TDengine/src/util/src/tcrc32c.c: In function ‘crc32c_hw’:
/home/me/TDengine/src/util/src/tcrc32c.c:1210:5: warning: implicit declaration of function ‘_mm_crc32_u8’ [-Wimplicit-function-declaration]
     crc0 = _mm_crc32_u8((uint32_t)(crc0), *next);
     ^
/home/me/TDengine/src/util/src/tcrc32c.c:1227:7: warning: implicit declaration of function ‘_mm_crc32_u64’ [-Wimplicit-function-declaration]
       crc0 = _mm_crc32_u64(crc0, *(uint64_t *)(next));
       ^
/home/me/TDengine/src/util/src/tcrc32c.c: In function ‘taosResolveCRC’:
/home/me/TDengine/src/util/src/tcrc32c.c:1341:5: error: unknown register name ‘%edx’ in ‘asm’
     __asm__("cpuid" : "=c"(ecx) : "a"(eax) : "%ebx", "%edx"); \
     ^
/home/me/TDengine/src/util/src/tcrc32c.c:1351:3: note: in expansion of macro ‘SSE42’
   SSE42(sse42);
   ^
/home/me/TDengine/src/util/src/tcrc32c.c:1341:5: error: unknown register name ‘%ebx’ in ‘asm’
     __asm__("cpuid" : "=c"(ecx) : "a"(eax) : "%ebx", "%edx"); \
     ^
/home/me/TDengine/src/util/src/tcrc32c.c:1351:3: note: in expansion of macro ‘SSE42’
   SSE42(sse42);
   ^
gmake[2]: *** [src/util/CMakeFiles/tutil.dir/src/tcrc32c.c.o] Error 1
gmake[2]: Leaving directory `/home/me/TDengine/build'
gmake[1]: *** [src/util/CMakeFiles/tutil.dir/all] Error 2
gmake[1]: Leaving directory `/home/me/TDengine/build'
gmake: *** [all] Error 2

解决办法:

参考资料:

Dropping nmmintrin.h: https://www.cnblogs.com/makefile/p/6084784.html

ARM native CRC32 instructions: http://3ms.huawei.com/hi/group/2851011/wiki_5359181.html

主要是CRC32的问题:

在线计算器 https://www.lammertbies.nl/comm/info/crc-calculation.html

python生成CRC C代码:https://pycrc.org/models.html

ARM关于CRC的指令:http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0801g/awi1476352818103.html

alias

pi@raspberrypi:~ $ alias -p
alias du='du -h --max-depth=1'
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias grep='grep --color=auto'
alias ls='ls --color=auto'

Aliases only apply to the current shell and are lost once you exit. Write them into ~/.bashrc so they take effect automatically.
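
For example, to persist the du alias shown above (a minimal sketch):

echo "alias du='du -h --max-depth=1'" >> ~/.bashrc   # append the alias definition
source ~/.bashrc                                     # reload it in the current shell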

aria2

安装

apt install aria2

配置

The configuration file is located at

/etc/aria2/aria2.conf

主要配置下载目录

dir=/home/download
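
A slightly fuller aria2.conf sketch, assuming the web UI talks to aria2 over RPC on port 6800 (these are standard aria2 option names; the values are examples):

dir=/home/download          # download directory
enable-rpc=true             # expose the JSON-RPC interface used by the web UI
rpc-listen-all=true         # listen on all interfaces (restrict if exposed)
rpc-listen-port=6800        # port the web UI connects to
continue=true               # resume partially downloaded files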

The web UI needs nginx: after installing nginx, unpack the Aria2WebUI archive into /var/www/.

pi@linux:/etc/aria2 $ ls /var/www
html  htmlx.zip

启动

aria2使用systemd管理,aria2会监听6800端口

systemctl start aria2

To enable Aria2WebUI, open the UI and set the connection port to 6800; if a user name and password are configured, fill them in accordingly.

systemctl start nginx

arping

发送ARP请求数据包

[user1@centos ~]$ arping -I enp189s0f0 -c 3 192.168.1.002
ARPING 192.168.1.002 from 192.168.1.122 enp189s0f0
Unicast reply from 192.168.1.002 [C0:A8:02:81:00:04]  0.611ms
Unicast reply from 192.168.1.002 [C0:A8:02:81:00:04]  0.607ms
Unicast reply from 192.168.1.002 [C0:A8:02:81:00:04]  0.594ms
Sent 3 probes (1 broadcast(s))
Received 3 response(s)

asciinema

一个录制ascii命令行的神器 [1]

[1]https://asciinema.org/

autotool GNU

GNU软件标准Makefile 目标

make all
Build programs, libraries, documentation, etc. (same as plain make)
make install
Install the programs that need to be installed
make install-strip
Same as make install, but strip debugging symbols
make uninstall
The opposite of make install
make clean
The opposite of make all: delete the built artifacts
make distclean
Additionally delete the files generated by ./configure
make check
Run the test suite, if there is one
make installcheck
Check the installed programs and libraries
make dist
Generate name-version.tar.gz

GNU软件项目标准文件组织

Directory variable Default value
prefix                  /usr/local
        exec-prefix     prefix
            bindir      exec-prefix/bin
            libdir      exec-prefix/lib
...
        includedir      prefix/include
        datarootdir     prefix/share
            datadir     datarootdir
            mandir      datarootdir/man
            infodir     datarootdir/info

在configure的时候指定prefix

./configure --prefix ~/usr

configure中定义的变量:

CC          C compiler command
CFLAGS      C compiler flags
CXX         C++ compiler command
CXXFLAGS    C++ compiler flags
LDFLAGS     linker flags
CPPFLAGS    C/C++ preprocessor flags
... See ‘./configure --help’ for a full list
./configure --prefix ~/usr CC=gcc-3 CPPFLAGS=-I$HOME/usr/include LDFLAGS=-L$HOME/usr/lib

创建build目录的目的是,中间过程生成的目标文件保存在build当中。

If a program with the same name is already installed on the host, the following options can be used to rename the installed programs:

--program-prefix=PREFIX     设置前缀名
--program-suffix=SUFFIX     设置后缀名
‘--program-transform-name=PROGRAM’  run ‘sed PROGRAM’ on installed program names.
~/amhello-1.0 % ./configure --program-prefix test-
~/amhello-1.0 % make
~/amhello-1.0 % sudo make install
yum install -y automake autoconf

GNU Autoconf

‘autoconf’ Create configure from configure.ac.
‘autoheader’ Create config.h.in from configure.ac.
‘autoreconf’ Run all tools in the right order.
‘autoscan’ Scan sources for common portability problems,and related macros missing from configure.ac.
‘autoupdate’ Update obsolete macros in configure.ac.
‘ifnames’ Gather identifiers from all #if/#ifdef/... directives.
‘autom4te’ The heart of Autoconf. It drives M4 and implements the features used by most of the above tools.
            Useful for creating more than just configure files

GNU Automake

‘automake’ Create Makefile.ins from Makefile.ams and configure.ac.
‘aclocal’ Scan configure.ac for uses of third-party macros, and gather definitions in aclocal.m4.

configure.ac

# Prelude.
AC_INIT([amhello], [1.0], [bug-report@address])
AM_INIT_AUTOMAKE([foreign -Wall -Werror])
# Checks for programs.
AC_PROG_CC
# Checks for libraries.
# Checks for header files.
# Checks for typedefs, structures, and compiler characteristics.
# Checks for library functions.
# Output files.
AC_CONFIG_HEADERS([config.h])
AC_CONFIG_FILES([FILES])
AC_OUTPUT
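
configure.ac alone is not enough; Automake also needs a Makefile.am per directory. A minimal sketch for a hypothetical amhello-style layout with one program built from src/main.c (the file names are assumptions):

# Makefile.am (top level)
SUBDIRS = src

# src/Makefile.am
bin_PROGRAMS = hello
hello_SOURCES = main.c

With this layout AC_CONFIG_FILES would list Makefile and src/Makefile; running autoreconf --install followed by ./configure && make then builds the program.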

awk

使用awk可以帮助我们快速处理文本,筛选数据。

例子1

重新编排文本列,加入\t之后复制到excel可以自动生成表格

cat arm_fio_simple.log | awk '{printf "%3d %s %-10s %2d %3d %s %s %-4s %s %s %6s %8s %7s %6s\n",$1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14}' > arm_temp.txt
cat x86_simple.log | awk '{printf "%3s\t%20s\t%-20s\t%-6s\t%-10s\t%9s\t%-10s\t%-10s\t%-10s\n",$1,$2,$3,$4,$5,$6,$7,$8,$9}' > excel_86.txt

处理之前x86_simple.log

5 bs-rw-numjob-iodepth 4k-read-1-64  100k   411MB/s   638.26   usr=7.77% sys=39.01%
6 bs-rw-numjob-iodepth 4k-read-1-128  98.4k   403MB/s   1300.35   usr=8.04% sys=39.84%
7 bs-rw-numjob-iodepth 4k-read-1-265  98.1k   402MB/s   2699.93   usr=8.65% sys=40.14%
8 bs-rw-numjob-iodepth 4k-read-8-1  73.2k   300MB/s   108.57   usr=2.24% sys=14.11%
9 bs-rw-numjob-iodepth 4k-read-8-4  98.5k   403MB/s   323.68   usr=1.96% sys=38.91

处理之后excel_86.txt

5   bs-rw-numjob-iodepth    4k-read-1-64            100k    411MB/s        638.26   usr=7.77%   sys=39.01%
6   bs-rw-numjob-iodepth    4k-read-1-128           98.4k   403MB/s       1300.35   usr=8.04%   sys=39.84%
7   bs-rw-numjob-iodepth    4k-read-1-265           98.1k   402MB/s       2699.93   usr=8.65%   sys=40.14%
8   bs-rw-numjob-iodepth    4k-read-8-1             73.2k   300MB/s        108.57   usr=2.24%   sys=14.11%
9   bs-rw-numjob-iodepth    4k-read-8-4             98.5k   403MB/s        323.68   usr=1.96%   sys=38.91%

例子2

过滤程序输出,提取数据

#!/bin/bash
result="
4k_read: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.13
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=128MiB/s][r=32.7k IOPS][eta 00m:00s]
4k_read: (groupid=0, jobs=1): err= 0: pid=44255: Mon Mar 18 03:32:57 2019
  read: IOPS=32.8k, BW=128MiB/s (134MB/s)(7679MiB/60001msec)
    slat (nsec): min=3722, max=34210, avg=3972.12, stdev=187.36
    clat (usec): min=11, max=1172, avg=25.81, stdev= 4.09
     lat (usec): min=27, max=1176, avg=29.90, stdev= 4.09
    clat percentiles (nsec):
     |  1.00th=[24192],  5.00th=[24192], 10.00th=[24448], 20.00th=[24960],
     | 30.00th=[24960], 40.00th=[24960], 50.00th=[25216], 60.00th=[25216],
     | 70.00th=[25216], 80.00th=[25472], 90.00th=[26240], 95.00th=[28800],
     | 99.00th=[50944], 99.50th=[51456], 99.90th=[55040], 99.95th=[57088],
     | 99.99th=[80384]
   bw (  KiB/s): min=128407, max=135016, per=99.98%, avg=131028.03, stdev=1461.54, samples=119
   iops        : min=32101, max=33754, avg=32756.97, stdev=365.39, samples=119
  lat (usec)   : 20=0.01%, 50=98.44%, 100=1.55%, 250=0.01%, 500=0.01%
  lat (msec)   : 2=0.01%
  cpu          : usr=4.51%, sys=16.50%, ctx=1965949, majf=0, minf=19
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1965940,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=7679MiB (8052MB), run=60001-60001msec

Disk stats (read/write):
  sdb: ios=2120756/0, merge=0/0, ticks=47028/0, in_queue=46712, util=72.05%
"

iops_bandwith=$(echo "$result" | grep "IOPS=")
iops=$(echo $iops_bandwith | awk '{print $2}'|awk -F '[=,]' '{print $2}')
bandwith=$(echo $iops_bandwith | awk -F '[()]' '{print $2}')
lat=$(echo "$result" | grep "\ lat.*avg"| awk -F, '{print $3}'|awk -F= '{print $2}')
cpu=$(echo "$result" | grep cpu |awk -F '[:,]' '{print $2 $3}')

echo $iops $bandwith $lat $cpu

输出结果:

32.8k 134MB/s 29.90 usr=4.51% sys=16.50%
awk 截取字符串
[root@ceph-node00 ceph]# lsblk | grep ceph
└─ceph--cbab595d--da69--431f--b7b6--c52101f10d39-osd--block--2091e673--d027--4b9f--b8c0--6e7f476fc741 253:11   0   7.3T  0 lvm
└─ceph--69275ba9--1d6a--4478--9ccd--1a23f7831b37-osd--block--4b24b591--1b1e--4e29--8c55--d279187e039d 253:9    0   7.3T  0 lvm
└─ceph--7805f320--84b3--4000--a6fb--c32bf9b32a0c-osd--block--4603b474--8bfa--47f7--b69a--0394a727d863 253:4    0   7.3T  0 lvm
└─ceph--2d3a8630--dbfb--4eba--97bf--7fbfb5cc91ef-osd--block--636d1e4b--5a01--4fc4--aa11--260f7356a7bc 253:14   0   7.3T  0 lvm
└─ceph--c4e816d1--6e97--4aef--9abf--7502c94709f6-osd--block--1f3e67b4--166b--408f--ae1a--6e07a4667bec 253:12   0   7.3T  0 lvm
└─ceph--7608df58--0556--424e--9b97--659a4bab1e84-osd--block--b5db2324--be59--4f59--8958--46394f580535 253:10   0   7.3T  0 lvm
└─ceph--7fe90132--69c8--4c15--a60f--7f2037b4230c-osd--block--b1b345d3--8e44--4f5b--807b--f1dcca93b5a2 253:8    0   7.3T  0 lvm
└─ceph--bf42f625--f8a1--4351--8762--3bc84847b90e-osd--block--478d5cda--a78d--41c3--a2f9--253c41e62cba 253:3    0   7.3T  0 lvm
└─ceph--b2a6fe35--c3df--46b3--981c--beaedfc27f53-osd--block--49c7c971--bf3e--481e--be2b--40c021ccb88b 253:13   0   7.3T  0 lvm
[root@ceph-node00 ceph]# lsblk | grep ceph | awk '{print substr($1,3)}'
ceph--cbab595d--da69--431f--b7b6--c52101f10d39-osd--block--2091e673--d027--4b9f--b8c0--6e7f476fc741
ceph--69275ba9--1d6a--4478--9ccd--1a23f7831b37-osd--block--4b24b591--1b1e--4e29--8c55--d279187e039d
ceph--7805f320--84b3--4000--a6fb--c32bf9b32a0c-osd--block--4603b474--8bfa--47f7--b69a--0394a727d863
ceph--dda7a760--1a67--45b0--8992--0148beea4146-osd--block--e887daf2--b51a--4c75--a793--e85c9af286b8
ceph--2d3a8630--dbfb--4eba--97bf--7fbfb5cc91ef-osd--block--636d1e4b--5a01--4fc4--aa11--260f7356a7bc
ceph--c4e816d1--6e97--4aef--9abf--7502c94709f6-osd--block--1f3e67b4--166b--408f--ae1a--6e07a4667bec
ceph--7608df58--0556--424e--9b97--659a4bab1e84-osd--block--b5db2324--be59--4f59--8958--46394f580535
ceph--7fe90132--69c8--4c15--a60f--7f2037b4230c-osd--bl
awk 获取IP地址

获取IP地址

ip a | grep -E "inet [0-9]*.[0-9]*.[0-9]*.[0-9]*/24"| awk '{print $2}' |awk -F '/' '{print $1}'

substr: take the 2nd field from its 5th character onward (i.e. drop the first 4 characters)

ip a | awk '/inet /{print substr($2,5)}' |awk -F '[/]' '{print $1}'

bcache

bcache的回写模式:

writethrough
既写SSD也写HDD, 读如果命中,就可以直接从SSD中读,适用于读多写少的场景
writearound
绕过SSD直接写HDD。
writeback
全部写SSD,后台刷脏数据。
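
The cache mode can be changed at runtime through sysfs; a sketch assuming the device shows up as bcache0:

cat /sys/block/bcache0/bcache/cache_mode                 # current mode is shown in [brackets]
echo writeback > /sys/block/bcache0/bcache/cache_mode    # switch to writeback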

划分bcache分区

for ssdname in sdv sdw sdx sdy; do
    parted -a optimal /dev/$ssdname  mkpart primary 2048s 30GiB
    parted -a optimal /dev/$ssdname  mkpart primary 30GiB 60GiB
    parted -a optimal /dev/$ssdname  mkpart primary 60GiB 90GiB
    parted -a optimal /dev/$ssdname  mkpart primary 90GiB 120GiB
    parted -a optimal /dev/$ssdname  mkpart primary 120GiB 150GiB
    parted -a optimal /dev/$ssdname  mkpart primary 150GiB 165GiB
    parted -a optimal /dev/$ssdname  mkpart primary 165GiB 180GiB
    parted -a optimal /dev/$ssdname  mkpart primary 180GiB 195GiB
    parted -a optimal /dev/$ssdname  mkpart primary 195GiB 210GiB
    parted -a optimal /dev/$ssdname  mkpart primary 210GiB 225GiB
    parted -a optimal /dev/$ssdname  mkpart primary 225GiB 358GiB
    parted -a optimal /dev/$ssdname  mkpart primary 358GiB 491GiB
    parted -a optimal /dev/$ssdname  mkpart primary 491GiB 624GiB
    parted -a optimal /dev/$ssdname  mkpart primary 624GiB 757GiB
    parted -a optimal /dev/$ssdname  mkpart primary 757GiB 890GiB
done

创建bcache

make-bcache -C --wipe-bcache /dev/sdv11 -B /dev/sda
make-bcache -C --wipe-bcache /dev/sdv12 -B /dev/sdb
make-bcache -C --wipe-bcache /dev/sdv13 -B /dev/sdc
make-bcache -C --wipe-bcache /dev/sdv14 -B /dev/sdd
make-bcache -C --wipe-bcache /dev/sdv15 -B /dev/sde

make-bcache -C --wipe-bcache /dev/sdw11 -B /dev/sdf
make-bcache -C --wipe-bcache /dev/sdw12 -B /dev/sdg
make-bcache -C --wipe-bcache /dev/sdw13 -B /dev/sdh
make-bcache -C --wipe-bcache /dev/sdw14 -B /dev/sdi
make-bcache -C --wipe-bcache /dev/sdw15 -B /dev/sdj

make-bcache -C --wipe-bcache /dev/sdx11 -B /dev/sdk
make-bcache -C --wipe-bcache /dev/sdx12 -B /dev/sdl
make-bcache -C --wipe-bcache /dev/sdx13 -B /dev/sdm
make-bcache -C --wipe-bcache /dev/sdx14 -B /dev/sdn
make-bcache -C --wipe-bcache /dev/sdx15 -B /dev/sdo

make-bcache -C --wipe-bcache /dev/sdy11 -B /dev/sdp
make-bcache -C --wipe-bcache /dev/sdy12 -B /dev/sdq
make-bcache -C --wipe-bcache /dev/sdy13 -B /dev/sdr
make-bcache -C --wipe-bcache /dev/sdy14 -B /dev/sds
make-bcache -C --wipe-bcache /dev/sdy15 -B /dev/sdt

删除bcache

echo 1 > /sys/fs/bcache/04f7ecc3-b55b-4f3d-b843-621acc41c3f7/unregister
echo 1 > /sys/fs/bcache/05121aa6-45c3-4613-8a5b-f04370f31c21/unregister
echo 1 > /sys/fs/bcache/0b99a0ac-6dde-49f5-b729-20bc1a815e14/unregister
echo 1 > /sys/fs/bcache/21777519-de77-4c25-b5e5-da9da3f5ca6f/unregister
echo 1 > /sys/fs/bcache/2404ebe1-40b9-49dc-b24c-73d9dcc80235/unregister
echo 1 > /sys/fs/bcache/2535a73f-5538-47d8-b213-903f427134bf/unregister
echo 1 > /sys/fs/bcache/33028cb9-fc8b-4d61-b9d7-e7728bb25503/unregister
echo 1 > /sys/fs/bcache/36671630-3c78-443a-8344-6e73fc0a627a/unregister
echo 1 > /sys/fs/bcache/44193335-016a-4bef-92fa-97d2d87dbb42/unregister
echo 1 > /sys/fs/bcache/45fd1bde-fae6-4da8-8f61-2f67731a6970/unregister
echo 1 > /sys/fs/bcache/483ddcc5-1f17-40d4-bc95-bdfd37b3b04c/unregister
echo 1 > /sys/fs/bcache/53c3b6e8-463c-4cf4-9943-2f9c80a20729/unregister
echo 1 > /sys/fs/bcache/53ce3bef-23ce-4533-8bbf-1a1d4bd8562d/unregister
echo 1 > /sys/fs/bcache/5653aaad-ccae-4195-a6e4-cc22a4fe0567/unregister
echo 1 > /sys/fs/bcache/7826e7fe-dc4d-4b15-a194-f095839ca2f5/unregister
echo 1 > /sys/fs/bcache/8a015751-3163-4715-9320-0d633fc46e6e/unregister
echo 1 > /sys/fs/bcache/a333a60f-ae3b-4e8e-8bc0-61707005e7a2/unregister
echo 1 > /sys/fs/bcache/ac9a2a04-0e12-4bdf-a76b-382e9f1e4e42/unregister
echo 1 > /sys/fs/bcache/cb5c9671-26a0-4d7a-a765-bcb82960257e/unregister
echo 1 > /sys/fs/bcache/dcf44cab-5545-4dc6-ad9f-c7ad5d3441ed/unregister
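
Unregistering the cache sets detaches the caches; the bcache backing devices themselves are normally stopped through their own sysfs node as well. A sketch, assuming the devices are exposed as /dev/bcache*:

for dev in /sys/block/bcache*/bcache; do
    echo 1 > "$dev/stop"        # stop this bcache backing device
done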

After removing bcache, wipe the start of each disk again with dd (dd_wipe_disk.sh):

#!/bin/bash

for i in {a..t};
do
        echo sd$i
        dd if=/dev/zero of=/dev/sd$i bs=1M count=1
done

for ssd in v w x y;
do
        for i in {11..15};
        do
                echo sd$ssd$i
                dd if=/dev/zero of=/dev/sd"$ssd""$i" bs=1M count=1
        done
done

问题记录

  1. Building bcache-tools fails: blkid.h not found
make-bcache.c:11:10: fatal error: blkid.h: No such file or directory
#include <blkid.h>
          ^~~~~~~~~
compilation terminated.
make: *** [<builtin>: make-bcache] Error 1

解决办法:

yum install libblkid-devel
  2. Building bcache-tools fails: undefined reference to `crc64'
[root@localhost bcache-tools-1.0.8]# make
cc -O2 -Wall -g `pkg-config --cflags uuid blkid`    make-bcache.c bcache.o  `pkg-config --libs uuid blkid` -o make-bcache
/usr/bin/ld: /tmp/ccMKyCXr.o: in function `write_sb':
/root/tools/bcache-tools-1.0.8/make-bcache.c:277: undefined reference to `crc64'
collect2: error: ld returned 1 exit status
make: *** [<builtin>: make-bcache] Error 1

[1] [2]

[1]https://ypdai.github.io/2018/07/13/bcache%E9%85%8D%E7%BD%AE%E4%BD%BF%E7%94%A8/
[2]https://www.kernel.org/doc/Documentation/bcache.txt

bitcoin

在ARM上区块链 [1] 的支持情况如何,能否使用比特币。

有钱包即可使用比特币

目前钱包应用很多,有手机版,桌面版,专用硬件版,网页版 [2] 。可以根据情况到官网网址进行选择。

这里主要看下桌面版能否支持ARM,在上面的网址下载的安装包是bitcoin-0.19.0.1-arm-linux-gnueabihf.tar.gz。查看里面的二进制文件目前不支持ARM64的。

不过我们在 bitcoin-core的网站上找到了ARM64版本bitcoin-0.19.0.1-aarch64-linux-gnu.tar.gz [3]

下载比特币软件。

wget https://bitcoincore.org/bin/bitcoin-core-0.19.0.1/bitcoin-0.19.0.1-aarch64-linux-gnu.tar.gz
tar xf bitcoin-0.19.0.1-aarch64-linux-gnu.tar.gz
cd bitcoin-0.19.0.1/bin

启动服务,会自动同步区块

./bitcoind

 2020-07-16T01:53:17Z dnsseed thread exit
 2020-07-16T01:53:19Z Synchronizing blockheaders, height: 4000 (~0.66%)
 2020-07-16T01:53:21Z New outbound peer connected: version: 70015, blocks=639437, peer=5 (full-relay)
 2020-07-16T01:53:22Z New outbound peer connected: version: 70015, blocks=639437, peer=6 (full-relay)
 2020-07-16T01:53:23Z New outbound peer connected: version: 70015, blocks=639437, peer=7 (full-relay)
 2020-07-16T01:53:25Z New outbound peer connected: version: 70015, blocks=639437, peer=8 (full-relay)
 2020-07-16T01:53:33Z Synchronizing blockheaders, height: 6000 (~0.99%)
 2020-07-16T01:53:37Z Synchronizing blockheaders, height: 8000 (~1.33%)
 2020-07-16T01:53:43Z Synchronizing blockheaders, height: 10000 (~1.66%)
 2020-07-16T01:53:50Z Synchronizing blockheaders, height: 12000 (~1.99%)
 2020-07-16T01:53:53Z Synchronizing blockheaders, height: 14000 (~2.33%)
 2020-07-16T01:53:57Z Synchronizing blockheaders, height: 16000 (~2.66%)
 2020-07-16T01:54:06Z Synchronizing blockheaders, height: 18000 (~3.00%)
 2020-07-16T01:54:14Z Synchronizing blockheaders, height: 20000 (~3.35%)

同步数据可能需要很长时间,少则一两个小时,多则10多个小时,取决于和服务器的链接速度。

Query the wallet information. Note that the hdseedid shown below (3a0f7a2e3ba2e1d4810db537959421be866c1f6c) identifies the wallet's HD seed; it is not a receiving address ::

[user1@centos bin]$ ./bitcoin-cli getwalletinfo
{
  "walletname": "",
  "walletversion": 169900,
  "balance": 0.00000000,
  "unconfirmed_balance": 0.00000000,
  "immature_balance": 0.00000000,
  "txcount": 0,
  "keypoololdest": 1578981187,
  "keypoolsize": 1000,
  "keypoolsize_hd_internal": 1000,
  "paytxfee": 0.00000000,
  "hdseedid": "3a0f7a2e3ba2e1d4810db537959421be866c1f6c",
  "private_keys_enabled": true,
  "avoid_reuse": false,
  "scanning": false
}

再创建一个钱包

root@40ab90fdd8df:~/bitcoin-0.19.0.1/bin# ./bitcoin-cli createwallet redwallet
{
"name": "redwallet",
"warning": ""
}
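
To actually receive coins you need a receiving address (the hdseedid above is not one). A sketch with bitcoin-cli; -rpcwallet selects the wallet created above:

./bitcoin-cli getnewaddress                        # new receiving address from the default wallet
./bitcoin-cli -rpcwallet=redwallet getnewaddress   # new receiving address from "redwallet"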

常用命令

  1. bitcoin-cli
bitcoin-cli getwalletinfo       # show wallet information
bitcoin-cli getnetworkinfo      # show network status
bitcoin-cli getpeerinfo         # show connected peers
bitcoin-cli getblockchaininfo   # show blockchain information, e.g. sync progress
bitcoin-cli help                # list all commands
  2. bitcoind
./bitcoind                      # start the bitcoin daemon
./bitcoind -daemon              # run as a background service (add -conf=<file> to use a specific configuration file)

搭建运行自定义区块链服务

区块链可以取消中间人,可以实现peer-to-peer的交易。

主要在金融领域应用和论证。全球范围内超过 90% 的中央银行已经开始了这方面的论证

[1]https://github.com/bitcoin/bitcoin
[2]https://bitcoin.org/zh_CN/choose-your-wallet?step=5&platform=linux
[3]https://bitcoincore.org/bin/bitcoin-core-0.19.0.1/

blkdiscard

blkdiscard可以清除块设备上的分区,对SSD非常有用

blkdiscard /dev/sdv
blkdiscard /dev/sdw
blkdiscard /dev/sdx
blkdiscard /dev/sdy

Check whether a disk supports TRIM: non-zero DISC-GRAN and DISC-MAX values indicate TRIM support.

[root@ceph1 ~]# lsblk --discard
NAME   DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
sda           0        0B       0B         0
sdb           0        0B       0B         0
sdc           0        0B       0B         0
sdd           0        0B       0B         0
sde           0        0B       0B         0
sdf           0        0B       0B         0
sdg           0        0B       0B         0
sdh           0        0B       0B         0
sdi           0        0B       0B         0
sdj           0        0B       0B         0
sdk           0        0B       0B         0
sdl           0        0B       0B         0
sdm           0        0B       0B         0
sdn           0        0B       0B         0
sdo           0        0B       0B         0
sdp           0        0B       0B         0
sdq           0        0B       0B         0
sdr           0        0B       0B         0
sds           0        0B       0B         0
sdt           0        0B       0B         0
sdu           0        0B       0B         0
├─sdu1        0        0B       0B         0
├─sdu2        0        0B       0B         0
├─sdu3        0        0B       0B         0
├─sdu4        0        0B       0B         0
└─sdu5        0        0B       0B         0
sdv           0        4K       2G         0
sdw           0        4K       2G         0
sdx           0        4K       2G         0
sdy           0        4K       2G         0

blockchain

区块链

热门的区块链技术

  1. bitcoin(数字货币) [1]

    这是最早也是最有名的区块链实施项目。一种可保存在电子钱包中的数字货币。

  2. Ripple(区块链技术平台)

    Ripple 能提供全天候、实时、同步、透明且信息丰富的交易。在交易开始前,您能够实时确认汇率和费用,立即完成付款。 Ripple 解决方案为 MasterCard 和 Visa 等发卡机构整合了付款消息传递和资金结算功能

  3. Ethereum(数字货币)

    扩展比特币平台,让比特币能处理数字货币以外的普通交易。为了做到这一点,他们需要一个强大的脚本语言,来编写“智能合同”中的业务逻辑。由于没有与比特币团队达成共识,Vitalik 另起炉灶,启动了一个新平台的开发,也就是我们所说的Ethereum。 Ethereum 的首个生产版本于 2015 年 7 月发布。Ethereum 支持一种名为 Ether 的加密货币。

  4. Hyperledger(区块链技术平台)

    Hyperledger 的重心是为企业构建区块链平台。考虑到支持数字货币所带来的风险,Hyperledger 决定不涉足该领域。这是它与 Ethereum 的一大区别。它的设计重点放在安全性、可扩展性和隐私上面,其他实施项目(如 Ethereum)同样面临这些挑战。巨头参与,比如 IBM,因特尔,埃森哲,摩根大通,富国银行,空客,三星集团

  5. ipfs/filecoin [2]

    星际文件系统IPFS(Inter-Planetary File System)是一个面向全球的、点对点的分布式版本文件系统,目标是为了补充(甚至是取代)目前统治互联网的超文本传输协议(HTTP),将所有具有相同文件系统的计算设备连接在一起。原理用基于内容的地址替代基于域名的地址,也就是用户寻找的不是某个地址而是储存在某个地方的内容,不需要验证发送者的身份,而只需要验证内容的哈希,通过这样可以让网页的速度更快、更安全、更健壮、更持久。

    filecoin [3] 是ipfs上的一个代币,而filecoin就是通过贡献闲置的硬盘来作为奖励矿工的一种方式。Filecoin采用了一种全新的算法(工作量证明),简单的来说,就是你拥有的硬盘容量够大,那么你获取的filecoin奖励就越多

  6. Dragonchain

    龙链(Dragonchain)是迪士尼打造的混合公有/私有区块链的区块链平台。 龙链是另一种用来保持记录和处理交易的区块链。它与比特币的底层技术十分相似,但又有一点不同。龙链是一种多币制的区块链,节点就可以随之定义一种货币并支持其使用。该网络上可以同时使用多种货币。龙链的共识机制可以支持一种或多种现有的共识机制(Trust,PoW,PoS),甚至是可以支持自己定义和创建一种新的共识机制。

区块链统计
区块链应用 厂商 支持语言 部署方式
bitcoin 开源社区 C/C++ 公有云
Ripple Ripple C++、python 公有云或者私有云
Ethereum 开源社区 Solidity、Python 、 C++ 和 Java 公有云(Azure, AWS)
Hyperledger 开源社区,linux基金会 Go language、 Java 和 JavaScript Docker, IBM bluemix
ipfs 开源社区 Go,JavaScript,Python,C 公有云
Filecoin 开源社区 Go 公有云
Dragonchain 迪士尼->龙链基金会 Go 公有云或私有云
[1]https://bitcoincore.org/
[2]https://filecoin.io/
[3]filecoin 编译安装

filecoin

去中心化的存储网络

filecoin 目前没有ARM64版本。

[user1@centos filecoin]$ pwd
/home/user1/open_software/filecoin-release/filecoin
[user1@centos filecoin]$ file ./*
./go-filecoin: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=357de502b13f0450cbe7b1fc0ed73fadffe9e1f5, not stripped
./paramcache:  ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=1c5add2b39bb2cd4c383af6cbef91fe9c4495af3, not stripped

Building filecoin requires downloading a large number of Go modules, which are blocked on this network.

[user1@centos go-filecoin]$
[user1@centos go-filecoin]$ FILECOIN_USE_PRECOMPILED_RUST_PROOFS=true go run ./build deps
pkg-config --version
0.27.1
Installing dependencies...
go mod download
 13.32 KiB / 13.32 KiB [===============================] 100.00% 100.43 KiB/s 0s
 147.90 MiB / 147.90 MiB [================================================================================================================================================] 100.00% 588.52 KiB/s 4m17s
 4.88 KiB / 4.88 KiB [========================================================================================================================================================] 100.00% 27.33 KiB/s 0s
 13.32 KiB / 13.32 KiB [======================================================================================================================================================] 100.00% 81.60 KiB/s 0s
 4.88 KiB / 4.88 KiB [========================================================================================================================================================] 100.00% 55.46 MiB/s 0s
 13.32 KiB / 13.32 KiB [=====================================================================================================================================================] 100.00% 378.19 KiB/s 0s
 2.04 GiB / 2.48 GiB [=======================================================================================================================>--------------------------]  82.07% 587.53 KiB/s 1h0m35s
 4.88 KiB / 4.88 KiB [========================================================================================================================================================] 100.00% 10.93 MiB/s 0s
 4.88 KiB / 4.88 KiB [========================================================================================================================================================] 100.00% 44.05 MiB/s 0s
 4.88 KiB / 4.88 KiB [===============================

执行成功出现:

                                 Dload  Upload   Total   Spent    Left  Speed
100 9498k  100 9498k    0     0   548k      0  0:00:17  0:00:17 --:--:--  593k
+ [[ 0 -ne 0 ]]
+ eval 'tarball_path='\''/tmp/filecoin-ffi-Linux_16941733.tar.gz'\'''
++ tarball_path=/tmp/filecoin-ffi-Linux_16941733.tar.gz
++ mktemp -d
+ tmp_dir=/tmp/tmp.hWE9Bq7GHa
+ tar -C /tmp/tmp.hWE9Bq7GHa -xzf /tmp/filecoin-ffi-Linux_16941733.tar.gz
+ find -L /tmp/tmp.hWE9Bq7GHa -type f -name filecoin.h -exec cp -- '{}' . ';'
+ find -L /tmp/tmp.hWE9Bq7GHa -type f -name libfilecoin.a -exec cp -- '{}' . ';'
+ find -L /tmp/tmp.hWE9Bq7GHa -type f -name filecoin.pc -exec cp -- '{}' . ';'
+ echo 'successfully installed prebuilt libfilecoin'
successfully installed prebuilt libfilecoin

filecoin project layout

lotus---------------------------------main project https://github.com/filecoin-project/lotus.git
|-- extern
|   |-- filecoin-ffi------------------FFI bindings https://github.com/filecoin-project/filecoin-ffi.git
|   |                                 filcrypto.h filcrypto.pc libfilcrypto.a
|   |
|   `-- serialization-vectors---------serialization test vectors https://github.com/filecoin-project/serialization-vectors

问题记录

缺少opencl
# github.com/filecoin-project/filecoin-ffi
/opt/rh/devtoolset-8/root/usr/libexec/gcc/aarch64-redhat-linux/8/ld: cannot find -lOpenCL
collect2: error: ld returned 1 exit status

解决办法

sudo dnf install -y ocl-icd-devel.aarch64
输入文件是x86的
lecoin.a(futures_cpupool-1f3bf26aa9279af0.futures_cpupool.ahnnhqyk-cgu.3.rcgu.o)' is incompatible with aarch64 output
/opt/rh/devtoolset-8/root/usr/libexec/gcc/aarch64-redhat-linux/8/ld: i386:x86-64 architecture of input file `/home/user1/open_software/gopath/src/github.com/filecoin-project/go-filecoin/vendors/filecoin-ffi/libfilecoin.a(futures_cpupool-1f3bf26aa9279af0.futures_cpupool.ahnnhqyk-cgu.4.rcgu.o)' is incompatible with aarch64 output
/opt/rh/devtoolset-8/root/usr/libexec/gcc/aarch64-redhat-linux/8/ld: i386:x86-64 architecture of input file \`/home/user1/open_software/gopath/src/github.com/filecoin-project/go-filecoin/vendors/filecoin-ffi/libfilecoin.a(qutex-8dfbe8197b98ccc5.qutex.8mzkyvtz-cgu.0.rcgu.o)' is incompatible with aarch64 output
/opt/rh/devtoolset-8/root/usr/libexec/gcc/aarch64-redhat-linux/8/ld: i386:x86-64 architecture of input file `/home/user1/open_software/gopath/src/github.com/filecoin-project/go-filecoin/vendors/filecoin-ffi/libfilecoin.a(qutex-8dfbe8197b98ccc5.qutex.8mzkyvtz-cgu.1.rcgu.o)' is incompatible with aarch64 output
/opt/rh/devtoolset-8/root/usr/libexec/gcc/aarch64-redhat-linux/8/ld: i386:x86-64 architecture of input file `/home/user1/open_software/gopath/src/github.com/filecoin-project/go-filecoin/vendors/filecoin-ffi/libfilecoin.a(blake2s_simd-e06fbb96181f173a.blake2s_simd.cqrh7vav-cgu.11.rcgu.o)' is incompatible with aarch64 output
/opt/rh/devtoolset-8/root/usr/libexec/gcc/aarch64-redhat-linux/8/ld: i386:x86-64 architecture of input file `/home/user1/open_software/gopath/src/github.com/filecoin-project/go-filecoin/vendors/filecoin-ffi/libfilecoin.a(crossbeam_utils-e8dfdc01aecf4d4c.crossbeam_utils.av4hkwzx-cgu.0.rcgu.o)' is incompatible with aarch64 output
/opt/rh/devtoolset-8/root/usr/libexec/gcc/aarch64-redhat-linux/8/ld: i386:x86-64 architecture of input file `/home/user1/open_software/gopath/src/github.com/filecoin-project/go-filecoin/vendors/filecoin-ffi/libfilecoin.a(blake2b_simd-8e21006b644a8dcd.blake2b_simd.du1wdeab-cgu.11.rcgu.o)' is incompatible with aarch64 o

Not yet resolved. The Go build probably did not produce an aarch64 FFI library: the vendored libfilecoin.a still contains x86-64 objects.

board_connect

board_connect is an internal tool for connecting to boards, but sometimes the connection cannot be established.

lixianfa@BoardServer2:~$ board_connect 4
/home/lixianfa/grub.cfg  #sync to#  /home/hisilicon/ftp/grub.cfg-01-00-18-c0-a8-02-81
Connected to board: No=185, type=D05.
Info: SOL payload already de-activated
[SOL Session operational.  Use ~? for help]

这个时候需要修改grub启动项。

Redhat-D06

编辑/etc/default/grub添加console=ttyAMA0,115200 完整的/etc/default/grub如下:

GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_SAVEDEFAULT=true
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet console=ttyAMA0,115200"
GRUB_DISABLE_RECOVERY="true"

更新grub.cfg:

grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg #务必注意grub.cfg的路径是否是这个路径。

Ubuntu 18.04.1 LTS-D05

编辑/etc/default/grub添加console=ttyAMA0,115200 完整的/etc/default/grub如下:

GRUB_DEFAULT=saved
GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT=2
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT=""
GRUB_CMDLINE_LINUX="console=ttyAMA0,115200"

更新grub.cfg:

sudo grub-mkconfig -o /boot/grub/grub.cfg

bond

在服务器看来,bond就是多个网口组bond,变成一个虚拟网口,在虚拟网口上设置一个IP地址,虚拟网口拥有更大的带宽。 在交换机看来,bond就是链路聚合,多个端口聚合在一起形成一个更大带宽的链路。 交换机不仅可以本台设备上聚合,还可以跨设备聚合。

服务器的配置

bond的配置目标
+--------------------------------------------------------------------+
|                        交换机                                      |
|                                                                    |
|                 XGigabitEthernet0/0/3        XGigabitEthernet0/0/7 |
|     XGigabitEthernet0/0/1    XGigabitEthernet0/0/5                 |
|         +--+    +--+             +--+      +--+                    |
+--------------------------------------------------------------------+
          +--+    +--+             +--+      +--+
           |       |                |         |
           |       |                |         |
           | bond0 |                |   bond1 |
           |       |                |         |
    +-----++-+----++-+-------------++-+------++-+--------+
    |     |  |    |  |             |  |      |  |        |
    |     +--+    +--+             +--+      +--+        |
    |    enp137s0 enp138s0        enp139s0    enp140s0   |
    |                                                    |
    +----------------------------------------------------+
                   服务器
enp137s0的配置

主要关注MASTER指定为bond0, SLAVE指定为yes

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp137s0
UUID=83891043-9a97-395a-9da5-a313db1b33ab
ONBOOT=yes
DEVICE=enp137s0
MASTER=bond0
SLAVE=yes
enp138s0的配置

主要关注MASTER指定为bond0, SLAVE指定为yes

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp138s0
UUID=64d5f715-da01-30c2-a8a8-fc5ea2dbada0
ONBOOT=yes
DEVICE=enp138s0
MASTER=bond0
SLAVE=yes
enp139s0的配置

主要关注MASTER指定为bond1, SLAVE指定为yes

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp139s0
UUID=88cc9027-34b8-30c1-97eb-78a339bca915
ONBOOT=yes
DEVICE=enp139s0
MASTER=bond1
SLAVE=yes
enp140s0的配置

主要关注MASTER指定为bond1, SLAVE指定为yes

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp140s0
UUID=c40a6a9e-0bac-37c0-80f6-e62b684a57d7
ONBOOT=yes
DEVICE=enp140s0
MASTER=bond1
SLAVE=yes
bond0的配置
DEVICE=bond0
NAME=bond0
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
ONBOOT=yes
IPADDR=128.10.100.10
NETMASK=255.255.255.0
BONDING_OPTS="mode=4 miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4"
bond1的配置
DEVICE=bond1
NAME=bond1
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
ONBOOT=yes
IPADDR=128.10.200.10
NETMASK=255.255.255.0
BONDING_OPTS="mode=4 miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4"
bond的配置结果
10: enp137s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 9c:52:f8:91:c3:c3 brd ff:ff:ff:ff:ff:ff
11: enp138s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 9c:52:f8:91:c3:c3 brd ff:ff:ff:ff:ff:ff
12: enp139s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether 9c:52:f8:91:c3:c5 brd ff:ff:ff:ff:ff:ff
13: enp140s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether 9c:52:f8:91:c3:c5 brd ff:ff:ff:ff:ff:ff
14: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 9c:52:f8:91:c3:c5 brd ff:ff:ff:ff:ff:ff
    inet 128.10.200.10/24 brd 128.10.200.255 scope global noprefixroute bond1
       valid_lft forever preferred_lft forever
    inet6 fe80::9e52:f8ff:fe91:c3c5/64 scope link
       valid_lft forever preferred_lft forever
15: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 9c:52:f8:91:c3:c3 brd ff:ff:ff:ff:ff:ff
    inet 128.10.100.10/24 brd 128.10.100.255 scope global noprefixroute bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::9e52:f8ff:fe91:c3c3/64 scope link
       valid_lft forever preferred_lft forever

网口enp137s0和enp138s0属于bond0,网口enp139s0和enp140s0属于bond1。bond0和bond1是生成的虚拟网口
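
Besides ip a, the server-side LACP negotiation can be checked from the kernel bonding status files (assuming the bond names above):

cat /proc/net/bonding/bond0               # mode, LACP partner info and slave state for bond0
cat /proc/net/bonding/bond1               # same for bond1
cat /sys/class/net/bond0/bonding/mode     # quick check of the bonding mode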

交换机的配置

交换机配置过程

创建一个trunk接口,设置lacp

interface Eth-Trunk8
 port link-type trunk
 mode lacp
 undo local-preference enable

interface Eth-Trunk9
 port link-type trunk
 mode lacp
 undo local-preference enable

把其中两个接口绑定到eth-trunk 8是bond0。绑定两个接口到eth-trunk 9对应bond1

interface XGigabitEthernet0/0/1
 flow-control
 eth-trunk 8
interface XGigabitEthernet0/0/3
 flow-control
 eth-trunk 8

interface XGigabitEthernet0/0/5
 flow-control
 eth-trunk 9
interface XGigabitEthernet0/0/7
 flow-control
 eth-trunk 9
#
交换机配置结果:

可以看到接口Eth-Trunk8包含了两个10GE口。

 [Quidway-Eth-Trunk8]dis interface brief
 PHY: Physical
 *down: administratively down
 (l): loopback
 (s): spoofing
 (E): E-Trunk down
 (b): BFD down
 (e): ETHOAM down
 (dl): DLDP down
 (d): Dampening Suppressed
 InUti/OutUti: input utility/output utility
 Interface                   PHY   Protocol InUti OutUti   inErrors  outErrors

   XGigabitEthernet0/0/29    up    up          0%     0%          0          0
   XGigabitEthernet0/0/31    up    up          0%     0%          0          0
 Eth-Trunk6                  up    up          0%     0%          0          0
   XGigabitEthernet0/0/30    up    up          0%     0%          0          0
   XGigabitEthernet0/0/32    up    up          0%     0%          0          0
 Eth-Trunk7                  down  down        0%     0%          0          0
   XGigabitEthernet0/0/34    up    up          0%     0%          0          0
   XGigabitEthernet0/0/36    down  down        0%     0%          0          0
 Eth-Trunk8                  up    up       0.02%  0.02%          0          0
   XGigabitEthernet0/0/1     up    up       0.02%  0.02%          0          0
   XGigabitEthernet0/0/3     up    up       0.01%  0.02%          0          0
 Eth-Trunk9                  up    up       0.01%  0.01%          0          0
   XGigabitEthernet0/0/5     up    up       0.02%  0.01%          0          0
   XGigabitEthernet0/0/7     up    up       0.01%  0.02%          0          0
 Eth-Trunk10                 up    up       0.01%  0.01%          0          0
   XGigabitEthernet0/0/2     up    up          0%  0.01%          0          0
   XGigabitEthernet0/0/4     up    up       0.01%  0.01%          0          0
 Eth-Trunk11                 up    up          0%     0%          0          0
   XGigabitEthernet0/0/6     up    up       0.01%     0%          0          0
[Quidway]display eth-trunk 8
Eth-Trunk8's state information is:
Local:
LAG ID: 8                   WorkingMode: LACP
Preempt Delay: Disabled     Hash arithmetic: According to SIP-XOR-DIP
System Priority: 32768      System ID: 94db-da37-c340
Least Active-linknumber: 1  Max Active-linknumber: 8
Operate status: up          Number Of Up Port In Trunk: 2
--------------------------------------------------------------------------------
ActorPortName          Status   PortType PortPri PortNo PortKey PortState Weight
XGigabitEthernet0/0/1  Selected 10GE     32768   15     2113    10111100  1
XGigabitEthernet0/0/3  Selected 10GE     32768   16     2113    10111100  1

Partner:
--------------------------------------------------------------------------------
ActorPortName          SysPri   SystemID        PortPri PortNo PortKey PortState
XGigabitEthernet0/0/1  65535    9c52-f891-c3c3  255     1      15      11111100
XGigabitEthernet0/0/3  65535    9c52-f891-c3c3  255     2      15      11111100

[Quidway]display eth-trunk 9
Eth-Trunk9's state information is:
Local:
LAG ID: 9                   WorkingMode: LACP
Preempt Delay: Disabled     Hash arithmetic: According to SIP-XOR-DIP
System Priority: 32768      System ID: 94db-da37-c340
Least Active-linknumber: 1  Max Active-linknumber: 8
Operate status: up          Number Of Up Port In Trunk: 2
--------------------------------------------------------------------------------
ActorPortName          Status   PortType PortPri PortNo PortKey PortState Weight
XGigabitEthernet0/0/5  Selected 10GE     32768   17     2369    10111100  1
XGigabitEthernet0/0/7  Selected 10GE     32768   18     2369    10111100  1

Partner:
--------------------------------------------------------------------------------
ActorPortName          SysPri   SystemID        PortPri PortNo PortKey PortState
XGigabitEthernet0/0/5  65535    9c52-f891-c3c5  255     1      15      11111100
XGigabitEthernet0/0/7  65535    9c52-f891-c3c5  255     2      15      11111100

[Quidway]

On interface Eth-Trunk8 the switch has learned the server MAC address 9c52-f891-c3c3; on Eth-Trunk9 it has learned the server MAC address 9c52-f891-c3c5.

 [Quidway]display mac-address
 -------------------------------------------------------------------------------
 MAC Address    VLAN/VSI                          Learned-From        Type
 -------------------------------------------------------------------------------
 0000-0000-0316 1/-                               XGE0/0/48           dynamic
 0001-0263-0405 1/-                               XGE0/0/48           dynamic
 0001-0800-00b6 1/-                               XGE0/0/48           dynamic
 9c52-f891-c3c3 1/-                               Eth-Trunk8          dynamic
 9c52-f891-c3c5 1/-                               Eth-Trunk9          dynamic
 9c52-f892-15f3 1/-                               Eth-Trunk6          dynamic
 9c52-f892-4d23 1/-                               Eth-Trunk14         dynamic

Deleting an Eth-Trunk

<HUAWEI> system-view
[~HUAWEI] interface eth-trunk 8
[~HUAWEI-Eth-Trunk8] undo trunkport  XGigabitEthernet0/0/1
[*HUAWEI-Eth-Trunk8] undo trunkport  XGigabitEthernet0/0/3
[~HUAWEI-Eth-Trunk8] quit
[~HUAWEI] undo interface eth-trunk 8
[*HUAWEI] save

Other notes

Link aggregation supports several modes; mode 4 (LACP / 802.3ad) is used here. Look up the other modes as needed.
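For reference, a minimal sketch of the matching server-side bond in mode 4 (802.3ad) using iproute2; the interface names eth0/eth1 and the bond name bond0 are assumptions, adjust them for the actual host:

ip link add bond0 type bond mode 802.3ad miimon 100   # create an LACP (mode 4) bond
ip link set eth0 down
ip link set eth0 master bond0                         # enslave the first 10GE port
ip link set eth1 down
ip link set eth1 master bond0                         # enslave the second 10GE port
ip link set bond0 up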

brctl

Common brctl commands

brctl addbr stage       # create virtual switch (bridge) "stage"
brctl delbr stage       # delete bridge "stage"; the bridge must be down first
brctl addif dev eth0    # attach interface eth0 to bridge "dev"
brctl delif dev eth0    # detach interface eth0 from bridge "dev"
brctl stp dev off       # disable STP on bridge "dev"
brctl stp dev on        # enable STP on bridge "dev"
[user1@centos bin]$ brctl show
bridge name     bridge id               STP enabled     interfaces
dev             8000.000000000000       no
docker0         8000.02420d8a54b5       no
prod            8000.000000000000       no
stage           8000.000000000000       no
virbr0          8000.5254003852f7       yes             virbr0-nic
[user1@centos bin]$

virbr0-nic is a NIC attached to virbr0.
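A minimal end-to-end sketch of the brctl workflow; the bridge name br-test and the interface eth1 are assumptions:

brctl addbr br-test        # create the bridge
brctl addif br-test eth1   # attach a physical interface
ip link set br-test up     # bring the bridge up
brctl show br-test         # verify
ip link set br-test down   # tear it down again
brctl delif br-test eth1
brctl delbr br-test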

References

  1. brctl Command Examples for Ethernet Network Bridge https://www.thegeekstuff.com/2017/06/brctl-bridge/

btsync

A P2P sync tool; running it with Docker is fairly convenient.

DATA_FOLDER=/path/to/data/folder/on/the/host
WEBUI_PORT=<port to access the webui on the host>

mkdir -p $DATA_FOLDER

docker run -d --name Sync \
        -p 127.0.0.1:$WEBUI_PORT:8888 \
        -p 55555 \
        -v $DATA_FOLDER:/mnt/sync \
        --restart on-failure \
        resilio/sync

Open the web UI and enter a key to start syncing, e.g. B7P64IMWOCXWEYOXIMBX6HN5MHEULFS4V.

calico

[root@master1 ~]# ./calicoctl-linux-arm64 node status
Calico process is running.

IPv4 BGP status
+-----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS   |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+-----------------+-------------------+-------+----------+-------------+
| 192.168.122.103 | node-to-node mesh | up    | 01:38:37 | Established |
| 192.168.122.104 | node-to-node mesh | up    | 01:39:00 | Established |
+-----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

[root@master1 ~]#

Problem: /bin/sh: clang: not found

[WARN  tini (6)] Tini is not running as PID 1 and isn't registered as a child subreaper.
Zombie processes will not be re-parented to Tini, so zombie reaping won't work.
To fix the problem, use the -s option or set the environment variable TINI_SUBREAPER to register Tini as a child subreaper, or run Tini as PID 1.
Starting with UID : 1000
make: Entering directory '/go/src/github.com/projectcalico/node/bin/bpf/bpf-apache'
/bin/sh: clang: not found
make: *** [Makefile:52: sockops.d] Error 127
make: *** Waiting for unfinished jobs....
/bin/sh: clang: not found
make: *** [Makefile:52: redir.d] Error 127
/bin/sh: clang: not found
make: *** [Makefile:52: filter.d] Error 127
make: Leaving directory '/go/src/github.com/projectcalico/node/bin/bpf/bpf-apache'
Makefile:150: recipe for target 'remote-deps' failed
make: *** [remote-deps] Error 2
apt install clang

Problem: <built-in>'/include/generated/uapi/linux/version.h' file not found

set -e; rm -f connect_balancer.d; \
        clang -M -x c -D__KERNEL__ -D__ASM_SYSREG_H -D__LINUX_BPF_H__ -Wno-unused-value -Wno-pointer-sign -Wno-compare-distinct-pointer-types -Wunused -Wall -fno-stack-protector -O2 -emit-llvm --include=/usr/src/linux-headers-5.6.0-0.bpo.2-common/include/uapi/linux/bpf.h --include=/include/generated/uapi/linux/version.h connect_balancer.c > connect_balancer.d.$$ || { rm -f connect_balancer.d.$$; false; } ; \
        sed 's,\(connect_balancer\)\.o[ :]*,\1.o connect_balancer.d : ,g' < connect_balancer.d.$$ > connect_balancer.d; \
        rm -f connect_balancer.d.$$
set -e; rm -f tc.d; \
        clang -M -x c -D__KERNEL__ -D__ASM_SYSREG_H -D__LINUX_BPF_H__ -Wno-unused-value -Wno-pointer-sign -Wno-compare-distinct-pointer-types -Wunused -Wall -fno-stack-protector -O2 -emit-llvm --include=/usr/src/linux-headers-5.6.0-0.bpo.2-common/include/uapi/linux/bpf.h --include=/include/generated/uapi/linux/version.h tc.c > tc.d.$$ || { rm -f tc.d.$$; false; } ; \
        sed 's,\(tc\)\.o[ :]*,\1.o tc.d : ,g' < tc.d.$$ > tc.d; \
        rm -f tc.d.$$
<built-in>:2:10: fatal error: <built-in>'/include/generated/uapi/linux/version.h' file not found:
2:10: fatal error: '/include/generated/uapi/linux/version.h' file not found
#include "/include/generated/uapi/linux/version.h"
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#include "/include/generated/uapi/linux/version.h"
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
make: *** [Makefile:105: tc.d] Error 1
make: *** Waiting for unfinished jobs....
1 error generated.
make: *** [Makefile:105: connect_balancer.d] Error 1
make: Leaving directory '/go/src/github.com/projectcalico/node/bin/bpf/bpf-gpl'
Makefile:150: recipe for target 'remote-deps' failed
make: *** [remote-deps] Error 2

Ceph

Ceph is a high-performance distributed storage system. It provides three kinds of services: object storage, block storage, and a file system.

A ceph cluster needs at least a Ceph Monitor, a Ceph Manager, and Ceph OSDs. If you want to use the Ceph File System, a metadata server (Ceph MDS) is also required.
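To see which of these daemons a node is actually running, the systemd units can be listed; a minimal sketch, assuming the stock ceph packaging's ceph-<type>@<id> unit naming:

systemctl list-units 'ceph-mon@*' 'ceph-mgr@*' 'ceph-osd@*' 'ceph-mds@*' --all --no-legend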

The following uses installing ceph 12.2.11 as an example.

Install and uninstall ceph

yum install -y ceph
yum remove -y ceph


ceph -s
ceph -w
ceph df
osdmaptool osd1.map --upmap out1.txt --upmap-pool cephfs_data --upmap-max 300

List the cluster's pools. Below you can see there is one pool, with ID 1, named volumes.

[root@ceph-node00 ~]# ceph osd lspools
1 volumes
[root@ceph-node00 ~]# ceph osd pool ls detail
pool 1 'volumes' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 4096 pgp_num 4096 autoscale_mode warn last_change 1644 lfor 0/0/739 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
        removed_snaps [1~5]

List the virtual disks (RBD images) in the cluster

[root@ceph-node00 ~]# rbd ls volumes
test-000
test-001
test-002
test-003
test-004
test-005
test-006
test-007
test-008

Inspect a virtual disk

[root@ceph-node00 ~]# rbd info volumes/test-319
rbd image 'test-319':
        size 400 GiB in 102400 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 7de74cf76e78
        block_name_prefix: rbd_data.7de74cf76e78
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Fri Jul  5 23:30:08 2019
        access_timestamp: Sat Jul  6 15:11:10 2019
        modify_timestamp: Sat Jul  6 15:25:48 2019

Manual deployment

Set up the mon on node1 (ubuntu).

Generate a UUID.

root@ubuntu:~# uuidgen
8b9fb887-8b58-4391-b002-a7e5fa5947e2

ceph.conf

fsid = 8b9fb887-8b58-4391-b002-a7e5fa5947e2
# use node1 (ubuntu) as the mon node
mon initial members = ubuntu
# mon node address
mon host = 192.168.1.10
public network = 192.168.1.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
# replica count
osd pool default size = 3
# minimum replica count
osd pool default min size = 1
osd pool default pg num = 64
osd pool default pgp num = 64
osd crush chooseleaf type = 1
osd_mkfs_type = xfs
max mds = 5
mds max file size = 100000000000000
mds cache size = 1000000
# after an OSD has been down for 900 s, evict it from the cluster and remap the data that mapped to it onto other nodes
mon osd down out interval = 900

[mon]
# allow a clock drift of 0.5 s (default 0.05 s); the heterogeneous machines in this cluster always drift by more than 0.05 s, so 0.5 s keeps synchronization simple
mon clock drift allowed = .50

Download the binary packages

ubuntu

wget -q http://download.ceph.com/debian-{release}/pool/main/c/ceph/ceph_{version}{distro}_{arch}.deb
wget -q http://download.ceph.com/debian-luminous/pool/main/c/ceph/ceph_13.2.0bionic_x86_64.deb

Ceph preflight: add the release key

wget -q -O - 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

If it was added successfully, you can see the key:

me@ubuntu:~$ apt-key list
/etc/apt/trusted.gpg
--------------------
pub   rsa4096 2015-09-15 [SC]
      08B7 3419 AC32 B4E9 66C1  A330 E84A C2C0 460F 3994
uid           [ unknown] Ceph.com (release key) <security@ceph.com>
echo deb https://download.ceph.com/debian-luminous/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

redhat

rpm --import 'https://download.ceph.com/keys/release.asc'

Problems

Install the following packages one by one:

libaio1
libsnappy1
libcurl3
curl
libgoogle-perftools4
google-perftools
libleveldb1

dpkg -i libaio1_0.3.110-5_arm64.deb
dpkg -i libsnappy1v5_1.1.7-1_arm64.deb
dpkg -i curl_7.58.0-2ubuntu3.6_arm64.deb
dpkg -i libleveldb1v5_1.20-2_arm64.deb
dpkg -i librbd1_12.2.11-0ubuntu0.18.04.1_arm64.deb
dpkg -i librados* librados-dev_12.2.11-0ubuntu0.18.04.1_arm64.deb

libcurl3 conflicts with libcurl4

root@ubuntu:# dpkg -i libcurl3_7.58.0-2ubuntu2_arm64.deb
Selecting previously unselected package libcurl3:arm64.
dpkg: regarding libcurl3_7.58.0-2ubuntu2_arm64.deb containing libcurl3:arm64:
 libcurl3 conflicts with libcurl4
  libcurl4:arm64 (version 7.58.0-2ubuntu3.6) is present and installed.

dpkg: error processing archive libcurl3_7.58.0-2ubuntu2_arm64.deb (--install):
 conflicting packages - not installing libcurl3:arm64
Errors were encountered while processing:
 libcurl3_7.58.0-2ubuntu2_arm64.deb
libgoogle-perftools4 will be missing a dependency
root@ubuntu:# dpkg -i libgoogle-perftools4_2.5-2.2ubuntu3_arm64.deb
(Reading database ... 133811 files and directories currently installed.)
Preparing to unpack libgoogle-perftools4_2.5-2.2ubuntu3_arm64.deb ...
Unpacking libgoogle-perftools4 (2.5-2.2ubuntu3) over (2.5-2.2ubuntu3) ...
dpkg: dependency problems prevent configuration of libgoogle-perftools4:
 libgoogle-perftools4 depends on libtcmalloc-minimal4 (= 2.5-2.2ubuntu3); however:
  Package libtcmalloc-minimal4 is not installed.

dpkg: error processing package libgoogle-perftools4 (--install):
 dependency problems - leaving unconfigured
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Errors were encountered while processing:
 libgoogle-perftools4
root@ubuntu:/home/lxf/201/ceph_standalone/deb#

Fix: download and install libtcmalloc-minimal4.

Install all of the deb packages with dpkg -i:

Errors were encountered while processing: libcurl3_7.58.0-2ubuntu2_arm64.deb ceph-common ceph-mgr ceph libgoogle-perftools4 radosgw ceph-base ceph-mon google-perftools

dpkg -i libtcmalloc-minimal4_2.5-2.2ubuntu3_arm64.deb
dpkg -i libgoogle-perftools4_2.5-2.2ubuntu3_arm64.deb
dpkg -i python-prettytable_0.7.2-3_all.deb
dpkg -i libbabeltrace1_1.5.5-1_arm64.deb
dpkg -i ceph-common_12.2.11-0ubuntu0.18.04.1_arm64.deb
dpkg -i ceph-base_12.2.11-0ubuntu0.18.04.1_arm64.deb
dpkg -i ceph-mon_12.2.11-0ubuntu0.18.04.1_arm64.deb
dpkg -i ceph-mgr_12.2.11-0ubuntu0.18.04.1_arm64.deb

Ceph operate

ceph daemon osd.2 config show       # show the OSD's configuration
ceph daemon osd.2 perf reset        # reset the OSD's performance counters
ceph daemon osd.2 perf dump > a.txt # dump the performance counters to a file
ceph pg dump                        # show the PG distribution
for i in {40..59};do ceph daemon osd.$i config set osd_max_backfills 10;done

The goal is to build a new cluster like this:

[root@192e168e100e118 ~]# ceph -s
  cluster:
    id:     6534efb5-b842-40ea-b807-8e94c398c4a9
    health: HEALTH_WARN
            noscrub,nodeep-scrub flag(s) set

  services:
    mon: 5 daemons, quorum ceph-node00,ceph-node01,ceph-node06,ceph-node07,ceph-node02 (age 2w)
    mgr: ceph-node00(active, since 2w)
    osd: 96 osds: 96 up (since 12d), 96 in (since 2w)
         flags noscrub,nodeep-scrub

  data:
    pools:   9 pools, 4096 pgs
    objects: 40.96M objects, 156 TiB
    usage:   491 TiB used, 230 TiB / 721 TiB avail
    pgs:     4096 active+clean

Teardown procedure

CephFS (file storage) teardown

Unmount on the clients

Unmount the ceph file system on all clients:

pssh -h client_hosts.txt -i -P "umount /mnt/cephfs"

The mount point here is /mnt/cephfs.

ceph1,ceph2,ceph3,ceph4:/ on /mnt/cephfs type ceph (rw,relatime,sync,name=admin,secret=<hidden>,acl,wsize=16777216)
[root@client1 vdbench]# pssh -h client_hosts.txt -i -P "umount /mnt/cephfs"
[1] 16:11:37 [SUCCESS] root@client1:22
[2] 16:11:37 [SUCCESS] root@client3:22
[3] 16:11:37 [SUCCESS] root@client4:22
[4] 16:11:37 [SUCCESS] root@client2:22
[root@client1 vdbench]#
[root@client1 vdbench]#

Confirm the client count is 0

[root@ceph1 ~]# ceph fs status
cephfs - 0 clients
======
+------+--------+-------+---------------+-------+-------+
| Rank | State  |  MDS  |    Activity   |  dns  |  inos |
+------+--------+-------+---------------+-------+-------+
|  0   | active | ceph4 | Reqs:    0 /s | 5111  | 5113  |
+------+--------+-------+---------------+-------+-------+
+-----------------+----------+-------+-------+
|       Pool      |   type   |  used | avail |
+-----------------+----------+-------+-------+
| cephfs_metadata | metadata |  156M |  183T |
|   cephfs_data   |   data   |  976G |  183T |
+-----------------+----------+-------+-------+

+-------------+
| Standby MDS |
+-------------+
|    ceph2    |
|    ceph3    |
|    ceph1    |
+-------------+
MDS version: ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)
[root@ceph1 ~]#
Stop the MDS processes

Stop the mds processes on the ceph nodes:

pssh -h backend_hosts.txt -i -P "systemctl stop ceph-mds.target"
[root@client1 vdbench]# pssh -h backend_hosts.txt -i -P "systemctl stop ceph-mds.target"
[1] 16:17:27 [SUCCESS] root@ceph2:22
Stderr:
Authorized users only. All activities may be monitored and reported.
[2] 16:17:27 [SUCCESS] root@ceph4:22
Stderr:
Authorized users only. All activities may be monitored and reported.
[3] 16:17:27 [SUCCESS] root@ceph1:22
Stderr:
Authorized users only. All activities may be monitored and reported.
[4] 16:17:27 [SUCCESS] root@ceph3:22
Stderr:
Authorized users only. All activities may be monitored and reported.
[root@client1 vdbench]#

Sometimes the following error appears:

[root@ceph2 ~]# ceph fs rm cephfs
Error EINVAL: all MDS daemons must be inactive before removing filesystem

See the problem log below for how to resolve this.

Delete the backend file-storage pools
[root@ceph1 ~]# ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
pool 'cephfs_metadata' removed
[root@ceph1 ~]# ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
pool 'cephfs_data' removed
[root@ceph1 ~]#

If an error says the MONs must be configured to allow pool deletion,
/etc/ceph/ceph.conf needs to contain:

[mon]
mon_allow_pool_delete = true
Delete the pools
# delete the file-storage pools
ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it

# delete the block-storage pools
ceph osd pool delete images images --yes-i-really-really-mean-it
ceph osd pool delete volumes volumes --yes-i-really-really-mean-it
Stop the OSD processes

Run systemctl stop ceph-osd.target on every ceph node:

[root@client1 bin]# pssh -h backend_hosts.txt -i -P "systemctl stop ceph-osd.target"
[1] 16:43:35 [SUCCESS] root@ceph2:22
Stderr:
Authorized users only. All activities may be monitored and reported.
[2] 16:43:35 [SUCCESS] root@ceph3:22
Stderr:
Authorized users only. All activities may be monitored and reported.
[3] 16:43:35 [SUCCESS] root@ceph4:22
Stderr:
Authorized users only. All activities may be monitored and reported.
[4] 16:43:35 [SUCCESS] root@ceph1:22
Stderr:
Authorized users only. All activities may be monitored and reported.
Remove the HDD OSDs

Run the removal on a node that can administer the cluster:

for i in {0..95}; do
    ceph osd down osd.$i
    ceph osd out osd.$i
    ceph osd crush remove osd.$i
    ceph auth del osd.$i
    ceph osd rm osd.$i
done

Check the removal result:

[root@ceph1 bin]# ceph osd tree
ID CLASS WEIGHT TYPE NAME      STATUS REWEIGHT PRI-AFF
-1            0 root default
-3            0     host ceph1
-5            0     host ceph2
-7            0     host ceph3
-9            0     host ceph4

Unmount the OSDs on every ceph node

umount /var/lib/ceph/osd/ceph-*
rm -rf /var/lib/ceph/osd/ceph-*
Delete the LVM volumes on every ceph node

Method 1:

lvs | grep osd | awk '{print $2}' | xargs lvremove -y       # remove the logical volumes first
vgs | grep ceph | awk '{print $1}' | xargs vgremove -y      # then remove the volume groups; sometimes this one command is enough on its own

You can run the commands above on a single machine, or across all nodes with pdsh:

pdsh -w '^arm.txt' 'lvs | grep osd | awk {print $2} | xargs lvremove -y'
pdsh -w '^arm.txt' 'vgs | grep ceph | awk {print $1} | xargs vgremove -y '

Because the wrapped command itself contains single quotes, the inner quotes around the awk program have to be dropped or escaped when passed through pdsh.

Method 2:

lsblk | grep ceph |awk '{print substr($1,3)}'                           # list all the ceph LVM device-mapper entries
lsblk | grep ceph |awk '{print substr($1,3)}' | xargs dmsetup remove    # list them all and remove them

You can also remove a single specific one:

dmsetup remove ceph--7c7c2721--5dfc--45e4--9946--5316e21087df-osd--block--92276738--1bbe--4229--a094--761ceda16812

Before removal

[root@ceph1 bin]# lsblk
NAME                                                                                                      MAJ:MIN  RM   SIZE RO TYPE MOUNTPOINT
loop0                                                                                                       7:0     0   4.2G  0 loop /mnt/euler
sda                                                                                                         8:0     0   7.3T  0 disk
└─bcache0                                                                                                 251:0     0   7.3T  0 disk
  └─ceph--1f0cdb93--553b--4ae9--a70d--44d1f330d564-osd--block--ace0eccc--eba3--4216--a66a--b9725ec56cdf   250:0     0   7.3T  0 lvm
sdb                                                                                                         8:16    0   7.3T  0 disk
└─bcache1                                                                                                 251:128   0   7.3T  0 disk
  └─ceph--d1c3ee5c--41a7--4662--be22--c5bc3e78ad69-osd--block--8a9951cf--33ac--4246--a6a7--36048e5852bf   250:1     0   7.3T  0 lvm
sdc                                                                                                         8:32    0   7.3T  0 disk
└─bcache2                                                                                                 251:256   0   7.3T  0 disk
  └─ceph--0bea6159--6d83--4cd5--be49--d1b4a74c4007-osd--block--476506ce--64a3--461e--8ffc--78de4f29a0ed   250:2     0   7.3T  0 lvm
sdd                                                                                                         8:48    0   7.3T  0 disk
└─bcache3                                                                                                 251:384   0   7.3T  0 disk
  └─ceph--8efa3be6--8448--47ff--9653--4f9d52439f80-osd--block--a4659aeb--bbc5--4ca0--8e4f--656b3ca47aad   250:3     0   7.3T  0 lvm
sde

After removal

[root@ceph1 bin]# lsblk
NAME         MAJ:MIN  RM   SIZE RO TYPE MOUNTPOINT
loop0          7:0     0   4.2G  0 loop /mnt/euler
sda            8:0     0   7.3T  0 disk
└─bcache0    251:0     0   7.3T  0 disk
sdb            8:16    0   7.3T  0 disk
└─bcache1    251:128   0   7.3T  0 disk
sdc            8:32    0   7.3T  0 disk
└─bcache2    251:256   0   7.3T  0 disk
sdd            8:48    0   7.3T  0 disk
Remove bcache (skip this if bcache is not used)
pssh -h backend_hosts.txt -i -P -I < resetbcache.sh

Before removal

[root@ceph1 bin]# lsblk
NAME         MAJ:MIN  RM   SIZE RO TYPE MOUNTPOINT
loop0          7:0     0   4.2G  0 loop /mnt/euler
sda            8:0     0   7.3T  0 disk
└─bcache0    251:0     0   7.3T  0 disk
sdb            8:16    0   7.3T  0 disk
└─bcache1    251:128   0   7.3T  0 disk
sdc            8:32    0   7.3T  0 disk
└─bcache2    251:256   0   7.3T  0 disk
sdd            8:48    0   7.3T  0 disk
└─bcache3    251:384   0   7.3T  0 disk
sde            8:64    0   7.3T  0 disk
└─bcache4    251:512   0   7.3T  0 disk
sdf            8:80    0   7.3T  0 disk

After removal

[root@ceph1 dzw]# lsblk
NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0     7:0    0   4.2G  0 loop /mnt/euler
sda       8:0    0   7.3T  0 disk
sdb       8:16   0   7.3T  0 disk
sdc       8:32   0   7.3T  0 disk
sdd       8:48   0   7.3T  0 disk
sde       8:64   0   7.3T  0 disk
sdf       8:80   0   7.3T  0 disk
sdg       8:96   0   7.3T  0 disk
sdh       8:112  0   7.3T  0 disk
sdi       8:128  0   7.3T  0 disk
sdj       8:144  0   7.3T  0 disk
sdk       8:160  0   7.3T  0 disk
sdl       8:176  0   7.3T  0 disk

It is best to dd over every HDD and SSD partition once:

for ssd in v w x y;
do
        for i in {1..15};
        do
                echo sd$ssd$i
                dd if=/dev/zero of=/dev/sd"$ssd""$i" bs=1M count=1
        done
done

Warning

At this point the teardown is complete and OSDs can be re-added. The steps below go further: they wipe all disks and re-partition them.

Uninstall ceph
yum remove -y ceph
Creating a file-storage (CephFS) cluster

Partition the SSDs, configure the cache, and install ceph:

yum install -y ceph

ceph osd pool create cephfs_data 2048
ceph osd pool create cephfs_metadata 2048
ceph fs new cephfs cephfs_metadata cephfs_data
cat ceph.client.admin.keyring
Wipe the HDDs (and SSDs, if present) on every machine
for disk in {a..l}
    do parted -s /dev/sd${disk} mklabel gpt
    ceph-volume lvm zap /dev/sd${disk} --destroy
done
for ssd_disk in nvme0n1 nvme1n1
    do parted -s /dev/$ssd_disk mklabel gpt
    ceph-volume lvm zap /dev/$ssd_disk --destroy
done
Gather keys on the deploy node
ceph-deploy gatherkeys ceph-node00
for node in {00..07}; do
    ceph-deploy gatherkeys ceph-node${node}
done
Create the HDD OSDs

Normally, run the creation on the ceph-deploy node:

for node in {00..07}; do
    for disk in {a..l};do
        ceph-deploy osd create --data /dev/sd${disk} ceph-node${node}
        sleep 2
    done
done

If the SSDs should serve as WAL and DB devices, run this on every node:

vgcreate ceph-db /dev/nvme0n1
vgcreate ceph-wal /dev/nvme1n1
for index in {a..l};do
    lvcreate -n ceph-db-$index -L 240G ceph-db;
    lvcreate -n ceph-wal-$index -L 240G ceph-wal;
done

Normally, run this on the deploy node:

for node in {00..07}; do
    for disk in {a..l};do
        ceph-deploy --overwrite-conf osd create --data /dev/sd${disk} ceph-node${node}
    done
done

If the SSDs should serve as WAL and DB devices, run this on every node:

vgcreate ceph-db /dev/nvme0n1
vgcreate ceph-wal /dev/nvme1n1
for node in {00..07}; do
    for disk in {a..l};do
        ceph-deploy --overwrite-conf osd create --data /dev/sd${disk} \
        --block-db ceph-db/ceph-db-$disk \
        --block-wal ceph-wal/ceph-wal-$disk ceph-node${node}
    done
done
Create the pools

Create a pool the normal way:

ceph osd pool create volumes 4096 4096
ceph osd pool application enable volumes rbd

If an EC (erasure-coded) pool is needed:

ceph osd erasure-code-profile set testprofile k=4 m=2   # create a profile named testprofile; k+m is 4+2, tolerating 2 failed OSDs. See other docs for the remaining parameters
ceph osd erasure-code-profile get testprofile   # show the created profile
ceph osd crush rule create-erasure test_profile_rule test_profile # create a crush rule from the profile
ceph osd crush rule ls  # list all rules
ceph osd crush rule dump test_profile_rule  # show one rule's configuration

ceph osd pool create volumes test_profile test_profile_rule
ceph osd pool set volumes allow_ec_overwrites true
ceph osd pool application enable volumes rbd

ceph osd crush rule create-replicated replicated_volumes default host
ceph osd pool create volumes_replicated_metadata replicated replicated_volumes
ceph osd pool create volumes_repli_metadata 1024 1024 replicated replicated_volumes
ceph osd pool application enable volumes_repli_metadata rbd


Create RBDs

Create 400 RBD images in total:

for i in {000..399};do rbd create size3/test-$i --size 400G; done

This takes about 2 minutes. If it is an EC pool:

for i in {000..399};do
    rbd create volumes_repli_metadata/test-$i --size 400G --data-pool volumes;
done
Write data
pdcp -w ^dell.txt fill_hdd_data.sh /root/rbd_test/
pdsh -w ^dell.txt 'cd /root/rbd_test; . fill_hdd_data.sh'
Check RBD usage
for index in {000..399};do
    rbd du volumes/test-$index
done

SSD cluster retest

Wipe the SSDs
parted /dev/nvme1n1 -s mklabel gpt
parted /dev/nvme0n1 -s mklabel gpt
Gather keys
ceph-deploy gatherkeys
ceph-deploy osd create --data /dev/nvme0n1 ceph-node00
ceph-deploy osd create --data /dev/nvme1n1 ceph-node00
Create the pool
[root@ceph-node00 ~]# ceph osd pool create volumes 4096 4096
Error ERANGE:  pg_num 4096 size 3 would mean 12288 total pgs, which exceeds max 4000 (mon_max_pg_per_osd 250 * num_in_osds 16)
[root@ceph-node00 ~]# ceph osd pool create volumes 512 512
Create RBDs

Create 50 RBD images in total:

for i in {01..50};do
    rbd create --size 100G volumes/test-$i
done
Fill the RBDs with data
pdsh -w ^dell.txt "cd /root/rbd_test;. fill_nvm2_data.sh"
Check RBD usage
for index in {01..50};do
    rbd du volumes/test-$index
done

Other common operations

Collect results
for host in `cat ../dell.txt`; do
    scp -r root@${host}:/root/rbd_test/192/* ./;
done
Distribute scripts
for host in `cat dell.txt`; do
    scp do_fio.sh root@${host}:/root/rbd_test/;
done
for host in `cat dell.txt`; do
    scp rmhostname.sh root@${host}:/root/rbd_test/;
done
Reboot into the BIOS
for host in `cat BMC_arm.txt`; do
    ipmitool -I lanplus -H ${host} -U Administrator -P Admin@9000 chassis bootdev bios;
    wait ;
done
Run a single test
fio315 -runtime=120     \
        -size=100%  \
        -bs=4k      \
        -rw=read    \
        -ioengine=rbd   \
        -direct=1       \
        -iodepth=32     \
        -numjobs=1  \
        -clientname=admin \
        -pool=volumes   \
        -ramp_time=10   \
        -rbdname=test-13 \
        --output="$(date "+%Y-%m-%d-%H%M")".json \
        -name="$(date "+%Y-%m-%d-%H%M")".json
Aggregate the JSON result files
py /home/monitor/test_script/parase_fio.py ./
Disable failed OSDs
systemctl | grep ceph-osd | grep fail | awk '{print $2}'
systemctl | grep ceph-osd | grep fail | awk '{print $2}' | xargs systemctl disable
systemctl | grep ceph-osd | grep fail | awk '{print $2}' | xargs systemctl status
Pinning ceph to CPU cores

First try `taskset -acp 0-23 {osd-pid}` to see how much it helps performance. If it helps, then adjust the ceph configuration.

Bind to NUMA node 2 (cores 48-71)

for osd_pid in $(pgrep ceph-osd); do taskset -acp 48-71 $osd_pid ;done
for osd_pid in $(pgrep ceph-osd); do ps -o thcount $osd_pid ;done
Inspect daemon and cluster state with the daemon command
ceph daemon mon.cu-pve04 help       # show the monitor's command help
ceph daemon mon.cu-pve04 sessions   # list the monitor's client sessions
ceph daemon osd.0 config show
ceph daemon osd.0 help              # show command help
ceph daemon osd.0 "dump_historic_ops_by_duration" # show recent slow ops ordered by duration
noscrub settings
ceph osd set noscrub        # stop scrubbing
ceph osd unset noscrub      # re-enable scrubbing
Effect of deleting the LVM volumes
sdk                                                                                                     8:160  0   7.3T  0 disk
sdi                                                                                                     8:128  0   7.3T  0 disk
sdg                                                                                                     8:96   0   7.3T  0 disk
└─ceph--e59eb57a--ca76--4b1c--94f5--723d83acf023-osd--block--8f205c61--80b5--4251--9fc4--52132f71f378 253:11   0   7.3T  0 lvm
nvme1n1                                                                                               259:0    0   2.9T  0 disk
└─ceph--192b4f4b--c3d0--48d2--a7df--1d721c96ad41-osd--block--4f61b14a--0412--4891--90c6--75cad9f68be8 253:2    0   2.9T  0 lvm
sde                                                                                                     8:64   0   7.3T  0 disk
└─ceph--ae498ea1--917c--430e--bdf9--cb76720b12cd-osd--block--8d20de06--7b58--48de--90a0--6353cada8c82 253:9    0   7.3T  0 lvm
sdc                                                                                                     8:32   0   7.3T  0 disk
└─ceph--69b9fdfb--f6f0--427d--bea8--379bec4a15dc-osd--block--0642e902--89c1--4490--bd9a--e1986c0eb50b 253:7    0   7.3T  0 lvm
sdl                                                                                                     8:176  0   7.3T  0 disk
sda                                                                                                     8:0    0   7.3T  0 disk
└─ceph--f7113ad8--a34e--4bb2--9cb8--8b27f48e7ce1-osd--block--8d67b2c0--1490--4a51--839a--2ea472fb53c8 253:5    0   7.3T  0 lvm
sdj                                                                                                     8:144  0   7.3T  0 disk
nvme0n1                                                                                               259:1    0   2.9T  0 disk
└─ceph--869d506c--83be--4abe--aaf6--70cf7900d5ff-osd--block--fede0b19--429d--4ec5--9c21--352c6b43f1d1 253:3    0   2.9T  0 lvm
sdh                                                                                                     8:112  0   7.3T  0 disk
[root@ceph-node03 ~]#
[root@ceph-node03 ~]#
[root@ceph-node03 ~]#
[root@ceph-node03 ~]#
[root@ceph-node03 ~]# lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdf               8:80   0   7.3T  0 disk
sdd               8:48   0   7.3T  0 disk
sdm               8:192  0 446.1G  0 disk
├─sdm3            8:195  0 444.9G  0 part
│ ├─centos-swap 253:1    0     4G  0 lvm
│ ├─centos-home 253:4    0 390.9G  0 lvm  /home
│ └─centos-root 253:0    0    50G  0 lvm  /
├─sdm1            8:193  0   200M  0 part /boot/efi
└─sdm2            8:194  0     1G  0 part /boot
sdb               8:16   0   7.3T  0 disk
sdk               8:160  0   7.3T  0 disk
sdi               8:128  0   7.3T  0 disk
sdg               8:96   0   7.3T  0 disk
nvme1n1         259:0    0   2.9T  0 disk
sde               8:64   0   7.3T  0 disk
sdc               8:32   0   7.3T  0 disk
sdl               8:176  0   7.3T  0 disk
sda               8:0    0   7.3T  0 disk
sdj               8:144  0   7.3T  0 disk
nvme0n1         259:1    0   2.9T  0 disk
sdh               8:112  0   7.3T  0 disk

Problem log

Problem: unable to inform the kernel of the change
[root@ceph2 ~]# parted /dev/sdy mklabel gpt
Warning: The existing disk label on /dev/sdy will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
Error: Partition(s) 11, 12, 13, 14, 15 on /dev/sdy have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use.  As a result, the old partition(s) will
remain in use.  You should reboot now before making further changes.
Ignore/Cancel? yes^C
[root@ceph2 ~]#

Fix: bcache was not removed cleanly. Use find to locate the leftover bcache entries:

find / -name bcache
Problem: Error EINVAL: all MDS daemons must be inactive before removing filesystem
[root@ceph2 ~]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[root@ceph2 ~]# ceph fs rm cephfs
Error EINVAL: all MDS daemons must be inactive before removing filesystem

At this point you need to manually fail the last mds [1]:

ceph mds fail ceph1 ceph2 ceph3 ceph4
[1] Manually failing an mds: https://www.spinics.net/lists/ceph-users/msg17960.html
Problem: stderr: Failed to find PV /dev/bcache1

The actual command when adding the OSD was:

ceph-deploy osd create ceph1 --data /dev/bcache0 --block-db /dev/sdv6 --block-wal /dev/sdv1
[ceph1][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/bcache1 --block.wal /dev/sdv2 --block.db /dev/sdv7
[ceph1][WARNIN] -->  RuntimeError: command returned non-zero exit status: 5
[ceph1][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
[ceph1][DEBUG ] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 726db1da-7d82-4071-8818-f74323cd068d
[ceph1][DEBUG ] Running command: vgcreate --force --yes ceph-974225f2-04dc-49cb-805f-f1b36a4ea98b /dev/bcache1
[ceph1][DEBUG ]  stderr: Failed to find PV /dev/bcache1
[ceph1][DEBUG ] --> Was unable to complete a new OSD, will rollback changes
[ceph1][DEBUG ] --> OSD will be fully purged from the cluster, because the ID was generated
[ceph1][DEBUG ] Running command: ceph osd purge osd.0 --yes-i-really-mean-it
[ceph1][DEBUG ]  stderr: purged osd.0
[ceph1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/bcache1 --block.wal /dev/sdv2 --block.db /dev/sdv7
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs

Fix:

One machine had the following line commented out in /etc/lvm/lvm.conf; uncomment it.

global_filter = [ "a|/dev/bcache0|", "a|/dev/bcache1|", "a|/dev/bcache2|", "a|/dev/bcache3|", "a|/dev/bcache4|", "a|/dev/bcache5|", "a|/dev/bcache6|", "a|/dev/bcache7|", "a|/dev/bcache8|", "a|/dev/bcache9|", "a|/dev/bcache10|", "a|/dev/bcache11|", "a|/dev/bcache12|", "a|/dev/bcache13|", "a|/dev/bcache14|", "a|/dev/bcache15|", "a|/dev/bcache16|", "a|/dev/bcache17|", "a|/dev/bcache18|", "a|/dev/bcache19|" ]

ceph tuning

Double bluestore_cache_size_ssd:

[root@ceph1 ceph]# ceph --show-config | grep ssd
bluestore_cache_size_ssd = 3221225472
[osd]
bluestore_cache_size_ssd = 6442450944

Pin OSDs to cores

On each ceph node, bind every OSD process to two cores.
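A minimal sketch of doing that, assuming the cores are handed out in consecutive pairs starting from core 0 (the pgrep order decides which OSD gets which pair):

core=0
for osd_pid in $(pgrep ceph-osd); do
    taskset -acp ${core}-$((core + 1)) ${osd_pid}   # bind this OSD's threads to a pair of cores
    core=$((core + 2))
done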

Pin the MDS to a core

On each ceph node, give the MDS process one dedicated core, pinned to core 40.

Change the SSD I/O scheduler

Set the scheduler of the SSDs to none here:

for f in `ls -d /sys/block/sd[v-y]`;do echo none > $f/queue/scheduler;done

Drop caches

echo 3 > /proc/sys/vm/drop_caches

PG balancing

One approach is to use upmap [1].

Problem log

  1. During preheating, the SSD read bandwidth would stop climbing after a while, staying around 200 MB/s, with svctm (service time) above 2 ms, whereas it is normally under 1 ms. In one test this turned out not to affect the results: once read operations were mixed in, the service time returned to the usual few tenths of a millisecond. (svctm can be sampled with iostat, as sketched below.)
[1] Kunpeng tuning discussion on the Huawei Cloud forum <https://bbs.huaweicloud.com/forum/thread-26303-1-1.html>
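To watch svctm while preheating, iostat can sample it; note that recent sysstat releases have dropped the svctm column, and the device names below (sd[v-y], the SSDs as named earlier) are an assumption:

iostat -x 1 /dev/sd[v-y]   # extended device statistics, refreshed every second, SSDs only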

ceph pg balance

Balance the PGs in ceph [1] [2] so that each OSD carries roughly the same number of PGs.

Enable balancing

ceph balancer status            # check the status
ceph mgr module enable balancer # enable the module (on by default)
ceph balancer on                # turn balancing on
ceph balancer mode upmap        # set the mode (adjusts PG mappings)
ceph balancer mode crush-compat # set the mode (adjusts weights)
ceph osd set-require-min-compat-client luminous  # set the compat version (required for upmap mode)
less /var/log/ceph/ceph.audit.log                # inspect the adjustment log
ceph osd df                                      # inspect the result

ceph osd getmap -o osd1.map
osdmaptool osd.map --upmap out.txt --upmap-pool cephfs_data
osdmaptool osd1.map --upmap out1.txt --upmap-pool cephfs_data --upmap-max 300

Before balancing

[root@ceph1 ~]# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE  USE     AVAIL %USE VAR  PGS
 0   ssd 7.27739  1.00000 7452G  67386M 7386G 0.88 0.76 127
 1   ssd 7.27739  1.00000 7452G  84810M 7369G 1.11 0.96 154
 2   ssd 7.27739  1.00000 7452G  90694M 7363G 1.19 1.03 163
 3   ssd 7.27739  1.00000 7452G  95630M 7358G 1.25 1.08 163
 4   ssd 7.27739  1.00000 7452G  99986M 7354G 1.31 1.13 154
 5   ssd 7.27739  1.00000 7452G  72930M 7380G 0.96 0.83 139
 6   ssd 7.27739  1.00000 7452G  89674M 7364G 1.18 1.02 149
 7   ssd 7.27739  1.00000 7452G  83170M 7370G 1.09 0.94 143
 8   ssd 7.27739  1.00000 7452G  89210M 7364G 1.17 1.01 168
 9   ssd 7.27739  1.00000 7452G  81254M 7372G 1.06 0.92 161
10   ssd 7.27739  1.00000 7452G  86194M 7367G 1.13 0.98 142
11   ssd 7.27739  1.00000 7452G  79042M 7374G 1.04 0.90 135
12   ssd 7.27739  1.00000 7452G  91098M 7363G 1.19 1.03 162
13   ssd 7.27739  1.00000 7452G  86222M 7367G 1.13 0.98 160
14   ssd 7.27739  1.00000 7452G  83950M 7370G 1.10 0.95 158
15   ssd 7.27739  1.00000 7452G  75662M 7378G 0.99 0.86 162
16   ssd 7.27739  1.00000 7452G  76582M 7377G 1.00 0.87 158
17   ssd 7.27739  1.00000 7452G  79886M 7374G 1.05 0.90 143
18   ssd 7.27739  1.00000 7452G  80078M 7373G 1.05 0.91 142
19   ssd 7.27739  1.00000 7452G  86226M 7367G 1.13 0.98 155
20   ssd 7.27739  1.00000 7452G  98662M 7355G 1.29 1.12 162
21   ssd 7.27739  1.00000 7452G  97010M 7357G 1.27 1.10 170
22   ssd 7.27739  1.00000 7452G  91694M 7362G 1.20 1.04 163
23   ssd 7.27739  1.00000 7452G  83654M 7370G 1.10 0.95 152
24   ssd 7.27739  1.00000 7452G  84558M 7369G 1.11 0.96 144
25   ssd 7.27739  1.00000 7452G  93190M 7361G 1.22 1.06 148
26   ssd 7.27739  1.00000 7452G  96062M 7358G 1.26 1.09 163
27   ssd 7.27739  1.00000 7452G  78042M 7375G 1.02 0.88 150
28   ssd 7.27739  1.00000 7452G  94458M 7359G 1.24 1.07 136
29   ssd 7.27739  1.00000 7452G  89714M 7364G 1.18 1.02 137
30   ssd 7.27739  1.00000 7452G    101G 7350G 1.36 1.17 175
31   ssd 7.27739  1.00000 7452G    101G 7350G 1.36 1.18 169
32   ssd 7.27739  1.00000 7452G  90318M 7363G 1.18 1.02 161
33   ssd 7.27739  1.00000 7452G  87510M 7366G 1.15 0.99 147
34   ssd 7.27739  1.00000 7452G  99118M 7355G 1.30 1.12 180
35   ssd 7.27739  1.00000 7452G  84338M 7369G 1.11 0.96 138
36   ssd 7.27739  1.00000 7452G  78362M 7375G 1.03 0.89 149
37   ssd 7.27739  1.00000 7452G  83610M 7370G 1.10 0.95 142
38   ssd 7.27739  1.00000 7452G  82210M 7371G 1.08 0.93 163
39   ssd 7.27739  1.00000 7452G  78970M 7374G 1.03 0.89 156
40   ssd 7.27739  1.00000 7452G  95034M 7359G 1.25 1.08 145
41   ssd 7.27739  1.00000 7452G  95082M 7359G 1.25 1.08 151
42   ssd 7.27739  1.00000 7452G  86190M 7367G 1.13 0.98 141
43   ssd 7.27739  1.00000 7452G  81322M 7372G 1.07 0.92 143
44   ssd 7.27739  1.00000 7452G  78938M 7374G 1.03 0.89 149
45   ssd 7.27739  1.00000 7452G  91010M 7363G 1.19 1.03 167
46   ssd 7.27739  1.00000 7452G  83302M 7370G 1.09 0.94 138
47   ssd 7.27739  1.00000 7452G  83266M 7370G 1.09 0.94 137
48   ssd 7.27739  1.00000 7452G  81142M 7372G 1.06 0.92 147
49   ssd 7.27739  1.00000 7452G  90910M 7363G 1.19 1.03 141
50   ssd 7.27739  1.00000 7452G  92578M 7361G 1.21 1.05 155
51   ssd 7.27739  1.00000 7452G  69102M 7384G 0.91 0.78 129
52   ssd 7.27739  1.00000 7452G  95318M 7358G 1.25 1.08 154
53   ssd 7.27739  1.00000 7452G  95289M 7358G 1.25 1.08 168
54   ssd 7.27739  1.00000 7452G  82046M 7371G 1.08 0.93 148
55   ssd 7.27739  1.00000 7452G  87358M 7366G 1.14 0.99 165
56   ssd 7.27739  1.00000 7452G  91318M 7362G 1.20 1.03 166
57   ssd 7.27739  1.00000 7452G  93522M 7360G 1.23 1.06 156
58   ssd 7.27739  1.00000 7452G    102G 7349G 1.37 1.18 168
59   ssd 7.27739  1.00000 7452G 101966M 7352G 1.34 1.15 157
60   ssd 7.27739  1.00000 7452G  72946M 7380G 0.96 0.83 132
61   ssd 7.27739  1.00000 7452G  77718M 7376G 1.02 0.88 122
62   ssd 7.27739  1.00000 7452G  89394M 7364G 1.17 1.01 160
63   ssd 7.27739  1.00000 7452G    112G 7339G 1.51 1.30 174
64   ssd 7.27739  1.00000 7452G  98122M 7356G 1.29 1.11 161
65   ssd 7.27739  1.00000 7452G  84386M 7369G 1.11 0.96 141
66   ssd 7.27739  1.00000 7452G 100830M 7353G 1.32 1.14 180
67   ssd 7.27739  1.00000 7452G  93634M 7360G 1.23 1.06 171
68   ssd 7.27739  1.00000 7452G  73758M 7380G 0.97 0.84 138
69   ssd 7.27739  1.00000 7452G  81202M 7372G 1.06 0.92 141
70   ssd 7.27739  1.00000 7452G  92550M 7361G 1.21 1.05 155
71   ssd 7.27739  1.00000 7452G  89542M 7364G 1.17 1.01 159
72   ssd 7.27739  1.00000 7452G  94414M 7359G 1.24 1.07 171
73   ssd 7.27739  1.00000 7452G  92546M 7361G 1.21 1.05 171
74   ssd 7.27739  1.00000 7452G  81190M 7372G 1.06 0.92 151
75   ssd 7.27739  1.00000 7452G  87006M 7367G 1.14 0.99 158
76   ssd 7.27739  1.00000 7452G  96202M 7358G 1.26 1.09 175
77   ssd 7.27739  1.00000 7452G  88338M 7365G 1.16 1.00 141
78   ssd 7.27739  1.00000 7452G    108G 7343G 1.45 1.26 169
79   ssd 7.27739  1.00000 7452G  85245M 7368G 1.12 0.97 150
                    TOTAL  582T   6897G  575T 1.16
MIN/MAX VAR: 0.76/1.30  STDDEV: 0.12

During balancing

[root@ceph1 ~]# ceph -s
  cluster:
    id:     9326d103-6d2e-4d8e-9434-e47e964d1f91
    health: HEALTH_WARN
            23187/1745280 objects misplaced (1.329%)

  services:
    mon: 4 daemons, quorum ceph1,ceph2,ceph3,ceph4
    mgr: ceph1(active)
    mds: cephfs-1/1/1 up  {0=ceph4=up:active}, 3 up:standby
    osd: 80 osds: 80 up, 80 in; 82 remapped pgs

  data:
    pools:   2 pools, 4096 pgs
    objects: 568k objects, 2272 GB
    usage:   6916 GB used, 575 TB / 582 TB avail
    pgs:     0.073% pgs not active
             23187/1745280 objects misplaced (1.329%)
             4004 active+clean
             62   active+remapped+backfill_wait
             26   active+remapped+backfilling
             3    peering
             1    active+clean+remapped

  io:
    recovery: 1213 MB/s, 303 objects/s

After balancing

[root@ceph1 ~]# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE  USE    AVAIL %USE VAR  PGS
 0   ssd 7.27739  1.00000 7452G 82494M 7371G 1.08 0.93 151
 1   ssd 7.27739  1.00000 7452G 82294M 7371G 1.08 0.93 155
 2   ssd 7.27739  1.00000 7452G 86054M 7367G 1.13 0.97 154
 3   ssd 7.27739  1.00000 7452G 87710M 7366G 1.15 0.99 150
 4   ssd 7.27739  1.00000 7452G 96582M 7357G 1.27 1.09 151
 5   ssd 7.27739  1.00000 7452G 78626M 7375G 1.03 0.89 155
 6   ssd 7.27739  1.00000 7452G 86410M 7367G 1.13 0.98 150
 7   ssd 7.27739  1.00000 7452G 88502M 7365G 1.16 1.00 154
 8   ssd 7.27739  1.00000 7452G 84710M 7369G 1.11 0.96 154
 9   ssd 7.27739  1.00000 7452G 77894M 7375G 1.02 0.88 153
10   ssd 7.27739  1.00000 7452G 89790M 7364G 1.18 1.02 152
11   ssd 7.27739  1.00000 7452G 83482M 7370G 1.09 0.95 149
12   ssd 7.27739  1.00000 7452G 87630M 7366G 1.15 0.99 156
13   ssd 7.27739  1.00000 7452G 85030M 7368G 1.11 0.96 151
14   ssd 7.27739  1.00000 7452G 79566M 7374G 1.04 0.90 152
15   ssd 7.27739  1.00000 7452G 76942M 7376G 1.01 0.87 155
16   ssd 7.27739  1.00000 7452G 75326M 7378G 0.99 0.85 154
17   ssd 7.27739  1.00000 7452G 84622M 7369G 1.11 0.96 152
18   ssd 7.27739  1.00000 7452G 85998M 7368G 1.13 0.97 152
19   ssd 7.27739  1.00000 7452G 86298M 7367G 1.13 0.98 154
20   ssd 7.27739  1.00000 7452G 94190M 7360G 1.23 1.07 154
21   ssd 7.27739  1.00000 7452G 90726M 7363G 1.19 1.03 156
22   ssd 7.27739  1.00000 7452G 87898M 7366G 1.15 1.00 156
23   ssd 7.27739  1.00000 7452G 82450M 7371G 1.08 0.93 150
24   ssd 7.27739  1.00000 7452G 90090M 7364G 1.18 1.02 154
25   ssd 7.27739  1.00000 7452G 94414M 7359G 1.24 1.07 153
26   ssd 7.27739  1.00000 7452G 93522M 7360G 1.23 1.06 157
27   ssd 7.27739  1.00000 7452G 80406M 7373G 1.05 0.91 153
28   ssd 7.27739  1.00000 7452G 99870M 7354G 1.31 1.13 153
29   ssd 7.27739  1.00000 7452G 96382M 7357G 1.26 1.09 154
30   ssd 7.27739  1.00000 7452G 94618M 7359G 1.24 1.07 159
31   ssd 7.27739  1.00000 7452G 94893M 7359G 1.24 1.07 154
32   ssd 7.27739  1.00000 7452G 85850M 7368G 1.13 0.97 155
33   ssd 7.27739  1.00000 7452G 88750M 7365G 1.16 1.01 153
34   ssd 7.27739  1.00000 7452G 88910M 7365G 1.17 1.01 154
35   ssd 7.27739  1.00000 7452G 96822M 7357G 1.27 1.10 156
36   ssd 7.27739  1.00000 7452G 83758M 7370G 1.10 0.95 155
37   ssd 7.27739  1.00000 7452G 87394M 7366G 1.15 0.99 150
38   ssd 7.27739  1.00000 7452G 81154M 7372G 1.06 0.92 156
39   ssd 7.27739  1.00000 7452G 76602M 7377G 1.00 0.87 158
40   ssd 7.27739  1.00000 7452G 98358M 7355G 1.29 1.11 152
41   ssd 7.27739  1.00000 7452G 92886M 7361G 1.22 1.05 151
42   ssd 7.27739  1.00000 7452G 92146M 7362G 1.21 1.04 149
43   ssd 7.27739  1.00000 7452G 90126M 7364G 1.18 1.02 158
44   ssd 7.27739  1.00000 7452G 83350M 7370G 1.09 0.94 153
45   ssd 7.27739  1.00000 7452G 82242M 7371G 1.08 0.93 153
46   ssd 7.27739  1.00000 7452G 91098M 7363G 1.19 1.03 149
47   ssd 7.27739  1.00000 7452G 90058M 7364G 1.18 1.02 148
48   ssd 7.27739  1.00000 7452G 85786M 7368G 1.12 0.97 154
49   ssd 7.27739  1.00000 7452G 98862M 7355G 1.30 1.12 152
50   ssd 7.27739  1.00000 7452G 91434M 7362G 1.20 1.04 153
51   ssd 7.27739  1.00000 7452G 80554M 7373G 1.06 0.91 149
52   ssd 7.27739  1.00000 7452G 94966M 7359G 1.24 1.08 152
53   ssd 7.27739  1.00000 7452G 90673M 7363G 1.19 1.03 153
54   ssd 7.27739  1.00000 7452G 85718M 7368G 1.12 0.97 153
55   ssd 7.27739  1.00000 7452G 81618M 7372G 1.07 0.92 152
56   ssd 7.27739  1.00000 7452G 86982M 7367G 1.14 0.99 155
57   ssd 7.27739  1.00000 7452G 91050M 7363G 1.19 1.03 154
58   ssd 7.27739  1.00000 7452G 94478M 7359G 1.24 1.07 153
59   ssd 7.27739  1.00000 7452G 96430M 7357G 1.26 1.09 153
60   ssd 7.27739  1.00000 7452G 85606M 7368G 1.12 0.97 156
61   ssd 7.27739  1.00000 7452G 89002M 7365G 1.17 1.01 153
62   ssd 7.27739  1.00000 7452G 90314M 7363G 1.18 1.02 157
63   ssd 7.27739  1.00000 7452G   100G 7351G 1.34 1.16 157
64   ssd 7.27739  1.00000 7452G 95850M 7358G 1.26 1.09 159
65   ssd 7.27739  1.00000 7452G 92030M 7362G 1.21 1.04 153
66   ssd 7.27739  1.00000 7452G 90830M 7363G 1.19 1.03 156
67   ssd 7.27739  1.00000 7452G 85146M 7368G 1.12 0.96 155
68   ssd 7.27739  1.00000 7452G 82534M 7371G 1.08 0.93 150
69   ssd 7.27739  1.00000 7452G 84454M 7369G 1.11 0.96 151
70   ssd 7.27739  1.00000 7452G 91202M 7362G 1.20 1.03 155
71   ssd 7.27739  1.00000 7452G 89382M 7364G 1.17 1.01 158
72   ssd 7.27739  1.00000 7452G 85314M 7368G 1.12 0.97 154
73   ssd 7.27739  1.00000 7452G 85646M 7368G 1.12 0.97 155
74   ssd 7.27739  1.00000 7452G 81206M 7372G 1.06 0.92 156
75   ssd 7.27739  1.00000 7452G 84958M 7369G 1.11 0.96 156
76   ssd 7.27739  1.00000 7452G 87398M 7366G 1.15 0.99 153
77   ssd 7.27739  1.00000 7452G 96258M 7358G 1.26 1.09 153
78   ssd 7.27739  1.00000 7452G   100G 7351G 1.35 1.17 157
79   ssd 7.27739  1.00000 7452G 87453M 7366G 1.15 0.99 154
                    TOTAL  582T  6898G  575T 1.16
MIN/MAX VAR: 0.85/1.17  STDDEV: 0.08
[root@ceph1 ~]#
[1] Reference: https://forum.proxmox.com/threads/ceph-balancing-osd-distribution-new-in-luminous.43328/
[2] Reference: https://www.wanghongxu.cn/2018/10/23/ceph-shu-ju-ping-heng/

chmod

File permission operations.

Change permissions on your own files:

chmod -w myfile
chmod -rwx file
chmod +wx file
chmod =r myfile

User (u), group (g), other (o)

chmod u=rw myfile
chmod g=rw myfile
chmod ug=rw myfile
chmod o= myfile
chmod o-rw myfile
chmod g+r myfile
chmod g-w myfile

cloud office solution

Solution: frp + nextcloud + onlyoffice

cmake

cmake to build a project

Common cmake commands

  • INCLUDE_DIRECTORIES adds header search directories, equivalent to -I or CPLUS_INCLUDE_PATH

  • LINK_DIRECTORIES adds directories to search for libraries to link, equivalent to -L

  • LINK_LIBRARIES adds libraries to link by their full paths

    LINK_LIBRARIES("/opt/MATLAB/R2012a/bin/glnxa64/libeng.so")

  • TARGET_LINK_LIBRARIES sets the names of the libraries a target links against

  • add_compile_options(-std=c++11) adds compile options; it affects all compilers, both the C and the C++ compiler

  • target_compile_options(foo PUBLIC -fno-rtti)

[https://www.hahack.com/codes/cmake/]

Set the C flags and C++ flags

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -g -O2 -Wall -Wno-sign-compare -Wno-unused-result")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -g -O2 -Wall")

Make cmake print the full compile commands during make, so the CC invocations are visible:

set(CMAKE_VERBOSE_MAKEFILE on)
Building and installing cmake from source, hitting many "undefined reference to OPENSSL_sk_num" errors:
  1. Run ./bootstrap --help and prefer the system libraries; for example, when cmcurl fails, use ./bootstrap --system-curl
  2. Or see this blog post: https://blog.csdn.net/weixin_45617478/article/details/104121691

Pitfalls with cmake variables: https://cslam.cn/archives/c9f565b5.html https://murphypei.github.io/blog/2018/10/cmake-variable.html https://xyz1001.xyz/articles/53989.html

config.guess

config.guess detects and prints the host triplet.

Many projects' configure scripts probe the system environment and generate a makefile; config.guess is one of the scripts that configure relies on.

[me@centos]$ ./config.guess
aarch64-unknown-linux-gnu

root@192e168e100e118 ~/config# ./config.guess
x86_64-pc-linux-gnu

pi@raspberrypi:~/code/config $ ./config.guess
armv7l-unknown-linux-gnueabihf

coremark

A benchmark for measuring CPU performance.

git clone https://github.com/eembc/coremark.git
cd coremark
make
[me@servername ~]$ git clone https://github.com/eembc/coremark.git
Cloning into 'coremark'...
remote: Enumerating objects: 189, done.
remote: Total 189 (delta 0), reused 0 (delta 0), pack-reused 189
Receiving objects: 100% (189/189), 426.12 KiB | 198.00 KiB/s, done.
Resolving deltas: 100% (117/117), done.
[me@servername ~]$
[me@servername ~]$ ls
coremark
[me@servername ~]$
[me@servername ~]$ cd coremark/
[me@servername coremark]$
[me@servername coremark]$ make
make XCFLAGS=" -DPERFORMANCE_RUN=1" load run1.log
make[1]: Entering directory `/home/me/coremark'
make port_prebuild
make[2]: Entering directory `/home/me/coremark'
make[2]: Nothing to be done for `port_prebuild'.
make[2]: Leaving directory `/home/me/coremark'
make link
make[2]: Entering directory `/home/me/coremark'
gcc -O2 -Ilinux64 -I. -DFLAGS_STR=\""-O2 -DPERFORMANCE_RUN=1  -lrt"\" -DITERATIONS=0 -DPERFORMANCE_RUN=1 core_list_join.c core_main.c core_matrix.c core_state.c core_util.c linux64/core_portme.c -o ./coremark.exe -lrt
Link performed along with compile
make[2]: Leaving directory `/home/me/coremark'
make port_postbuild
make[2]: Entering directory `/home/me/coremark'
make[2]: Nothing to be done for `port_postbuild'.
make[2]: Leaving directory `/home/me/coremark'
make port_preload
make[2]: Entering directory `/home/me/coremark'
make[2]: Nothing to be done for `port_preload'.
make[2]: Leaving directory `/home/me/coremark'
echo Loading done ./coremark.exe
Loading done ./coremark.exe
make port_postload
make[2]: Entering directory `/home/me/coremark'
make[2]: Nothing to be done for `port_postload'.
make[2]: Leaving directory `/home/me/coremark'
make port_prerun
make[2]: Entering directory `/home/me/coremark'
make[2]: Nothing to be done for `port_prerun'.
make[2]: Leaving directory `/home/me/coremark'
./coremark.exe  0x0 0x0 0x66 0 7 1 2000 > ./run1.log
make port_postrun
make[2]: Entering directory `/home/me/coremark'
make[2]: Nothing to be done for `port_postrun'.
make[2]: Leaving directory `/home/me/coremark'
make[1]: Leaving directory `/home/me/coremark'
make XCFLAGS=" -DVALIDATION_RUN=1" load run2.log
make[1]: Entering directory `/home/me/coremark'
make port_preload
make[2]: Entering directory `/home/me/coremark'
make[2]: Nothing to be done for `port_preload'.
make[2]: Leaving directory `/home/me/coremark'
echo Loading done ./coremark.exe
Loading done ./coremark.exe
make port_postload
make[2]: Entering directory `/home/me/coremark'
make[2]: Nothing to be done for `port_postload'.
make[2]: Leaving directory `/home/me/coremark'
make port_prerun
make[2]: Entering directory `/home/me/coremark'
make[2]: Nothing to be done for `port_prerun'.
make[2]: Leaving directory `/home/me/coremark'
./coremark.exe  0x3415 0x3415 0x66 0 7 1 2000  > ./run2.log
make port_postrun
make[2]: Entering directory `/home/me/coremark'
make[2]: Nothing to be done for `port_postrun'.
make[2]: Leaving directory `/home/me/coremark'
make[1]: Leaving directory `/home/me/coremark'
Check run1.log and run2.log for results.
See README.md for run and reporting rules.
[me@servername coremark]$
[me@servername coremark]$
[me@servername coremark]$ ls
barebones         core_main.c   coremark.h     core_state.c  cygwin  LICENSE.md  linux64   README.md  run2.log
core_list_join.c  coremark.exe  core_matrix.c  core_util.c   docs    linux       Makefile  run1.log   simple
[me@servername coremark]$
[me@servername coremark]$
[me@servername coremark]$ more run1.log
2K performance run parameters for coremark.
CoreMark Size    : 666
Total ticks      : 15836
Total time (secs): 15.836000
Iterations/Sec   : 12629.451882
Iterations       : 200000
Compiler version : GCC4.8.5 20150623 (Red Hat 4.8.5-36)
Compiler flags   : -O2 -DPERFORMANCE_RUN=1  -lrt
Memory location  : Please put data memory location here
                        (e.g. code in flash, data on heap etc)
seedcrc          : 0xe9f5
[0]crclist       : 0xe714
[0]crcmatrix     : 0x1fd7
[0]crcstate      : 0x8e3a
[0]crcfinal      : 0x4983
Correct operation validated. See README.md for run and reporting rules.
CoreMark 1.0 : 12629.451882 / GCC4.8.5 20150623 (Red Hat 4.8.5-36) -O2 -DPERFORMANCE_RUN=1  -lrt / Heap
[me@servername coremark]$
[me@servername coremark]$
[me@servername coremark]$ ls
barebones         core_main.c   coremark.h     core_state.c  cygwin  LICENSE.md  linux64   README.md  run2.log
core_list_join.c  coremark.exe  core_matrix.c  core_util.c   docs    linux       Makefile  run1.log   simple
[me@servername coremark]$
[me@servername coremark]$
[me@servername coremark]$ more run2.log
2K validation run parameters for coremark.
CoreMark Size    : 666
Total ticks      : 15847
Total time (secs): 15.847000
Iterations/Sec   : 12620.685303
Iterations       : 200000
Compiler version : GCC4.8.5 20150623 (Red Hat 4.8.5-36)
Compiler flags   : -O2 -DPERFORMANCE_RUN=1  -lrt
Memory location  : Please put data memory location here
                        (e.g. code in flash, data on heap etc)
seedcrc          : 0x18f2
[0]crclist       : 0xe3c1
[0]crcmatrix     : 0x0747
[0]crcstate      : 0x8d84
[0]crcfinal      : 0x5b5d
Correct operation validated. See README.md for run and reporting rules.
[me@servername coremark]$

crc32

Notes for looking into crc32.

The crc32 flag shows up in lscpu; ARM should have a corresponding library available.
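A quick way to check for the flag on the current machine (either source works; on aarch64 it also appears in the Features line of /proc/cpuinfo):

lscpu | grep -ow crc32
grep -ow crc32 /proc/cpuinfo | sort -u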


curl

A URL access tool.

curl www.baidu.com

Access through an HTTP proxy (a sketch follows below).
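A minimal sketch; the proxy address 127.0.0.1:8080 is a placeholder for your actual HTTP proxy:

curl -x http://127.0.0.1:8080 www.baidu.com
# or let curl pick the proxy up from the environment
http_proxy=http://127.0.0.1:8080 https_proxy=http://127.0.0.1:8080 curl www.baidu.com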

Access through an SSH tunnel (SOCKS proxy):

ssh -D localhost:9999 me@192.168.1.201
curl -x socks5://127.0.0.1:9999 cip.cc

devmem

Using devmem requires the kernel to provide the /dev/mem device, which needs these kernel build options:

CONFIG_STRICT_DEVMEM=y
CONFIG_DEVKMEM=y
CONFIG_DEVMEM=y


The source I used is: https://bootlin.com/pub/mirror/devmem2.c
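A sketch of building and using devmem2 from that source; the address 0x80000000 is purely a placeholder, use a register address that is valid on your board:

gcc -O2 -o devmem2 devmem2.c
./devmem2 0x80000000 w          # read one 32-bit word at the (placeholder) address
./devmem2 0x80000000 w 0x1234   # write 0x1234 to the same address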

devtoolset

A very convenient way to install different GCC versions.

yum search centos-release
yum install centos-release-scl-rh.noarch
yum install devtoolset-8
scl enable devtoolset-7 bash
scl enable devtoolset-8 bash

Do not install centos-release-scl.noarch.

I once hit a problem where installing centos-release-scl.noarch before centos-release-scl-rh.noarch made the devtoolset packages impossible to find.

diff

Sometimes we want to compare differences between files or between directory trees.

For side-by-side output use sdiff, or diff with the -y option:

sdiff old new
diff old new

For colored output use colordiff:

colordiff -u old new
diff -u old new | colordiff

-u  # unified (merged) diff format

Comparing file differences

Compare the files under the src directories recursively (-r); only report which files differ without showing the content (-q); and exclude object files (-x "*.o"):

diff -qr  -x "*.o" specpu2006_test_from_iso/tools/src specpu2006_test_from_tar/tools/src
Only in specpu2006_test_from_tar/tools/src/tar-1.25/gnu: ref-add.sed
Only in specpu2006_test_from_tar/tools/src/tar-1.25/gnu: ref-del.sed
Only in specpu2006_test_from_tar/tools/src/tar-1.25/gnu: stdio.h
Files specpu2006_test_from_iso/tools/src/tar-1.25/gnu/stdio.in.h and specpu2006_test_from_tar/tools/src/tar-1.25/gnu/stdio.in.h differ
Only in specpu2006_test_from_tar/tools/src/tar-1.25/gnu: stdlib.h
Only in specpu2006_test_from_tar/tools/src/tar-1.25/gnu: string.h
Only in specpu2006_test_from_tar/tools/src/tar-1.25/gnu: strings.h

If you only care about changed files, not files that exist in only one tree, add a grep; pipe through column -t to tidy the output:

diff -qr -x "*.o" specpu2006_test_from_iso/tools/src specpu2006_test_from_tar/tools/src | grep differ | column -t
Files  specpu2006_test_from_iso/tools/src/buildtools                 and  specpu2006_test_from_tar/tools/src/buildtools                 differ
Files  specpu2006_test_from_iso/tools/src/make-3.82/glob/glob.c      and  specpu2006_test_from_tar/tools/src/make-3.82/glob/glob.c      differ
Files  specpu2006_test_from_iso/tools/src/perl-5.12.3/Configure      and  specpu2006_test_from_tar/tools/src/perl-5.12.3/Configure      differ
Files  specpu2006_test_from_iso/tools/src/specsum/gnulib/stdio.in.h  and  specpu2006_test_from_tar/tools/src/specsum/gnulib/stdio.in.h  differ
Files  specpu2006_test_from_iso/tools/src/tar-1.25/gnu/stdio.in.h    and  specpu2006_test_from_tar/tools/src/tar-1.25/gnu/stdio.in.h    differ

Side-by-side output that hides identical lines (see the sketch below):
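Either of the following should do that (GNU diff's --suppress-common-lines, or sdiff -s):

diff -y --suppress-common-lines old new
sdiff -s old new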

Generating and applying patches

Generate a patch and save it to a file:

diff -ruaN sources-orig/ sources-fixed/ >myfixes.patch

Apply the patch:

cd sources-orig/
patch -p1 < ../myfixes.patch
patching file officespace/interest.go

Roll back a patch (you applied it and then changed your mind):

patch -R < myfixes.patch

Generate one patch covering multiple files:

diff -ruaN path/file1.c /path/file2.c >> all_in_one.patch
diff -ruaN path/filea.c /path/fileb.c >> all_in_one.patch

disk

Hardware information

Before starting tests, check the disk information with a command-line tool.

smartctl -a /dev/sdb

Typically you will see output like this:

[root@localhost ~]# smartctl -a /dev/sdb
smartctl 6.6 2017-11-05 r4594 [aarch64-linux-4.18.0-68.el8.aarch64] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Intel 730 and DC S35x0/3610/3700 Series SSDs
Device Model:     INTEL SSDSC2BB800G6
Serial Number:    BTWA7053075K800HGN
LU WWN Device Id: 5 5cd2e4 14da27aab
Firmware Version: G2010150
User Capacity:    800,166,076,416 bytes [800 GB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    Solid State Device
Form Factor:      2.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2 T13/2015-D revision 3
SATA Version is:  SATA 2.6, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Fri Mar  8 16:46:19 2019 CST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

If the machine has a RAID card, the disk information is not visible this way; you can only check it from the iBMC web interface.

ubuntu@ubuntu:~$ sudo smartctl -a /dev/sdb
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.15.0-46-generic] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor:               AVAGO
Product:              AVAGO
Revision:             4.65
User Capacity:        798,999,183,360 bytes [798 GB]
Logical block size:   512 bytes
Logical Unit id:      0x6500283359349804241c75cf0bb21412
Serial number:        001214b20bcf751c2404983459332800
Device type:          disk
Local Time is:        Fri Mar 15 08:26:35 2019 UTC
SMART support is:     Unavailable - device lacks SMART capability.

=== START OF READ SMART DATA SECTION ===
Current Drive Temperature:     0 C
Drive Trip Temperature:        0 C

Error Counter logging not supported

Device does not support Self Test logging

Check the filesystem type on a disk

Sometimes you want to know what filesystem a disk was formatted with:

lsblk -f

x86

[root@localhost stream]# lsblk -f
NAME              FSTYPE      LABEL UUID                                   MOUNTPOINT
nvme0n1
├─nvme0n1p3       LVM2_member       45XcIA-acC1-knNo-DGqJ-xfJo-qv27-GcTifd
│ ├─centos00-home xfs               f748eb86-1771-42cd-bd36-fe7a469f7994
│ ├─centos00-swap swap              f05c6b1f-66ca-4993-91bf-0983ff4af2b0
│ └─centos00-root xfs               1111403b-8be4-409d-833e-502d1c05ca4f
├─nvme0n1p1
└─nvme0n1p2       xfs               b0c52bf4-94dd-4836-929d-f14998064de9
sda
├─sda2            LVM2_member       F4Y5X8-x7MA-g6E2-3ENx-ye0s-p7e8-eJ3216
│ ├─centos-swap   swap              5150bd1b-e2da-4b9d-9830-cffac4662b9f   [SWAP]
│ ├─centos-home   xfs               17778bbf-b08d-4d50-b35b-033235756827   /home
│ └─centos-root   xfs               ad72866e-5ad3-45fa-b318-79577c783a91   /
└─sda1            xfs               02d582c6-b93a-497d-93bb-da20ba887e51   /boot

ARM

root@ubuntu:~/app/stream# lsblk -f
NAME   FSTYPE LABEL UUID                                 MOUNTPOINT
sda
├─sda1 vfat         819D-544E                            /boot/efi
└─sda2 ext4         b72d7507-0c9b-4d8e-8546-566649cb34b0 /
sdb

Check whether a disk is an SSD or an HDD

lsblk

List the disks on the machine.

How to tell whether a disk is a solid-state drive or a spinning disk:

#!/bin/bash
# $hardware_software_conf is the report file this snippet appends to (defined elsewhere)
echo "lsblk" | tee -a $hardware_software_conf
# the ROTA column is 1 for rotational (HDD) and 0 for non-rotational (SSD) devices
lsblk -o name,maj:min,rm,size,ro,type,rota,mountpoint >> $hardware_software_conf
wait
printf "\n\n****************\n" | tee -a $hardware_software_conf

Use the -o option to customize the output columns.

Or:

me@arm64server-1:~$ cat /sys/block/sdc/queue/rotational
0
me@arm64server-1:~$ cat /sys/block/sdb/queue/rotational
1

1 means a spinning disk (HDD), 0 means a solid-state drive (SSD).
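
A small loop (a sketch, not from the original notes) that prints the type of every block device based on this flag:

# classify each block device by its rotational flag: 0 = SSD, 1 = HDD
for f in /sys/block/*/queue/rotational; do
    dev=$(basename "$(dirname "$(dirname "$f")")")
    if [ "$(cat "$f")" -eq 0 ]; then
        echo "$dev: SSD"
    else
        echo "$dev: HDD"
    fi
done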

dns

DNS (Domain Name System) resolves domain names into IP addresses.

Common commands

dig @114.114.114.114 registry-1.docker.io       # query the IP of registry-1.docker.io via the resolver 114.114.114.114

rDNS (Reverse DNS) is reverse resolution: given an IP address, look up the domain name.

host 123.125.66.120
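
dig can also perform the reverse lookup with -x, optionally against a specific resolver; a quick sketch:

dig -x 123.125.66.120                    # reverse lookup with the default resolver
dig @114.114.114.114 -x 123.125.66.120   # reverse lookup via 114.114.114.114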

Troubleshooting

ping fails because DNS cannot resolve the name

root@ubuntu:/etc/apt# ping www.baidu.com
ping: www.baidu.com: Temporary failure in name resolution

The configuration file is /etc/resolv.conf:

# Generated by NetworkManager
nameserver 192.168.2.1

To set up a DNS server, see the dnsmasq section.

/etc/resolv.conf may be overwritten by NetworkManager [1]

The reason is that NetworkManager periodically obtains DNS servers from the DHCP server and writes them back into /etc/resolv.conf; this can be confirmed in the logs:

[user1@kunpeng920 ~]$ journalctl -f -u NetworkManager
-- Logs begin at Mon 2020-03-09 14:34:39 HKT. --
Mar 25 14:09:10 kunpeng920 dhclient[3617]: DHCPREQUEST on enp189s0f0 to 192.168.1.107 port 67 (xid=0x53549a3d)
Mar 25 14:09:10 kunpeng920 dhclient[3617]: DHCPACK from 192.168.1.107 (xid=0x53549a3d)
Mar 25 14:09:10 kunpeng920 NetworkManager[2730]: <info>  [1585116550.7118] dhcp4 (enp189s0f0):   address 192.168.1.180
Mar 25 14:09:10 kunpeng920 NetworkManager[2730]: <info>  [1585116550.7118] dhcp4 (enp189s0f0):   plen 24 (255.255.255.0)
Mar 25 14:09:10 kunpeng920 NetworkManager[2730]: <info>  [1585116550.7118] dhcp4 (enp189s0f0):   gateway 192.168.1.2
Mar 25 14:09:10 kunpeng920 NetworkManager[2730]: <info>  [1585116550.7118] dhcp4 (enp189s0f0):   lease time 3200
Mar 25 14:09:10 kunpeng920 NetworkManager[2730]: <info>  [1585116550.7118] dhcp4 (enp189s0f0):   nameserver '114.114.114.114'
Mar 25 14:09:10 kunpeng920 NetworkManager[2730]: <info>  [1585116550.7119] dhcp4 (enp189s0f0):   nameserver '192.168.1.107'
Mar 25 14:09:10 kunpeng920 NetworkManager[2730]: <info>  [1585116550.7119] dhcp4 (enp189s0f0): state changed bound -> bound
Mar 25 14:09:10 kunpeng920 dhclient[3617]: bound to 192.168.1.180 -- renewal in 1508 seconds.

The fix is to add dns=none to /etc/NetworkManager/NetworkManager.conf [1] [2]

[user1@kunpeng920 NetworkManager]$ git diff --color NetworkManager.conf.backup NetworkManager.conf
diff --git a/NetworkManager.conf.backup b/NetworkManager.conf
index 1979ea6..2d23845 100644
--- a/NetworkManager.conf.backup
+++ b/NetworkManager.conf
@@ -22,6 +22,7 @@
# the previous one.

[main]
+dns=none
#plugins=ifcfg-rh,ibft
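
With dns=none in place NetworkManager leaves /etc/resolv.conf alone, so the nameserver can be set by hand; a sketch (the 114.114.114.114 server is just an example):

sudo systemctl restart NetworkManager
echo "nameserver 114.114.114.114" | sudo tee /etc/resolv.conf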

An IPv6 address that cannot be reached

The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'dnf clean packages'.
Error: Error downloading packages:
Curl error (7): Couldn't connect to server for
https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=aarch64&infra=$infra&content=$contentdir
[Failed to connect to 2620:52:3:1:dead:beef:cafe:fed7: Network is unreachable]

Workaround: the idea is to stop resolving the URL to an IPv6 address. How to disable IPv6 DNS lookups alone is still unclear, but disabling IPv6 on the local interfaces is a reliable option.

# disable IPv6 on enp189s0f0
echo 1 > /proc/sys/net/ipv6/conf/enp189s0f0/disable_ipv6
# disable IPv6 on all interfaces
echo 1 > /proc/sys/net/ipv6/conf/all/disable_ipv6

Before disabling IPv6:
6: enp189s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
   link/ether 00:18:2d:04:00:5c brd ff:ff:ff:ff:ff:ff
   inet 192.168.1.180/24 brd 192.168.1.255 scope global noprefixroute enp189s0f0
      valid_lft forever preferred_lft forever
   inet6 fe80::6d73:6430:e089:b1c7/64 scope link noprefixroute
      valid_lft forever preferred_lft forever
After disabling IPv6:
6: enp189s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
   link/ether 00:18:2d:04:00:5c brd ff:ff:ff:ff:ff:ff
   inet 192.168.1.180/24 brd 192.168.1.255 scope global noprefixroute enp189s0f0
      valid_lft forever preferred_lft forever
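
The echo into /proc only lasts until the next reboot; to make the change persistent, a sysctl drop-in can be used (a sketch, the file name is chosen here for illustration):

cat <<'EOF' | sudo tee /etc/sysctl.d/99-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
EOF
sudo sysctl --system
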
[1](1, 2) https://wiseindy.com/blog/linux/how-to-set-dns-in-centos-rhel-7-prevent-network-manager-from-overwriting-etc-resolv-conf/
[2]https://forums.centos.org/viewtopic.php?t=8647

dnsmasq

dnsmasq is a lightweight DNS and DHCP server.

apt install dnsmasq
sudo vim /etc/dnsmasq.conf
sudo systemctl start dnsmasq
I only change the default listening port in /etc/dnsmasq.conf and point www.baidu.com at a specific upstream DNS server:
# Listen on this specific port instead of the standard DNS port
# (53). Setting this to zero completely disables DNS function,
# leaving only DHCP and/or TFTP.
port=5353


#server=/localnet/192.168.0.1
server=/www.baidu.com/114.114.114.114

# to specify the upstream server's port
server=100.100.1.1#5353

# log DNS queries and save them to a file
log-queries
log-facility=/tmp/dnsmasq.log
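
To check that dnsmasq answers on the non-standard port and that queries end up in the log, something like this should work:

dig @127.0.0.1 -p 5353 www.baidu.com   # query the local dnsmasq on port 5353
tail -f /tmp/dnsmasq.log               # watch the query log
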
[1]https://www.hi-linux.com/posts/30947.html

docker

An OS-level virtualization technology, used for fast, automated application deployment. [3]

Installing docker

Following the official installation instructions, on CentOS [1]:

# Use the following command to set up the stable repository.
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install containerd.io docker-ce docker-ce-cli
sudo systemctl start docker
sudo docker run hello-world

The docker service file is located at /usr/lib/systemd/system/docker.service.

Common docker commands

The full documentation is in the docker run reference [2].

docker run -it ubuntu bash  # run a container in the foreground and enter it; bash is the only process
exit                        # exiting bash stops the container; alternatively use the following
ctrl + p + q                # detach from the container; bash keeps running and so does the container

docker exec -it 35dfs bash  # enter an already running container by starting another bash, so there are now two bash processes
exit                        # exiting this bash does not stop the container, because the first bash is still running

sudo usermod -a -G docker user1     # add user1 to the docker group so docker commands no longer need sudo; docker itself runs as root
sudo systemctl enable docker        # start docker at boot
docker start {ID}                   # restart a stopped container

docker run  -i                      # interactive
            -t                      # allocate a pseudo-TTY
            --name webserver        # give the container a name
            --rm                    # remove the container automatically when it stops
            -d                      # detached mode, so closing the terminal does not affect the running container
            -P                      # takes no argument; publish all exposed container ports to random host ports, query them with docker port CONTAINER
            -p 8888:80              # map host port 8888 to container port 80
            -v $PWD/web:/var/www/html/web:ro    # bind-mount the host path $PWD/web to /var/www/html/web in the container (read-only)
            --restart=always        # always restart automatically, regardless of the exit code
            --restart=on-failure:5  # restart on failure, at most 5 times
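
Putting a few of these options together — a sketch only, assuming the stock nginx image and a local ./web directory:

docker run -d --rm --name webserver \
    -p 8888:80 \
    -v "$PWD/web":/usr/share/nginx/html:ro \
    nginx
docker port webserver        # shows 80/tcp -> 0.0.0.0:8888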

docker inspect ubuntu               # show detailed information about a container
docker inspect --format='{{.State.Running}}' ubuntu # formatted query
docker history e5c51ef702d4         # show how an image was built
docker port 774b2f613874            # show container port -> host port mappings

docker save -o /home/my/myfile.tar centos:16 # save an image to a file
docker load -i myfile.tar                    # load an image from a file

Remove stopped containers

docker rm $(docker ps -a -q -f status=exited)
docker container prune

Warning

If a container cannot reach the network, consider adding a NAT rule; see the notes on understanding veth.

iptables -t nat -A POSTROUTING -o eno3 -s 172.17.0.0/16 -j MASQUERADE

Main docker topics:

  1. Container networking
  2. Container networking over 10GE
  3. Container I/O
  4. Kubernetes

Dockerfile proxy config

Some package-installation commands in a Dockerfile may need to go through a proxy:

ENV HTTP_PROXY "socks5://192.168.1.201:2044"
ENV HTTPS_PROXY "socks5://192.168.1.201:2044"
ENV FTP_PROXY "socks5://192.168.1.201:2044"
ENV NO_PROXY "localhost,127.0.0.0/8,172.17.0.2/8"
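
Hard-coding the proxy with ENV bakes it into the final image; if the proxy is only needed while building, passing it as build arguments is an alternative (a sketch, reusing the proxy address above; myimage is a placeholder tag):

docker build \
    --build-arg HTTP_PROXY=socks5://192.168.1.201:2044 \
    --build-arg HTTPS_PROXY=socks5://192.168.1.201:2044 \
    -t myimage .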

Troubleshooting

docker ps Got permission denied
[user1@centos leetcode]$ docker ps
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/json: dial unix /var/run/docker.sock: connect: permission denied
[user1@centos leetcode]$ sudo usermod -aG docker $USER
[user1@centos leetcode]$
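
The new group membership only takes effect in a new login session; a quick way to pick it up in the current shell:

newgrp docker    # start a shell with the docker group active (or log out and back in)
docker ps        # should now work without sudo
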
CentOS 8 none of the providers can be installed
[root@ref-controller ~]# sudo yum install containerd.io docker-ce docker-ce-cli
Last metadata expiration check: 0:04:44 ago on Wed 25 Mar 2020 09:45:26 AM CST.
Error:
Problem: package docker-ce-3:19.03.8-3.el7.aarch64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed
- cannot install the best candidate for the job
- package containerd.io-1.2.10-3.2.el7.aarch64 is excluded
- package containerd.io-1.2.13-3.1.el7.aarch64 is excluded
- package containerd.io-1.2.2-3.3.el7.aarch64 is excluded
- package containerd.io-1.2.2-3.el7.aarch64 is excluded
- package containerd.io-1.2.4-3.1.el7.aarch64 is excluded
- package containerd.io-1.2.5-3.1.el7.aarch64 is excluded
- package containerd.io-1.2.6-3.3.el7.aarch64 is excluded
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

The repository actually contains containerd.io-1.2.6-3.3.el7.aarch64, so why is it reported as excluded? Possibly because no repository was configured for the CentOS 8 release.

Workaround:

yum install -y https://download.docker.com/linux/centos/7/aarch64/stable/Packages/containerd.io-1.2.6-3.3.el7.aarch64.rpm
standard_init_linux.go:190: exec user process caused “exec format error”

This appears to be a common problem [4]:

Removing intermediate container fe1c9196349d
---> e18ce876e1c4
Step 25/38 : FROM builderbase AS current
---> 8883f7dfe759
Step 26/38 : COPY . .
---> 6acefabe075e
Step 27/38 : COPY --from=upstream-resources /usr/src/app/md_source/. ./
---> bfcebabe01f5
Step 28/38 : RUN ./_scripts/update-api-toc.sh
---> Running in d5f322b580ed
standard_init_linux.go:190: exec user process caused "exec format error"
The command '/bin/sh -c ./_scripts/update-api-toc.sh' returned a non-zero code: 1
Traceback (most recent call last):
File "/home/me/.local/bin/docker-compose", line 11, in <module>
   sys.exit(main())
File "/home/me/.local/lib/python2.7/site-packages/compose/cli/main.py", line 72, in main
   command()
File "/home/me/.local/lib/python2.7/site-packages/compose/cli/main.py", line 128, in perform_command
   handler(command, command_options)
File "/home/me/.local/lib/python2.7/site-packages/compose/cli/main.py", line 1077, in up
   to_attach = up(False)
File "/home/me/.local/lib/python2.7/site-packages/compose/cli/main.py", line 1073, in up
   cli=native_builder,
File "/home/me/.local/lib/python2.7/site-packages/compose/project.py", line 548, in up
   svc.ensure_image_exists(do_build=do_build, silent=silent, cli=cli)
[1]Installing docker: https://docs.docker.com/install/linux/docker-ce/centos/
[2]docker run options: https://docs.docker.com/engine/reference/run/
[3]A docker tutorial for reference: https://yeasy.gitbooks.io/docker_practice/image/list.html
[4]https://forums.docker.com/t/standard-init-linux-go-190-exec-user-process-caused-exec-format-error/49368
[5]https://docs.docker.com/network/proxy/

docker buildx

An image build tool for building images for multiple architectures.

Installing buildx

Enable experimental features

docker client

user1@intel6248:~$ cat ~/.docker/config.json
{
"experimental": "enabled"
}

docker daemon

user1@intel6248:~$ cat /etc/docker/daemon.json
{
"experimental": true
}

systemctl daemon-reload
systemctl restart docker

Confirm the configuration took effect; the output should show Experimental: true.

docker version
Install docker buildx

With docker 19.03.8, buildx is already included after installing docker.

If it is missing, download the binary directly [3] and put it in the plugin directory:

mkdir -p ~/.docker/cli-plugins
mv buildx ~/.docker/cli-plugins/docker-buildx

Confirm the installation:

docker buildx ls
docker buildx create --name mybuilder --use
docker buildx inspect --bootstrap
Install the emulators

If the steps above do not show support for multiple platforms, the emulators need to be installed. The official docker documentation currently only states that buildx is bundled by default with Docker Desktop for Mac & Windows; for the community edition I followed these two articles [1] [2].

user1@intel6248:~$ docker buildx ls
NAME/NODE         DRIVER/ENDPOINT             STATUS  PLATFORMS
mybuilder *       docker-container
mybuilder0      unix:///var/run/docker.sock running linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
default           docker
default         default                     running linux/amd64, linux/386

The simplest way is:

docker run --rm --privileged docker/binfmt:a7996909642ee92942dcd6cff44b9b95f08dad64

The latest tag can be looked up here. [4]
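
Another commonly used way to register the QEMU emulators (not tested in these notes) is the tonistiigi/binfmt image referenced by the buildx documentation; roughly:

docker run --privileged --rm tonistiigi/binfmt --install all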

Build a multi-platform image

Here I use the hello.c from [2] [5]:

docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 --push -t lixianfadocker/hello .
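
The manifest pushed by the command above can be checked for the three platforms with buildx's imagetools subcommand:

docker buildx imagetools inspect lixianfadocker/hello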

Run the multi-platform image

Run command:

docker run --rm --name hello lixianfadocker/hello

Output on x86:

user1@intel6248:~/Dockerfile_kunpeng/Dockerfile_multi_arch$ docker run --rm --name hello lixianfadocker/hello

Status: Downloaded newer image for lixianfadocker/hello:latest
Hello, my architecture is Linux buildkitsandbox 4.15.0-99-generic #100-Ubuntu SMP Wed Apr 22 20:32:56 UTC 2020 x86_64 Linux

Output on Kunpeng 920:

user1@Arm64-server:~$ docker run --rm --name hello lixianfadocker/hello

Status: Downloaded newer image for lixianfadocker/hello:latest
Hello, my architecture is Linux buildkitsandbox 4.15.0-99-generic #100-Ubuntu SMP Wed Apr 22 20:32:56 UTC 2020 aarch64 Linux

Using a build farm

The problem with building on a single machine is that instruction emulation makes it very slow; a build farm of native nodes avoids this.

# create a docker context for the remote host
docker context create --docker "host=ssh://user1@192.168.1.203" intel6248

# assuming contexts node-amd64 and node-arm64 exist in "docker context ls"
$ docker buildx create --use --name mybuild node-amd64
mybuild
$ docker buildx create --append --name mybuild node-arm64
$ docker buildx build --platform linux/amd64,linux/arm64 .

Create

[1]https://jite.eu/2019/10/3/multi-arch-docker/
[2](1, 2) https://community.arm.com/developer/tools-software/tools/b/tools-software-ides-blog/posts/getting-started-with-docker-for-arm-on-linux
[3]https://github.com/docker/buildx/releases
[4]https://hub.docker.com/r/docker/binfmt/tags?page=1&ordering=last_updated
[5]https://github.com/LyleLee/Dockerfile_kunpeng/tree/master/Dockerfile_multi_arch

docker compose

Building docker-compose

https://github.com/docker/compose/issues/6831

No module named ‘pkg_resources.py2_warn’

Already under discussion on GitHub [1]:

35612 INFO: Building EXE from EXE-00.toc
35613 INFO: Appending archive to ELF section in EXE /code/dist/docker-compose
35785 INFO: Building EXE from EXE-00.toc completed successfully.
+ ls -la dist/
total 16460
drwxrwxrwx    2 root     root          4096 Mar 28 18:17 .
drwxr-xr-x    1 root     root          4096 Mar 28 18:16 ..
-rwxr-xr-x    1 root     root      16839184 Mar 28 18:17 docker-compose
+ ldd dist/docker-compose
        /lib/ld-musl-aarch64.so.1 (0xffffb4c50000)
        libz.so.1 => /lib/libz.so.1 (0xffffb4bff000)
        libc.musl-aarch64.so.1 => /lib/ld-musl-aarch64.so.1 (0xffffb4c50000)
+ mv dist/docker-compose /usr/local/bin
+ docker-compose version
[996] Failed to execute script pyi_rth_pkgres
Traceback (most recent call last):
File "site-packages/PyInstaller/loader/rthooks/pyi_rth_pkgres.py", line 11, in <module>
File "/code/.tox/py37/lib/python3.7/site-packages/PyInstaller/loader/pyimod03_importers.py", line 627, in exec_module
    exec(bytecode, module.__dict__)
File "site-packages/pkg_resources/__init__.py", line 86, in <module>
ModuleNotFoundError: No module named 'pkg_resources.py2_warn'
The command '/bin/sh -c script/build/linux-entrypoint' returned a non-zero code: 255
me@ubuntu:~/code/compose$ vim script/^C
docker-compose version
docker-compose version 1.25.0, build unknown
docker-py version: 4.1.0
CPython version: 3.7.4
OpenSSL version: OpenSSL 1.1.1d  10 Sep 2019
Removing intermediate container acb5b89e92a7
---> 8fd1183543df
Step 33/39 : FROM alpine:${RUNTIME_ALPINE_VERSION} AS runtime-alpine
Get https://registry-1.docker.io/v2/: net/http: TLS handshake timeout
[1]https://github.com/pypa/setuptools/issues/1963

docker iptables in detail

A container pings 114.114.114.114

Zero the iptables counters on the host:

sudo iptables -Z

Capture packets on the host:

sudo tcpdump -i docker0 icmp -w 40ping.cap

Run 40 pings inside the container:

ping 114.114.114.114 -c 40
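
Optionally, the NAT counters can be watched live while the pings run (not part of the original procedure, just a convenience):

watch -n 1 'sudo iptables -t nat -L POSTROUTING -n -v'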

Export the iptables rules:

sudo iptables-save -c > iptables.rules

The last 4 packets of the capture are shown below; the capture is as expected: 40 requests sent and 40 replies received:

77  2020-06-13 00:06:30.832927      172.17.0.2      114.114.114.114 ICMP    98      Echo (ping) request  id=0x0027, seq=39/9984, ttl=64 (reply in 78)
78  2020-06-13 00:06:31.016522      114.114.114.114 172.17.0.2      ICMP    98      Echo (ping) reply    id=0x0027, seq=39/9984, ttl=58 (request in 77)
79  2020-06-13 00:06:31.833406      172.17.0.2      114.114.114.114 ICMP    98      Echo (ping) request  id=0x0027, seq=40/10240, ttl=64 (reply in 80)
80  2020-06-13 00:06:32.017020      114.114.114.114 172.17.0.2      ICMP    98      Echo (ping) reply    id=0x0027, seq=40/10240, ttl=60 (request in 79)

The exported iptables rules are as follows:

# Generated by iptables-save v1.6.1 on Fri Jun 12 20:18:26 2020
*raw
:PREROUTING ACCEPT [257:20972]
:OUTPUT ACCEPT [111:10984]
COMMIT
# Completed on Fri Jun 12 20:18:26 2020
# Generated by iptables-save v1.6.1 on Fri Jun 12 20:18:26 2020
*mangle
:PREROUTING ACCEPT [261:21180]
:INPUT ACCEPT [169:12060]
:FORWARD ACCEPT [80:6720]
:OUTPUT ACCEPT [115:11336]
:POSTROUTING ACCEPT [195:18056]
COMMIT
# Completed on Fri Jun 12 20:18:26 2020
# Generated by iptables-save v1.6.1 on Fri Jun 12 20:18:26 2020
*filter
:INPUT ACCEPT [177:12476]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [123:12056]
:DOCKER - [0:0]                     # create the DOCKER chain
:DOCKER-ISOLATION-STAGE-1 - [0:0]   # create the DOCKER-ISOLATION-STAGE-1 chain
:DOCKER-ISOLATION-STAGE-2 - [0:0]   # create the DOCKER-ISOLATION-STAGE-2 chain
:DOCKER-USER - [0:0]                # create the DOCKER-USER chain
[80:6720] -A FORWARD -j DOCKER-USER                 # every forwarded packet is first handed to the DOCKER-USER chain
[80:6720] -A FORWARD -j DOCKER-ISOLATION-STAGE-1    # forwarded packets are then handed to DOCKER-ISOLATION-STAGE-1; packets returning from DOCKER-USER are matched here as well
[40:3360] -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT # packets forwarded out through docker0 (towards a container) are accepted if they belong to or are related to an established connection
[0:0] -A FORWARD -o docker0 -j DOCKER               # everything leaving through docker0 also passes the DOCKER chain, which is empty here
[40:3360] -A FORWARD -i docker0 ! -o docker0 -j ACCEPT  # accept packets that come in from docker0 but leave through another interface, i.e. traffic from a container to the outside; here these are the packets towards 114.114.114.114
[0:0] -A FORWARD -i docker0 -o docker0 -j ACCEPT        # traffic that both enters and leaves through docker0, i.e. container-to-container traffic, is accepted by default; the counter stays 0 because our simple1 container only sends packets to 114.114.114.114
[40:3360] -A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2   # traffic that comes from a container and does not go back to a container is handed to DOCKER-ISOLATION-STAGE-2; here these are the 40 outgoing ping requests
[80:6720] -A DOCKER-ISOLATION-STAGE-1 -j RETURN                                             # DOCKER-ISOLATION-STAGE-1 returns
[0:0] -A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP    # DOCKER-ISOLATION-STAGE-2 would drop container traffic crossing into another docker bridge; with only docker0 present nothing is matched here
[40:3360] -A DOCKER-ISOLATION-STAGE-2 -j RETURN         # DOCKER-ISOLATION-STAGE-2 returns
[80:6720] -A DOCKER-USER -j RETURN                      # DOCKER-USER returns
COMMIT
# Completed on Fri Jun 12 20:18:26 2020
# Generated by iptables-save v1.6.1 on Fri Jun 12 20:18:26 2020
*nat
:PREROUTING ACCEPT [14:2748]
:INPUT ACCEPT [1:264]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]
[0:0] -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER              # PREROUTING: every packet whose destination is a local address goes through the DOCKER chain
[0:0] -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER # locally generated packets that target a local (non-loopback) address also go through the DOCKER chain; the container's ping to 114.114.114.114 never produces such packets
[1:84] -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE       # POSTROUTING: packets from the containers that do not go back to a container, i.e. towards the external 114.114.114.114, are masqueraded (source NAT); only the first packet of the connection is matched
[0:0] -A DOCKER -i docker0 -j RETURN                                    # the DOCKER chain returns
COMMIT
# Completed on Fri Jun 12 20:18:26 2020

The resulting forwarding diagram is:

_images/docker_packet_flow-容器ping外部masquerade.svg

A container accesses a website

The container performs two curl requests:

for i in {1..2}; do curl www.baidu.com; sleep 2; done
[34:7777] -A FORWARD -j DOCKER-USER
[34:7777] -A FORWARD -j DOCKER-ISOLATION-STAGE-1
[16:6715] -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
[0:0] -A FORWARD -o docker0 -j DOCKER
[18:1062] -A FORWARD -i docker0 ! -o docker0 -j ACCEPT
[0:0] -A FORWARD -i docker0 -o docker0 -j ACCEPT
[18:1062] -A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
[34:7777] -A DOCKER-ISOLATION-STAGE-1 -j RETURN
[0:0] -A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
[18:1062] -A DOCKER-ISOLATION-STAGE-2 -j RETURN
[34:7777] -A DOCKER-USER -j RETURN
COMMIT

*nat
:PREROUTING ACCEPT [19:3070]
:INPUT ACCEPT [3:432]
:OUTPUT ACCEPT [1:78]
:POSTROUTING ACCEPT [1:78]
:DOCKER - [0:0]
[1:90] -A PREROUTING -m addrtype --dst-type LOCAL -j LOG --log-prefix "dst-type: "  # this logged packet is PROTO=UDP SPT=137 DPT=137, sent by nmbd, and has nothing to do with docker
[1:90] -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
[0:0] -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
[4:238] -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE  # each curl produces one DNS request and one TCP SYN; two curls therefore account for the 4 packets
[0:0] -A DOCKER -i docker0 -j RETURN
COMMIT
# Completed on Mon Jun 15 11:47:29 2020

There is no difference from the ping case here.

A container accesses another container's web service

Start nginx:

docker run --rm -p 8080:80 --name some-nginx \
    -v /home/me/Dockerfile_kunpeng/Dockerfile_nginx/default.conf:/etc/nginx/conf.d/default.conf \
    -v /home/me/Dockerfile_kunpeng:/usr/share/nginx/html:ro -d nginx

Container 1 runs curl:

curl http://172.17.0.4:80
_images/docker_packet_flow-容器curl容器.svg

An external host accesses the container's web service

Same as above, but curl is run from another host:

curl http://192.168.1.180:8080
_images/docker_packet_flow-host_curl容器.svg

To do

Why does MASQUERADE only match the first packet of a connection? [1]

[1]https://unix.stackexchange.com/questions/484868/in-iptables-does-masquerade-match-only-on-new-connections-syn-packets

docker network

This section looks at container network performance, compared on x86 and ARM. The ARM CPU is a Kunpeng 920 and the x86 CPU an Intel 6248; the main measurement tool is iperf3.

Container network TCP performance comparison

Cross-host?    Technology        Kunpeng       Intel
Same host      Docker bridge     35 Gbit/s     25 Gbit/s
Same host      Open vSwitch      51 Gbit/s     32 Gbit/s
Cross-host     Docker overlay    900 Mbit/s    876 Mbit/s
Cross-host     OVS overlay       904 Mbit/s    880 Mbit/s

Docker bridge
    Installing Docker creates a network interface named docker0, a virtual Ethernet bridge that connects the containers to the host network. Docker creates a pair of veth interfaces for every container.
Open vSwitch
    The open-source virtual switch (OVS), a virtual switching solution with both kernel-space and user-space datapaths.
Docker overlay
    An overlay network created with Docker Swarm, Docker's built-in solution for container communication across hosts.
OVS overlay
    An overlay network built on OVS: the virtual switches on several hosts form a layer-2 switched network.

Hardware configuration: Kunpeng 920 vs Intel 6248

Kunpeng 920

CPU            : Kunpeng 920-6426 2600MHz
CPU Core       : 128
Memory         : Samsung 2666 MT/s 32 GB * 16

Host OS        : CentOS Linux release 7.7.1908 (AltArch)
docker         : 19.03.8
Container Image: Ubuntu 18.04.4 LTS
iperf3         : 3.1.3
Net Speed      : 1000Mb/s

Intel 6248

CPU            : Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
CPU Core       : 80
Memory         : Hynix 2666 MT/s 32 GB * 16

Host OS        : CentOS Linux release 7.7.1908
docker         : 19.03.7
Container Image: Ubuntu 18.04.4 LTS
iperf3         : 3.1.3
Net Speed      : 1000Mb/s

Docker bridge

The network topology is:

docker_bridge

Start two containers without any special configuration:

docker run -itd --name container1 ubuntu /bin/bash
docker run -itd --name container2 ubuntu /bin/bash

Both machines are configured the same way:

[user1@localhost ~]$ brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.024257803194       no              vetha6c37c1
                                                        vethe61f5c0
virbr0          8000.5254003110e8       yes             virbr0-nic
[user1@localhost ~]$ docker ps
CONTAINER ID        IMAGE     COMMAND       CREATED      STATUS       PORTS  NAMES
a51cac518006        ubuntu    "/bin/bash"   2 hours ago  Up 2 hours          container2
1726251481ee        ubuntu    "/bin/bash"   2 hours ago  Up 2 hours          container1

apt update
apt install -y iproute2 iputils-ping iperf3
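
The measurement itself is a plain iperf3 server/client pair; a sketch assuming container2 received 172.17.0.3, as in the output below:

# inside container2 (172.17.0.3): start the server
iperf3 -s
# inside container1: run the client against container2
iperf3 -c 172.17.0.3 -t 3000
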
Docker bridge Kunpeng 920 TCP:13~35Gbit/s

The Kunpeng result fluctuates between 13 and 35 Gbit/s:

root@1726251481ee:/# iperf3 -c 172.17.0.3 -t 3000
Connecting to host 172.17.0.3, port 5201
[  4] local 172.17.0.2 port 35342 connected to 172.17.0.3 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  4.06 GBytes  34.9 Gbits/sec  1008   1011 KBytes
[  4]   1.00-2.00   sec  4.06 GBytes  34.9 Gbits/sec    4   1.07 MBytes
[  4]   2.00-3.00   sec  4.02 GBytes  34.5 Gbits/sec    6   1.15 MBytes
[  4]   3.00-4.00   sec  4.04 GBytes  34.7 Gbits/sec    0   1.21 MBytes
[  4]   4.00-5.00   sec  4.02 GBytes  34.5 Gbits/sec    0   1.29 MBytes
[  4]   5.00-6.00   sec  4.02 GBytes  34.5 Gbits/sec    0   1.37 MBytes
[  4]   6.00-7.00   sec  4.04 GBytes  34.7 Gbits/sec    0   1.42 MBytes
[  4]   7.00-8.00   sec  4.09 GBytes  35.1 Gbits/sec    0   1.47 MBytes
[  4]   8.00-9.00   sec  3.57 GBytes  30.7 Gbits/sec    0   1.53 MBytes
[  4]   9.00-10.00  sec  2.33 GBytes  20.0 Gbits/sec    0   1.57 MBytes
[  4]  10.00-11.00  sec  1.60 GBytes  13.8 Gbits/sec   90   1.22 MBytes
[  4]  11.00-12.00  sec  2.42 GBytes  20.8 Gbits/sec    0   1.32 MBytes
[  4]  12.00-13.00  sec  1.92 GBytes  16.5 Gbits/sec    0   1.40 MBytes
[  4]  13.00-14.00  sec  1.66 GBytes  14.2 Gbits/sec    0   1.47 MBytes
[  4]  14.00-15.00  sec  1.84 GBytes  15.8 Gbits/sec    0   1.51 MBytes
[  4]  15.00-16.00  sec  1.79 GBytes  15.4 Gbits/sec    0   1.54 MBytes
[  4]  16.00-17.00  sec  3.59 GBytes  30.9 Gbits/sec   91   1.12 MBytes
[  4]  17.00-18.00  sec  4.12 GBytes  35.4 Gbits/sec   45    899 KBytes
[  4]  18.00-19.00  sec  4.14 GBytes  35.5 Gbits/sec    0    994 KBytes
[  4]  19.00-20.00  sec  4.11 GBytes  35.3 Gbits/sec    0   1.06 MBytes
[  4]  20.00-21.00  sec  4.15 GBytes  35.7 Gbits/sec    0   1.12 MBytes
[  4]  21.00-22.00  sec  4.15 GBytes  35.7 Gbits/sec    0   1.19 MBytes
Docker bridge Intel 6248 25Gbit/s

The Intel result is stable at around 25 Gbit/s:

root@3c7da2e893b8:/# iperf3 -c 172.17.0.2 -t 3000
Connecting to host 172.17.0.2, port 5201
[  4] local 172.17.0.3 port 48094 connected to 172.17.0.2 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  2.50 GBytes  21.5 Gbits/sec  135    321 KBytes
[  4]   1.00-2.00   sec  2.94 GBytes  25.3 Gbits/sec    0    321 KBytes
[  4]   2.00-3.00   sec  2.95 GBytes  25.4 Gbits/sec    0    321 KBytes
[  4]   3.00-4.00   sec  2.95 GBytes  25.3 Gbits/sec    0    321 KBytes
[  4]   4.00-5.00   sec  2.95 GBytes  25.3 Gbits/sec    0    321 KBytes
[  4]   5.00-6.00   sec  2.63 GBytes  22.6 Gbits/sec  631    230 KBytes
[  4]   6.00-7.00   sec  2.67 GBytes  23.0 Gbits/sec    0    232 KBytes
[  4]   7.00-8.00   sec  2.85 GBytes  24.5 Gbits/sec    0    341 KBytes
[  4]   8.00-9.00   sec  2.88 GBytes  24.8 Gbits/sec    0    341 KBytes
[  4]   9.00-10.00  sec  2.79 GBytes  24.0 Gbits/sec    0    345 KBytes
[  4]  10.00-11.00  sec  2.96 GBytes  25.4 Gbits/sec    0    345 KBytes
[  4]  11.00-12.00  sec  2.87 GBytes  24.6 Gbits/sec    0    352 KBytes
[  4]  12.00-13.00  sec  2.84 GBytes  24.4 Gbits/sec    0    361 KBytes
[  4]  13.00-14.00  sec  2.68 GBytes  23.0 Gbits/sec  532    221 KBytes
[  4]  14.00-15.00  sec  2.61 GBytes  22.4 Gbits/sec    0    221 KBytes
[  4]  15.00-16.00  sec  2.66 GBytes  22.8 Gbits/sec    0    376 KBytes
[  4]  16.00-17.00  sec  2.63 GBytes  22.6 Gbits/sec    0    376 KBytes
[  4]  17.00-18.00  sec  2.75 GBytes  23.7 Gbits/sec    0    376 KBytes
[  4]  18.00-19.00  sec  2.46 GBytes  21.1 Gbits/sec    0    376 KBytes
[  4]  19.00-20.00  sec  2.96 GBytes  25.4 Gbits/sec    0    376 KBytes
[  4]  20.00-21.00  sec  2.51 GBytes  21.5 Gbits/sec    0    376 KBytes
[  4]  21.00-22.00  sec  2.87 GBytes  24.7 Gbits/sec    0    376 KBytes
[  4]  22.00-23.00  sec  2.80 GBytes  24.0 Gbits/sec    0    400 KBytes
[  4]  23.00-24.00  sec  2.88 GBytes  24.7 Gbits/sec    0    403 KBytes
[  4]  24.00-25.00  sec  2.85 GBytes  24.5 Gbits/sec  125    290 KBytes
Root-cause analysis: on Kunpeng the iperf3 processes migrate frequently between cores, while on Intel they stay mostly on the same cores.
Kunpeng iperf3 process distribution:
1  [               0.0%]   33 [               0.0%]   65 [               0.0%]   97 [      0.0%]
2  [||             2.6%]   34 [               0.0%]   66 [               0.0%]   98 [      0.0%]
3  [|              1.3%]   35 [               0.0%]   67 [               0.0%]   99 [      0.0%]
4  [               0.0%]   36 [               0.0%]   68 [               0.0%]   100[      0.0%]
5  [||||||        31.0%]   37 [               0.0%]   69 [               0.0%]   101[      0.0%]
6  [|||||||||||   51.9%]   38 [               0.0%]   70 [               0.0%]   102[      0.0%]
7  [|||           11.0%]   39 [               0.0%]   71 [               0.0%]   103[      0.0%]
8  [               0.0%]   40 [               0.0%]   72 [               0.0%]   104[      0.0%]
9  [               0.0%]   41 [               0.0%]   73 [               0.0%]   105[      0.0%]
10 [               0.0%]   42 [               0.0%]   74 [               0.0%]   106[      0.0%]
11 [               0.0%]   43 [               0.0%]   75 [               0.0%]   107[      0.0%]
12 [               0.0%]   44 [               0.0%]   76 [               0.0%]   108[      0.0%]
13 [               0.0%]   45 [               0.0%]   77 [               0.0%]   109[      0.0%]
14 [               0.0%]   46 [               0.0%]   78 [               0.0%]   110[      0.0%]
15 [               0.0%]   47 [               0.0%]   79 [               0.0%]   111[      0.0%]
16 [               0.0%]   48 [               0.0%]   80 [               0.0%]   112[      0.0%]
17 [               0.0%]   49 [               0.0%]   81 [               0.0%]   113[      0.0%]
18 [               0.0%]   50 [               0.0%]   82 [               0.0%]   114[      0.0%]
19 [               0.0%]   51 [               0.0%]   83 [               0.0%]   115[      0.0%]
20 [               0.0%]   52 [               0.0%]   84 [               0.0%]   116[      0.0%]
21 [               0.0%]   53 [               0.0%]   85 [               0.0%]   117[      0.0%]
22 [               0.0%]   54 [               0.0%]   86 [|||||||       32.9%]   118[      0.0%]
23 [               0.0%]   55 [               0.0%]   87 [|||            6.5%]   119[      0.0%]
24 [               0.0%]   56 [               0.0%]   88 [||||          18.8%]   120[      0.0%]
25 [               0.0%]   57 [               0.0%]   89 [|              3.2%]   121[      0.0%]
26 [               0.0%]   58 [               0.0%]   90 [|              3.3%]   122[      0.0%]
27 [               0.0%]   59 [               0.0%]   91 [||||||        31.2%]   123[      0.0%]
28 [               0.0%]   60 [               0.0%]   92 [|              2.6%]   124[      0.0%]
29 [               0.0%]   61 [               0.0%]   93 [               0.0%]   125[      0.0%]
30 [               0.0%]   62 [               0.0%]   94 [               0.0%]   126[      0.0%]
31 [               0.0%]   63 [               0.0%]   95 [               0.0%]   127[      0.0%]
32 [               0.0%]   64 [               0.0%]   96 [               0.0%]   128[      0.0%]
Mem[||||                                11.6G/511G]   Tasks: 64, 288 thr; 3 running
Swp[                                      0K/4.00G]   Load average: 1.01 0.53 0.36
Intel iperf3 process distribution:
1  [|           4.7%]   21 [||||||||||100.0%]   41 [            0.0%]   61 [            0.0%]
2  [            0.0%]   22 [|||||||||||90.0%]   42 [            0.0%]   62 [            0.0%]
3  [            0.0%]   23 [            0.0%]   43 [            0.0%]   63 [||          2.0%]
4  [            0.0%]   24 [            0.0%]   44 [            0.0%]   64 [            0.0%]
5  [            0.0%]   25 [            0.0%]   45 [            0.0%]   65 [            0.0%]
6  [            0.0%]   26 [            0.0%]   46 [            0.0%]   66 [            0.0%]
7  [            0.0%]   27 [            0.0%]   47 [            0.0%]   67 [            0.0%]
8  [            0.0%]   28 [            0.0%]   48 [            0.0%]   68 [            0.0%]
9  [            0.0%]   29 [            0.0%]   49 [            0.0%]   69 [            0.0%]
10 [            0.0%]   30 [            0.0%]   50 [            0.0%]   70 [            0.0%]
11 [            0.0%]   31 [            0.0%]   51 [            0.0%]   71 [            0.0%]
12 [            0.0%]   32 [|           0.6%]   52 [            0.0%]   72 [            0.0%]
13 [            0.0%]   33 [            0.0%]   53 [            0.0%]   73 [            0.0%]
14 [            0.0%]   34 [            0.0%]   54 [            0.0%]   74 [            0.0%]
15 [            0.0%]   35 [|           0.6%]   55 [            0.0%]   75 [            0.0%]
16 [            0.0%]   36 [            0.0%]   56 [            0.0%]   76 [            0.0%]
17 [            0.0%]   37 [            0.0%]   57 [            0.0%]   77 [            0.0%]
18 [            0.0%]   38 [            0.0%]   58 [            0.0%]   78 [            0.0%]
19 [            0.0%]   39 [            0.0%]   59 [            0.0%]   79 [            0.0%]
20 [            0.0%]   40 [            0.0%]   60 [            0.0%]   80 [            0.0%]
Mem[|||                           4.62G/503G]   Tasks: 69, 337 thr; 3 running
Swp[                                0K/4.00G]   Load average: 0.39 0.15 0.14
                                                Uptime: 1 day, 02:20:37

After pinning the iperf3 processes to cores on Kunpeng, the result stabilizes at around 35 Gbit/s:

taskset -cp 0 33802
taskset -cp 1 33022
[root@localhost user1]# taskset -cp 0 39081
pid 39081's current affinity list: 0-127
pid 39081's new affinity list: 0
[root@localhost user1]# taskset -cp 1 39082
pid 39082's current affinity list: 0
pid 39082's new affinity list: 1
[root@localhost user1]#
[  4] 149.00-150.00 sec  4.06 GBytes  34.8 Gbits/sec    0   3.00 MBytes
[  4] 150.00-151.00 sec  4.04 GBytes  34.7 Gbits/sec    0   3.00 MBytes
[  4] 151.00-152.00 sec  4.07 GBytes  35.0 Gbits/sec    0   3.00 MBytes
[  4] 152.00-153.00 sec  4.10 GBytes  35.2 Gbits/sec    0   3.00 MBytes
[  4] 153.00-154.00 sec  4.08 GBytes  35.0 Gbits/sec    0   3.00 MBytes
[  4] 154.00-155.00 sec  4.07 GBytes  35.0 Gbits/sec    0   3.00 MBytes
[  4] 155.00-156.00 sec  4.09 GBytes  35.1 Gbits/sec    0   3.00 MBytes
[  4] 156.00-157.00 sec  3.91 GBytes  33.6 Gbits/sec    0   3.00 MBytes
[  4] 157.00-158.00 sec  4.06 GBytes  34.8 Gbits/sec    0   3.00 MBytes
[  4] 158.00-159.00 sec  4.07 GBytes  35.0 Gbits/sec    0   3.00 MBytes
[  4] 159.00-160.00 sec  4.07 GBytes  34.9 Gbits/sec    0   3.00 MBytes
[  4] 160.00-161.00 sec  4.08 GBytes  35.0 Gbits/sec    0   3.00 MBytes
[  4] 161.00-162.00 sec  4.09 GBytes  35.2 Gbits/sec    0   3.00 MBytes
[  4] 162.00-163.00 sec  4.06 GBytes  34.9 Gbits/sec    0   3.00 MBytes
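
A small helper for the pinning step above — a sketch that pins each running iperf3 process to its own core; the PIDs are found with pidof instead of being typed by hand:

cpu=0
for pid in $(pidof iperf3); do
    taskset -cp "$cpu" "$pid"    # pin this iperf3 process to core $cpu
    cpu=$((cpu + 1))
done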

OVS(open vswitch)

For installing and running OVS, see the ovs section.

Network topology:

ovs_bridge

Create an OVS virtual switch, add an interface to container A and container B, and test with iperf3. [2] [3]

ovs-vsctl add-br ovs-br1
ip addr add 173.16.1.1/24 dev ovs-br1
ovs-docker add-port ovs-br1 eth1 containerA --ipaddress=173.16.1.2/24
ovs-docker add-port ovs-br1 eth1 containerB --ipaddress=173.16.1.3/24
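
Before measuring, it is worth confirming that the bridge and the two container ports exist and that the containers can reach each other (containerA/containerB as above):

ovs-vsctl show                               # ovs-br1 should list the two added ports
docker exec containerA ping -c 3 173.16.1.3  # containerA reaches containerB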

Install the necessary tools inside the containers:

apt install -y iproute2 iputils-ping iperf3

Test commands:

iperf3 -s                       # on the server, 173.16.1.2
iperf3 -c 173.16.1.3 -t 30000   # on the client
OVS bridge Kunpeng TCP:51Gbit/s

Without core pinning the bandwidth is noticeably lower; pinning brings a large improvement. OVS also performs better here than the Linux bridge.

[  4] 113.00-114.00 sec  1.88 GBytes  16.1 Gbits/sec    0   1.29 MBytes
[  4] 114.00-115.00 sec  2.15 GBytes  18.5 Gbits/sec    0   1.33 MBytes
[  4] 115.00-116.00 sec  2.24 GBytes  19.2 Gbits/sec    0   1.35 MBytes
[  4] 116.00-117.00 sec  2.34 GBytes  20.1 Gbits/sec    0   1.42 MBytes
[  4] 117.00-118.00 sec  2.29 GBytes  19.7 Gbits/sec    0   1.55 MBytes
[  4] 118.00-119.00 sec  2.26 GBytes  19.4 Gbits/sec    0   1.72 MBytes
[  4] 119.00-120.00 sec  5.26 GBytes  45.2 Gbits/sec    0   1.89 MBytes
[  4] 120.00-121.00 sec  5.26 GBytes  45.2 Gbits/sec    0   2.10 MBytes
[  4] 121.00-122.00 sec  3.08 GBytes  26.5 Gbits/sec    2   2.34 MBytes
[  4] 122.00-123.00 sec  5.43 GBytes  46.7 Gbits/sec    2   2.35 MBytes
[  4] 123.00-124.00 sec  4.62 GBytes  39.7 Gbits/sec    0   2.36 MBytes
[  4] 124.00-125.00 sec  5.61 GBytes  48.2 Gbits/sec    0   2.36 MBytes
[  4] 125.00-126.00 sec  6.16 GBytes  52.9 Gbits/sec    0   2.37 MBytes
[  4] 126.00-127.00 sec  5.68 GBytes  48.8 Gbits/sec    0   2.40 MBytes
[  4] 127.00-128.00 sec  6.10 GBytes  52.4 Gbits/sec    0   2.42 MBytes
[  4] 128.00-129.00 sec  5.76 GBytes  49.5 Gbits/sec    0   2.49 MBytes
[  4] 129.00-130.00 sec  5.96 GBytes  51.2 Gbits/sec    0   2.54 MBytes
[  4] 130.00-131.00 sec  5.96 GBytes  51.2 Gbits/sec  136   1.89 MBytes
OVS bridge Intel TCP:32Gbit/s
[  4]  20.00-21.00  sec  3.73 GBytes  32.0 Gbits/sec    0    378 KBytes
[  4]  21.00-22.00  sec  3.45 GBytes  29.7 Gbits/sec    0    427 KBytes
[  4]  22.00-23.00  sec  3.30 GBytes  28.4 Gbits/sec    0    427 KBytes
[  4]  23.00-24.00  sec  3.59 GBytes  30.8 Gbits/sec    0    427 KBytes
[  4]  24.00-25.00  sec  3.70 GBytes  31.8 Gbits/sec    0    427 KBytes
[  4]  25.00-26.00  sec  3.50 GBytes  30.1 Gbits/sec    0    427 KBytes
[  4]  26.00-27.00  sec  3.32 GBytes  28.5 Gbits/sec    0    427 KBytes
[  4]  27.00-28.00  sec  3.67 GBytes  31.5 Gbits/sec    0    458 KBytes
[  4]  28.00-29.00  sec  3.75 GBytes  32.2 Gbits/sec    0    458 KBytes
[  4]  29.00-30.00  sec  3.55 GBytes  30.5 Gbits/sec    0    458 KBytes
[  4]  30.00-31.00  sec  3.69 GBytes  31.7 Gbits/sec    0    465 KBytes
[  4]  31.00-32.00  sec  3.52 GBytes  30.2 Gbits/sec    0    465 KBytes
[  4]  32.00-33.00  sec  3.61 GBytes  31.0 Gbits/sec    0    465 KBytes
[  4]  33.00-34.00  sec  3.53 GBytes  30.3 Gbits/sec    0    465 KBytes

Docker overlay

The Docker overlay topology is:

docker_overlay

Use docker swarm to set up the manager/worker relationship, create an overlay network named net0, and run an ubuntu container on each of the three hosts, specifying net0 as the container network.

Problems may come up while creating Docker's built-in overlay network; if firewalld or dockerd had to be restarted along the way, it is best to reboot the machine.

Add containers to the overlay network and run the test.

On host1:

docker network create --driver overlay --attachable net0
docker run -itd --name ubuntu1 --network net0 ubuntux86
docker exec -it ubuntu1 bash
iperf3 -s

On host2:

docker run -itd --name ubuntu2 --network net0 ubuntux86
docker exec -it ubuntu2 bash
iperf3 -c 10.0.2.4
iperf3 -u -c 10.0.2.4 -b 920M
Docker overlay Kunpeng 920 TCP: 900Mbit/s UDP:920Mbit/s

Docker overlay Kunpeng 920 TCP result: 900 Mbit/s

root@47bc82102ad2:/# iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.0.2.4, port 34312
[  5] local 10.0.2.8 port 5201 connected to 10.0.2.4 port 34314
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   104 MBytes   875 Mbits/sec
[  5]   1.00-2.00   sec   108 MBytes   909 Mbits/sec
[  5]   2.00-3.00   sec   108 MBytes   909 Mbits/sec
[  5]   3.00-4.00   sec   108 MBytes   909 Mbits/sec
[  5]   4.00-5.00   sec   108 MBytes   908 Mbits/sec
[  5]   5.00-6.00   sec   106 MBytes   885 Mbits/sec
[  5]   6.00-7.00   sec   107 MBytes   899 Mbits/sec
[  5]   7.00-8.00   sec   108 MBytes   909 Mbits/sec
[  5]   8.00-9.00   sec   106 MBytes   888 Mbits/sec
[  5]   9.00-10.00  sec   108 MBytes   909 Mbits/sec
[  5]  10.00-10.05  sec  5.32 MBytes   908 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.05  sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-10.05  sec  1.05 GBytes   900 Mbits/sec                  receiver
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

Docker overlay Kunpeng 920 UDP result: 910 Mbit/s

Accepted connection from 10.0.2.4, port 34444
[  5] local 10.0.2.8 port 5201 connected to 10.0.2.4 port 49708
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  5]   0.00-1.00   sec  93.6 MBytes   785 Mbits/sec  0.056 ms  383/12364 (3.1%)
[  5]   1.00-2.00   sec   109 MBytes   912 Mbits/sec  0.056 ms  0/13921 (0%)
[  5]   2.00-3.00   sec   109 MBytes   910 Mbits/sec  0.059 ms  0/13890 (0%)
[  5]   3.00-4.00   sec   108 MBytes   910 Mbits/sec  0.058 ms  0/13881 (0%)
[  5]   4.00-5.00   sec   108 MBytes   910 Mbits/sec  0.058 ms  0/13886 (0%)
[  5]   5.00-6.00   sec   108 MBytes   910 Mbits/sec  0.057 ms  0/13885 (0%)
[  5]   6.00-7.00   sec   108 MBytes   910 Mbits/sec  0.057 ms  0/13886 (0%)
[  5]   7.00-8.00   sec   107 MBytes   900 Mbits/sec  0.055 ms  0/13736 (0%)
[  5]   8.00-9.00   sec   109 MBytes   914 Mbits/sec  0.056 ms  95/14038 (0.68%)
[  5]   9.00-10.00  sec   108 MBytes   910 Mbits/sec  0.056 ms  0/13886 (0%)
[  5]  10.00-10.04  sec  4.12 MBytes   884 Mbits/sec  4.247 ms  0/527 (0%)
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  5]   0.00-10.04  sec  0.00 Bytes  0.00 bits/sec  4.247 ms  478/137900 (0.35%)
Docker overlay Intel 6248 TCP: 876Mbit/s UDP: 920Mbit/s

Docker overlay Intel 6248 TCP result: 876 Mbit/s

-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.0.2.6, port 35886
[  5] local 10.0.2.4 port 5201 connected to 10.0.2.6 port 35888
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   103 MBytes   861 Mbits/sec
[  5]   1.00-2.00   sec   106 MBytes   889 Mbits/sec
[  5]   2.00-3.00   sec   105 MBytes   879 Mbits/sec
[  5]   3.00-4.00   sec   106 MBytes   887 Mbits/sec
[  5]   4.00-5.00   sec   105 MBytes   878 Mbits/sec
[  5]   5.00-6.00   sec   104 MBytes   871 Mbits/sec
[  5]   6.00-7.00   sec   105 MBytes   881 Mbits/sec
[  5]   7.00-8.00   sec   104 MBytes   873 Mbits/sec
[  5]   8.00-9.00   sec   104 MBytes   876 Mbits/sec
[  5]   9.00-10.00  sec   103 MBytes   866 Mbits/sec
[  5]  10.00-10.04  sec  3.74 MBytes   850 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.04  sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-10.04  sec  1.02 GBytes   876 Mbits/sec                  receiver
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

Docker overlay Intel 6248 UDP result: 920 Mbit/s

-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.0.2.6, port 35926
[  5] local 10.0.2.4 port 5201 connected to 10.0.2.6 port 41926
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  5]   0.00-1.00   sec  94.8 MBytes   795 Mbits/sec  0.068 ms  61/12189 (0.5%)
[  5]   1.00-2.00   sec   110 MBytes   926 Mbits/sec  0.069 ms  8/14136 (0.057%)
[  5]   2.00-3.00   sec   110 MBytes   920 Mbits/sec  0.070 ms  0/14031 (0%)
[  5]   3.00-4.00   sec   110 MBytes   921 Mbits/sec  0.069 ms  0/14047 (0%)
[  5]   4.00-5.00   sec   110 MBytes   921 Mbits/sec  0.069 ms  0/14046 (0%)
[  5]   5.00-6.00   sec   110 MBytes   919 Mbits/sec  0.067 ms  6/14029 (0.043%)
[  5]   6.00-7.00   sec   110 MBytes   920 Mbits/sec  0.069 ms  0/14039 (0%)
[  5]   7.00-8.00   sec   110 MBytes   919 Mbits/sec  0.068 ms  0/14026 (0%)
[  5]   8.00-9.00   sec   110 MBytes   920 Mbits/sec  0.068 ms  0/14042 (0%)
[  5]   9.00-10.00  sec   110 MBytes   920 Mbits/sec  0.070 ms  0/14045 (0%)
[  5]  10.00-10.04  sec  4.27 MBytes   925 Mbits/sec  0.067 ms  0/547 (0%)
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  5]   0.00-10.04  sec  0.00 Bytes  0.00 bits/sec  0.067 ms  75/139177 (0.054%)

OVS overlay

The OVS overlay topology is:

ovs_overlay

One Intel 6248 acts as the server; containers on the Kunpeng machine and on another Intel 6248 are connected through an OVS overlay network.

Note that OVS forwarding rules need to be added on both the server and the client hosts:

[root@centos86 user1]# ovs-ofctl dump-flows ovs-br2
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=58541.701s, table=0, n_packets=97516, n_bytes=6504861, idle_age=57619, dl_dst=66:54:7a:62:b6:10 actions=output:1
cookie=0x0, duration=58405.390s, table=0, n_packets=13621374, n_bytes=20082918183, idle_age=57619, dl_src=66:54:7a:62:b6:10 actions=output:8
cookie=0x0, duration=232287.907s, table=0, n_packets=218038, n_bytes=17877238, idle_age=65534, hard_age=65534, priority=1,in_port=8 actions=output:3
cookie=0x0, duration=232279.101s, table=0, n_packets=12857841, n_bytes=18850928879, idle_age=65534, hard_age=65534, priority=1,in_port=3 actions=output:8

The test commands are:

iperf3 -s -p 3333
iperf3 -c 10.10.10.203 -p 3333
iperf3 -u -c 10.10.10.203 -p 3333 -b 800M -t 3000
OVS overlay Kunpeng 920 TCP:904Mbit/s UDP:800Mbit/s

OVS overlay Kunpeng 920 TCP result: 904 Mbit/s

root@774b2f613874:/# iperf3 -s -p 3333
-----------------------------------------------------------
Server listening on 3333
-----------------------------------------------------------
Accepted connection from 10.10.10.180, port 53102
[  5] local 10.10.10.203 port 3333 connected to 10.10.10.180 port 53104
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   103 MBytes   868 Mbits/sec
[  5]   1.00-2.00   sec   108 MBytes   909 Mbits/sec
[  5]   2.00-3.00   sec   108 MBytes   909 Mbits/sec
[  5]   3.00-4.00   sec   108 MBytes   909 Mbits/sec
[  5]   4.00-5.00   sec   108 MBytes   909 Mbits/sec
[  5]   5.00-6.00   sec   108 MBytes   909 Mbits/sec
[  5]   6.00-7.00   sec   108 MBytes   909 Mbits/sec
[  5]   7.00-8.00   sec   108 MBytes   909 Mbits/sec
[  5]   8.00-9.00   sec   108 MBytes   906 Mbits/sec
[  5]   9.00-10.00  sec   108 MBytes   909 Mbits/sec
[  5]  10.00-10.04  sec  4.03 MBytes   908 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.04  sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-10.04  sec  1.06 GBytes   904 Mbits/sec                  receiver
-----------------------------------------------------------

OVS overlay Kunpeng 920 UDP result: 800 Mbit/s

Accepted connection from 10.10.10.180, port 53114
[  5] local 10.10.10.203 port 3333 connected to 10.10.10.180 port 48230
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  5]   0.00-1.00   sec  87.1 MBytes   731 Mbits/sec  0.061 ms  0/11152 (0%)
[  5]   1.00-2.00   sec  93.8 MBytes   787 Mbits/sec  0.063 ms  0/12004 (0%)
[  5]   2.00-3.00   sec  97.7 MBytes   820 Mbits/sec  0.057 ms  0/12510 (0%)
[  5]   3.00-4.00   sec  98.2 MBytes   824 Mbits/sec  0.063 ms  0/12570 (0%)
[  5]   4.00-5.00   sec  91.6 MBytes   768 Mbits/sec  0.051 ms  15/11740 (0.13%)
[  5]   5.00-6.00   sec  97.0 MBytes   814 Mbits/sec  0.056 ms  0/12418 (0%)
[  5]   6.00-7.00   sec  95.8 MBytes   804 Mbits/sec  0.060 ms  0/12261 (0%)
[  5]   7.00-8.00   sec  95.7 MBytes   803 Mbits/sec  0.059 ms  0/12252 (0%)
[  5]   8.00-9.00   sec  91.8 MBytes   770 Mbits/sec  0.059 ms  0/11751 (0%)
[  5]   9.00-10.00  sec  97.3 MBytes   817 Mbits/sec  0.053 ms  0/12460 (0%)
[  5]  10.00-11.00  sec  96.9 MBytes   813 Mbits/sec  0.056 ms  0/12406 (0%)
[  5]  11.00-12.00  sec  96.3 MBytes   808 Mbits/sec  0.060 ms  0/12326 (0%)
[  5]  12.00-13.00  sec  94.1 MBytes   789 Mbits/sec  0.061 ms  0/12041 (0%)
[  5]  13.00-14.00  sec  90.7 MBytes   761 Mbits/sec  0.057 ms  0/11605 (0%)
[  5]  14.00-15.00  sec   101 MBytes   848 Mbits/sec  0.062 ms  0/12946 (0%)
OVS overlay Intel 6248 TCP: 880Mbit/s UDP: 730Mbit/s

Test commands:

iperf3 -s -p 3333
iperf3 -c 10.10.10.203 -p 3333
iperf3 -u -c 10.10.10.203 -p 3333 -b 750M -t 3000

OVS overlay Intel 6248 TCP result:

Accepted connection from 10.10.10.202, port 57518
[  5] local 10.10.10.203 port 3333 connected to 10.10.10.202 port 57520
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   101 MBytes   844 Mbits/sec
[  5]   1.00-2.00   sec   104 MBytes   876 Mbits/sec
[  5]   2.00-3.00   sec   105 MBytes   878 Mbits/sec
[  5]   3.00-4.00   sec   105 MBytes   880 Mbits/sec
[  5]   4.00-5.00   sec   106 MBytes   886 Mbits/sec
[  5]   5.00-6.00   sec   105 MBytes   883 Mbits/sec
[  5]   6.00-7.00   sec   107 MBytes   896 Mbits/sec
[  5]   7.00-8.00   sec   105 MBytes   882 Mbits/sec
[  5]   8.00-9.00   sec   106 MBytes   892 Mbits/sec
[  5]   9.00-10.00  sec   106 MBytes   890 Mbits/sec
[  5]  10.00-10.03  sec  3.53 MBytes   893 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.03  sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-10.03  sec  1.03 GBytes   881 Mbits/sec                  receiver

OVS overlay Intel 6248 UDP result:

Accepted connection from 10.10.10.202, port 57546
[  5] local 10.10.10.203 port 3333 connected to 10.10.10.202 port 47677
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  5]   0.00-1.00   sec  78.8 MBytes   661 Mbits/sec  0.067 ms  72/10153 (0.71%)
[  5]   1.00-2.00   sec  89.2 MBytes   749 Mbits/sec  0.068 ms  0/11422 (0%)
[  5]   2.00-3.00   sec  87.5 MBytes   734 Mbits/sec  0.069 ms  240/11435 (2.1%)
[  5]   3.00-4.00   sec  87.4 MBytes   733 Mbits/sec  0.070 ms  253/11444 (2.2%)
[  5]   4.00-5.00   sec  87.3 MBytes   732 Mbits/sec  0.066 ms  269/11443 (2.4%)
[  5]   5.00-6.00   sec  87.3 MBytes   732 Mbits/sec  0.065 ms  273/11444 (2.4%)
[  5]   6.00-7.00   sec  87.3 MBytes   732 Mbits/sec  0.065 ms  274/11445 (2.4%)
[  5]   7.00-8.00   sec  87.2 MBytes   732 Mbits/sec  0.066 ms  281/11444 (2.5%)
[  5]   8.00-9.00   sec  87.2 MBytes   732 Mbits/sec  0.065 ms  280/11443 (2.4%)
[  5]   9.00-10.00  sec  87.2 MBytes   732 Mbits/sec  0.066 ms  278/11444 (2.4%)
[  5]  10.00-11.00  sec  87.2 MBytes   731 Mbits/sec  0.069 ms  285/11445 (2.5%)
[  5]  11.00-12.00  sec  87.1 MBytes   731 Mbits/sec  0.069 ms  290/11444 (2.5%)
[  5]  12.00-13.00  sec  87.1 MBytes   731 Mbits/sec  0.069 ms  292/11444 (2.6%)
[  5]  13.00-14.00  sec  87.1 MBytes   731 Mbits/sec  0.069 ms  296/11444 (2.6%)
[  5]  14.00-15.00  sec  87.1 MBytes   731 Mbits/sec  0.068 ms  297/11444 (2.6%)
[  5]  15.00-16.00  sec  87.7 MBytes   735 Mbits/sec  0.066 ms  222/11443 (1.9%)
[  5]  16.00-17.00  sec  88.0 MBytes   738 Mbits/sec  0.067 ms  180/11443 (1.6%)
[  5]  17.00-18.00  sec  89.0 MBytes   747 Mbits/sec  0.068 ms  66/11463 (0.58%)

Troubleshooting

iptables no docker0 No chain/target/match by that name.
[root@centos86 ~]# docker run  -it --rm --name=iperf3-server -p 5201:5201 networkstatic/iperf3 -s
docker: Error response from daemon: driver failed programming external connectivity on endpoint iperf3-server
(3c03a70a814556d08e368b35898aa50284470d2b4b4e18e6ca9bd3dd698874fd):  (iptables failed:
iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 5201 -j DNAT --to-destination 172.17.0.7:5201 ! -i docker0: iptables:
No chain/target/match by that name.
(exit status 1)).
[root@centos86 ~]# systemctl restart docker
[root@centos86 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
DOCKER-USER  all  --  anywhere             anywhere
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere

Chain DOCKER (1 references)
target     prot opt source               destination

Workaround: this is probably firewall-related; docker manipulates the firewall at startup, and the docker0 iptables rules ended up missing. Restarting docker recreates them:

systemctl restart docker
iperf3 TCP throughput is 0, and in the UDP test the server receives no packets

The two containers can ping each other, and nc shows that both the TCP and UDP ports work, yet iperf3 cannot complete a test.

iperf Done.
root@fff54a208fff:/# iperf3 -c 10.10.10.203 -p 3333
Connecting to host 10.10.10.203, port 3333
[  4] local 10.10.10.202 port 57514 connected to 10.10.10.203 port 3333
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  84.8 KBytes   694 Kbits/sec    2   1.41 KBytes
[  4]   1.00-2.00   sec  0.00 Bytes  0.00 bits/sec    1   1.41 KBytes
[  4]   2.00-3.00   sec  0.00 Bytes  0.00 bits/sec    0   1.41 KBytes
[  4]   3.00-4.00   sec  0.00 Bytes  0.00 bits/sec    1   1.41 KBytes
[  4]   4.00-5.00   sec  0.00 Bytes  0.00 bits/sec    0   1.41 KBytes
[  4]   5.00-6.00   sec  0.00 Bytes  0.00 bits/sec    0   1.41 KBytes
[  4]   6.00-7.00   sec  0.00 Bytes  0.00 bits/sec    1   1.41 KBytes
[  4]   7.00-8.00   sec  0.00 Bytes  0.00 bits/sec    0   1.41 KBytes
[  4]   8.00-9.00   sec  0.00 Bytes  0.00 bits/sec    0   1.41 KBytes
[  4]   9.00-10.00  sec  0.00 Bytes  0.00 bits/sec    0   1.41 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  84.8 KBytes  69.5 Kbits/sec    5             sender
[  4]   0.00-10.00  sec  0.00 Bytes  0.00 bits/sec                  receiver

iperf Done.
root@fff54a208fff:/# iperf3 -u -c 10.10.10.203 -p 3333
Connecting to host 10.10.10.203, port 3333
[  4] local 10.10.10.202 port 39060 connected to 10.10.10.203 port 3333
[ ID] Interval           Transfer     Bandwidth       Total Datagrams
[  4]   0.00-1.00   sec   128 KBytes  1.05 Mbits/sec  16
[  4]   1.00-2.00   sec   128 KBytes  1.05 Mbits/sec  16
[  4]   2.00-3.00   sec   128 KBytes  1.05 Mbits/sec  16
[  4]   3.00-4.00   sec   128 KBytes  1.05 Mbits/sec  16
[  4]   4.00-5.00   sec   128 KBytes  1.05 Mbits/sec  16
[  4]   5.00-6.00   sec   128 KBytes  1.05 Mbits/sec  16
[  4]   6.00-7.00   sec   128 KBytes  1.05 Mbits/sec  16
[  4]   7.00-8.00   sec   128 KBytes  1.05 Mbits/sec  16
[  4]   8.00-9.00   sec   128 KBytes  1.05 Mbits/sec  16
[  4]   9.00-10.00  sec   128 KBytes  1.05 Mbits/sec  16
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec  1.25 MBytes  1.05 Mbits/sec  0.000 ms  0/0 (0%)
[  4] Sent 0 datagrams

Workaround: probably an MTU problem. [6]

docker: Error response from daemon
[root@intel6248 user1]# docker run -d --net=my-attachable-overlay-network --name=c1 busybox top
c5ba0656fedd9e05acf296c61fcffc9ad978f442e70da5b8315760ffe8386eca
docker: Error response from daemon: attaching to network failed, make sure your
network options are correct and check manager logs: context deadline exceeded.
Mar 31 15:11:27 intel6248 dockerd[22177]: level=info msg="worker 0x22armc3kg844zqkiickl4nx was successfully registered" method="(*Dispatcher).register"
Mar 31 15:11:27 intel6248 dockerd[22177]: level=info msg="Node 576c35de7a81/192.168.1.202, joined gossip cluster"
Mar 31 15:11:27 intel6248 dockerd[22177]: level=info msg="Node 576c35de7a81/192.168.1.202, added to nodes list"
Mar 31 15:12:47 intel6248 dockerd[22177]: level=info msg="initialized VXLAN UDP port to 4789 "
Mar 31 15:12:47 intel6248 dockerd[22177]: level=error msg="error reading the kernel parameter net.ipv4.vs.expire_nodest_conn" error="open /proc/sys/net/ipv4/vs/expire_no
Mar 31 15:12:47 intel6248 dockerd[22177]: level=error msg="error reading the kernel parameter net.ipv4.vs.expire_nodest_conn" error="open /proc/sys/net/ipv4/vs/expire_no
Mar 31 15:12:47 intel6248 dockerd[22177]: level=error msg="moving interface ov-001000-ti2f2 to host ns failed, invalid argument, after config error error setting interfa
Mar 31 15:12:47 intel6248 dockerd[22177]: level=error msg="failed removing container name resolution for a8fb2e8227966d0e749225b1c2feddd188832e60614b1d579d55c33f0a555f9e
Mar 31 15:12:47 intel6248 dockerd[22177]: level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint_count ti2f2pgpth0my5q3as
Mar 31 15:12:47 intel6248 dockerd[22177]: level=error msg="Failed creating ingress network: network sandbox join failed: subnet sandbox join failed for \"10.0.0.0/24\":
Mar 31 15:12:48 intel6248 dockerd[22177]: level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint_count nvkcpo8njn4se8osy6
Mar 31 15:12:48 intel6248 dockerd[22177]: level=error msg="fatal task error" error="network sandbox join failed: subnet sandbox join failed for \"10.0.1.0/24\": error cr
Mar 31 15:12:48 intel6248 dockerd[22177]: level=warning msg="Peer operation failed:Unable to find the peerDB for nid:nvkcpo8njn4se8osy6h25p017 op:&{3 nvkcpo8njn4se8osy6h
Mar 31 15:12:48 intel6248 dockerd[22177]: level=info msg="initialized VXLAN UDP port to 4789 "
Mar 31 15:12:48 intel6248 dockerd[22177]: level=error msg="error reading the kernel parameter net.ipv4.vs.expire_nodest_conn" error="open /proc/sys/net/ipv4/vs/expire_no
Mar 31 15:12:48 intel6248 dockerd[22177]: level=error msg="moving interface ov-001000-ti2f2 to host ns failed, invalid argument, after config error error setting interfa
Mar 31 15:12:48 intel6248 dockerd[22177]: level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint_count ti2f2pgpth0my5q3as
Mar 31 15:12:48 intel6248 dockerd[22177]: level=error msg="failed removing container name resolution for a8fb2e8227966d0e749225b1c2feddd188832e60614b1d579d55c33f0a555f9e
Mar 31 15:12:48 intel6248 dockerd[22177]: level=error msg="Failed creating ingress network: network sandbox join failed: subnet sandbox join failed for \"10.0.0.0/24\":
Mar 31 15:13:07 intel6248 dockerd[22177]: level=error msg="438df95d9494edfe68b7078928d8c59554f43586eaa915306937c80386c041a8 cleanup: failed to delete container from cont
Mar 31 15:13:07 intel6248 dockerd[22177]: level=error msg="Handler for POST /v1.40/containers/438df95d9494edfe68b7078928d8c59554f43586eaa915306937c80386c041a8/start retu
Mar 31 15:16:17 intel6248 dockerd[22177]: level=info msg="NetworkDB stats intel6248(1b3c4eec767b) - netID:nvkcpo8njn4se8osy6h25p017 leaving:true netPeers:0 entries:0 Que
Mar 31 15:16:17 intel6248 dockerd[22177]: level=info msg="NetworkDB stats intel6248(1b3c4eec767b) - netID:ti2f2pgpth0my5q3asb2vc83w leaving:true netPeers:1 entries:0 Que

Workaround:

After checking several times, nothing in the procedure itself was wrong. According to the forum, the likely cause is that the docker daemon was restarted partway through [7]; after rebooting the machine the problem disappeared.

To do

io, k8s, 10G net

[1]https://www.sdnlab.com/23191.html
[2]http://containertutorials.com/network/ovs_docker.html
[3]https://developer.ibm.com/recipes/tutorials/using-ovs-bridge-for-docker-networking/
[4]https://docker-k8s-lab.readthedocs.io/en/latest/docker/docker-ovs.html
[5]https://hustcat.github.io/overlay-network-base-ovs/
[6]http://dockone.io/article/228
[7]https://success.docker.com/article/error-network-sandbox-join-failed-during-service-restarts

docker swarm

docker swarm 常用命令

docker service create --replicas 1 --name pingthem busybox ping baidu.com
docker service ps pingthem
docker service inspect pingthem
docker service scale pingthem=5
docker service ls
docker service rm pingthem

创建manager

[user1@centos86 ~]$ docker swarm init --advertise-addr 192.168.1.203
Swarm initialized: current node (4nj18pipvg0rg4879psiql8xe) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-5i86qowshahqf67m0a2569i2y6pnpo25muu1ne5hn3eeo3k9bi-3efz4kdw8ol43nj4nw23ckv17 192.168.1.203:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

一台主机加入swarm

docker swarm join --token SWMTKN-1-5i86qowshahqf67m0a2569i2y6pnpo25muu1ne5hn3eeo3k9bi-3efz4kdw8ol43nj4nw23ckv17 192.168.1.203:2377

另一台主机加入swarm

docker swarm join --token SWMTKN-1-5i86qowshahqf67m0a2569i2y6pnpo25muu1ne5hn3eeo3k9bi-3efz4kdw8ol43nj4nw23ckv17 192.168.1.203:2377

创建完毕查看集群状态,已经加入了三个节点。

[user1@intel6248 ~]$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
4nj18pipvg0rg4879psiql8xe *   intel6248           Ready               Active              Leader              19.03.7
jle3s6my1znz1yg9z4o450kkp     kunpeng916          Ready               Active                                  19.03.8
k7cndxruwpyjcauxxdlc83b3t     kunpeng920          Ready               Active                                  19.03.8

创建一个工作任务
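The notes leave this step empty; a minimal sketch reusing the pingthem example from the command list above (the replica count is arbitrary):

docker service create --replicas 3 --name pingthem busybox ping baidu.com
docker service ps pingthem        # one task per replica, spread across the three nodes
docker service scale pingthem=5   # scale the service up afterwards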

dstat

dstat 功能丰富,可用于替代vmstat, iostat,netstat 和ifstat。

root@ubuntu:~# dstat
You did not select any stats, using -cdngy by default.
--total-cpu-usage-- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai stl| read  writ| recv  send|  in   out | int   csw
  0   0 100   0   0|2019B 1422B|   0     0 |   0     0 |  62    73
  1   0  85  13   0|   0   124k|1031k 1449k|   0     0 |6324  4614
  0   0  88  12   0|   0    36k| 351k 2178k|   0     0 |5484  3732
  0   1  88  11   0|  20k  264k| 289k 1969k|   0     0 |8681  5838
  0   1  90   8   0|   0     0 | 512k 1843k|   0     0 |  13k 9888
  0   0  89  11   0|   0     0 | 428k 3169k|   0     0 |6685  4854
  1   1  91   8   0|   0  8192B| 355k 2070k|   0     0 |9556  7076
  1   1  92   7   0|   0  4096B| 396k 2796k|   0     0 |9253  6856
  0   0  92   7   0|   0    40k| 269k 1867k|   0     0 |7361  5017
  0   1  92   7   0|   0     0 | 392k 1774k|   0     0 |9138  6676
  1   1  91   7   0|   0     0 | 416k 1764k|   0     0 |  11k 7635
  3   1  90   6   0|   0   200k| 369k 1861k|   0     0 |9995  6864
  0   1  93   6   0|   0     0 | 286k 2162k|   0     0 |7412  5229
  0   1  93   6   0|   0     0 | 932k 2113k|   0     0 |9725  7276
  0   0  94   5   0|   0     0 | 340k 2799k|   0     0 |7805  5675
  0   0  95   4   0|   0     0 | 211k 1843k|   0     0 |4179  3235
usr   time spent running user code
sys   time spent running system/kernel code
idl   idle CPU percentage
wai   time spent waiting for I/O
stl   steal time (CPU time taken by the hypervisor; only meaningful in virtual machines)

read  disk read rate (per second)
writ  disk write rate (per second)
recv  network bytes received (per second)
send  network bytes sent (per second)

in    pages paged in
out   pages paged out

int   system interrupts
csw   context switches
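A couple of typical invocations (standard dstat options; the interval and count are positional arguments):

dstat -tcdn 5 12     # timestamp + cpu, disk and net columns, one sample every 5 s, 12 samples
dstat -cdngy 1       # the default column set shown above, refreshed every second until Ctrl-C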

edac-util

A tool that reports memory (ECC) error counts collected by the kernel EDAC drivers.

正常的输出:

[root@hs home]# edac-util -v
mc0: 0 Uncorrected Errors with no DIMM info
mc0: 0 Corrected Errors with no DIMM info
mc0: csrow0: 0 Uncorrected Errors
mc0: csrow0: mc#0memory#0: 0 Corrected Errors
mc0: csrow10: 0 Uncorrected Errors
mc0: csrow10: mc#0memory#10: 0 Corrected Errors
mc0: csrow12: 0 Uncorrected Errors
mc0: csrow12: mc#0memory#12: 0 Corrected Errors
mc0: csrow14: 0 Uncorrected Errors
mc0: csrow14: mc#0memory#14: 0 Corrected Errors
mc0: csrow16: 0 Uncorrected Errors
mc0: csrow16: mc#0memory#16: 0 Corrected Errors
mc0: csrow18: 0 Uncorrected Errors
mc0: csrow18: mc#0memory#18: 0 Corrected Errors
mc0: csrow2: 0 Uncorrected Errors
mc0: csrow2: mc#0memory#2: 0 Corrected Errors
mc0: csrow20: 0 Uncorrected Errors
mc0: csrow20: mc#0memory#20: 0 Corrected Errors
mc0: csrow22: 0 Uncorrected Errors
mc0: csrow22: mc#0memory#22: 0 Corrected Errors
mc0: csrow24: 0 Uncorrected Errors
mc0: csrow24: mc#0memory#24: 0 Corrected Errors
mc0: csrow26: 0 Uncorrected Errors
mc0: csrow26: mc#0memory#26: 0 Corrected Errors
mc0: csrow28: 0 Uncorrected Errors
mc0: csrow28: mc#0memory#28: 0 Corrected Errors
mc0: csrow30: 0 Uncorrected Errors
mc0: csrow30: mc#0memory#30: 0 Corrected Errors
mc0: csrow4: 0 Uncorrected Errors
mc0: csrow4: mc#0memory#4: 0 Corrected Errors
mc0: csrow6: 0 Uncorrected Errors
mc0: csrow6: mc#0memory#6: 0 Corrected Errors
mc0: csrow8: 0 Uncorrected Errors
mc0: csrow8: mc#0memory#8: 0 Corrected Errors

Output on a faulty device (note the 15 corrected errors with no DIMM info):

[root@hisilicon11 ]# edac-util -v
mc0: 0 Uncorrected Errors with no DIMM info
mc0: 15 Corrected Errors with no DIMM info
mc0: csrow0: 0 Uncorrected Errors
mc0: csrow0: mc#0memory#0: 0 Corrected Errors
mc0: csrow10: 0 Uncorrected Errors
mc0: csrow10: mc#0memory#10: 0 Corrected Errors
mc0: csrow12: 0 Uncorrected Errors
mc0: csrow12: mc#0memory#12: 0 Corrected Errors
mc0: csrow14: 0 Uncorrected Errors
mc0: csrow14: mc#0memory#14: 0 Corrected Errors
mc0: csrow16: 0 Uncorrected Errors
mc0: csrow16: mc#0memory#16: 0 Corrected Errors
mc0: csrow18: 0 Uncorrected Errors
mc0: csrow18: mc#0memory#18: 0 Corrected Errors
mc0: csrow2: 0 Uncorrected Errors
mc0: csrow2: mc#0memory#2: 0 Corrected Errors
mc0: csrow20: 0 Uncorrected Errors
mc0: csrow20: mc#0memory#20: 0 Corrected Errors
mc0: csrow22: 0 Uncorrected Errors
mc0: csrow22: mc#0memory#22: 0 Corrected Errors
mc0: csrow24: 0 Uncorrected Errors
mc0: csrow24: mc#0memory#24: 0 Corrected Errors
mc0: csrow26: 0 Uncorrected Errors
mc0: csrow26: mc#0memory#26: 0 Corrected Errors
mc0: csrow28: 0 Uncorrected Errors
mc0: csrow28: mc#0memory#28: 0 Corrected Errors
mc0: csrow30: 0 Uncorrected Errors
mc0: csrow30: mc#0memory#30: 0 Corrected Errors
mc0: csrow4: 0 Uncorrected Errors
mc0: csrow4: mc#0memory#4: 0 Corrected Errors
mc0: csrow6: 0 Uncorrected Errors
mc0: csrow6: mc#0memory#6: 0 Corrected Errors
mc0: csrow8: 0 Uncorrected Errors
mc0: csrow8: mc#0memory#8: 0 Corrected Errors

email

How to post an email address online safely, without it being harvested by bots: publish it base64-encoded and decode it when needed.

#编码
echo someone@gmail.com | base64
c29tZW9uZUBnbWFpbC5jb20K

#解码
echo c29tZW9uZUBnbWFpbC5jb20K | base64 -d
someone@gmail.com

emqx

Building and installing emqx. emqx is an MQTT message broker.

1、编译安装Erlang

emqx 依赖Erlang,需要先编译安装Erlang

首先安装依赖

yum grouplist
yum groupinstall "Development Tools"

再安装Erlang

git clone https://github.com/erlang/otp.git
cd otp
git checkout -b OTP-22.0.7-build OTP-22.0.7
./otp_build autoconf
./configure
make
sudo make install
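A quick sanity check after the install (assuming erl is now on PATH; the -eval one-liner just prints the OTP release and exits):

erl -version
erl -noshell -eval 'io:format("~s~n", [erlang:system_info(otp_release)]), init:stop().'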

configure 报错请依次安装依赖包。参考问题列表。

sudo make install 成功会提示:

erlang.mk:30: Please upgrade to GNU Make 4 or later: https://erlang.mk/guide/installation.html
 ERLC   ELDAPv3.erl eldap2.erl
 APP    eldap2.app.src
make[1]: Leaving directory `/home/me/emqx-rel/_build/emqx/lib/eldap2`
===> Compiling emqx_auth_ldap
===> Starting relx build process ...
===> Resolving OTP Applications from directories:
          /home/me/emqx-rel/_build/emqx/lib
          /home/me/emqx-rel/_checkouts
          /usr/local/lib/erlang/lib
===> Resolved emqx-v3.2-beta.1-42-g663aee6
===> Including Erts from /usr/local/lib/erlang
===> release successfully created!
[me@centos emqx-rel]$
[me@centos emqx-rel]$

2. 编译安装emqx

git clone https://github.com/emqx/emqx-rel.git
cd emqx-rel
git checkout -b v3.2.2_build v3.2.2
cd emqx-rel && make
cd _build/emqx/rel/emqx && ./bin/emqx console

The make step fetches many dependencies from GitHub. All of the fetches must succeed, otherwise the built emqx may not run.

[2019-08-12 19:34:00]                           {branch,"master"}})
[2019-08-12 19:34:00]  ===> Fetching emqx_auth_ldap (from {git,"https://github.com/emqx/emqx-auth-ldap",
[2019-08-12 19:34:07]                            {branch,"master"}})
[2019-08-12 19:34:07]  ===> Fetching emqx_auth_mongo (from {git,"https://github.com/emqx/emqx-auth-mongo",
[2019-08-12 19:34:13]                             {branch,"master"}})
[2019-08-12 19:34:13]  ===> Fetching emqx_auth_mysql (from {git,"https://github.com/emqx/emqx-auth-mysql",
[2019-08-12 19:34:18]                             {branch,"master"}})
[2019-08-12 19:34:18]  ===> Fetching emqx_auth_pgsql (from {git,"https://github.com/emqx/emqx-auth-pgsql",
[2019-08-12 19:34:23]                             {branch,"master"}})
[2019-08-12 19:34:23]  ===> Fetching emqx_auth_redis (from {git,"https://github.com/emqx/emqx-auth-redis",
[2019-08-12 19:34:29]                             {branch,"master"}})
[2019-08-12 19:34:29]  ===> Fetching emqx_auth_username (from {git,"https://github.com/emqx/emqx-auth-username",

When the console starts successfully it prints a startup banner (screenshot omitted), and the web dashboard opens without errors (screenshot omitted).

遇到问题记录:

Problem 1: Tomcat reports a successful start, but there is no background process actually running

【tomcat not start】

问题2:escript: No such file or directory
/usr/bin/env: escript: No such file or directory

make: *** [get-deps] Error 127

  解决办法:

编译安装erlang

问题3:rebar3执行bootstrap报错
[root@izuf66apgccn7tpnaw8k8lz rebar3]# ./bootstrap
/usr/local/rebar3/_build/default/lib/parse_trans/src/ct_expand.erl:206: illegal guard expression

解决办法:

【https://github.com/erlang/rebar3/issues/2059】

安装新版本Erlang:

It has not been supported for over a year since it is now almost 6 years old (OTP-22 should be out in a couple of months at the most); there's one breaking release a year, and 3 minor releases a year as well. Release 3.5.2 is the last one to support R16: https://github.com/erlang/rebar3/releases/tag/3.5.2

You may fetch one of these older versions if you must.
问题4: 缺少OpenGL
configure: WARNING: No OpenGL headers found, wx will NOT be usable
configure: WARNING: No GLU headers found, wx will NOT be usable

http://www.prinmath.com/csci5229/misc/install.html

yum install freeglut-devel
问题5: 缺少 wxWidgets
./configure: line 4661: wx-config: command not found
configure: WARNING:
                wxWidgets must be installed on your system.

                Please check that wx-config is in path, the directory
                where wxWidgets libraries are installed (returned by
                'wx-config --libs' or 'wx-config --static --libs' command)
                is in LD_LIBRARY_PATH or equivalent variable and
                wxWidgets version is 2.8.4 or above.

*********************************************************************
**********************  APPLICATIONS INFORMATION  *******************
*********************************************************************

wx             : wxWidgets not found, wx will NOT be usable

*********************************************************************

解决办法

yum install https://mirrors.huaweicloud.com/epel/epel-release-latest-7.noarch.rpm
rpm --import https://mirrors.huaweicloud.com/epel/RPM-GPG-KEY-EPEL-7
yum install wxGTK-devel
问题6: 缺少 odbc
*********************************************************************
**********************  APPLICATIONS DISABLED  **********************
*********************************************************************

odbc           : ODBC library - link check failed

*********************************************************************

解决办法:

yum install unixODBC-devel.aarch64
问题7: 缺少 fop
*********************************************************************
**********************  DOCUMENTATION INFORMATION  ******************
*********************************************************************

documentation  :
                 fop is missing.
                 Using fakefop to generate placeholder PDF files.

*********************************************************************

解决办法:

yum install fop-1.1-6.el7.noarch
问题8: 没有java开发环境
jinterface     : No Java compiler found
sudo yum install java-11-openjdk-devel.aarch64

erlang

yum install -y ncurses-devel zlib-devel texinfo gtk+-devel gtk2-devel qt-devel tcl-devel tk-devel libX11-devel kernel-headers kernel-devel
yum install -y gcc gcc-c++ kernel-devel
yum install wxWidgets-devel
yum install -y openssl-devel odbc-devel
yum install -y java-11-openjdk.x86_64 java-11-openjdk-devel.x86_64
yum install -y fop
yum install unixODBC-devel

ethtool

查看和配置网卡的命令行工具

ethtool -p enP2p233s0f1     #端口闪灯,识别是哪一个物理网口

查看网卡enahisic2i0的基本参数

me@ubuntu:~$ ethtool enahisic2i0
Settings for enahisic2i0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supported pause frame use: Symmetric
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  1000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Link partner advertised link modes:  10baseT/Full
                                             100baseT/Full
                                             1000baseT/Full
        Link partner advertised pause frame use: No
        Link partner advertised auto-negotiation: Yes
        Link partner advertised FEC modes: Not reported
        Speed: 1000Mb/s #网口速率
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: off (auto)
Cannot get wake-on-lan settings: Operation not permitted
        Link detected: yes #网线是否连接

查看网卡驱动

me@ubuntu:~$ ethtool -i enahisic2i0
driver: hns 驱动
version: 2.0 驱动版本
firmware-version: N/A
expansion-rom-version:
bus-info: platform
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no
me@ubuntu:~$

查看网卡高级设置

me@ubuntu:~$ sudo ethtool -k enahisic2i0
[sudo] password for me:
Features for enahisic2i0:
rx-checksumming: on
tx-checksumming: on
        tx-checksum-ipv4: on
        tx-checksum-ip-generic: off [fixed]
        tx-checksum-ipv6: on
        tx-checksum-fcoe-crc: off [fixed]
        tx-checksum-sctp: off [fixed]
scatter-gather: on
        tx-scatter-gather: on
        tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: on
        tx-tcp-segmentation: on
        tx-tcp-ecn-segmentation: off [fixed]
        tx-tcp-mangleid-segmentation: off
        tx-tcp6-segmentation: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off [fixed] # [fixed] means the setting cannot be changed via ethtool
rx-vlan-offload: off [fixed]
tx-vlan-offload: off [fixed]
ntuple-filters: off [fixed]
receive-hashing: off [fixed]
highdma: off [fixed]
rx-vlan-filter: off [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-gre-csum-segmentation: off [fixed]
tx-ipxip4-segmentation: off [fixed

打开或者关闭网卡参数

ethtool -K enp125s0f2 rx-vlan-offload off
ethtool -K enp125s0f2 tx-vlan-offload off
ethtool -K enp125s0f2 rx-vlan-filter off
ethtool -K enp125s0f2 tx-gre-segmentation off
ethtool -K enp125s0f2 tx-udp_tnl-segmentation on
ethtool -K enp125s0f2 tx-udp_tnl-csum-segmentation on

网卡队列和中断

1、判断当前系统环境是否支持多队列网卡,执行命令:

lspci -vvv
root@ubuntu:~# lspci -vvv | grep MSI-X
pcilib: sysfs_read_vpd: read failed: Input/output error
pcilib: sysfs_read_vpd: read failed: Input/output error
        Capabilities: [c0] MSI-X: Enable+ Count=97 Masked-
        Capabilities: [70] MSI-X: Enable+ Count=64 Masked-
        Capabilities: [70] MSI-X: Enable+ Count=64 Masked-
pcilib: sysfs_read_vpd: read failed: Input/output error

If the Ethernet device's entry contains a line such as "Capabilities: [c0] MSI-X: Enable+ Count=97 Masked-", the NIC supports multiple queues; otherwise it does not.
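To look at a single NIC instead of grepping all devices, lspci can be pointed at one bus address (the address 02:00.0 below is only an example; take the real one from the first command):

lspci | grep -i ethernet            # find the NIC's bus address, e.g. 02:00.0
lspci -vvv -s 02:00.0 | grep MSI-X  # MSI-X capability of that device only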

2、查看网卡接口是否支持多队列,最多支持多少、当前开启多少

ethtool -l eth0

不同设备的输出结果 [ethtool -l结果]

ARM

me@ubuntu:~$ ethtool -l enahisic2i0
Channel parameters for enahisic2i0:
Pre-set maximums:
RX:             16
TX:             16
Other:          0
Combined:       0
Current hardware settings:
RX:             16
TX:             16
Other:          0
Combined:       0

X86

root@ubuntu:~# ethtool -l enp2s0f0
Channel parameters for enp2s0f0:
Pre-set maximums:
RX:             0
TX:             0
Other:          1
Combined:       63
Current hardware settings:
RX:             0
TX:             0
Other:          1
Combined:       63

3、设置网卡当前使用多队列。

ethtool -L eth0 combined <N>  #N为要使能的队列数

在96核ARM服务器上试验

[root@localhost ~]# ethtool -l eno3
Channel parameters for eno3:
Pre-set maximums:
RX:             0
TX:             0
Other:          1
Combined:       8
Current hardware settings:
RX:             0
TX:             0
Other:          1
Combined:       8

[root@localhost ~]# ethtool -L eno3 combined 4

[root@localhost ~]# ethtool -l eno3
Channel parameters for eno3:
Pre-set maximums:
RX:             0
TX:             0
Other:          1
Combined:       8
Current hardware settings:
RX:             0
TX:             0
Other:          1
Combined:       4

[root@localhost ~]#

4、要确保多队列确实生效,可以查看文件

root@ubuntu:~# ls /sys/class/net/enp2s0f0/queues/
rx-0   rx-14  rx-2   rx-25  rx-30  rx-36  rx-41  rx-47  rx-52  rx-58  rx-7   tx-11  tx-17  tx-22  tx-28  tx-33  tx-39  tx-44  tx-5   tx-55  tx-60
rx-1   rx-15  rx-20  rx-26  rx-31  rx-37  rx-42  rx-48  rx-53  rx-59  rx-8   tx-12  tx-18  tx-23  tx-29  tx-34  tx-4   tx-45  tx-50  tx-56  tx-61
rx-10  rx-16  rx-21  rx-27  rx-32  rx-38  rx-43  rx-49  rx-54  rx-6   rx-9   tx-13  tx-19  tx-24  tx-3   tx-35  tx-40  tx-46  tx-51  tx-57  tx-62
rx-11  rx-17  rx-22  rx-28  rx-33  rx-39  rx-44  rx-5   rx-55  rx-60  tx-0   tx-14  tx-2   tx-25  tx-30  tx-36  tx-41  tx-47  tx-52  tx-58  tx-7
rx-12  rx-18  rx-23  rx-29  rx-34  rx-4   rx-45  rx-50  rx-56  rx-61  tx-1   tx-15  tx-20  tx-26  tx-31  tx-37  tx-42  tx-48  tx-53  tx-59  tx-8
rx-13  rx-19  rx-24  rx-3   rx-35  rx-40  rx-46  rx-51  rx-57  rx-62  tx-10  tx-16  tx-21  tx-27  tx-
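A related check is whether the per-queue interrupts are actually spread over the CPUs (interface name as in the example above; <N> is an IRQ number taken from the first command):

grep enp2s0f0 /proc/interrupts   # one line per TX/RX queue interrupt
cat /proc/irq/<N>/smp_affinity   # CPU mask that interrupt is allowed to run on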

Excel

转换日期的函数:

=DATE(RIGHT(A2,4),LEFT(A2,2),MID(A2,4,2))

DATE takes three arguments:

=DATE(year, month, day)

RIGHT提取右边4个字符作为年,LEFT提取左边2个字符作为月,MID提取中间2个字符作为日

转换前 转换后
02/08/2022 2022/02/08
01/06/2022 2022/01/06
11/30/2021 2021/11/30
10/26/2021 2021/10/26
10/08/2021 2021/10/08

fdisk

Several of the disks have old systems installed on them; to wipe a disk's partition layout:

fdisk /dev/sdb
g    # create a new, empty GPT partition table (this discards the existing layout)
w    # write the change to disk and exit

If that does not wipe the disk, deal with the next level down first (the partitions inside it), for example:

fdisk /dev/sdb
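If fdisk still cannot get rid of the old layout, wiping the on-disk signatures first usually helps (wipefs is part of util-linux; this is destructive, so double-check the device name):

wipefs -a /dev/sdb   # erase partition-table, filesystem and RAID signatures
lsblk /dev/sdb       # confirm the disk no longer shows partitions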

ffmpeg

A customer needed ffmpeg to generate streams, which required building and installing ffmpeg and its components from source.

软件列表

  • gcc 7.3.0
  • cmake 3.15.1
  • ffmpeg 4.2
  • x264 代码仓只有一个版本
  • x265 3.1_RC2
  • fdk-aac v2.0.0
编译安装

安装必要编译工具:

sudo yum install autoconf automake bzip2 bzip2-devel cmake freetype-devel gcc gcc-c++ git libtool make mercurial pkgconfig zlib-devel
sudo yum groupinstall "Development Tools"
#如果出现没有汇编器
sudo yum install nasm -y
切换gcc

客户使用的gcc版本是7.3,如果没有特殊需求,可以默认。

scl enable devtoolset-7 bash
克隆工程

耗时比较长,可能需要1个小时

git clone https://github.com/mirror/x264.git
git clone https://github.com/videolan/x265.git
git clone https://github.com/mstorsjo/fdk-aac.git
git clone https://git.ffmpeg.org/ffmpeg.git
X264
cd x264
PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin" --enable-static
make
make install
X265
cd x265
git checkout -b 3.1_wangda_build 3.1
cd build/arm-linux
cmake -G "Unix Makefiles" -DCMAKE_INSTALL_PREFIX="$HOME/ffmpeg_build" -DENABLE_SHARED:bool=off ../../source
make
make install
fdk-aac
cd fdk-aac
git checkout -b v2.0.0_wangda_build v2.0.0
autoreconf -fiv
./configure --prefix="$HOME/ffmpeg_build" --disable-shared
make
make install
ffmpeg编译
cd ffmpeg
git checkout -b n4.2_wangda_build n4.2

PATH="$HOME/bin:$PATH" PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure \
  --prefix="$HOME/ffmpeg_build" \
  --pkg-config-flags="--static" \
  --extra-cflags="-I$HOME/ffmpeg_build/include" \
  --extra-ldflags="-L$HOME/ffmpeg_build/lib" \
  --extra-libs=-lpthread \
  --extra-libs=-lm \
  --bindir="$HOME/bin" \
  --enable-gpl \
  --enable-libfdk_aac \
  --enable-libfreetype \
  --enable-libx264 \
  --enable-libx265 \
  --enable-nonfree

make
make install
执行

输入视频:

/home/me/video/FM_1080p.mp4
ffmpeg  -y  -re  -itsoffset 0.5  -stream_loop -1   -i '/home/me/video/FM_1080p.mp4'  \
 -c:v  libx264 -s 1920x1080  -refs 4 -preset medium -profile:v high -x264-params keyint=50:scenecut=0  \
 -pix_fmt yuv420p  -b:v 3000k  -vsync cfr -bufsize 3000k -maxrate 4500k -c:a  libfdk_aac -profile:a aac_low  \
 -b:a 128k  -ac 2.0 -ar 44100.0 -sn -dn -ignore_unknown  -metadata service_provider='WONDERTEK' \
 -metadata service_name='Service01' -mpegts_service_id '1' -mpegts_pmt_start_pid 4096 -streamid 0:256 \
 -mpegts_start_pid 256 -pcr_period 20 -f mpegts  -max_interleave_delta 1000M  \
 -mpegts_flags +latm 'udp://237.0.1.1:1511?ttl=255&pkt_size=1316&fifo_size=10000000&overrun_nonfatal=0' \
 -c:v  libx264  -s 1280x720 -refs 4 -preset medium -profile:v high -x264-params keyint=50:scenecut=0 -pix_fmt yuv420p  \
 -b:v 2000k  -vsync cfr -bufsize 2000k -maxrate 3000k -c:a  libfdk_aac -profile:a aac_low  \
 -b:a 128k  -ac 2.0 -ar 44100.0 -sn -dn -ignore_unknown  -metadata service_provider='WONDERTEK' \
 -metadata service_name='Service01' -mpegts_service_id '1' -mpegts_pmt_start_pid 4096 -streamid 0:256 \
 -mpegts_start_pid 256 -pcr_period 20 -f mpegts  -max_interleave_delta 1000M  \
 -mpegts_flags +latm 'udp://237.0.1.1:1521?ttl=255&pkt_size=1316&fifo_size=10000000&overrun_nonfatal=0'  \
 -c:v  libx264 -s 960x540  -refs 4 -preset medium -profile:v high -x264-params keyint=50:scenecut=0 -pix_fmt yuv420p \
 -b:v 1000k  -vsync cfr -bufsize 1000k -maxrate 1500k -c:a  libfdk_aac -profile:a aac_low  -b:a 128k  \
 -ac 2.0 -ar 44100.0 -sn -dn -ignore_unknown  -metadata service_provider='WONDERTEK' -metadata service_name='Service01' \
 -mpegts_service_id '1' -mpegts_pmt_start_pid 4096 -streamid 0:256 -mpegts_start_pid 256 -pcr_period 20 -f mpegts \
 -max_interleave_delta 1000M  -mpegts_flags +latm 'udp://237.0.1.1:1531?ttl=255&pkt_size=1316&fifo_size=10000000&overrun_nonfatal=0'

命令 run_ffmpeg

运行结果:

[libx264 @ 0x22869630] profile Progressive High, level 3.1, 4:2:0, 8-bit
Output #2, mpegts, to 'udp://237.0.1.1:1531?ttl=255&pkt_size=1316&fifo_size=10000000&overrun_nonfatal=0':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: mp42mp41
    service_provider: WONDERTEK
    service_name    : Service01
    encoder         : Lavf58.29.100
    Stream #2:0(eng): Video: h264 (libx264), yuv420p, 960x540, q=-1--1, 1000 kb/s, 25 fps, 90k tbn, 25 tbc (default)
    Metadata:
      creation_time   : 2018-04-17T10:04:47.000000Z
      handler_name    : ?Mainconcept Video Media Handler
      encoder         : Lavc58.54.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 1500000/0/1000000 buffer size: 1000000 vbv_delay: -1
    Stream #2:1(eng): Audio: aac (libfdk_aac) (LC), 44100 Hz, stereo, s16, 128 kb/s (default)
    Metadata:
      creation_time   : 2018-04-17T10:04:47.000000Z
      handler_name    : #Mainconcept MP4 Sound Media Handler
      encoder         : Lavc58.54.100 libfdk_aac
frame= 3084 fps= 25 q=31.0 q=29.0 q=31.0 size=   48914kB time=00:02:03.70 bitrate=3239.2kbits/s dup=39 drop=0 speed=0.998x

speed运行5分钟左右会稳定在0.998x,最终达到1X

可以打开4个窗口执行正常,但是第五个窗口会出现报错:

[me@centos ~]$ ~/bin/run_ffmpeg.sh
ffmpeg version n4.2 Copyright (c) 2000-2019 the FFmpeg developers
  built with gcc 7 (GCC)
  configuration: --prefix=/home/me/ffmpeg_build --pkg-config-flags=--static --extra-cflags=-I/home/me/ffmpeg_build/include --extra-ldflags=-L/home/me/ffmpeg_build/lib --ex
tra-libs=-lpthread --extra-libs=-lm --bindir=/home/me/bin --enable-gpl --enable-libfdk_aac --enable-libfreetype --enable-libx264 --enable-libx265 --enable-nonfree
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/home/me/video/FM_1080p.mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: mp42mp41
    creation_time   : 2018-04-17T10:04:47.000000Z
  Duration: 00:02:00.09, start: 0.000000, bitrate: 10658 kb/s
    Stream #0:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080, 10533 kb/s, 25 fps, 25 tbr, 25k tbn, 50 tbc (default)
    Metadata:
      creation_time   : 2018-04-17T10:04:47.000000Z
      handler_name    : ?Mainconcept Video Media Handler
      encoder         : AVC Coding
    Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 125 kb/s (default)
    Metadata:
      creation_time   : 2018-04-17T10:04:47.000000Z
      handler_name    : #Mainconcept MP4 Sound Media Handler
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
  Stream #0:1 -> #0:1 (aac (native) -> aac (libfdk_aac))
  Stream #0:0 -> #1:0 (h264 (native) -> h264 (libx264))
  Stream #0:1 -> #1:1 (aac (native) -> aac (libfdk_aac))
  Stream #0:0 -> #2:0 (h264 (native) -> h264 (libx264))
  Stream #0:1 -> #2:1 (aac (native) -> aac (libfdk_aac))
Press [q] to stop, [?] for help
[libx264 @ 0x326eb320] using cpu capabilities: ARMv8 NEONe=-577014:32:22.77 bitrate=  -0.0kbits/s speed=N/A
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
[libfdk_aac @ 0x326ed290] 2 frames left in the queue on closing
[libfdk_aac @ 0x327425f0] 2 frames left in the queue on closing
[libfdk_aac @ 0x32745360] 2 frames left in the queue on closing
Conversion failed!

定位过程:ffmpeg的每个进程生成了很多线程,CentOS默认普通用户的最大线程数量是4096,root用户的是不受限。

[me@centos ffmpeg]$ ulimit -a

max user processes              (-u) 4096
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

使用ulimit -u 设置最大进程数量
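The exact command is the bash builtin ulimit (65535 matches the limits file shown further below); afterwards ulimit -a shows the new limit:

ulimit -u 65535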

max user processes              (-u) 65535
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

修改后不再报错。

注意ulimit -u仅对当前窗口有效,需要永久改变的,需要写到文件当中

[me@centos ffmpeg]$ cat /etc/security/limits.d/20-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

*          soft    nproc     65535
root       soft    nproc     unlimited
[me@centos ffmpeg]$

Output after adding a debugging patch (trace prints in libavcodec/utils.c and fftools/ffmpeg.c); it shows that the x264 threadpool cannot create a thread:

Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
  Stream #0:1 -> #0:1 (aac (native) -> aac (libfdk_aac))
  Stream #0:0 -> #1:0 (h264 (native) -> h264 (libx264))
  Stream #0:1 -> #1:1 (aac (native) -> aac (libfdk_aac))
  Stream #0:0 -> #2:0 (h264 (native) -> h264 (libx264))
  Stream #0:1 -> #2:1 (aac (native) -> aac (libfdk_aac))
Press [q] to stop, [?] for help
libavcodec/utils.c 548 avcodec_open2 ......       0kB time=-577014:32:22.77 bitrate=  -0.0kbits/s speed=N/A
    Last message repeated 2 times
libavcodec/utils.c 548 avcodec_open2 ......       0kB time=-577014:32:22.77 bitrate=  -0.0kbits/s speed=N/A
[libx264 @ 0x3e33b320] using cpu capabilities: ARMv8 NEON
x264 threadpool can not create thread!
fftools/ffmpeg.c 3520 avcodec_open2 ......
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
[libfdk_aac @ 0x3e33d290] 2 frames left in the queue on closing
[libfdk_aac @ 0x3e3925f0] 2 frames left in the queue on closing
[libfdk_aac @ 0x3e395360] 2 frames left in the queue on closing
Conversion failed!

问题记录

ERROR: freetype2 not found using pkg-config

If you think configure made a mistake, make sure you are using the latest
version from Git.  If the latest version fails, report the problem to the
ffmpeg-user@ffmpeg.org mailing list or IRC #ffmpeg on irc.freenode.net.
Include the log file "ffbuild/config.log" produced by configure as this will help
solve the problem.

解决办法:

yum install  pkgconfig

filecoin

去中心化的存储网络

filecoin 目前没有ARM64版本。

[user1@centos filecoin]$ pwd
/home/user1/open_software/filecoin-release/filecoin
[user1@centos filecoin]$ file ./*
./go-filecoin: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=357de502b13f0450cbe7b1fc0ed73fadffe9e1f5, not stripped
./paramcache:  ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=1c5add2b39bb2cd4c383af6cbef91fe9c4495af3, not stripped

Building filecoin requires downloading many Go modules, which are blocked from this network.

[user1@centos go-filecoin]$
[user1@centos go-filecoin]$ FILECOIN_USE_PRECOMPILED_RUST_PROOFS=true go run ./build deps
pkg-config --version
0.27.1
Installing dependencies...
go mod download
 13.32 KiB / 13.32 KiB [===============================] 100.00% 100.43 KiB/s 0s
 147.90 MiB / 147.90 MiB [================================================================================================================================================] 100.00% 588.52 KiB/s 4m17s
 4.88 KiB / 4.88 KiB [========================================================================================================================================================] 100.00% 27.33 KiB/s 0s
 13.32 KiB / 13.32 KiB [======================================================================================================================================================] 100.00% 81.60 KiB/s 0s
 4.88 KiB / 4.88 KiB [========================================================================================================================================================] 100.00% 55.46 MiB/s 0s
 13.32 KiB / 13.32 KiB [=====================================================================================================================================================] 100.00% 378.19 KiB/s 0s
 2.04 GiB / 2.48 GiB [=======================================================================================================================>--------------------------]  82.07% 587.53 KiB/s 1h0m35s
 4.88 KiB / 4.88 KiB [========================================================================================================================================================] 100.00% 10.93 MiB/s 0s
 4.88 KiB / 4.88 KiB [========================================================================================================================================================] 100.00% 44.05 MiB/s 0s
 4.88 KiB / 4.88 KiB [===============================

执行成功出现:

                                 Dload  Upload   Total   Spent    Left  Speed
100 9498k  100 9498k    0     0   548k      0  0:00:17  0:00:17 --:--:--  593k
+ [[ 0 -ne 0 ]]
+ eval 'tarball_path='\''/tmp/filecoin-ffi-Linux_16941733.tar.gz'\'''
++ tarball_path=/tmp/filecoin-ffi-Linux_16941733.tar.gz
++ mktemp -d
+ tmp_dir=/tmp/tmp.hWE9Bq7GHa
+ tar -C /tmp/tmp.hWE9Bq7GHa -xzf /tmp/filecoin-ffi-Linux_16941733.tar.gz
+ find -L /tmp/tmp.hWE9Bq7GHa -type f -name filecoin.h -exec cp -- '{}' . ';'
+ find -L /tmp/tmp.hWE9Bq7GHa -type f -name libfilecoin.a -exec cp -- '{}' . ';'
+ find -L /tmp/tmp.hWE9Bq7GHa -type f -name filecoin.pc -exec cp -- '{}' . ';'
+ echo 'successfully installed prebuilt libfilecoin'
successfully installed prebuilt libfilecoin

filecoin project structure

lotus---------------------------------主工程 https://github.com/filecoin-project/lotus.git
|-- extern
|   |-- filecoin-ffi------------------向量化 https://github.com/filecoin-project/filecoin-ffi.git
|   |                                 filcrypto.h filcrypto.pc libfilcrypto.a
|   |
|   `-- serialization-vectors---------rust库 https://github.com/filecoin-project/serialization-vectors

问题记录

缺少opencl
# github.com/filecoin-project/filecoin-ffi
/opt/rh/devtoolset-8/root/usr/libexec/gcc/aarch64-redhat-linux/8/ld: cannot find -lOpenCL
collect2: error: ld returned 1 exit status

解决办法

sudo dnf install -y ocl-icd-devel.aarch64
输入文件是x86的
lecoin.a(futures_cpupool-1f3bf26aa9279af0.futures_cpupool.ahnnhqyk-cgu.3.rcgu.o)' is incompatible with aarch64 output
/opt/rh/devtoolset-8/root/usr/libexec/gcc/aarch64-redhat-linux/8/ld: i386:x86-64 architecture of input file `/home/user1/open_software/gopath/src/github.com/filecoin-project/go-filecoin/vendors/filecoin-ffi/libfilecoin.a(futures_cpupool-1f3bf26aa9279af0.futures_cpupool.ahnnhqyk-cgu.4.rcgu.o)' is incompatible with aarch64 output
/opt/rh/devtoolset-8/root/usr/libexec/gcc/aarch64-redhat-linux/8/ld: i386:x86-64 architecture of input file \`/home/user1/open_software/gopath/src/github.com/filecoin-project/go-filecoin/vendors/filecoin-ffi/libfilecoin.a(qutex-8dfbe8197b98ccc5.qutex.8mzkyvtz-cgu.0.rcgu.o)' is incompatible with aarch64 output
/opt/rh/devtoolset-8/root/usr/libexec/gcc/aarch64-redhat-linux/8/ld: i386:x86-64 architecture of input file `/home/user1/open_software/gopath/src/github.com/filecoin-project/go-filecoin/vendors/filecoin-ffi/libfilecoin.a(qutex-8dfbe8197b98ccc5.qutex.8mzkyvtz-cgu.1.rcgu.o)' is incompatible with aarch64 output
/opt/rh/devtoolset-8/root/usr/libexec/gcc/aarch64-redhat-linux/8/ld: i386:x86-64 architecture of input file `/home/user1/open_software/gopath/src/github.com/filecoin-project/go-filecoin/vendors/filecoin-ffi/libfilecoin.a(blake2s_simd-e06fbb96181f173a.blake2s_simd.cqrh7vav-cgu.11.rcgu.o)' is incompatible with aarch64 output
/opt/rh/devtoolset-8/root/usr/libexec/gcc/aarch64-redhat-linux/8/ld: i386:x86-64 architecture of input file `/home/user1/open_software/gopath/src/github.com/filecoin-project/go-filecoin/vendors/filecoin-ffi/libfilecoin.a(crossbeam_utils-e8dfdc01aecf4d4c.crossbeam_utils.av4hkwzx-cgu.0.rcgu.o)' is incompatible with aarch64 output
/opt/rh/devtoolset-8/root/usr/libexec/gcc/aarch64-redhat-linux/8/ld: i386:x86-64 architecture of input file `/home/user1/open_software/gopath/src/github.com/filecoin-project/go-filecoin/vendors/filecoin-ffi/libfilecoin.a(blake2b_simd-8e21006b644a8dcd.blake2b_simd.du1wdeab-cgu.11.rcgu.o)' is incompatible with aarch64 o

Unresolved. The prebuilt libfilecoin.a fetched during the build is an x86-64 archive, so it cannot be linked into an aarch64 binary; the Go build of the project therefore did not succeed.

fio

fio是多线程IO负载生成测试工具,是测试服务器硬盘性能的优秀工具。

命令行参数:

fio --ramp_time=5 --runtime=60 --size=100% --ioengine=libaio --filename=/dev/sdb --name=4k_read --numjobs=1 --iodepth=64 --rw=read --bs=4k --direct=1

#测试硬盘读带宽,读io不会影响ceph的文件系统
taskset -c 1 fio --ioengine=libaio --direct=1 --rw=read --bs=4096k --iodepth=32 --name=test --numjobs=1 --filename=/dev/sdv --runtime=60

配置文件参数:

; -- start job file including.fio --
[global]
filename=/tmp/test
filesize=1m
include glob-include.fio

[test]
rw=randread
bs=4k
time_based=1
runtime=10
include test-include.fio
; -- end job file including.fio --

详细说明可以参考[官方文档]

配置文件参数可以转化成命令行的写法:

fio --showcmd configfile

一些基础知识

以下内容摘自 系统技术非业余研究

随着块设备的发展,特别是SSD盘的出现,设备的并行度越来越高。利用好这些设备,有个诀窍就是提高设备的iodepth, 一把喂给设备更多的IO请求,让电梯算法和设备有机会来安排合并以及内部并行处理,提高总体效率。

应用使用IO通常有二种方式:同步和异步。 同步的IO一次只能发出一个IO请求,等待内核完成才返回, 这样对于单个线程iodepth总是小于1,但是可以透过多个线程并发执行来解决,通常我们会用16-32个线程同时工作把iodepth塞满。 异步的话就是用类似libaio这样的linux native aio一次提交一批,然后等待一批的完成,减少交互的次数,会更有效率。
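As a concrete illustration of the two approaches in the quoted passage, here is a sketch (device name /dev/sdb and sizes follow the earlier examples; psync is fio's synchronous engine):

# synchronous engine: iodepth is effectively 1 per job, so throughput comes from many jobs
fio --name=sync_read --ioengine=psync --rw=read --bs=4k --direct=1 --numjobs=16 --runtime=60 --time_based --filename=/dev/sdb

# asynchronous engine: a single job keeps a deep queue outstanding
fio --name=aio_read --ioengine=libaio --rw=read --bs=4k --direct=1 --numjobs=1 --iodepth=32 --runtime=60 --time_based --filename=/dev/sdb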

参数配置要求

bs          # block size; must be a multiple of the sector size (512 bytes)
ramp_time   # warm-up period before results are recorded, so startup effects do not distort high-speed I/O numbers
direct      # direct I/O bypasses the page cache, so no fsync takes place

查看硬盘支持的最大队列深度

lsscsi is available on RedHat, CentOS and Ubuntu; the default queue-depth settings differ from system to system.

X86

[root@localhost queue]# lsscsi -l
[0:0:0:0]    enclosu 12G SAS  Expander         RevB  -
  state=running queue_depth=256 scsi_level=7 type=13 device_blocked=0 timeout=90
[0:0:13:0]   disk    HUAWEI   HWE32SS3008M001N 2774  /dev/sda
  state=running queue_depth=64 scsi_level=7 type=0 device_blocked=0 timeout=90
[0:2:0:0]    disk    AVAGO    AVAGO            4.65  /dev/sdb
  state=running queue_depth=256 scsi_level=6 type=0 device_blocked=0 timeout=90

ARM-ubuntu

root@ubuntu:~/app/fio-fio-3.13# lsscsi -l
[0:0:0:0]    disk    ATA      HUS726040ALA610  T7R4  /dev/sda
  state=running queue_depth=31 scsi_level=6 type=0 device_blocked=0 timeout=30
[0:0:1:0]    disk    ATA      HUS726040ALA610  T7R4  /dev/sdb
  state=running queue_depth=31 scsi_level=6 type=0 device_blocked=0 timeout=30
[0:0:2:0]    disk    HUAWEI   HWE32SS3008M001N 2774  /dev/sdc
  state=running queue_depth=64 scsi_level=7 type=0 device_blocked=0 timeout=30
[0:0:3:0]    enclosu 12G SAS  Expander         RevB  -
  state=running queue_depth=64 scsi_level=7 type=13 device_blocked=0 timeout=0

redhat支持, centOS不支持

cat /sys/block/sdb/device/queue_depth
32

fio编译

./configure 提示一些fio特性会依赖zlib

yum install zlib-devel.aarch64

After building and installing, fio -v still reports the old version: the current shell keeps resolving the previously installed binary, so log in again (or start a new shell) for /usr/local/bin/fio to take effect.

[root@localhost fio-fio-3.13]# fio -v
fio-3.7
[root@localhost ~]# which fio
/usr/local/bin/fio
[root@localhost ~]# /usr/local/bin/fio -v
fio-3.13
[root@localhost ~]#
[root@localhost fio-fio-3.13]# make install
install -m 755 -d /usr/local/bin
install fio t/fio-genzipf t/fio-btrace2fio t/fio-dedupe t/fio-verify-state ./tools/fio_generate_plots ./tools/plot/fio2gnuplot ./tools/genfio ./tools/fiologparser.py ./tools/hist/fiologparser_hist.py ./tools/fio_jsonplus_clat2csv /usr/local/bin
install -m 755 -d /usr/local/man/man1
install -m 644 ./fio.1 /usr/local/man/man1
install -m 644 ./tools/fio_generate_plots.1 /usr/local/man/man1
install -m 644 ./tools/plot/fio2gnuplot.1 /usr/local/man/man1
install -m 644 ./tools/hist/fiologparser_hist.py.1 /usr/local/man/man1
install -m 755 -d /usr/local/share/fio
install -m 644 ./tools/plot/*gpm /usr/local/share/fio/

fio 调优指导

  1. 测试硬盘direct读写时,请使用erase命令清除硬盘数据
  2. BIOS关闭CPU节能模式,选择performance模式。同事设置风扇全速。
  3. 硬盘测试请如果有raid卡,请设置硬盘为JBOD模式
  4. 关闭SMMU可以提升随机读和随机写,顺序写性能
  5. fio 指定–ioengine=libaio时,应当指定 –direct=1。 这是避免使用主机页缓存的方法,写入输入会直接写入硬盘. 这样的测试结果是最低的,但是也是最接近真实的。 –direct=1对读测试的影响是,read操作不会因为内存大而结果变好
  6. 开启硬盘多队列 scsi_mod.use_blk_mq=y。内核启动时,按e,进入编辑,在linux启动先后添加
  7. 设置NUMA亲和性。 查看硬盘在哪个节点上,并使用–cpus_allowed或者taskset或者numctl手动亲核
  8. 绑中断。 建议设备中断、fio在同一个NUMA节点上。
  9. IRQ balancing。查看/proc/interrupts,是否均衡,如果没有,/etc/ini.d/irq_balance stop手动设置
  10. 拓展卡可能会影响4k测试性能,在需要测试的场景硬盘数量不多的情况下可以不使用拓展卡。
  11. 硬盘测试请设置–size=100%
  12. 硬盘测试时,256k, 512k和1M –filename=/dev/sdb单盘测试时,numjobs很大,带宽会上升,但是不准确(待核实)
  13. 发现numberjob不起作用时添加–thread
  14. –bs小于4k时,可以格式化硬盘sector size为512B。–bs>=4k时,格式化硬盘sector 为4KB可以获得更好性能。

Item 8 as follows:

[global]
ioengine=libaio
direct=1
iodepth=32
rw=randread
bs=4k
thread
numjobs=1
runtime=100
group_reporting
[/dev/sdc]

参考参数

4k randwrite Peak IOPS
[global]
readwrite=randrw
rwmixread=0
blocksize=4k
ioengine=libaio
numjobs=4
thread=0
direct=1
iodepth=128
iodepth_batch=4
iodepth_batch_complete=4
group_reporting=1
ramp_time=5
norandommap=1
description=fio random 4k write peak IOPS
time_based=1
runtime=30
randrepeat=0
[/dev/fioa]
filename=/dev/fioa
cpus_allowed=1-4
4k randread Peak IOPS
[global]
readwrite=randrw
rwmixread=100
blocksize=4k
ioengine=libaio
numjobs=4
thread=0
direct=1
iodepth=128
iodepth_batch=4
iodepth_batch_complete=4
group_reporting=1
ramp_time=5
norandommap=1
description=fio random 4k read peak IOPS
time_based=1
runtime=30
randrepeat=0
[/dev/fioa]
filename=/dev/fioa
cpus_allowed=1-
1M randwrite Peak Bandwidth
[global]
readwrite=randrw
rwmixread=0
blocksize=1M
ioengine=libaio
numjobs=4
thread=0
direct=1
iodepth=128
iodepth_batch=4
iodepth_batch_complete=4
group_reporting=1
ramp_time=5
norandommap=1
description=fio random 1M write peak BW
time_based=1
runtime=30
randrepeat=0
[/dev/fioa]
filename=/dev/fioa
cpus_allowed=1-4
1M write Peak Bandwidth
[global]
readwrite=write
rwmixread=0
blocksize=1M
ioengine=libaio
thread=0
size=100%
iodepth=16
group_reporting=1
description=fio PRECONDITION sequential 1M complete write
[/dev/fioa]
filename=/dev/fioa
cpus_allowed=1-4
1M read Peak Bandwidth
[global]
readwrite=randrw
rwmixread=100
blocksize=1M
ioengine=libaio
numjobs=4
thread=0
direct=1
iodepth=128
iodepth_batch=4
iodepth_batch_complete=4
group_reporting=1
ramp_time=5
norandommap=1
description=fio random 1M read peak BW
time_based=1
runtime=30
randrepeat=0
[/dev/fioa]
filename=/dev/fioa
cpus_allowed=1-

编译安装fio以支持ceph rbd测试

[2019-07-20 20:59:26]  [root@192e168e100e111 ~]# unzip fio-3.15.zip
[2019-07-20 22:19:37]  [root@192e168e100e111 ~]# yum install librbd1-devel
[2019-07-20 22:20:15]  [root@192e168e100e111 fio-fio-3.15]# ./configure
[2019-07-20 22:20:21]  Rados engine                  yes
[2019-07-20 22:20:21]  Rados Block Device engine     yes # 有这几个代表安装librbd成功
[2019-07-20 22:20:21]  rbd_poll                      yes
[2019-07-20 22:20:21]  rbd_invalidate_cache          yes
[2019-07-20 22:20:26]  [root@192e168e100e111 fio-fio-3.15]# make -j8

如果不先安装librbd,编译完之后执行会出现

[2019-07-20 22:15:43]  fio: engine rbd not loadable
[2019-07-20 22:15:43]  fio: engine rbd not loadable
[2019-07-20 22:15:43]  fio: failed to load engine

In addition, for the rbd test to actually run, copy /etc/ceph from one of the ceph nodes to the current host.

问题记录:

问题1: ubuntu下缺少libaio库
4k_read: No I/O performed by libaio, perhaps try --debug=io option for details?

解决办法

sudo apt-get install libaio-dev
问题2:如何限制带宽和IOPS
--rate 400k,300k

把读速率设置为400kB/s, 把写速率设置为300kB/s
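fio can also cap IOPS instead of bandwidth; the option takes the same read,write pair form (the values here are arbitrary examples):

--rate_iops=4000,2000   # limit reads to 4000 IOPS and writes to 2000 IOPS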

问题3:编译安装后发现libaio无法加载
[root@localhost fio_scripts]# perf record -ag -o fio_symbol.data fio --ramp_time=5 --runtime=60 --size=10g --ioengine=libaio --filename=/dev/sdb --name=4k_read --numjobs=1 --rw=read --bs=4k --direct=1
fio: engine libaio not loadable
fio: engine libaio not loadable
fio: failed to load engine

查看当前系统支持的io引擎

fio -enghelp

解决办法: 安装libaio

sudo apt-get install libaio-dev

fio on hdd and ssd

Taishan 2280 V2

The system disk is two Samsung 480G SSDs in RAID 1; there are also 12 Toshiba 8T HDDs, each configured as a single-disk RAID 0 (screenshots omitted).

硬件信息汇总:

系统盘            : 2块三星480G固态硬盘raid1

interface Type    : SATA
Health Status     : Normal
Manufacturer      : SAMSUNG
Model             : SAMSUNG MZ7LH480HAHQ-00005
Serial Number     : S45PNA0M520238
Firmware Version  : HXT7304Q
Media Type        : SSD
Temperature       : 35 ℃
Remaining Lifespan: 99%
Firmware State    : ONLINE
SAS Address (0)   : 52c97b1c6dfe500c
SAS Address (1)   : 0000000000000000
Capacity          : 446.103 GB
Capable Speed     : 6.0 Gbps
Negotiated Speed  : 12.0 Gbps
Power State       : Spun Up
Hot Spare State   : None
Rebuild Status    : Stopped
Patrol Status     : Stopped
Location State    : Off
Power-On Hours    : 1277 h

NVMe硬盘          : 2块 单盘raid0
Manufacturer      : Huawei
Model             : HWE52P433T2M002N
Serial Number     : 032WFKFSK3000006
Firmware Version  : 2158
Media Type        : SSD
Temperature       : 45 ℃
Remaining Lifespan: 100%
Capable Speed     : 32.0 Gbps
Location State    : Off
Connected To      : CPU2

机械盘            : 12块 单盘raid0
Manufacturer      : TOSHIBA
Model             : MG05ACA800E
Serial Number     : 59PYK31BFGLE
Firmware Version  : GX6J
Media Type        : HDD
Temperature       : 27 ℃
Firmware State    : JBOD
SAS Address (0)   : 52c97b1c6dfe500a
SAS Address (1)   : 0000000000000000
Capacity          : 7.277 TB
Capable Speed     : 6.0 Gbps
Negotiated Speed  : 12.0 Gbps
Power State       : Spun Up
Hot Spare State   : None
Rebuild Status    : Stopped
Patrol Status     : Stopped
Location State    : Off
Power-On Hours    : 1247 h

测试脚本: 【disk_fio_test.sh】 测试结果:

在一台kunpeng920上的测试

SMMU on, 在一台设备上尽可能多的做了组合测试,选出了最好的数据

[me@centos tmp]$ cat b.txt | column -t
host_name       runtime  size  bs    rw         ioengine  direct  numjobs  iodepth  filename  bw_KiB   iops         bw_MiB      lat_ms_mean  lat_ns_mean  lat_ns_max
192e168e100e12  600      100%  256k  randread   libaio    1       64       64       /dev/sdf  43420    169.7310     42.40234    23678.8729   23678872851  37065039490
192e168e100e12  600      100%  256k  randwrite  libaio    1       1        1        /dev/sdf  54462    212.7446     53.18555    4.6996       4699625.73   413772790
192e168e100e12  600      100%  256k  read       libaio    1       32       1        /dev/sdf  6255582  24435.9296   6108.96680  1.3089       1308864.408  623503530
192e168e100e12  600      100%  256k  write      libaio    1       64       256      /dev/sdf  5847007  22840.0085   5709.96777  716.9766     716976583.5  920424040
192e168e100e12  600      100%  4k    randread   libaio    1       16       256      /dev/sdf  897      225.9579     0.87598     17864.1867   17864186715  21114456910
192e168e100e12  600      100%  4k    randwrite  libaio    1       1        1        /dev/sdf  1391     347.7627     1.35840     2.8746       2874559.147  919949600
192e168e100e12  600      100%  4k    read       libaio    1       8        64       /dev/sdf  746481   186621.4393  728.98535   3.2837       3283683.535  1933942960
192e168e100e12  600      100%  4k    write      libaio    1       32       1        /dev/sdf  585402   146354.9459  571.68164   0.2180       217960.2171  18817890
192e168e100e12  600      100%  4m    randread   libaio    1       64       1        /dev/sdf  94628    23.1104      92.41016    2769.3687    2769368694   4212875440
192e168e100e12  600      100%  4m    randwrite  libaio    1       32       32       /dev/sdf  126823   30.9658      123.85059   32264.9330   32264933015  35820490480
192e168e100e12  600      100%  4m    read       libaio    1       64       256      /dev/sdf  6435719  1571.2295    6284.88184  10336.5298   10336529812  10737981310
192e168e100e12  600      100%  4m    write      libaio    1       64       32       /dev/sdf  5847967  1427.7340    5710.90527  1433.0398    1433039821   1615802560

完整的测试结果 【fio硬盘测试数据.xlsx】

SMMU off: 一共在8台设备上进行测试,可以避免偶然结果。其中一台的数据如下。 筛选出了数据最好的numjob和iodepth组合

host_name                runtime  size  bs    rw         ioengine  direct  filename      numjobs  iodepth  bw_KiB   iops         lat_ns_mean  lat_ns_max
192e168e100e101_nvme0n1  600      100%  256k  randread   libaio    1       /dev/nvme0n1  32       16       3210342  12540.45931  45808513.38  127630640
192e168e100e101_nvme0n1  600      100%  256k  randwrite  libaio    1       /dev/nvme0n1  1        256      2023135  7902.873418  32392154.82  102818630
192e168e100e101_nvme0n1  600      100%  256k  read       libaio    1       /dev/nvme0n1  16       32       3210366  12540.5233   46035985.6   119502680
192e168e100e101_nvme0n1  600      100%  256k  write      libaio    1       /dev/nvme0n1  1        128      2083106  8137.134058  15729582.49  50559270
192e168e100e101_nvme0n1  600      100%  4k    randwrite  libaio    1       /dev/nvme0n1  8        8        1700734  425184.5933  149629.4443  41212780
192e168e100e101_nvme0n1  600      100%  4k    read       libaio    1       /dev/nvme0n1  64       8        3091790  772955.5189  661690.5031  61792710
192e168e100e101_nvme0n1  600      100%  4k    write      libaio    1       /dev/nvme0n1  32       16       2560743  640189.8002  948572.273   36494820
192e168e100e101_nvme0n1  600      100%  4m    randread   libaio    1       /dev/nvme0n1  64       8        3211001  783.94301    718261711.3  1776198370
192e168e100e101_nvme0n1  600      100%  4m    randwrite  libaio    1       /dev/nvme0n1  1        128      1762644  430.333107   297392182    373856810
192e168e100e101_nvme0n1  600      100%  4m    read       libaio    1       /dev/nvme0n1  64       8        3210607  783.847676   728630599.3  1626636960
192e168e100e101_nvme0n1  600      100%  4m    write      libaio    1       /dev/nvme0n1  1        256      1960360  478.603709   534703842.7  916430350
192e168e100e101_sdj      600      100%  256k  randread   libaio    1       /dev/sdj      8        64       43441    169.709248   3012850673   6341512500
192e168e100e101_sdj      600      100%  256k  randwrite  libaio    1       /dev/sdj      16       32       48482    189.413206   2699572233   8316524460
192e168e100e101_sdj      600      100%  256k  read       libaio    1       /dev/sdj      8        64       491148   1918.558361  266861695    524374220
192e168e100e101_sdj      600      100%  256k  write      libaio    1       /dev/sdj      64       8        443254   1731.585755  295665421.6  582109830
192e168e100e101_sdj      600      100%  4k    randread   libaio    1       /dev/sdj      1        256      907      226.950732   1127753164   3086573000
192e168e100e101_sdj      600      100%  4k    randwrite  libaio    1       /dev/sdj      1        1        1242     310.638447   3217803.061  394770430
192e168e100e101_sdj      600      100%  4k    read       libaio    1       /dev/sdj      16       32       338390   84599.71719  6051475.778  156057670
192e168e100e101_sdj      600      100%  4k    write      libaio    1       /dev/sdj      1        256      239885   59971.4669   4268190.523  210018050
192e168e100e101_sdj      600      100%  4m    randread   libaio    1       /dev/sdj      64       8        138438   33.80584     14984108079  31247029840
192e168e100e101_sdj      600      100%  4m    randwrite  libaio    1       /dev/sdj      32       16       119707   29.228836    17304441367  27612403770
192e168e100e101_sdj      600      100%  4m    read       libaio    1       /dev/sdj      64       1        482399   117.78211    543373713.3  989296860
192e168e100e101_sdj      600      100%  4m    write      libaio    1       /dev/sdj      64       1        438998   107.186515   597085277    889477200

完整的测试结果 【fio硬盘测试数据.xlsx】

firewall

CentOS and RedHat use firewalld [1] [2] as the firewall.

systemctl status firewalld.service #查看防火墙服务运行状态,systemctl 也可以用来启动关闭,重启防火墙
firewall-cmd --state               #查看防火墙是否在运行
firewall-cmd --get-log-denied      #查看防火墙是否记录已拒绝的数据包
firewall-cmd --set-log-denied=all  #记录所有拒绝的请求, 设置之后可以在/var/log/messages看到所拒绝的请求log

由于命令行可能不熟悉,可以使用图形界面进行设置。

firewall-config

封锁一个IP

firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='192.168.1.1' reject"

封锁一个IP段

firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='192.168.1.0/24' reject"
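To review or undo these rules later (standard firewall-cmd queries; permanent changes still need a reload):

firewall-cmd --list-rich-rules
firewall-cmd --permanent --remove-rich-rule="rule family='ipv4' source address='192.168.1.1' reject"
firewall-cmd --reload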

添加一条rich rule

sudo firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="198.51.100.0/32" port protocol="tcp" port="10000" log prefix="test-firewalld-log" level="info" accept'
firewall-cmd --add-rich-rule='rule family="ipv4" source address="139.159.243.11" destination address="192.168.100.12" protocol value="tcp" log prefix="upnpc" level="warning" accept'

Open port 80 in the firewall.

开放前,发起80端口的http请求会失败

me@ubuntu:~$ curl -X GET http://192.168.1.112/
curl: (7) Failed to connect to 192.168.1.112 port 80: No route to host
me@ubuntu:~$

可以观察/var/log/messages可以看到拒绝日志

Jun  7 23:29:25 localhost kernel: FINAL_REJECT: IN=enahisic2i0 OUT= MAC=c0:a8:02:ba:00:04:c0:a8:02:81:00:04:08:00 SRC=192.168.1.201 DST=192.168.1.112 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=26463 DF PROTO=TCP SPT=47840 DPT=80 WINDOW=29200 RES=0x00 SYN URGP=0
Jun  7 23:29:26 localhost kernel: FINAL_REJECT: IN=enahisic2i0 OUT= MAC=c0:a8:02:ba:00:04:c0:a8:02:81:00:04:08:00 SRC=192.168.1.201 DST=192.168.1.112 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=54899 DF PROTO=TCP SPT=47842 DPT=80 WINDOW=29200 RES=0x00 SYN URGP=0

防火墙允许80端口接收请求

firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --reload       #重要, 否则不起作用,在firewall-cmd --list-all也无法看到
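A quick way to confirm the rule took effect after the reload:

firewall-cmd --zone=public --list-ports   # should now include 80/tcp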

开放后:

me@ubuntu:~$ curl -X GET http://192.168.1.112/
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">

防火墙设置NAT

第一种方法:

firewall-cmd --permanent --zone=public --add-masquerade   #开启NAT转发
firewall-cmd --zone=public --add-port=53/tcp --permanent  #开放DNS使用的53端口,否则可能导致内网服务器虽然设置正确的DNS,但是依然无法进行域名解析。
systemctl restart firewalld.service   #重启防火墙
firewall-cmd --query-masquerade  #检查是否允许NAT转发
firewall-cmd --remove-masquerade #关闭NAT转发

第二种方法:

net.ipv4.ip_forward=1 #开启ip_forward转发 在/etc/sysctl.conf配置文件尾部添加
sysctl -p #然后让其生效
firewall-cmd --permanent --direct --passthrough ipv4 -t nat -I POSTROUTING -o enoxxxxxx -j MASQUERADE -s 192.168.1.0/24 #执行firewalld命令进行转发:
                                                                                                                        #注意enoxxxxxx对应外网网口名称
systemctl restart firewalld.service  #重启防火墙

问题记录:

问题:ERROR: ‘/usr/sbin/iptables-restore -w -n’ failed: Bad argument 53333

执行以下命令导致防火墙工作不正常, 表现为 firewall-cmd --reload 提示failed

firewall-cmd --permanent --direct --passthrough ipv4 -t nat -I PREROUTING -dport 53333 -j DNAT --to 10.10.10.1:53333
firewall-cmd --permanent --direct --passthrough ipv4 -t nat -I POSTROUTING -d 10.10.10.1 -j SNAT --to 10.10.10.5

可以看到防火墙日志有报错

[root@vm_centos ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2020-01-04 02:47:03 CST; 1s ago
     Docs: man:firewalld(1)
 Main PID: 5729 (firewalld)
   CGroup: /system.slice/firewalld.service
           └─5729 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid

Jan 04 02:47:03 vm_centos systemd[1]: Stopped firewalld - dynamic firewall daemon.
Jan 04 02:47:03 vm_centos systemd[1]: Starting firewalld - dynamic firewall daemon...
Jan 04 02:47:03 vm_centos systemd[1]: Started firewalld - dynamic firewall daemon.
Jan 04 02:47:04 vm_centos firewalld[5729]: ERROR: '/usr/sbin/iptables-restore -w -n' failed: Bad argument `53333'
                                                Error occurred at line: 2
                                                Try `iptables-restore -h' or 'iptables-restore --help' for more information....
Jan 04 02:47:04 vm_centos firewalld[5729]: ERROR: COMMAND_FAILED: Direct: '/usr/sbin/iptables-restore -w -n' failed: Bad argument `53333'
                                                Error occurred at line: 2
                                                Try `iptables-restore -h' or 'iptables-restore --help' for more information....
Hint: Some lines were ellipsized, use -l to show in full.

Solution: delete the newly added rules. The root cause is the malformed PREROUTING rule: -dport should be --dport (which also requires -p tcp), so iptables-restore rejects 53333 as a bad argument.

进入/etc/firewalld/可以看到firewalld的配置文件

[root@vm_centos firewalld]# tree .
.
|-- direct.xml
|-- direct.xml.old
|-- firewalld.conf
|-- firewalld.conf.old
|-- helpers
|-- icmptypes
|-- ipsets
|-- lockdown-whitelist.xml
|-- services
`-- zones
    |-- public.xml
    |-- public.xml.old
    `-- trusted.xml

查找和53333相关的文件并删除

5 directories, 8 files
[root@vm_centos firewalld]# grep 53333 -rn .
./direct.xml:3:  <passthrough ipv="ipv4">-t nat -I PREROUTING -dport 53333 -j DNAT --to 10.1.1.1:53333</passthrough>
./zones/public.xml:10:  <port protocol="tcp" port="53333"/>
./zones/public.xml:11:  <port protocol="udp" port="53333"/>
./zones/public.xml.old:10:  <port protocol="tcp" port="53333"/>
./zones/public.xml.old:11:  <port protocol="udp" port="53333"/>
./direct.xml.old:3:  <passthrough ipv="ipv4">-t nat -I PREROUTING -dport 53333 -j DNAT --to 10.1.1.1:53333</passthrough>
[root@vm_centos firewalld]# rm direct.xml
[1]firewall-cmd基础用法 https://havee.me/linux/2015-01/using-firewalls-on-centos-7.html
[2]firewall-cmd防火墙命令2 https://wangchujiang.com/linux-command/c/firewall-cmd.html

GCC

https://mirrors.huaweicloud.com/gnu/gcc/ 找到对应版本源码 安装步骤为:

wget -c https://mirrors.huaweicloud.com/gnu/gcc/gcc-8.3.0/gcc-8.3.0.tar.xz
tar -xf gcc-8.3.0.tar.xz
cd gcc-8.3.0/
./configure
make
make install
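A commonly used variant of the steps above (a sketch, not from the original notes): build in a separate directory, install under a private prefix so the system gcc stays untouched, and skip 32-bit multilib. --prefix, --enable-languages and --disable-multilib are standard GCC configure options:

mkdir build && cd build
../configure --prefix=/opt/gcc-8.3.0 --enable-languages=c,c++ --disable-multilib
make -j$(nproc)
sudo make install
export PATH=/opt/gcc-8.3.0/bin:$PATH   # pick up the new gcc in the current shell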

./configure fails with:

checking for gnatbind... no
checking for gnatmake... no
checking whether compiler driver understands Ada... no
checking how to compare bootstrapped objects... cmp --ignore-initial=16 $$f1 $$f2
checking for objdir... .libs
checking for the correct version of gmp.h... yes
checking for the correct version of mpfr.h... no
configure: error: Building GCC requires GMP 4.2+, MPFR 2.4.0+ and MPC 0.8.0+.
Try the --with-gmp, --with-mpfr and/or --with-mpc options to specify
their locations.  Source code for these libraries can be found at
their respective hosting sites as well as at
ftp://gcc.gnu.org/pub/gcc/infrastructure/.  See also
http://gcc.gnu.org/install/prerequisites.html for additional info.  If
you obtained GMP, MPFR and/or MPC from a vendor distribution package,
make sure that you have installed both the libraries and the header
files.  They may be located in separate packages.

The cause is missing dependency libraries. GCC ships a script that downloads them; run:

root@ubuntu:~/1620-mount-point/gcc/gcc-8.3.0# ./contrib/download_prerequisites
2019-02-25 20:33:24 URL: ftp://gcc.gnu.org/pub/gcc/infrastructure/gmp-6.1.0.tar.bz2 [2383840] -> "./gmp-6.1.0.tar.bz2" [2]
2019-02-25 20:34:13 URL: ftp://gcc.gnu.org/pub/gcc/infrastructure/mpfr-3.1.4.tar.bz2 [1279284] -> "./mpfr-3.1.4.tar.bz2" [1]
2019-02-25 20:34:32 URL: ftp://gcc.gnu.org/pub/gcc/infrastructure/mpc-1.0.3.tar.gz [669925] -> "./mpc-1.0.3.tar.gz" [1]
2019-02-25 20:35:56 URL: ftp://gcc.gnu.org/pub/gcc/infrastructure/isl-0.18.tar.bz2 [1658291] -> "./isl-0.18.tar.bz2" [1]
gmp-6.1.0.tar.bz2: OK
mpfr-3.1.4.tar.bz2: OK
mpc-1.0.3.tar.gz: OK
isl-0.18.tar.bz2: OK
All prerequisites downloaded successfully.

Once the prerequisites have downloaded successfully, re-run ./configure and the check passes.

手动下载的方式

wget ftp://gcc.gnu.org/pub/gcc/infrastructure/isl-0.18.tar.bz2
wget ftp://gcc.gnu.org/pub/gcc/infrastructure/gmp-6.1.0.tar.bz2
wget ftp://gcc.gnu.org/pub/gcc/infrastructure/mpc-1.0.3.tar.gz
wget ftp://gcc.gnu.org/pub/gcc/infrastructure/mpfr-3.1.4.tar.bz2

查看GCC编译选项:

gcc -Q --help=target    #查询和target相关的编译选项
gcc -Q -v alpha.c       #查看编译某个文件的具体选项
gcc -print-search-dirs  #打印搜索路径

GCC 编译选项

-static                 #静态链接程序
-Wl,option              #把静态链接选项传递给连接器

gdb

gdb 常用命令

Setting a breakpoint on Python code from gdb: break when the Python function handle_uncaught_exception is entered (see [2]).

b PyEval_EvalFrameEx if strcmp(PyString_AsString(f->f_code->co_name), "handle_uncaught_exception") == 0
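If the CPython gdb helpers have been loaded (libpython.py, usually available via the python debuginfo package; see [2]), the Python-level stack can be printed once the breakpoint fires:

(gdb) py-bt    # Python-level backtrace at the current stop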
[1]gdb 调试利器 https://linuxtools-rst.readthedocs.io/zh_CN/latest/tool/gdb.html
[2]https://stripe.com/blog/exploring-python-using-gdb

git bash windows

git bash 显示中文乱码

$ git status
On branch master
Your branch is up to date with 'origin/master'.

Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)

        new file:   .gitignore
        new file:   Makefile
        new file:   make.bat
        new file:   source/KVM.md
        new file:   "source/ceph\346\265\213\350\257\225\345\221\275\344\273\244.md"
        new file:   "source/ceph\346\265\213\350\257\225\345\221\275\344\273\244_FS.md"
        new file:   source/conf.py
        new file:   source/index.rst
        new file:   "source/\345\246\202\344\275\225\344\275\277\347\224\250\347\224\237\346\210\220\346\226\207\346\241\243.md"
        new file:   "source/\346\237\245\347\234\213CPU\345\222\214\345\206\205\345\255\230\345\244\247\345\260\217.md"
解决办法

1. git bash option 设置中文支持:界面右键→Options→Text→Locale设置为zh_CN,Character Set设置为UTF-8

2. 设置git本身使用UTF-8,并且不转义非ASCII路径:

git config --global i18n.commitencoding utf-8
git config --global core.quotepath false

如果git status输出的文件名编码不对

user@DESKTOP MINGW64 /d/code/code2 (me-devel)
$ git status
On branch me-devel
Untracked files:
  (use "git add <file>..." to include in what will be committed)
        source/.vscode/
        source/categories/rst鍏抽敭.rst
        source/categories/鍩虹姒傚康_鍑哄彛_鍘熶骇_鐩存帴浜у搧_寰噺.rst
        source/categories/绠℃帶鍘熷洜鍜岃鍙緥澶?rst

解决办法

chcp 65001

设置windows unix文件格式

Checkout Windows-style, commit Unix-style [1]

Git will convert LF to CRLF when checking out text files. When committing text files, CRLF will be converted to LF. For cross-platform projects, this is the recommended setting on Windows (“core.autocrlf” is set to “true”)

Checkout as-is, commit Unix-style

Git will not perform any conversion when checking out text files. When committing text files, CRLF will be converted to LF. For cross-platform projects this is the recommended setting on Unix (“core.autocrlf” is set to “input”).

Checkout as-is, commit as-is

Git will not perform any conversions when checking out or committing text files. Choosing this option is not recommended for cross-platform projects (“core.autocrlf” is set to “false”)

[1]https://stackoverflow.com/questions/10418975/how-to-change-line-ending-settings

git command

git clone  ssh://[user@]host.xz[:port]/path/to/repo.git/
git config --global color.ui true   #有时候git没有颜色,可以这么设置

文件操作

rm readme.md           #删除文件, 但是文件还保存在暂存区
git rm readme.md       #从暂存区删除文件,以后不再追踪,从工作目录删除文件
git rm --cached README #从暂存区删除文件,但是仍然保留在工作区
git rm log/\*.log      #删除log目录下的所有.log文件,由于git有自己的展开,所以不需要shell进行展开
git clean -f           # 删除 untracked files
git clean -fd          # 连 untracked 的目录也一起删掉
git clean -xfd         # 连 gitignore 的untrack 文件/目录也一起删掉 (慎用,一般这个是用来删掉编译出来的 .o之类的文件用的)
git archive --format=zip --output ../kernel-alt-4.14.0-115.6.1.el7a.zip kernel-alt-4.14.0-115.6.1.el7a  #打包代码
# 在用上述 git clean 前,强烈建议加上 -n 参数来先看看会删掉哪些文件,防止重要文件被误删
git clean -nxfd
git clean -nf
git clean -nfd

提交和历史

git log                                 #当前分支的提交历史
git log --oneline                       #单行显示log
git log --oneline --graph               #图形显示提交历史
git log --pretty=oneline pb/master      #远程仓库pb下的master提交历史
git log nfs-revert-and-hang             #查看某分支nfs-revert-and-hang的log
git log --name-only                     #仅仅显示修改的文件
git log --name-status                   #仅仅显示修改的文件,和文件状态
git log --oneline --decorate            #显示HEAD指针和分支指向的提交对象
git log --oneline master..origin/master #显示本地master和远程仓库的commit差异, 只显示远程仓库有,而本地master没有的部分
git log --oneline master...origin/master #显示,除了两个分支都有的部分之外的差异。 远程仓库有本地没有 + 远程仓库没有本地有
git log --oneline --decorate --left-right --graph master...origin/master #带<表示属于master, 带>表示属于远程仓库

git tag --contains <commit>             #查看包含commit的tag
git log -p -2                           #展开显示每次提交差异, -2 只显示最近两次更新git
git reset HEAD CONTRIBUTING.md          #从暂存区撤出被误staged的文件
git reset HEAD^                         #回退最近一次提交, 这个提交的修改会保留,git status 显示待添加
git reset --hard HEAD^                  #回退最近一次提交,这个提交不会被保留, git status 显示clean

如果提交后发现忘记了暂存某些需要的修改,可以像下面这样补救,最终只会有一个提交。

git commit -m 'initial commit'
git add forgotten_file
git commit --amend

远程仓库

git remote -v                                        #显示远程仓库
git remote show origin                               #显示远程仓库详细信息
git ls-remote                                        #查看远程库更多信息
git push origin master                               #推送本地master分支到远程仓库origin

git tag                                              #显示标签
git tag -l 'v1.8.5*'                                 #显示某个标签详细信息

git remote add pb https://github.com/paulboone/ticgit #添加远程仓库
git remote rename pb paul                            #重命名远程仓库

git log --oneline origin/master..master              #查看本地master比远程仓库多多少个commit

PR 拉取与测试

git fetch origin pull/124/head:fauxrep2

只需要替换数字124和本地分支名fauxrep2即可

分支创建管理

git branch -a                                       #显示所有本地和远程分支
git checkout -b iss53                               #创建分支并切换
git branch iss53
git checkout iss53
git branch -r                                       #查看所有远程分支
git branch -a                                       #查看所有本地和远程分支
git branch -d hotfix                                #删本地分支
git push origin --delete me-linux-comments          #删除远程仓库origin的me-linux-comments分支
git branch -m oldname newname                       #重命名分支
git ls-tree -r master --name-only                   #查看分支已经tracked的file
git push origin serverfix:awesomebranch             #推送本地serverfix分支到远程仓库上的awesomebranch
git push origin serverfix:serverfix                 #推送本地的serverfix分支到远程的serverfix分支
git checkout -b serverfix origin/serverfix          #创建并切换到跟踪远程分支的本地分支serverfix
git checkout -b sf origin/serverfix                 #创建并切换到跟踪远程分支的本地分支sf
git checkout --track origin/serverfix               #自动切换到跟踪远程分支的本地分支
git checkout --patch master include/uapi/linux/mii.h#把master分支的指定文件合并到当前分支

生成patch与合入patch

diff 和 patch 命令组合
使用diff比较文件差异并生成patch文件, 然后使用patch合入修订,适用于没有版本管理的场景 例子请查看[diff]
git diff 和 git apply 组合
使用git diff 生成patch, 使用git apply 命令合入代码。 git apply 可以加参数--check,可以更加安全地合入和撤销代码
git diff > add_function.patch                 #当前仓库中修改,但是未暂存的文件生成patch
git diff --cached > add_function.patch        #当前仓库已经暂存,但是没提交的文件生成patch
git diff --staged --binary > mypatch.patch    #二进制文件patch
git diff --relative 1bc4aa..1c7b4e            #以相对当前路径,生成两个commit之间的patch,适合用于生成模块的patch


git apply add_function.patch                  #git apply 可以保证一个patch可以完整合入或者完全不合入
git apply -p0 add_function                    #如果需要去除前缀路径
git format-patch和git am组合
git format-patch可以针对git仓库的commit和版本生成patch,使用git am 可以完整合入patch中的commit信息,也就是作者和message等。前面的patch版本管理方式都是只针对代码改动,不包含提交的commit信息。
git format-patch master                                 #在当前分支,生成master到当前分支的patch,一个commit一个patch。默认当前分支是从参数中的分支(master)分出来的
git format-patch master --stdout > add_function.patch   #生成单个文件的patch
git format-patch -s fe21342443 -o today/                #生成自从fe21342443以来的patch,每个commit一个patch

git am add_function.patch                                #以提交方式合入patch
git apply add_function.patch                            #以修改,未暂存方式合入patch
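
下面是一个最小的生成/合入流程示意,假设repo-a和repo-b是同一个项目的两个本地克隆(目录名只是示意):

cd repo-a
git format-patch -1 HEAD -o /tmp/patches      # 把最近一次commit导出为0001-*.patch
cd ../repo-b
git am /tmp/patches/0001-*.patch              # 以提交方式合入,保留作者和message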

如果错误向github提交了敏感信息如密码:

包含敏感信息的文件为server_start_up_log.txt

git filter-branch --force --index-filter 'git rm --cached --ignore-unmatch docs/resources/server_start_up_log.txt' --prune-empty --tag-name-filter cat -- --all
git push origin master --force

use git over a SSH proxy

ssh -f -N -D 127.0.0.1:3128 xxx@xx.x.xx.xx

git config --global http.proxy 'socks5://127.0.0.1:3128'
git config --global https.proxy 'socks5://127.0.0.1:3128'
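
隧道关闭之后需要把代理配置取消,否则后续git操作会卡住,例如:

git config --global --unset http.proxy
git config --global --unset https.proxy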

git submodule

如果在一个git仓库中,想要包含另一个git仓库,这个时候有多种实现方式。 可以使用repo 或者 git 自带的submodule。 这里介绍一下submodule。

克隆一个带子模块的仓库,这个时候会把仓库下的所有子模块都下载下来

git clone --recurse-submodules git@lixianfa.github.com:LyleLee/GoodCommand.git

也可以切换到工程之后使用命令更新

git submodule update

添加一个子工程

git submodule add git@github.com:LyleLee/arm_neon_example.git source/src/arm_neon_example
[user1@centos GoodCommand]$ git submodule add git@github.com:LyleLee/arm_neon_example.git source/src/arm_neon_example
Cloning into 'source/src/arm_neon_example'...
Enter passphrase for key '/home/user1/.ssh/id_rsa':
remote: Enumerating objects: 5, done.
remote: Counting objects: 100% (5/5), done.
remote: Compressing objects: 100% (5/5), done.
remote: Total 5 (delta 0), reused 0 (delta 0), pack-reused 0
Receiving objects: 100% (5/5), 1.64 KiB | 0 bytes/s, done.

修改.gitmodules后需要执行sync来同步url

git submodule sync
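
如果克隆时没有加--recurse-submodules,之后也可以补拉子模块,例如:

git submodule update --init --recursive   # 初始化并克隆所有(嵌套的)子模块
git submodule status                      # 查看每个子模块当前检出的commit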

github two account

账户信息配置,取消全局账户

#查看配置信息
git config -l

#如果配置有全局账户,建议取消全局账户,因为我们需要每个不同的仓库使用自己的账户提交代码
git config --global --unset user.name
git config --global --unset user.email

如果想重新配置全局账户

git config --global user.name "zhangshan"
git config --global user.email "zhangshan@gmail.com"

假设这两个账号分别是 one@gmail.com 和 two@gmail.com。

SSH的配置

配置ssh的目的是,每次提交代码的时候不需要像https方式那样每次都输入账户和密码。
我们需要分别产生这两个账号的公钥,添加到github Settings下面的 SSH and GPG keys 中的 SSH keys 当中

查看是否有之前的公钥,并删除,否则ssh会自己选择默认公钥进行连接

ls ~/.ssh/
rm id_rsa_*

查看之前添加公钥

ssh-add -l

如果执行不成功

$ ssh-add -l
Could not open a connection to your authentication agent.

需要执行

eval `ssh-agent -s`

生成私钥公钥对

ssh-keygen -t rsa -C "one@gmail.com" -f ~/.ssh/id_rsa_one
ssh-keygen -t rsa -C "two@gmail.com" -f ~/.ssh/id_rsa_two

这个时候会得到文件

-rw-r--r-- 1 Administrator 197121 1831 2  12 17:36 id_rsa_one
-rw-r--r-- 1 Administrator 197121  405 2  12 17:36 id_rsa_one.pub
-rw-r--r-- 1 Administrator 197121 1831 2  12 19:09 id_rsa_two
-rw-r--r-- 1 Administrator 197121  409 2  12 19:09 id_rsa_two.pub

把id_rsa_one.pub和id_rsa_two.pub的内容添加到github账户的ssh-keys当中

cat id_rsa_one.pub
#复制内容,在浏览器中添加到github账户的ssh-keys当中

编辑~/.ssh/config文件,其中的Host是可以指定的,后面远程仓库的url需要和它一致

#one
Host one.github.com
HostName github.com
User git
IdentityFile ~/.ssh/id_rsa_one

#two
Host two.github.com
HostName github.com
User git
IdentityFile ~/.ssh/id_rsa_two

测试ssh是否成功

ssh -T git@one.github.com
ssh -T git@two.github.com
#如果没有添加公钥.pub到文件到相应的github账户会出现
Permission denied (publickey).
#如果已经添加公钥,会提示成功
Hi tom! You've successfully authenticated, but GitHub does not provide shell access.
me@ubuntu:~/.ssh$ ssh -T git@goodcommand.github.com
Enter passphrase for key '/home/me/.ssh/id_rsa_github':

me@ubuntu:~/.ssh$ eval `ssh-agent`
Agent pid 50820
me@ubuntu:~/.ssh$ ssh-add ~/.ssh/id_rsa_github
Enter passphrase for /home/me/.ssh/id_rsa_github:
Identity added: /home/me/.ssh/id_rsa_github (/home/me/.ssh/id_rsa_github)
me@ubuntu:~/.ssh$
me@ubuntu:~/.ssh$
me@ubuntu:~/.ssh$ ssh -T git@goodcommand.github.com
Hi LyleLee! You've successfully authenticated, but GitHub does not provide shell access.

教程提到每次重启都要执行:

ssh-add ~/.ssh/id_rsa_one
ssh-add ~/.ssh/id_rsa_two

可以使用-k避免每次重启都要执行添加动作

ssh-add -k ~/.ssh/id_rsa_one
ssh-add -k ~/.ssh/id_rsa_two

仓库配置

到每个仓库下设置user.name 和 user.email

#仓库1
git config user.name "tom"
git config user.email "one@gmail.com"
#仓库2
git config user.name "sam"
git config user.email "two@gmail.com"

到每个仓库下修改远程仓库地址,如果不修改,提交将不成功

#查看旧值
git config -l
remote.origin.url=git@two.github.com:LyleLee/GoodCommand.git
#设置新值
git config remote.origin.url "git@two.github.com:LyleLee/GoodCommand.git"

这个时候查看远程仓库的信息,可以看到已经修改好。

git remote -v
origin  git@two.github.com:LyleLee/GoodCommand.git (fetch)
origin  git@two.github.com:LyleLee/GoodCommand.git (push)

这个时候git push origin 就可以了

更换电脑,指定ssh使用的私钥

https://blog.csdn.net/SCHOLAR_II/article/details/72191042

设置代理

git config --global http.proxy "http://username:password@proxy.server.name:8080"
git config --global https.proxy "http://username:password@proxy.server.name:8080"

待确认问题

ssh-keygen -f "/home/me/.ssh/known_hosts" -R "192.168.1.215"

这个命令的作用是从 /home/me/.ssh/known_hosts 中删除 192.168.1.215 对应的host key(ssh-keygen -R 删除指定主机的记录),通常在目标主机重装、host key变化之后使用

问题: Bad owner or permissions on /home/me/.ssh/config

在config当中设置了连接github的私钥之后出现权限不对

[me@centos ~]$ ssh -T git@github.com
Bad owner or permissions on /home/me/.ssh/config

这个时候不要听信别人的把文件乱chown和chmod。先查看现在的文件权限,是664

[me@centos ~]$ ls -la /home/me/.ssh/config
-rw-rw-r-- 1 me me 88 Aug 29 11:38 /home/me/.ssh/config

其实只需要改成600就可以了, 也就是除了owner之外,组用户和其他用户都不可读,不可写

[me@centos .ssh]$ chmod 600 /home/me/.ssh/config
[me@centos .ssh]$ ssh -T git@github.com
Warning: Permanently added the RSA host key for IP address '13.250.177.223' to the list of known hosts.
Hi  You've successfully authenticated, but GitHub does not provide shell access.
[me@centos .ssh]$ ls -la
-rw-------   1 me me   88 Aug 29 11:38 config

这个问题第一次遇到,权限多了还不行

glibc

编译安装glibc [1]

tar -zxf glibc-2.30.tar.gz
mkdir -p glibc-2.30/build
cd glibc-2.30/build/
scl enable devtoolset-8 bash
../configure --prefix=/home/user1/install-dir
make -j96
make install

有时候需要安装比较高版本的make 和python

替换glibc之后出现问题的补救办法:

LD_PRELOAD=/lib64/libc-2.5.so  ln -s /lib64/libc-2.5.so /lib64/libc.so.6
[1]Glibc源码仓库 https://sourceware.org/git/?p=glibc.git;a=tree

glusterfs

一种网络文件系统

性能优化方法请参考 [1],小文件存储的优化请参考 [2]

gluster volume set [VOLUME] [OPTION] [PARAMETER]
gluster volume get [VOLUME] performance.io-thread-count
vi /etc/glusterfs/glusterfs.vol                     #或者在文件中配置。
[1]https://www.jamescoyle.net/how-to/559-glusterfs-performance-tuning
[2]https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/small_file_performance_enhancements

gmake

编译安装gmake

wget http://ftp.gnu.org/gnu/make/make-4.2.tar.gz
tar -zxf make-4.2.tar.gz
cd make-4.2
mkdir build
cd build
../configure --prefix=/home/sjtu_chifei/lxf/toolcollect/
make -j64
make install

go

编译安装golang

yum install golang       #安装软件源默认的golang,用于编译新版本的golang
git clone https://github.com/golang/go
cd go/src
./all.bash

如果编译成功

##### ../test/bench/go1
testing: warning: no tests to run
PASS
ok      _/home/me/go/test/bench/go1     1.914s

##### ../test

##### API check
Go version is "go1.12.7", ignoring -next /home/me/go/api/next.txt

ALL TESTS PASSED
---
Installed Go for linux/arm64 in /home/me/go
Installed commands in /home/me/go/bin
*** You need to add /home/me/go/bin to your PATH.

设置path,注意替换成自己的路径

[me@centos src]$ go version
go version go1.11.5 linux/arm64
[me@centos src]$ export PATH=/home/me/go/bin:$PATH
[me@centos src]$ echo $PATH
/home/me/go/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/me/.local/bin:/home/me/bin
[me@centos src]$
[me@centos src]$ go version
go version go1.12.7 linux/arm64
[me@centos src]$

go proxy

go 有时候需要使用代理才能访问

可以参考使用ssh tunnel,然后在终端

export http_proxy=socks5://127.0.0.1:7777
export https_proxy=socks5://127.0.0.1:7777

go build static

go build -a -ldflags '-extldflags "-static"'

怎么让go静态编译出目标文件 [1]
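
可以用下面的方式简单确认编译结果确实是静态链接的(输出文件名app只是示意):

go build -a -ldflags '-extldflags "-static"' -o app .
file app      # 应显示 "statically linked"
ldd app       # 应显示 "not a dynamic executable"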

how to write Go code [必读]

https://golang.org/doc/code.html

package 必须在一个文件夹内,且一个文件夹内也只能有一个package,但是一个文件夹可以有多个文件 [2] 文件名跟包名没有直接关系。如果只有一个文件,通常可以写成包名。但是导入的时候,必须导入包所在的文件夹的路径。其实可以这样理解,import 的是 path(路径) [2]

怎么样组织golang工程文件 https://eli.thegreenplace.net/2019/simple-go-project-layout-with-modules/

问题记录

从源码安装go不成功
[2019-08-13 17:18:11]  [me@centos src]$ ./all.bash
[2019-08-13 17:18:16]  Building Go cmd/dist using /home/me/go1.4.
[2019-08-13 17:18:16]  ERROR: Cannot find /home/me/go1.4/bin/go.
[2019-08-13 17:18:16]  Set $GOROOT_BOOTSTRAP to a working Go tree >= Go 1.4.

解决办法:

先安装默认版本的go

yum install golang
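
如果安装完发行版的go之后all.bash仍然去找/home/me/go1.4,可以把GOROOT_BOOTSTRAP指向已安装的go。下面是一个示意,路径因系统而异,以go env GOROOT的输出为准,源码路径也按自己的实际情况调整:

export GOROOT_BOOTSTRAP=$(go env GOROOT)   # CentOS上一般是/usr/lib/golang
cd ~/go/src && ./all.bash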
[1]http://blog.wrouesnel.com/articles/Totally%20static%20Go%20builds/
[2](1, 2) https://www.jianshu.com/p/07ffc5827b26

go images

如何生成golang镜像,包含编译程序所需的依赖。

程序需要依赖pcap。 如果用官方的golang镜像, 默认不会包含pcap,所以会报错

pcap.h: No such file or directory

这个时候基于官方镜像,把依赖打包进去。

FROM golang:1.14-alpine

WORKDIR /go/src/app
RUN cp -a /etc/apk/repositories /etc/apk/repositories.bak && \
    sed -i "s@http://dl-cdn.alpinelinux.org/@https://mirrors.huaweicloud.com/@g" /etc/apk/repositories
RUN apk update && apk add libpcap-dev && apk add git && apk add gcc && apk add g++ && apk add openssh-client

VOLUME /go/src/app


CMD ["/bin/sh"]

构建镜像

user@server:~/Dockerfile_kunpeng/Dockerfile_golang_build$ docker build -t compiler .
Sending build context to Docker daemon  2.048kB
Step 1/9 : FROM golang:1.14-alpine
---> 3289bf11c284
Step 2/9 : WORKDIR /go/src/app
---> Using cache
---> 3a90cbce712d

......

Step 9/9 : CMD ["/bin/sh"]
---> Running in 289b9617108f
Removing intermediate container 289b9617108f
---> 5c4471bf0685
Successfully built 5c4471bf0685
Successfully tagged compiler:latest

之后就可以在容器里对工程进行编译了,指定ldflags是为了静态链接依赖库,要不然运行环境也需要有依赖库:

docker run --rm -it --name compiler -v "$(pwd):/go/src/app" compiler

/go/src/app # go build -a -ldflags '-extldflags "-static"' -o app.out .

在宿主机的目录下生成了目标文件

user@server:~/program/pcabapp$ ls app.out -lh
-rwxr-xr-x 1 root root 4.6M Jun 18 09:14 app.out

grep

如何使用 grep匹配多个数据
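
下面是一个最小示例,模式和文件名只是示意:

grep -E 'error|warning' app.log        # 匹配多个模式之一(扩展正则)
grep -e error -e warning app.log       # 等价写法,重复使用-e
grep -E 'error|warning' -rn /var/log   # 递归查找,显示文件名和行号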

grub

如何设置启动选项

RedHat下想让本次选中的启动项保存为以后的默认启动项,需要设置这两项。设置后,本次选中什么,下次启动默认仍然是这个启动项

GRUB_DEFAULT=saved
GRUB_SAVEDEFAULT=true

RedHat

grub模板位置

/etc/default/grub

grub.cfg位置

/boot/efi/EFI/redhat/grub.cfg

修改/etc/default/grub后更新命令

grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg

CentOS 7 1810

grub模板位置

/etc/default/grub

grub.cfg位置

/boot/grub2/grub.cfg

修改/etc/default/grub后更新命令

grub2-mkconfig -o /boot/grub2/grub.cfg

ubuntu 18.04 LTS

选择指定启动项启动

root@intel6248:~# sudo grub-reboot Ubuntu\,\ with\ Linux\ 4.15.0-112-generic
root@intel6248:~# systemctl reboot -i

grub模板位置

/etc/default/grub

grub.cfg位置

/boot/grub/grub.cfg

修改/etc/default/grub后更新命令

sudo grub-mkconfig -o /boot/grub/grub.cfg

查看系统已有的开机启动项:

grep "^menuentry" /boot/efi/EFI/redhat/grub.cfg
# 需要以menuentry开头

$ sudo grub-set-default 0

上面这条命令将会持续有效,直到下一次修改;下面的命令则只对下一次启动生效:

$ sudo grub-reboot 0
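
在CentOS/RedHat上也可以用菜单项标题而不是序号来指定默认项(依赖上面的GRUB_DEFAULT=saved设置);下面的标题只是示例,取自本文后面grub.cfg中的menuentry:

sudo grub2-set-default 'CentOS Linux (3.10.0-957.el7.x86_64) 7 (Core)'
sudo grub2-editenv list    # 确认saved_entry已经更新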

grub官方文档:https://www.gnu.org/software/grub/manual/grub/grub.html#Introduction

hinicadm

1822网卡管理工具

hinicadm info                 #查看1822网口信息
hinicadm reset -i hinicX -p X #恢复出厂设置,X换成相应的ID
hinicadm reset -i hinic0 -p 0

1822驱动自动加载

cp hinic.ko /lib/modules/`uname -r`/updates
depmod `uname -r`

history

使用history命令可以查看历史命令。

现在默认的bash在多个终端窗口的表现是:

在窗口1执行

ls
rm foo -rf
pwd

在窗口2执行

git commit
git clone
git log

在窗口3执行

./configure
make
make install

依次关闭窗口1、2,3,重新打开一个窗口, 这个新窗口只会保留窗口3的内容。

我们希望history保留所有窗口的内容。 好处,不会漏。坏处,在新窗口看到的命令比较乱。

vim ~/.bashrc
# Avoid duplicates
export HISTCONTROL=ignoredups:erasedups
# When the shell exits, append to the history file instead of overwriting it
shopt -s histappend

# After each command, append to the history file and reread it
export PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND$'\n'}history -a; history -c; history -r"

hostnamectl

修改设置系统的hostname

hostnamectl --static set-hostname ceph1

注意--static

查看hostname,设置hostname,重新登陆生效。

[root@localhost home]# hostnamectl
   Static hostname: localhost.localdomain
         Icon name: computer-server
           Chassis: server
        Machine ID: b4dddad914c64fe7a35349040093ae45
           Boot ID: 58dff55183954147b79a39d4273f8c54
  Operating System: Red Hat Enterprise Linux 8.0 Beta (Ootpa)
       CPE OS Name: cpe:/o:redhat:enterprise_linux:8.0:beta
            Kernel: Linux 4.18.0-68.el8.aarch64
      Architecture: arm64

[root@localhost home]# hostnamectl set-hostname redhat80
[root@localhost home]# exit

[root@redhat80 ~]# hostnamectl
   Static hostname: redhat80
         Icon name: computer-server
           Chassis: server
        Machine ID: b4dddad914c64fe7a35349040093ae45
           Boot ID: 58dff55183954147b79a39d4273f8c54
  Operating System: Red Hat Enterprise Linux 8.0 Beta (Ootpa)
       CPE OS Name: cpe:/o:redhat:enterprise_linux:8.0:beta
            Kernel: Linux 4.18.0-68.el8.aarch64
      Architecture: arm64
[root@redhat80 ~]#

hostname的配置文件在下面路径, 往里面写入一个名字就可以了

/etc/hostname

ifstat

网络监控工具很多,但是一直想找针对指定网口流量的监控工具。 [1]

  1. Overall bandwidth - nload, bmon, slurm, bwm-ng, cbm, speedometer, netload
  2. Overall bandwidth (batch style output) - vnstat, ifstat, dstat, collectl
  3. Bandwidth per socket connection - iftop, iptraf, tcptrack, pktstat, netwatch, trafshow
  4. Bandwidth per process - nethogs

从大到小的观察方式: 整个系统 dstat -> 各个接口 ifstat -> 某个TCP连接 iftop

安装

yum install nload iftop nethogs htop ifstat pktstat

查看整个系统的流量。dstat

[root@localhost ~]# dstat
You did not select any stats, using -cdngy by default.
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
  0   0  99   0   0   0|  16k   22k|   0     0 |  14k   20k|  10k 8862
  1   0  99   0   0   0|   0     0 |2476k  476M|   0     0 |  22k   11k
  1   0  99   0   0   0|   0     0 |2236k  400M|   0     0 |  20k   11k
  0   0  99   0   0   0|   0  2968k|2112k  418M|   0     0 |  20k   11k
  1   0  99   0   0   0|   0     0 |2646k  499M|   0     0 |  24k   13k
  1   0  99   0   0   0|   0     0 |2494k  446M|   0     0 |  23k   11k
  1   0  99   0   0   0|   0     0 |2333k  445M|   0     0 |  22k   11k
  1   0  99   0   0   0|   0     0 |2890k  531M|   0     0 |  25k   11k
  1   0  99   0   0   0|   0     0 |2743k  481M|   0     0 |  24k   11k
[root@localhost ~]#

查看指定网络端口的流量。ifstat nload

ifstat 可以观察所有接口或者指定接口的带宽

user1@Arm64-server:~$ ifstat -i eno1,eno2,eno3,enp189s0f0
      eno1                eno2                eno3             enp189s0f0
KB/s in  KB/s out   KB/s in  KB/s out   KB/s in  KB/s out   KB/s in  KB/s out
   0.00      0.00      0.00      0.00      0.00      0.00      0.06      0.10
   0.00      0.00      0.00      0.00      0.00      0.00      0.18      0.10
   0.00      0.00      0.00      0.00      0.00      0.00      0.06      0.10
   0.00      0.00      0.00      0.00      0.00      0.00      0.18      0.10
   0.00      0.00      0.00      0.00      0.00      0.00      0.12      0.10
   0.00      0.00      0.00      0.00      0.00      0.00      0.66      0.20

ifstat 目前在iproute2项目维护 [2]

nload 也用于观察接口的带宽

nload -m


Device em1 [192.168.100.118] (1/9):
==============================================================================================================================
Incoming:                                                      Outgoing:
Curr: 2.12 kBit/s                                              Curr: 28.61 kBit/s
Avg: 4.32 kBit/s                                               Avg: 49.00 kBit/s
Min: 0.00 Bit/s                                                Min: 0.00 Bit/s
Max: 7.30 kBit/s                                               Max: 100.50 kBit/s
Ttl: 84.48 MByte                                               Ttl: 95.69 MByte

Device em2 (2/9):
==============================================================================================================================
Incoming:                                                      Outgoing:
Curr: 0.00 Bit/s                                               Curr: 0.00 Bit/s
Avg: 0.00 Bit/s                                                Avg: 0.00 Bit/s
Min: 0.00 Bit/s                                                Min: 0.00 Bit/s
Max: 0.00 Bit/s                                                Max: 0.00 Bit/s
Ttl: 0.00 Byte                                                 Ttl: 0.00 Byte

查看端口下的TCP和UDP数据包

pktstat 可以查看各种类型数据包的占比

sudo pktstat -B

interface: enp189s0f0
Bps

   Bps    % desc
71.9   8% arp
73.1   8% ethertype 0x88cc
71.3   8% llc 802.1d -> 802.1d
111.8  12% tcp 192.168.1.107:34116 <-> Arm64-server:ssh
            udp Arm64-server:43057 <-> ubuntu:domain
            udp Arm64-server:59122 <-> ubuntu:domain
            udp Arm64-server:60086 <-> ubuntu:domain

查看进程的流量 iftop 和 nethogs

iftop 可以查看主机到各个主机的tcp socket连接

                         12.5Kb                   25.0Kb                   37.5Kb                   50.0Kb              62.5Kb
└────────────────────────┴────────────────────────┴────────────────────────┴────────────────────────┴─────────────────────────
localhost.localdomain:ssh                         => 115.171.85.202:51346                              32.1Kb  27.2Kb  23.1Kb
                                                  <=                                                   1.77Kb  1.38Kb   828b
localhost.localdomain:ssh                         => 192.168.100.12:41678                              2.28Kb  1.73Kb  2.09Kb
                                                  <=                                                    208b    208b    379b
255.255.255.255:bootps                            => 0.0.0.0:bootpc                                       0b      0b      0b
                                                  <=                                                      0b    266b     66b
localhost.localdomain:54269                       => public1.114dns.com:domain                            0b     59b     15b
                                                  <=                                                      0b     87b     22b
localhost.localdomain:33555                       => public1.114dns.com:domain                            0b      0b     13b
                                                  <=                                                      0b      0b     20b

nethogs 有同样的功能,但是有时候会停止刷新

NetHogs version 0.8.5

    PID USER     PROGRAM                         DEV        SENT      RECEIVED
 155017 root     fio                             p7p2    40193.922     269.434 KB/sec
 155035 root     fio                             p7p2    42799.801     249.772 KB/sec
 155065 root     fio                             p7p2    27634.619     180.794 KB/sec
 155057 root     fio                             p7p2    29825.311     165.916 KB/sec
 155079 root     fio                             p7p2    30595.211     162.005 KB/sec
 155009 root     fio                             p7p2    22149.711     134.591 KB/sec
 155059 root     fio                             p7p2     5550.278      32.793 KB/sec
 155069 root     fio                             p7p2     5945.441      31.159 KB/sec
 158413 root     sshd: root@pts/1                em1         4.339       0.245 KB/sec
 155027 root     fio                             p7p2        0.119       0.089 KB/sec
[1]https://www.binarytides.com/linux-commands-monitor-network
[2]https://git.kernel.org/pub/scm/network/iproute2/iproute2.git/tree/misc/ifstat.c

interrupts

中断,它是一种由设备使用的硬件资源异步向处理器发信号。实际上,中断就是由硬件来打断操作系统。 大多数现代硬件都通过中断与操作系统通信。对给定硬件进行管理的驱动程序注册中断处理程序,是为了响应并处理来自相关硬件的中断。中断过程所做的工作包括应答并重新设置硬件, 从设备拷贝数据到内存以及反之,处理硬件请求,并发送新的硬件请求。 《linux内核设计与实现》

不同设备的中断:

解读中断

这里以树莓派的中断为例。

           CPU0       CPU1       CPU2       CPU3
 16:          0          0          0          0  bcm2836-timer   0 Edge      arch_timer
 17:    3047829    2104689    4451895    1361536  bcm2836-timer   1 Edge      arch_timer
 23:      15893          0          0          0  ARMCTRL-level   1 Edge      3f00b880.mailbox
 24:          2          0          0          0  ARMCTRL-level   2 Edge      VCHIQ doorbell
 46:          0          0          0          0  ARMCTRL-level  48 Edge      bcm2708_fb dma
 48:          0          0          0          0  ARMCTRL-level  50 Edge      DMA IRQ
 50:          0          0          0          0  ARMCTRL-level  52 Edge      DMA IRQ
 51:      35573          0          0          0  ARMCTRL-level  53 Edge      DMA IRQ
 54:        206          0          0          0  ARMCTRL-level  56 Edge      DMA IRQ
 59:          0          0          0          0  ARMCTRL-level  61 Edge      bcm2835-auxirq
 62:  139285704          0          0          0  ARMCTRL-level  64 Edge      dwc_otg, dwc_otg_pcd, dwc_otg_hcd:usb1
 79:          0          0          0          0  ARMCTRL-level  81 Edge      3f200000.gpio:bank0
 80:          0          0          0          0  ARMCTRL-level  82 Edge      3f200000.gpio:bank1
 86:      21597          0          0          0  ARMCTRL-level  88 Edge      mmc0
 87:       5300          0          0          0  ARMCTRL-level  89 Edge      uart-pl011
 92:       4489          0          0          0  ARMCTRL-level  94 Edge      mmc1
FIQ:              usb_fiq
IPI0:          0          0          0          0  CPU wakeup interrupts
IPI1:          0          0          0          0  Timer broadcast interrupts
IPI2:     590271     437681    1438135     374644  Rescheduling interrupts
IPI3:         21         22        346         94  Function call interrupts
IPI4:          0          0          0          0  CPU stop interrupts
IPI5:     550412     395048    1834241     236945  IRQ work interrupts
IPI6:          0          0          0          0  completion interrupts
Err:          0

kernel/irq/proc.c 中的函数可以看到打印函数

int show_interrupts(struct seq_file *p, void *v)
前半部分
第一列:是中断号。
第二、三、四、五列:每列一个CPU,是在该CPU上的中断计数器。可以看到17号中断产生了非常多,它是时钟中断。
/* print header and calculate the width of the first column */
if (i == 0) {
        for (prec = 3, j = 1000; prec < 10 && j <= nr_irqs; ++prec)
                j *= 10;

        seq_printf(p, "%*s", prec + 8, "");
        for_each_online_cpu(j)
                seq_printf(p, "CPU%-8d", j);
        seq_putc(p, '\n');
}
第六列是中断控制器。[bcm2836]是树莓派2的CPU。bcm2836-timer是cpu时钟中断控制器。[ARMCTRL-level]是bcm2836的顶层中断控制器。
第七列:硬件中断号(hwirq)。
if (desc->irq_data.domain)
        seq_printf(p, " %*d", prec, (int) desc->irq_data.hwirq);
else
        seq_printf(p, " %*s", prec, "");

第八列:中断级别。

#ifdef CONFIG_GENERIC_IRQ_SHOW_LEVEL
        seq_printf(p, " %-8s", irqd_is_level_type(&desc->irq_data) ? "Level" : "Edge");
#endif

第九列:就是注册的中断处理程序。有多个逗号的表示这个中断号对应有多个中断处理程序。

action = desc->action;
if (action) {
        seq_printf(p, "  %s", action->name);
        while ((action = action->next) != NULL)
                seq_printf(p, ", %s", action->name);
}

dwc_otg, dwc_otg_pcd, dwc_otg_hcd:usb1代表以太网或者USB中断;x86云主机的中断中的i8042代表键盘控制器中断。

术语:IRQ 中断请求(Interrupt Request);ISR 中断服务例程(Interrupt Service Routine)

fio benchmark

常用测试硬盘性能的工具有fio和vdbench。
fio的介绍安装请查看fio
vdbench的介绍和安装请查看vdbench

通常,我们认为普通机械硬盘的吞吐量是100MB/s [参考],固态硬盘的吞吐量是200MB/s

x86服务器

cpu

ubuntu@ubuntu:~$ lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              64
On-line CPU(s) list: 0-63
Thread(s) per core:  2
Core(s) per socket:  16
Socket(s):           2
NUMA node(s):        2
Vendor ID:           GenuineIntel
CPU family:          6
Model:               79
Model name:          Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
Stepping:            1
CPU MHz:             2598.100
CPU max MHz:         2600.0000
CPU min MHz:         1200.0000
BogoMIPS:            5188.28
Virtualization:      VT-x
L1d cache:           32K
L1i cache:           32K
L2 cache:            256K
L3 cache:            40960K
NUMA node0 CPU(s):   0-15,32-47
NUMA node1 CPU(s):   16-31,48-63

硬盘

=== START OF INFORMATION SECTION ===
Vendor:               HUAWEI
Product:              HWE32SS3008M001N
Revision:             2774
Compliance:           SPC-4
User Capacity:        800,166,076,416 bytes [800 GB]
Logical block size:   512 bytes
Physical block size:  4096 bytes
LU is resource provisioned, LBPRZ=1
Rotation Rate:        Solid State Device
Form Factor:          2.5 inches
Logical Unit id:      0x5d0efc1ec8047002
Serial number:        2102311TNB10J8000371
Device type:          disk
Transport protocol:   SAS (SPL-3)
Local Time is:        Fri Mar 15 18:00:41 2019 CST
SMART support is:     Available - device has SMART capability.
SMART support is:     Enabled
Temperature Warning:  Enabled

不确定raid卡是否对测试有影响:

SAS3108
SAS 12G
支持条带大小范围是 64 KB ~ 1 MB

两个硬盘,都是raid0,分别添加到两个逻辑盘当中

软件

OS: 18.04.2 LTS (Bionic Beaver)
内核: Linux ubuntu 4.15.0-46-generic
fio: fio-3.13

测试结果

使用如下命令,仅改变filename、numbjobs、iodepth、rw、bs

fio --ramp_time=5 --runtime=15 --size=20g --ioengine=libaio --filename=/dev/sdb --name=4k-read-64-64 --numjobs=64 --iodepth=64 --rw=read --bs=4k --direct=1 --group_reporting

测试脚本如下:

/src/io_all.sh

测试log如下:

这个测试还有其他影响因素:一是前后两个测试之间互相有影响,导致手动单独执行时结果更好;二是测试时间较短,可靠性不足。可以考虑绑核以提升性能

指定--size 20g 或者10g时,测试结果偏高,应该只指定runtime,这样更接近真实情况。

绑核的影响

绑核性能可以提升一倍。 测试命令

numactl -C 0-7 -m 0 fio -name=iops -rw=read -bs=4k -runtime=1000 -iodepth=64 -numjobs=8 -filename=/dev/sdc -ioengine=libaio -direct=1 -group_reporting
fio -name=iops -rw=read -bs=4k -runtime=1000 -iodepth=64 -numjobs=8 -filename=/dev/sdc -ioengine=libaio -direct=1 -group_reporting

numa的影响

使用如下命令观察numactl设置对测试结果的影响

numactl -C 0-7 -m 0 fio --name=iops --rw=read --bs=4k --runtime=60 --iodepth=64 --numjobs=8 --filename=/dev/sdc --ioengine=libaio --direct=1 --group_reporting
numactl -C 48-56 -m 1 fio --name=iops --rw=read --bs=4k --runtime=60 --iodepth=64 --numjobs=8 --filename=/dev/sdc --ioengine=libaio --direct=1 --group_reporting

测试结果,前面的CPU测试结果偏好,内存区域0测试结果较好

32-40 -m 0 674
32-40 -m 1 665
32-40 -m 2 655
32-40 -m 3 630

48-56 -m 0 515
48-56 -m 1 543
48-56 -m 2 495
48-56 -m 3 540

选项--size的影响

不建议设置size,因为fio会尝试对指定size的文件或者硬盘进行这个区域内的循环读写。裸盘测试不建议设置size。

hdparm -t可以简单对硬盘进行测试,测试结果待分析

sudo hdparm -t /dev/sdc

/dev/sdc:
 Timing buffered disk reads: 782 MB in  3.01 seconds = 260.07 MB/sec

ip

ip a add 192.168.1.50/24 dev enp3s0
ip a del 192.168.1.50/24 dev enp3s0
ip link set enp3s0 up
ip link set enp3s0 down
ip monitor
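
再补充几个常用的ip子命令,地址和接口名只是占位示例:

ip route show                                     # 查看路由表
ip route add default via 192.168.1.1 dev enp3s0   # 添加默认路由
ip neigh show                                     # 查看ARP/邻居表
ip -s link show enp3s0                            # 查看接口的收发包统计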

ip命令参考[https://www.cyberciti.biz/faq/linux-ip-command-examples-usage-syntax/]

iperf

网络吞吐量测试

测试TCP带宽

# 服务器
iperf -s                #默认是TCP
# 客户端
iperf -c 192.168.1.166

测试UDP带宽

# 服务器
iperf -u -s             #如果不设置-u选项,服务器默认是tcp,会出现read failed: Connection refused
# 客户端
iperf -u -c 192.168.1.166
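
客户端还有几个常用选项,服务器地址沿用上面的占位地址:

iperf -c 192.168.1.166 -P 4 -t 30   # 4个并发流,测试30秒
iperf -c 192.168.1.166 -i 2         # 每2秒输出一次中间结果
iperf -u -c 192.168.1.166 -b 900M   # UDP,目标带宽900Mbit/s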

ipfs

去中心化web文件系统

二进制下载安装

wget https://dist.ipfs.io/go-ipfs/v0.4.23/go-ipfs_v0.4.23_linux-arm64.tar.gz
tar -xf go-ipfs_v0.4.23_linux-arm64.tar.gz
sudo ./install.sh
ipfs
me@ubuntu:~$ ipfs add a.txt
added QmRAcHC1XgoZKC9hi3Uwdww18AcU9u7o7FsjiLxX98fNJv a.txt
 398 B / 398 B [=============================================================================================================================================================================] 100.00%me@ubuntu:~$ ipfs add a.png
added QmWQJSkFsqcrg97sSLKGzPgXq5aawr3GJCoE4b45rTk78A a.png
 77.07 KiB / 77.07 KiB [=====================================================================================================================================================================] 100.00%me@ubuntu:~$ ipfs add app.js
added QmaFR91MabEgYzZSoPv7PUkEmWMknGVoN9HGqypqNwdQz4 app.js
 234 B / 234 B [=============================================================================================================================================================================] 100.00%me@ubuntu:~$

go 包管理器下载安装

go get github.com/jbenet/go-ipfs/cmd/ipfs


ipmitool

ipmitool是通过IPMI管理服务器BMC的工具,可以连接到服务器的串口(SOL)输出

ipmitool -I lanplus -H 192.168.2.151 -U Administrator -P Adminpasscode sol activate
192.168.2.151 是IP地址
Administrator 是用户名
Adminpasscode 是密码

当发现ipmi无法使用时,可以另起session,kill掉现在的连接

ipmitool -I lanplus -H 192.168.2.151 -U Administrator -P Adminpasscode sol deactivate

如果ipmitool 连接到了目标单板但是没有输出。有两种设置方法:

方法一:修改BIOS设置

#开源版本
BIOS -> Device Manager -> Console Preference Selection -> Preferred console Serial
#产品版本
BIOS -> Advanced -> MISC Config -> Support SPCR  <Enabled>
                         BIOS Setup Utility V2.0
          Advanced
/--------------------------------------------------------+---------------------\
|                     MISC Config                        |    Help Message     |
|--------------------------------------------------------+---------------------|
|   Support Smmu                 <Enabled>               |Memory Print Level   |
|   Support GOP FB for SM750     <Disabled>              |Set. Disable: Do     |
|   Support SPCR                 <Enabled>               |not print any MRC    |
|                                                        |statement/ Minimum:  |
|   System Debug Level           <Debug>                 |Print the most       |
|   Memory Print Level           <Minimum>               |important(High       |
|   CPU Prefetching              <Enabled>               |level) MRC           |
|   Configuration                                        |statement/ Minmax:   |
|   Support Down Core            <Disabled>              |Print the            |
|                                                        |Mid-important(Mid    |
|                                                        |level) and most      |
|                                                        |important MRC        |
|                                                        |statement/ Maximum:  |
|                                                        |MRC statement        |
|                                                        |                     |

方法二:修改OS的/etc/default/grub,设置串口重定向, 鲲鹏设备在quiet后面添加 console=ttyAMA0,115200 , intel设备添加 console=ttyS0,115200

CentOS、RedHat:Kunpeng

GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap
rhgb quiet console=ttyAMA0,115200"

CentOS、RedHat:Intel

GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos00/root rd.lvm.lv=centos00/swap
      rhgb quiet console=ttyS0,115200"

ubuntu

GRUB_CMDLINE_LINUX="console=ttyAMA0,115200"

更新grub.cfg文件。

#RedHat
grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
#CentOS
grub2-mkconfig -o /boot/grub2/grub.cfg
#ubuntu
sudo grub-mkconfig -o /boot/grub/grub.cfg

设置结果, 可以查看grub.cfg

### BEGIN /etc/grub.d/10_linux ###
menuentry 'CentOS Linux (3.10.0-957.el7.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-957.el7.x86_64-advanced-bdd56b03-059d-4192-af2e-e70610dcd3d5' {
      load_video
      set gfxpayload=keep
      insmod gzio
      insmod part_msdos
      insmod xfs
      set root='hd0,msdos1'
      if [ x$feature_platform_search_hint = xy ]; then
         search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1'  934e58ff-667e-49df-9779-f6a32a7a98a5
      else
         search --no-floppy --fs-uuid --set=root 934e58ff-667e-49df-9779-f6a32a7a98a5
      fi
      linux16 /vmlinuz-3.10.0-957.el7.x86_64 root=/dev/mapper/centos00-root ro crashkernel=auto rd.lvm.lv=centos00/root rd.lvm.lv=centos00/swap rhgb quiet console=ttyAMA0,115200
      initrd16 /initramfs-3.10.0-957.el7.x86_64.img
}

警告

这里主要注意更新grub.cfg的方式,grub更多内容请参考 grub

以下所有命令都需要带上这样的连接参数前缀:

ipmitool -H 192.168.1.59 -I lanplus -U Administrator -P Adminpasscode

#电源管理:

ipmitool -H 192.168.1.59 -I lanplus -U Administrator -P Adminpasscode chassis power off     #(硬关机,直接切断电源)
ipmitool -H 192.168.1.59 -I lanplus -U Administrator -P Adminpasscode chassis power soft      #(软关机,即如同轻按一下开机按钮)
ipmitool -H 192.168.1.59 -I lanplus -U Administrator -P Adminpasscode chassis power on        #(硬开机)
ipmitool -H 192.168.1.59 -I lanplus -U Administrator -P Adminpasscode chassis power reset     #(硬重启,断电上电)
ipmitool -H 192.168.1.59 -I lanplus -U Administrator -P Adminpasscode chassis power status    #(获取当前电源状态)
ipmitool -H 192.168.1.59 -I lanplus -U Administrator -P Adminpasscode chassis power cycle #(断电1秒后上电)

上面的命令很长,每次打那么多字会太不友好了,可以进入ipmitool交互模式,后面直接输入命令就可以了。

ipmitool -I lanplus -H 192.168.1.233 -U Administrator -P Admin@9000 shell

远程引导(当次有效)

chassis bootdev pxe     #网络引导,重启后从PXE启动
chassis bootdev disk    #硬盘引导
chassis bootdev cdrom   #光驱引导
chassis bootdev bios    #重启后停在BIOS菜单

chassis bootdev 在1620/1620 CS上可用,需要在OS里面执行 systemctl reboot -i 重启后才生效。

读取系统状态

sensor list   #显示系统所有传感器列表
fru list   #显示系统所有现场可替代器件的列表
sdr list   #显示系统所有SDRRepository设备列表
pef list      #显示系统平台时间过滤的列表

#系统日志类

sel elist    #显示所有系统事件日志
sel clear    #删除所有系统事件日志
sel delete ID   #删除第ID条SEL
sel time get    #显示当前BMC的时间
sel time set    #设置当前BMC的时间

#BMC系统相关的命令

mc info             #显示BMC版本信息
bmc reset cold      #BMC冷启动
bmc reset warm      #BMC热启动

#通道相关命令

channel info #显示系统默认channel
channel authcap channel-number privilege  #查看通道在指定权限级别下的认证能力
channel getaccess channel-number user-id #读取用户在通道上的权限
channel setaccess channel-number user-id callin=on ipmi=on link=on privilege=5   #设置用户在通道上的权限
Channel 0x1 info:   #通道1
  Channel Medium Type   : 802.3 LAN
  Channel Protocol Type : IPMB-1.0
  Session Support       : multi-session
  Active Session Count  : 1
  Protocol Vendor ID    : 7154
  Volatile(active) Settings
    Alerting            : disabled
    Per-message Auth    : enabled
    User Level Auth     : enabled
    Access Mode         : always available
  Non-Volatile Settings
    Alerting            : enabled
    Per-message Auth    : enabled
    User Level Auth     : enabled
    Access Mode         : disabled

#网络接口相关命令

lan print                               #显示通道 1的网络配置信息
lan set 1 ipaddr 10.32.2.2              #设置通道 1的IP地址
lan set 1 netmask 255.255.0.0           #设置通道 1的netmask
lan set 4 defgw ipaddr 255.255.0.254    #设置通道 4的网关
lan set 2 defgw macaddr  <macaddr>      #设置通道 2的网关mac address
lan set 2 ipsrc dhcp                    #设置通道 2的ip 源在DHCP
lan set 3 ipsrc static                  #设置通道 3的ip是静态获得的

ipmitool -I lanplus -H 172.92.17.58 -U Administrator -P Admin@9000 raw 0x30 0x90 0x44 0x02 0x00 0x18 0xe1 0xc5 0xd8 0x67 #修改mac地址
                                                                                           0x00 0x18 0xe1 0xc5 0xd8 0x67 #mac地址,前面的raw数据是握手字段
                                                                                           00:18:e1:c5:d8:67             #实际mac地址

#看门狗相关命令

mc watchdog get #读取当前看门狗的设置
mc watchdog off     #关掉看门狗
mc watchdog reset   #在最近设置的计数器的基础上重启看门狗

#用户管理相关命令

ipmitool user list chan-id                      #显示某通道上的所有用户
ipmitool user set password <user id> [<password>]    #修改某用户的密码
ipmitool user disable      <user id>                 #禁止掉某用户
ipmitool user enable       <user id>                 #使能某用户
ipmitool user priv         <user id> <privilege level> [<channel number>] #修改某用户在某通道上的权限
ipmitool user test         <user id> <16|20> [<password>] #测试用户密码

#升级固件

ipmitool hpm upgrade <xxxxx.hpm> -z 25000 force

报错处理

[user1@localhost network-scripts]$ ipmitool
Could not open device at /dev/ipmi0 or /dev/ipmi/0 or /dev/ipmidev/0: No such file or directory

首先确保已经加载ipmi相关的内核模块

[user1@localhost ~]$ lsmod | grep ipmi
ipmi_poweroff         262144  0
ipmi_watchdog         262144  0
ipmi_si               262144  0
ipmi_devintf          262144  0
ipmi_msghandler       262144  4 ipmi_devintf,ipmi_si,ipmi_watchdog,ipmi_poweroff

如果没有,使用modprobe命令加载模块,如:

modprobe ipmi_poweroff

更多命令请参考 [1]

[1]https://blog.51cto.com/bovin/2128475

ipset

ip集合工具, 创建ip集合,iptables可以匹配集合中的内容,从而用单条干净的规则匹配整组地址。

http://ipset.netfilter.org/
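
下面是这种用法的一个最小示意,集合名和地址都是假设的:

ipset create blocklist hash:ip                               # 创建一个IP集合
ipset add blocklist 203.0.113.7                              # 向集合添加成员
iptables -I INPUT -m set --match-set blocklist src -j DROP   # 一条规则匹配整个集合
ipset list blocklist                                         # 查看集合内容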

iptables

iptables 是管理防火墙规则的工具,由iptables管理的规则会下发到netfilter进行应用。netfilter通过在内核协议栈中添加钩子函数来实现对数据包的匹配和过滤

iptables -L -v -n      # 列出所有链和匹配数据包,可以看到接收的数据包和丢弃的数量
iptables -Z            # 清除计数
iptables -t nat -S     # 列出nat规则
iptables -t nat -D ..  # 删除某条规则

iptables -S                 #查看添加的iptables规则
ufw status                  #查看防火墙规则
iptables -S                 #To list all IPv4 rules
ip6tables -S                #To list all IPv6 rules
iptables -L INPUT -v -n     #To list all rules for INPUT tables
iptables -S INPUT           #To list all rules for INPUT tables

iptables -L INPUT           #查看INPUT链的规则
iptables -L FORWARD
iptables -L OUTPUT
iptables -L
iptables -t filter -L
iptables -t raw -L
iptables -t security -L
iptables -t mangle -L
iptables -t nat -L -n -v    #查看nat表的规则, -v带数据包统计

iptables和ufw的关系

ufw是ubuntu的防火墙工具 :doc: ufw

ufw的设置会转变为iptables规则, iptables的规则ufw并不会管理。

执行ufw命令

ufw allow 22/tcp

查看添加的iptables规则

iptables -S
-A ufw-user-input -p tcp -m tcp --dport 22 -j ACCEPT

执行ufw命令

ufw allow 2222

查看iptables规则,端口2222的tcp和udp流量会被允许

-A ufw-user-input -p tcp -m tcp --dport 2222 -j ACCEPT
-A ufw-user-input -p udp -m udp --dport 2222 -j ACCEPT

反过来,手动添加iptables规则,并不会影响ufw

-A ufw-user-input -p tcp -m tcp --dport 3333 -j ACCEPT
-A ufw-user-input -p udp -m udp --dport 3333 -j ACCEPT


root@server:~/play_iptables# iptables -A ufw-user-input -p tcp -m tcp --dport 3333 -j ACCEPT
root@server:~/play_iptables#
root@server:~/play_iptables# ufw status
Status: active

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
2222                       ALLOW       Anywhere
33222                      ALLOW       Anywhere
33000                      ALLOW       Anywhere
22/tcp (v6)                ALLOW       Anywhere (v6)
2222 (v6)                  ALLOW       Anywhere (v6)

NAT转换, 注意,这两条规则在CentOS上,firewall-cmd --reload 的 时候会失效

iptables -t nat -A PREROUTING -p tcp --dport 3212 -j DNAT --to-destination 10.1.1.1:312
iptables -t nat -A POSTROUTING -p tcp -d 10.1.1.1 -j SNAT --to-source 10.1.1.5

firewall-cmd --zone=public --add-masquerade --permanent #目前需要添加这条才能工作,原因未知。

MASQUERADE 和 SNAT什么关系

在docker的iptables中就有这一条 docker iptables详解

iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE

为什么不写成

iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j SNAT --to-source 192.168.1.180

因为安装有docker的主机上并不是只有一个接口,也有多个主机ip,所以最终数据包不一定从192.168.1.180出去;主机的ip也可能会变,重启之后,匹配这条规则的回程数据包就会被发到已经不存在的192.168.1.180上。参考 [1]

MASQUERADE is an iptables target that can be used instead of SNAT target (source NAT) when external
ip of the inet interface is not known at the moment of writing the rule (when server gets external ip dynamically).
[1]https://askubuntu.com/a/466458/928809

journalctl

查看systemd的日志 [1]

Without arguments, all collected logs are shown unfiltered:

journalctl

Show all logs generated by the D-Bus executable:

journalctl /usr/bin/dbus-daemon

Show all kernel logs from previous boot:

journalctl -k -b -1

Show a live log display from a system service apache.service:

journalctl -f -u apache
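
再补充几个常用的过滤方式,unit名只是示例:

journalctl -u sshd --since "1 hour ago"   # 指定unit,限定时间范围
journalctl -p err -b                      # 本次启动以来err及以上级别的日志
journalctl -f                             # 跟随输出,类似tail -f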
[1]http://man7.org/linux/man-pages/man1/journalctl.1.html

kubernetes

下载安装kubectl [1]

如果是和minikube一起使用的话,只需要下载client端就可以了。

curl -OL https://dl.k8s.io/v1.18.0/kubernetes-client-linux-arm64.tar.gz

常用命令

kubectl cluster-info        # 查看集群信息
kubectl config view         #
-------------------
kubectl get                 # 列出资源
kubectl get nodes           # 查看节点信息
kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 # 创建deployments
kubectl get deployments     # 查看deployments
kubectl get pods            # 查看pods
kubectl get events          # 查看事件, 操作出错记录
kubectl get services        # 查看服务
---------------------
kubectl describe            # 显示资源详情
---------------------
kubectl logs                # 打印容器的日志
---------------------
kubectl exec                # 在一个容器中执行命令
---------------------
kubectl -n service rollout restart deployment <name>    # 重启服务

也可以设置命令自动补全 [6]

kubectl completion bash >/etc/bash_completion.d/kubectl
kubeadm completion bash >/etc/bash_completion.d/kubeadm

minikube [5] 官方未支持aarch64

简单概念

  • Matser 负责管理集群 [3]
  • Node 是一个VM或者是物理机, kubernetes 集群的 worker [4] 一个Node至少要运行
    • Kubelet 一个在Node上负责和Master沟通的进程,管理运行在Node上的容器。
    • 容器引擎,如Docker,拉取镜像,运行容器
  • Deployments 一个部署,描述使用什么镜像、多少个副本容器等配置。通过Kubernetes API告诉集群执行部署。

在aarch64上用minikube启动集群会失败,例如:
* Creating kvm2 VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
E0421 09:15:04.389896   79372 cache_images.go:86] CacheImage k8s.gcr.io/coredns-arm64:1.6.5 -> /home/user1/.minikube/cache/images/k8s.gcr.io/coredns-arm64_1.6.5 failed: write: MANIFEST_UNKNOWN: "fetch \"1.6.5\" from request \"/v2/coredns-arm64/manifests/1.6.5\"."
*
X Unable to start VM. Please investigate and run 'minikube delete' if possible
* Error: [DRIVER_CORRUPT] new host: Error attempting to get plugin server address for RPC: Failed to dial the plugin server in 10s
* Suggestion: The VM driver exited with an error, and may be corrupt. Run 'minikube start' with --alsologtostderr -v=8 to see the error
* Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/

安装部署集群

添加kubernetes软件源

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

检查所需的镜像是否能获得,然后初始化master:

kubeadm config images pull
sudo kubeadm init

为当前用户配置kubectl:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

安装网络插件:

kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml

kubectl get nodes #确认master ready

kubeadm token create --print-join-command
在每个worker节点上执行上面打印出来的join命令加入集群。

手动部署: [2]

加载br_netfilter

lsmod | grep br_netfilter
sudo modprobe br_netfilter

设置操作系统参数,br_netfilter没有加载的话是没有这两个变量的

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system    # 应用到系统

添加kubernetes软件源

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

加入集群

sudo kubeadm join 192.168.1.180:6443 --token yzep8d.7svs6hvljrhqk562 \
    --discovery-token-ca-cert-hash sha256:83e29e1b29c1a11cdcb067c5da9ae58d9e11c2c15dfaa092f5b0ce3aa625b0f9

haproxy

编辑配置文件

global
        daemon
defaults
        mode http

frontend k8s-api-server-in
        bind 0.0.0.0:8443
        mode tcp
        default_backend k8s-api-server-host

backend k8s-api-server-host
        balance roundrobin
        server master1 192.168.122.100:6443
        server master2 192.168.122.101:6443
        server master3 192.168.122.102:6443

启动服务

docker run -d --name my-haproxy \
    -v /etc/haproxy:/usr/local/etc/haproxy:ro \
    -p 8443:8443 \
    -p 1080:1080 \
    --restart always \
    haproxy:latest
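
启动容器之前可以先用同一个镜像检查配置文件语法;这里假设上面的配置保存为/etc/haproxy/haproxy.cfg,和docker run里的挂载路径一致:

docker run --rm -v /etc/haproxy:/usr/local/etc/haproxy:ro \
    haproxy:latest haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg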

kubernetes yaml

yaml文件描述:

apiVersion:api版本
kind:资源类型。可以是pod, node, configMap
metadata:元数据。 名称,标签,注解
spec:规格。 容器列表,volume
status:状态。 内部详细状态

问题记录

running with swap on is not supported. Please disable swap
user1@Arm64-server:~$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
I0510 21:10:40.951053   25602 version.go:240] remote version is much newer: v1.18.2; falling back to: stable-1.14
[init] Using Kubernetes version: v1.14.10
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.8. Latest validated version: 18.09
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

解决办法

sudo swapoff -a
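
swapoff -a只在本次开机内有效;想重启后仍然关闭swap,还需要把/etc/fstab里的swap条目注释掉。下面是一个示意(先备份fstab,sed的匹配假设fstab是标准的空白分隔格式):

sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab   # 注释掉所有swap行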
WARNING: kubeadm cannot validate component configs for API group
user1@Arm64-server:~$ kubeadm config images pull
W0511 23:20:25.155396   59650 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.18.2
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.18.2
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.18.2
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.18.2
[config/images] Pulled k8s.gcr.io/pause:3.2
[config/images] Pulled k8s.gcr.io/etcd:3.4.3-0
[config/images] Pulled k8s.gcr.io/coredns:1.6.7
Public key for is not installed
Public key for fdd1728b8dd0026e64a99ebb87d5b7a6c026a8e2f4796e383cc7ac43e7d7ccf2-kubelet-1.18.2-0.aarch64.rpm is not installed
Public key for 98b57cf856484f0d15a58705136d9319e57c5b80bea2eea93cf02bb2365651dc-kubernetes-cni-0.7.5-0.aarch64.rpm is not installed
Public key for socat-1.7.3.2-6.el8.aarch64.rpm is not installed. Failing package is: socat-1.7.3.2-6.el8.aarch64
GPG Keys are configured as: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
Public key for conntrack-tools-1.4.4-9.el8.aarch64.rpm is not installed. Failing package is: conntrack-tools-1.4.4-9.el8.aarch64
GPG Keys are configured as: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
Public key for iptables-1.8.2-16.el8.aarch64.rpm is not installed. Failing packa
"Failed to set locale, defaulting to C.UTF-8" in CentOS 8

解决办法

dnf install langpacks-en glibc-all-langpacks -y
UnicodeEncodeError: ‘ascii’ codec can’t encode character u’u2013’
[root@localhost ~]# dnf install -y kubelet kubeadm kubectl –disableexcludes=kubernetes
Last metadata expiration check: 0:00:07 ago on Mon 08 Jun 2020 07:56:12 PM CST.
No match for argument: kubelet
No match for argument: kubeadm
No match for argument: kubectl

File "/usr/lib/python2.7/site-packages/dnf/cli/commands/install.py", line 180, in _install_packages
    logger.info(msg, self.base.output.term.bold(pkg_spec))
File "/usr/lib/python2.7/site-packages/dnf/cli/term.py", line 247, in bold
    return self.color('bold', s)
File "/usr/lib/python2.7/site-packages/dnf/cli/term.py", line 243, in color
    return (self.MODE[color] + str(s) + self.MODE['normal'])
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2013' in position 0: ordinal not in range(128)

我的情况是kubernetes.yaml含有中文字符,修改exclude之后成功

dnf install -y kubelet kubeadm kubectl
[1]https://kubernetes.io/docs/setup/release/notes/#downloads-for-v1-18-0
[2]https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
[3]https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/
[4]https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-intro/
[5]https://kubernetes.io/docs/tasks/tools/install-minikube/
[6]https://kubernetes.io/docs/tasks/tools/install-kubectl/#enabling-shell-autocompletion
[7]https://console.cloud.google.com/gcr/images/google-containers/GLOBAL
[8]https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/

kubernetes persistent

kubernetes的基本理念, 向应用程序及开发人员隐藏真实的基础设施, 使他们不必担心基础设施的具体状态,并使应用程序可在大量云服务商 和数据企业直接进行功能迁移。

我们常见的emptyDir,hostPath,gitRepo,nfs卷,都需要开发人员知道底层存储技术的细节, 比如nfs的服务器地址, hostPath路径等, 如果应用发生迁移,在另一个节点上可能就找不到对应的卷了。

这个时候引入了持久卷PersistentVolume和持久卷声明PersistentVolumeClaim。 简单的说, 就是pod的开发人员发布持久卷声明PVC,说明 需要的存储容量、存储属性, kubernetes在集群管理员发布的持久卷PV中找到可以满足的卷,分配给pod开发人员。 集群管理使用什么存储介质对于 pod开发人员和应用程序是无感知的。
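
从pod开发人员的角度看,一个最小的PersistentVolumeClaim大致如下;名字和容量只是示意,集群里需要有能匹配的PV或默认StorageClass才会绑定成功:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
kubectl get pvc data-claim    # 绑定前状态为Pending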

kvm

有一天需要安装ceph集群来看看分布式系统的性能,想找几台机器来测测,一看发现至少需要3台,机器不够怎么办,起一个虚拟机。

有一天需要整点危险的事情,如果在服务器上搞,容易导致设备数据损坏,导致其他任务影响。

有一天需要装个操作系统看看redhat好还是ubuntu好, 在物理机上装实在太久了,用虚拟机好一些。

怎么搞虚拟机?和在Windows上一样装个VMware或者virtualbox,然后挂上ISO。以前都是这么搞的。坏处就是慢、久、坑。还是在linux上搞好一点。

KVM就是我们一直寻找的东西。下面在ARM64设备上进行操作。

看看你的服务器到底支不支持。

sudo apt install cpu-checker
sudo kvm-ok
me@ubuntu:~/virtual_machine$ sudo kvm-ok
[sudo] password for me:
INFO: /dev/kvm exists
KVM acceleration can be used

安装qemu工具

ubuntu18.04验证通过

sudo apt-get install qemu-kvm libvirt-bin bridge-utils virtinst
#如果需要图形化管理界面:
sudo apt-get install virt-manager

redhat8.0 CentOS7.6 arm验证通过

yum install qemu-kvm libvirt virt-install

创建一台虚拟机

可以想到:需要指定虚拟机的CPU、内存、硬盘,ISO文件等。 命令写成一行装不下,写成多行,把下面的命令保存为文件,添加执行权限,执行即可。

脚本1:./install_ubuntu.sh

在ubuntu18.04 安装一个ubuntu18.04虚机

#!/bin/bash
#install_ubuntu.sh
sudo virt-install               \
 --name ubuntu_1              \
 --description "ubuntu vm setup for ceph"       \
 --os-type linux                \
 --os-variant "ubuntu18.04"     \
 --memory 4096                  \
 --vcpus 2                      \
 --disk path=/var/lib/libvirt/images/ubuntu_1.img,bus=virtio,size=50  \
 --network bridge:virbr0                                \
 --accelerate                                           \
 --graphics vnc,listen=0.0.0.0,keymap=en-us             \
 --location /home/me/ubuntu-18.04-server-arm64.iso      \
 --extra-args console=ttyS0
--name ubuntu_1                 是虚拟机的名字,待会儿查看有多少台虚拟机时会列出来的名字,并不是虚拟机的主机名。
--os-variant "ubuntu18.04"      必须是指定的版本,可以使用命令查询osinfo-query os 如果缺少相应软件包:sudo apt install libosinfo-bin
--memory                        4096指定虚拟机的内存,以M为单位,这里是4个G。也就是4*1024
--vcpus 2                       指定虚拟机的CPU数量
--disk path                     指定虚拟机的硬盘文件,也就是虚拟机的硬盘,大小是50G。
--network bridge:virbr0         指定链接到的网桥,请用自己主机上对应的网桥,具体参考KVM网络配置
--graphics vnc,listen=0.0.0.0,keymap=en-us  据说可以用VNC看到图形界面, 我没有图形界面环境,没研究什么意思
--extra-args console=ttyS0      指定登录虚拟机的串口,非常重要,进入虚拟机有三种方式:SSH、VNC、串口,这里是串口的配置。

脚本2:./install_vm.sh

一个可供选择的简单脚本(没有vnc图形界面)。该脚本在redhat8.0上验证通过。

#!/bin/bash
virt-install \
  --name suse \
  --memory 2048 \
  --vcpus 2 \
  --disk size=20 \
  --cdrom /root/iso/SLE-15-SP1-Installer-DVD-aarch64-Beta4-DVD1.iso

脚本3:./install_vm.sh

在CentOS7.6上安装CentOS7.6

#!/bin/bash
virt-install \
  --name CentOS7.6 \
  --os-variant "centos7.0" \
  --memory 8192 \
  --vcpus 4 \
  --disk size=20 \
  --graphics vnc,listen=0.0.0.0,keymap=en-us \
  --location /home/me/isos/CentOS-7-aarch64-Minimal-1810.iso \
  --extra-args console=ttyS0

提示安装成功后可以使用命令查看设备。

[me@centos ~]$ virsh list --all
 Id    Name                           State
----------------------------------------------------
 1     CentOS7.6                      running
 2     2-centos7.6                    running

部署网络

ubuntu18.04网络配置文件:/etc/netplan/01-netcfg.yaml

CentOS7、redhat7.5、redhat8.0网络配置文件: /etc/sysconfig/network-scripts/ifcfg-enp1s0,参考linux网络操作

这里给出两个例子:

host机Bridge模式 ubuntu 18.04

路径一般是对的,文件名有可能不一样。

me@ubuntu:/etc/netplan$ cat 01-netcfg.yaml
# This file describes the network interfaces available on your system
# For more information, see netplan(5).
network:
  version: 2
  renderer: networkd
  ethernets:
    enahisic2i0:
      dhcp4: yes
    enahisic2i1:
      dhcp4: yes
    enahisic2i2:
      dhcp4: yes
    enahisic2i3:
      dhcp4: yes

  bridges:
        virbr0:
                interfaces: [enahisic2i0]
                dhcp4: yes
                addresses: [192.168.1.201/24]
                gateway4: 192.168.1.2
                nameservers:
                        addresses: [127.0.0.53]
me@ubuntu:/etc/netplan$

My host has four network ports, and the NIC enahisic2i0 carries the internal IP. After installing the KVM tools, the bridge virbr0 is created automatically and can be seen with ip a. Here enahisic2i0 is added to the bridge, so VMs created later attach to the same bridge and can reach the external network. Keep the gateway and nameservers the same as the host originally used.
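
After editing the YAML, apply it and check that the bridge took over the address (the interface names are the ones from my host above):

sudo netplan apply
ip a show virbr0     # the bridge should now hold the host IP
brctl show           # enahisic2i0 should be listed under virbr0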

Host bridge mode on CentOS 7.6

Set up the host network. On my machine the connected port is enp189s0f0, which normally obtains an IP address via DHCP. After installing KVM, a bridge device virbr0 is created. Configure virbr0 to obtain an IP address automatically and add enp189s0f0 as a slave device of virbr0.

sudo brctl addif virbr0 enp189s0f0  # add the interface to the virtual switch
sudo brctl show                     # show the result
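
One way to let virbr0 itself pick up an address by DHCP for the current session (my own sketch, not a step from the original text; for a persistent setup use the ifcfg files mentioned above):

#moving the IP from the port to the bridge can interrupt an existing SSH session
sudo dhclient -r enp189s0f0   # release the lease held by the physical port
sudo dhclient virbr0          # request an address on the bridge instead
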
VM configuration
virsh edit CentOS7.6

The interface section of a VM created with script 3 looks like this:

<interface type='user'>
  <mac address='52:54:00:bf:37:a0'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>

Change the type from user to bridge and add the source bridge element:

<interface type='bridge'>
  <mac address='52:54:00:bf:37:a0'/>
  <source bridge='virbr0'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>

Check the result:

[user1@centos ~]$ sudo brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.00182d04005c       yes             enp189s0f0
                                                      tap0
[user1@centos ~]$

If the newly added tap0 does not show up, shut the VM down and start it again:

virsh shutdown vm1
virsh start vm1

After this change a tap device is added to the host bridge automatically. When you enter the VM again you can see that it has obtained an address from the same DHCP server as the host:

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:0a:e3:0c brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.216/24 brd 192.168.2.255 scope global noprefixroute dynamic eth0
       valid_lft 86363sec preferred_lft 86363sec
    inet6 fe80::1be7:b0db:e5af:65ab/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::2a4a:917b:1d4a:a231/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

List the current VMs

virsh list --all

Log in to a VM via the serial console

virsh console ubuntu_1

Exit the serial console

ctrl + ]

Start a VM

virsh start ubuntu_2

Stop a VM

virsh shutdown ubuntu_2

Delete a VM

virsh destroy ubuntu_2
virsh undefine ubuntu_2
virsh undefine ubuntu_2 --nvram

Clone a VM

Sometimes installing a system on every VM is just too slow; simply copying an existing one is much better, and that is what the clone tool is for. Cloning requires the VM to be stopped; use the shutdown command above. After cloning, check that the VMs do not share the same MAC address. Current tools usually generate a new MAC automatically, which avoids the MAC conflict that would otherwise make a single DHCP-assigned IP bounce back and forth between two VMs.

sudo virt-clone \
        --original ubuntu_1     \
        --name ubuntu_7         \
        --auto-clone
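
To compare the MAC addresses of the original and the clone, virsh domiflist prints each domain's interface list:

virsh domiflist ubuntu_1     # MAC of the original VM
virsh domiflist ubuntu_7     # MAC of the clone; the two must differ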

It is strongly recommended that, once you have confirmed the MAC addresses differ, you restart the network service inside each VM and wait for DHCP to assign addresses. Since my guests run Ubuntu 18.04, my command is the one below; for other systems search for the equivalent.

sudo systemctl restart systemd-networkd.service

View network information

virsh net-list
virsh net-info default
virsh net-dhcp-leases default

Dynamically add a NIC

virsh attach-interface vm3 --type bridge --source br0
virsh detach-interface --domain vm3 --type bridge --mac 52:54:00:f8:bd:31

Attach or detach a disk

Sooner or later the 50 GB disk may not be enough, and you will want to attach another disk to the VM.

#Create a 100 GB disk image on the host; other image formats such as raw also exist, search for details
sudo qemu-img create -f qcow2 ubuntu_vm7_disk_100G 100G
#Inspect the image you just created
qemu-img info ubuntu_vm7_disk_100G
#Attach it to the VM; vdb must be a device name not yet used inside ubuntu_7, and the driver must be given with --subdriver=qcow2, otherwise the disk is not visible inside the VM
virsh attach-disk ubuntu_7 /var/lib/libvirt/images/ubuntu_vm7_disk_100G vdb --subdriver=qcow2
#Detaching uses the detach command
virsh detach-disk ubuntu_7 /var/lib/libvirt/images/ubuntu_vm7_disk_100G
#Now enter the VM
virsh console ubuntu_7
#The following commands show the new disk
fdisk -l
lsblk
#Create a file system
sudo mke2fs -t ext4 /dev/vdb
#Mount the disk
mount /dev/vdb /mnt/data_disk
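
If the new disk should survive reboots of the guest, a common approach (an addition here, not part of the original steps) is to mount it by UUID in /etc/fstab inside the VM:

#inside the VM (virsh console ubuntu_7)
blkid /dev/vdb                 # print the UUID of the new file system
#append an fstab entry using the UUID printed above (replace the placeholder)
echo 'UUID=<uuid-from-blkid>  /mnt/data_disk  ext4  defaults  0 2' | sudo tee -a /etc/fstab
sudo mount -a                  # verify that the entry mounts without errors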

#The other VMs are handled the same way
virsh attach-disk ubuntu_1 /var/lib/libvirt/images/ubuntu_vm1_disk_100G vdb --subdriver=qcow2
virsh attach-disk ubuntu_2 /var/lib/libvirt/images/ubuntu_vm2_disk_100G vdb --subdriver=qcow2
virsh attach-disk ubuntu_3 /var/lib/libvirt/images/ubuntu_vm3_disk_100G vdb --subdriver=qcow2
virsh attach-disk ubuntu_4 /var/lib/libvirt/images/ubuntu_vm4_disk_100G vdb --subdriver=qcow2
virsh attach-disk ubuntu_5 /var/lib/libvirt/images/ubuntu_vm5_disk_100G vdb --subdriver=qcow2
virsh attach-disk ubuntu_6 /var/lib/libvirt/images/ubuntu_vm6_disk_100G vdb --subdriver=qcow2

Edit the VM configuration file

virsh edit ubuntu_1

From it you can find the path where the image files are stored:

/var/lib/libvirt/images

Log files

$HOME/.virtinst/virt-install.log        #virt-install tool log file.
$HOME/.virt-manager/virt-manager.log    #virt-manager tool log file.
/var/log/libvirt/qemu/                  #VM runtime logs, one file per VM
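
In addition, on systemd-based hosts the libvirtd daemon's own messages can be read from the journal (an extra pointer, not part of the original list):

journalctl -u libvirtd --since today     # libvirtd daemon log
ls /var/log/libvirt/qemu/                # per-VM log files, e.g. ubuntu_1.log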

NAT network mode

The bridge mode above covers most common needs. If you do not want the external network to see the VMs' network layout, you can use NAT mode instead.

By default a network named default is already running:

[user1@kunpeng920 ~]$ sudo virsh net-list --all
[sudo] password for user1:
Name                 State      Autostart     Persistent
----------------------------------------------------------
default              active     yes           yes

sudo virsh net-edit default shows the network definition; it contains a bridge named virbr0:

<network>
<name>default</name>
<uuid>17642016-bbbc-48e0-9404-bbd0b5d3f74b</uuid>
<forward mode='nat'/>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:31:10:e8'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
   <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
   </dhcp>
</ip>
</network>

Now you only need to edit vm1's configuration with virsh edit vm1: set the interface type to bridge, set the source bridge to virbr0, and start the VM.

<interface type='bridge'>
  <mac address='52:54:00:c0:29:14'/>
  <source bridge='virbr0'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>

Problem: cannot connect to libvirt-sock

[root@localhost ~]# ./install_vm.sh
ERROR    Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory

Solution

systemctl start libvirtd

Problem: the ISO cannot be read, permission denied

Starting install...
Allocating 'suse-02.qcow2'                                                                                                                       |  20 GB  00:00:01
ERROR    internal error: qemu unexpectedly closed the monitor: 2019-03-01T03:15:50.278936Z qemu-kvm: -drive file=/root/iso/SLE-15-SP1-Installer-DVD-aarch64-Beta4-DVD1.iso,format=raw,if=none,id=drive-scsi0-0-0-1,readonly=on: Could not open '/root/iso/SLE-15-SP1-Installer-DVD-aarch64-Beta4-DVD1.iso': Permission denied
Removing disk 'suse-02.qcow2'                                                                                                                    |    0 B  00:00:00
Domain installation does not appear to have been successful.

Solution

vim /etc/libvirt/qemu.conf

Uncomment user = "root" and group = "root", then restart libvirtd:

#
user = "root"

# The group for QEMU processes run by the system instance. It can be
# specified in a similar way to user.
group = "root"

# Whether libvirt should dynamically change file ownership

systemctl restart libvirtd

Problem: unsupported configuration: ACPI requires UEFI on this architecture

[me@centos bin]$ ./install_vm.sh
WARNING  Couldn't configure UEFI: Did not find any UEFI binary path for arch 'aarch64'
WARNING  Your aarch64 VM may not boot successfully.

Starting install...
Retrieving file .treeinfo...                                                   |  274 B  00:00:00
Retrieving file vmlinuz...                                                     | 5.8 MB  00:00:00
Retrieving file initrd.img...                                                  |  41 MB  00:00:00
Allocating 'CentOS7.6.qcow2'                                                   |  20 GB  00:00:00
ERROR    unsupported configuration: ACPI requires UEFI on this architecture
Removing disk 'CentOS7.6.qcow2'                                                |    0 B  00:00:00
Domain installation does not appear to have been successful.
If it was, you can restart your domain by running:
  virsh --connect qemu:///session start CentOS7.6
otherwise, please restart your installation.

CentOS solution

yum install AAVMF
AAVMF.noarch : UEFI firmware for aarch64 virtual machines

Ubuntu solution

sudo apt install qemu-efi-aarch64/bionic-updates

Problem: error: Refusing to undefine while domain managed save image exists

[me@centos instruction_set]$ virsh undefine vm1
error: Refusing to undefine while domain managed save image exists

Solution:

[me@centos instruction_set]$ virsh managedsave-remove --domain vm1
Removed managedsave image for domain vm1
[me@centos instruction_set]$ virsh undefine --nvram --remove-all-storage vm1
Domain vm1 has been undefined
Volume 'sda'(/home/me/.local/share/libvirt/images/CentOS7.6.qcow2) removed.

Converting between qemu command-line arguments and libvirt XML

See [1]
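
libvirt's own virsh can also do this conversion; a minimal sketch (the file names are just placeholders, and the exact option syntax can vary slightly between libvirt versions):

#libvirt XML -> qemu command line
virsh dumpxml vm1 > vm1.xml
virsh domxml-to-native qemu-argv vm1.xml
#qemu command line (saved to a file) -> libvirt XML
virsh domxml-from-native qemu-argv vm1-qemu-args.txt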

Problem: the XML shown by virsh edit differs from the XML shown by virsh dumpxml

Result of virsh edit:

<interface type='bridge'>
  <mac address='52:54:00:38:06:f9'/>
  <source bridge='br0'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>

Result of virsh dumpxml:

[user1@centos ~]$ virsh dumpxml vm1 | grep interface -A 10
    <interface type='user'>
      <mac address='52:54:00:38:06:f9'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>

Solution:

Soft-reboot isn’t good enough because it doesn’t restart the qemu process and doesn’t use new XML. You need to shutdown and start the VM again in order to load the new XML. [2]

virsh shutdown vm1
virsh start vm1

Problem: failed to communicate with bridge

[user1@centos ~]$
[user1@centos ~]$ virsh start vm1
error: Failed to start domain vm1
error: internal error: /usr/libexec/qemu-bridge-helper --use-vnet --br=br0 --fd=27: failed to communicate with bridge helper: Transport endpoint is not connected
stderr=access denied by acl file

Solution:

On the host, edit vim /etc/qemu-kvm/bridge.conf; on other systems the file may be vim /etc/qemu/bridge.conf [3]

[user1@centos ~]$ sudo cat /etc/qemu-kvm/bridge.conf
allow virbr0
allow br0

Open questions

Can KVM run x86 Linux?

error: unexpected data '-all'
[root@192e168e100e118 ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 1     instance-8e278c38-2559-4499-81af-37166cf78f3d running

[root@192e168e100e118 ~]# virsh console instance-8e278c38-2559-4499-81af-37166cf78f3d
Connected to domain instance-8e278c38-2559-4499-81af-37166cf78f3d
Escape character is ^]

CentOS Linux 7 (Core)
Kernel 3.10.0-862.el7.x86_64 on an x86_64

ceshi-03 login:
[1]https://blog.csdn.net/beckdon/article/details/50883754
[2]https://bugzilla.redhat.com/show_bug.cgi?id=1347219
[3]https://mike42.me/blog/2019-08-how-to-use-the-qemu-bridge-helper-on-debian-10

lastb

View failed login records, with the source shown as an IP address

A machine on the public internet is constantly hit by login attempts from others. Use:

lastb -i

to see the failed login records:

test     ssh:notty    116.89.189.37    Tue Nov 26 12:45 - 12:45  (00:00)
debian   ssh:notty    116.89.189.37    Tue Nov 26 12:45 - 12:45  (00:00)
centos   ssh:notty    116.89.189.37    Tue Nov 26 12:45 - 12:45  (00:00)
ubuntu   ssh:notty    116.89.189.37    Tue Nov 26 12:44 - 12:44  (00:00)
mysql    ssh:notty    116.89.189.37    Tue Nov 26 12:44 - 12:44  (00:00)
user     ssh:notty    116.89.189.37    Tue Nov 26 12:44 - 12:44  (00:00)
git      ssh:notty    116.89.189.37    Tue Nov 26 12:43 - 12:43  (00:00)
postgres ssh:notty    116.89.189.37    Tue Nov 26 12:43 - 12:43  (00:00)
oracle   ssh:notty    116.89.189.37    Tue Nov 26 12:43 - 12:43  (00:00)
db       ssh:notty    116.89.189.37    Tue Nov 26 12:42 - 12:42  (00:00)
database ssh:notty    116.89.189.37    Tue Nov 26 12:42 - 12:42  (00:00)
panel    ssh:notty    116.89.189.37    Tue Nov 26 12:42 - 12:42  (00:00)
server   ssh:notty    116.89.189.37    Tue Nov 26 12:41 - 12:41  (00:00)
server   ssh:notty    116.89.189.37    Tue Nov 26 00:04 - 00:04  (00:00)
debian   ssh:notty    116.89.189.37    Mon Nov 25 04:10 - 04:10  (00:00)
centos   ssh:notty    116.89.189.37    Mon Nov 25 04:10 - 04:10  (00:00)
ubuntu   ssh:notty    116.89.189.37    Mon Nov 25 04:09 - 04:09  (00:00)
mysql    ssh:notty    116.89.189.37    Mon Nov 25 04:09 - 04:09  (00:00)
user     ssh:notty    116.89.189.37    Mon Nov 25 04:09 - 04:09  (00:00)
git      ssh:notty    116.89.189.37    Mon Nov 25 04:08 - 04:08  (00:00)
postgres ssh:notty    116.89.189.37    Mon Nov 25 04:08 - 04:08  (00:00)
oracle   ssh:notty    116.89.189.37    Mon Nov 25 04:08 - 04:08  (00:00)
db       ssh:notty    116.89.189.37    Mon Nov 25 04:08 - 04:08  (00:00)
database ssh:notty    116.89.189.37    Mon Nov 25 04:07 - 04:07  (00:00)
panel    ssh:notty    116.89.189.37    Mon Nov 25 04:07 - 04:07  (00:00)
server   ssh:notty    116.89.189.37    Mon Nov 25 04:07 - 04:07  (00:00)
test     ssh:notty    116.89.189.37    Sun Nov 24 12:31 - 12:31  (00:00)
debian   ssh:notty    116.89.189.37    Sun Nov 24 12:31 - 12:31  (00:00)
centos   ssh:notty    116.89.189.37    Sun Nov 24 12:31 - 12:31  (00:00)
ubuntu   ssh:notty    116.89.189.37    Sun Nov 24 12:31 - 12:31  (00:00)
mysql    ssh:notty    116.89.189.37    Sun Nov 24 12:30 - 12:30  (00:00)
user     ssh:notty    116.89.189.37    Sun Nov 24 12:30 - 12:30  (00:00)
git      ssh:notty    116.89.189.37    Sun Nov 24 12:30 - 12:30  (00:00)
postgres ssh:notty    116.89.189.37    Sun Nov 24 12:29 - 12:29  (00:00)
oracle   ssh:notty    116.89.189.37    Sun Nov 24 12:29 - 12:29  (00:00)
db       ssh:notty    116.89.189.37    Sun Nov 24 12:29 - 12:29  (00:00)
database ssh:notty    116.89.189.37    Sun Nov 24 12:29 - 12:29  (00:00)
panel    ssh:notty    116.89.189.37    Sun Nov 24 12:28 - 12:28  (00:00)
server   ssh:notty    116.89.189.37    Sun Nov 24 12:28 - 12:28  (00:00)
test     ssh:notty    116.89.189.37    Sat Nov 23 20:10 - 20:10  (00:00)
debian   ssh:notty    116.89.189.37    Sat Nov 23 20:10 - 20:10  (00:00)
centos   ssh:notty    116.89.189.37    Sat Nov 23 20:10 - 20:10  (00:00)
ubuntu   ssh:notty    116.89.189.37    Sat Nov 23 20:10 - 20:10  (00:00)
mysql    ssh:notty    116.89.189.37    Sat Nov 23 20:09 - 20:09  (00:00)
user     ssh:notty    116.89.189.37    Sat Nov 23 20:09 - 20:09  (00:00)
git      ssh:notty    116.89.189.37    Sat Nov 23 20:08 - 20:08  (00:00)
postgres ssh:notty    116.89.189.37    Sat Nov 23 20:08 - 20:08  (00:00)
oracle   ssh:notty    116.89.189.37    Sat Nov 23 20:08 - 20:08  (00:00)
db       ssh:notty    116.89.189.37    Sat Nov 23 20:07 - 20:07  (00:00)
database ssh:notty    116.89.189.37    Sat Nov 23 20:07 - 20:07  (00:00)
panel    ssh:notty    116.89.189.37    Sat Nov 23 20:07 - 20:07  (00:00)
server   ssh:notty    116.89.189.37    Sat Nov 23 20:06 - 20:06  (00:00)
pi       ssh:notty    179.60.167.231   Sun Nov 17 08:22 - 08:22  (00:00)
pi       ssh:notty    179.60.167.231   Sun Nov 17 08:22 - 08:22  (00:00)
pi       ssh:notty    179.60.167.231   Sun Nov 17 08:22 - 08:22  (00:00)
plexuser ssh:notty    179.60.167.231   Sun Nov 17 08:22 - 08:22  (00:00)
misp     ssh:notty    179.60.167.231   Sun Nov 17 08:22 - 08:22  (00:00)
nexthink ssh:notty    179.60.167.231   Sun Nov 17 08:22 - 08:22  (00:00)
NetLinx  ssh:notty    179.60.167.231   Sun Nov 17 08:22 - 08:22  (00:00)
openhabi ssh:notty    179.60.167.231   Sun Nov 17 08:22 - 08:22  (00:00)
osbash   ssh:notty    179.60.167.231   Sun Nov 17 08:21 - 08:21  (00:00)
netscree ssh:notty    179.60.167.231   Sun Nov 17 08:21 - 08:21  (00:00)
support  ssh:notty    179.60.167.231   Sun Nov 17 08:21 - 08:21  (00:00)
osboxes  ssh:notty    179.60.167.231   Sun Nov 17 08:21 - 08:21  (00:00)
ubnt     ssh:notty    179.60.167.231   Sun Nov 17 08:21 - 08:21  (00:00)
admin    ssh:notty    179.60.167.231   Sun Nov 17 08:21 - 08:21  (00:00)
admin    ssh:notty    179.60.167.231   Sun Nov 17 08:21 - 08:21  (00:00)
admin    ssh:notty    179.60.167.231   Sun Nov 17 08:21 - 08:21  (00:00)
admin    ssh:notty    179.60.167.231   Sun Nov 17 08:21 - 08:21  (00:00)
admin    ssh:notty    179.60.167.231   Sun Nov 17 08:21 - 08:21  (00:00)
admin    ssh:notty    179.60.167.231   Sun Nov 17 08:21 - 08:21  (00:00)
vx       ssh:notty    92.222.72.234    Sat Nov 16 16:35 - 16:35  (00:00)
vx       ssh:notty    92.222.72.234    Sat Nov 16 16:35 - 16:35  (00:00)
hhhhh    ssh:notty    59.145.221.103   Sat Nov 16 16:35 - 16:35  (00:00)
hhhhh    ssh:notty    59.145.221.103   Sat Nov 16 16:35 - 16:35  (00:00)
root     ssh:notty    124.238.116.155  Sat Nov 16 16:34 - 16:34  (00:00)
hovengen ssh:notty    178.128.209.176  Sat Nov 16 16:34 - 16:34  (00:00)
hovengen ssh:notty    178.128.209.176  Sat Nov 16 16:34 - 16:34  (00:00)
margaux  ssh:notty    123.207.241.223  Sat Nov 16 16:33 - 16:33  (00:00)
margaux  ssh:notty    123.207.241.223  Sat Nov 16 16:33 - 16:33  (00:00)
root     ssh:notty    106.13.16.205    Sat Nov 16 16:32 - 16:32  (00:00)
javierma ssh:notty    104.50.8.212     Sat Nov 16 16:32 - 16:32  (00:00)
javierma ssh:notty    104.50.8.212     Sat Nov 16 16:32 - 16:32  (00:00)
root     ssh:notty    92.222.72.234    Sat Nov 16 16:31 - 16:31  (00:00)
joker    ssh:notty    59.145.221.103   Sat Nov 16 16:29 - 16:29  (00:00)
joker    ssh:notty    59.145.221.103   Sat Nov 16 16:29 - 16:29  (00:00)
akon     ssh:notty    124.238.116.155  Sat Nov 16 16:29 - 16:29  (00:00)
akon     ssh:notty    124.238.116.155  Sat Nov 16 16:29 - 16:29  (00:00)
test     ssh:notty    178.128.209.176  Sat Nov 16 16:29 - 16:29  (00:00)
test     ssh:notty    178.128.209.176  Sat Nov 16 16:29 - 16:29  (00:00)
root     ssh:notty    123.207.241.223  Sat Nov 16 16:29 - 16:29  (00:00)
ssh      ssh:notty    104.50.8.212     Sat Nov 16 16:28 - 16:28  (00:00)
ssh      ssh:notty    104.50.8.212     Sat Nov 16 16:28 - 16:28  (00:00)
admin    ssh:notty    106.13.16.205    Sat Nov 16 16:27 - 16:27  (00:00)
admin    ssh:notty    106.13.16.205    Sat Nov 16 16:27 - 16:27  (00:00)
backup   ssh:notty    92.222.72.234    Sat Nov 16 16:26 - 16:26  (00:00)
moczygem ssh:notty    178.128.209.176  Sat Nov 16 16:25 - 16:25  (00:00)
moczygem ssh:notty    178.128.209.176  Sat Nov 16 16:25 - 16:25  (00:00)
inderpal ssh:notty    124.238.116.155  Sat Nov 16 16:25 - 16:25  (00:00)
inderpal ssh:notty    124.238.116.155  Sat Nov 16 16:25 - 16:25  (00:00)
leahy    ssh:notty    123.207.241.223  Sat Nov 16 16:24 - 16:24  (00:00)
leahy    ssh:notty    123.207.241.223  Sat Nov 16 16:24 - 16:24  (00:00)
@@@@     ssh:notty    59.145.221.103   Sat Nov 16 16:24 - 16:24  (00:00)
@@@@     ssh:notty    59.145.221.103   Sat Nov 16 16:24 - 16:24  (00:00)
vinluan  ssh:notty    104.50.8.212     Sat Nov 16 16:24 - 16:24  (00:00)
vinluan  ssh:notty    104.50.8.212     Sat Nov 16 16:24 - 16:24  (00:00)
kjetsaa  ssh:notty    106.13.16.205    Sat Nov 16 16:23 - 16:23  (00:00)
kjetsaa  ssh:notty    106.13.16.205    Sat Nov 16 16:23 - 16:23  (00:00)
root     ssh:notty    92.222.72.234    Sat Nov 16 16:22 - 16:22  (00:00)
news     ssh:notty    178.128.209.176  Sat Nov 16 16:21 - 16:21  (00:00)
stevy    ssh:notty    124.238.116.155  Sat Nov 16 16:20 - 16:20  (00:00)
stevy    ssh:notty    124.238.116.155  Sat Nov 16 16:20 - 16:20  (00:00)
gina     ssh:notty    104.50.8.212     Sat Nov 16 16:20 - 16:20  (00:00)
gina     ssh:notty    104.50.8.212     Sat Nov 16 16:20 - 16:20  (00:00)
mysql    ssh:notty    123.207.241.223  Sat Nov 16 16:19 - 16:19  (00:00)
mysql    ssh:notty    123.207.241.223  Sat Nov 16 16:19 - 16:19  (00:00)
com      ssh:notty    59.145.221.103   Sat Nov 16 16:19 - 16:19  (00:00)
com      ssh:notty    59.145.221.103   Sat Nov 16 16:19 - 16:19  (00:00)
root     ssh:notty    106.13.16.205    Sat Nov 16 16:18 - 16:18  (00:00)
connie   ssh:notty    92.222.72.234    Sat Nov 16 16:17 - 16:17  (00:00)
connie   ssh:notty    92.222.72.234    Sat Nov 16 16:17 - 16:17  (00:00)
sitasube ssh:notty    178.128.209.176  Sat Nov 16 16:16 - 16:16  (00:00)
sitasube ssh:notty    178.128.209.176  Sat Nov 16 16:16 - 16:16  (00:00)
user     ssh:notty    104.50.8.212     Sat Nov 16 16:15 - 16:15  (00:00)
user     ssh:notty    104.50.8.212     Sat Nov 16 16:15 - 16:15  (00:00)
root     ssh:notty    124.238.116.155  Sat Nov 16 16:15 - 16:15  (00:00)
gisele   ssh:notty    123.207.241.223  Sat Nov 16 16:15 - 16:15  (00:00)
gisele   ssh:notty    123.207.241.223  Sat Nov 16 16:15 - 16:15  (00:00)
squid    ssh:notty    122.228.89.95    Sat Nov 16 16:14 - 16:14  (00:00)
squid    ssh:notty    122.228.89.95    Sat Nov 16 16:14 - 16:14  (00:00)
root     ssh:notty    59.145.221.103   Sat Nov 16 16:13 - 16:13  (00:00)
admin    ssh:notty    92.222.72.234    Sat Nov 16 16:12 - 16:12  (00:00)
admin    ssh:notty    92.222.72.234    Sat Nov 16 16:12 - 16:12  (00:00)
root     ssh:notty    106.13.16.205    Sat Nov 16 16:12 - 16:12  (00:00)
apache   ssh:notty    178.128.209.176  Sat Nov 16 16:12 - 16:12  (00:00)
apache   ssh:notty    178.128.209.176  Sat Nov 16 16:12 - 16:12  (00:00)
root     ssh:notty    104.50.8.212     Sat Nov 16 16:11 - 16:11  (00:00)
webadmin ssh:notty    122.228.89.95    Sat Nov 16 16:10 - 16:10  (00:00)
webadmin ssh:notty    122.228.89.95    Sat Nov 16 16:10 - 16:10  (00:00)
mysql    ssh:notty    124.238.116.155  Sat Nov 16 16:10 - 16:10  (00:00)
mysql    ssh:notty    124.238.116.155  Sat Nov 16 16:10 - 16:10  (00:00)
douville ssh:notty    59.145.221.103   Sat Nov 16 16:08 - 16:08  (00:00)
douville ssh:notty    59.145.221.103   Sat Nov 16 16:08 - 16:08  (00:00)
root     ssh:notty    178.128.209.176  Sat Nov 16 16:08 - 16:08  (00:00)
f030     ssh:notty    92.222.72.234    Sat Nov 16 16:08 - 16:08  (00:00)
f030     ssh:notty    92.222.72.234    Sat Nov 16 16:08 - 16:08  (00:00)
games    ssh:notty    106.13.16.205    Sat Nov 16 16:07 - 16:07  (00:00)
demetria ssh:notty    104.50.8.212     Sat Nov 16 16:07 - 16:07  (00:00)
demetria ssh:notty    104.50.8.212     Sat Nov 16 16:07 - 16:07  (00:00)
prema    ssh:notty    122.228.89.95    Sat Nov 16 16:06 - 16:06  (00:00)
prema    ssh:notty    122.228.89.95    Sat Nov 16 16:06 - 16:06  (00:00)
root     ssh:notty    124.238.116.155  Sat Nov 16 16:05 - 16:05  (00:00)
kurose   ssh:notty    123.207.241.223  Sat Nov 16 16:05 - 16:05  (00:00)
kurose   ssh:notty    123.207.241.223  Sat Nov 16 16:05 - 16:05  (00:00)
root     ssh:notty    178.128.209.176  Sat Nov 16 16:04 - 16:04  (00:00)
juste    ssh:notty    104.50.8.212     Sat Nov 16 16:03 - 16:03  (00:00)
juste    ssh:notty    104.50.8.212     Sat Nov 16 16:03 - 16:03  (00:00)
haleyrya ssh:notty    92.222.72.234    Sat Nov 16 16:03 - 16:03  (00:00)
jkamande ssh:notty    59.145.221.103   Sat Nov 16 16:03 - 16:03  (00:00)
haleyrya ssh:notty    92.222.72.234    Sat Nov 16 16:03 - 16:03  (00:00)
jkamande ssh:notty    59.145.221.103   Sat Nov 16 16:03 - 16:03  (00:00)
christop ssh:notty    106.13.16.205    Sat Nov 16 16:02 - 16:02  (00:00)
christop ssh:notty    106.13.16.205    Sat Nov 16 16:02 - 16:02  (00:00)
root     ssh:notty    122.228.89.95    Sat Nov 16 16:01 - 16:01  (00:00)
svn      ssh:notty    124.238.116.155  Sat Nov 16 16:00 - 16:00  (00:00)
svn      ssh:notty    124.238.116.155  Sat Nov 16 16:00 - 16:00  (00:00)
august   ssh:notty    123.207.241.223  Sat Nov 16 16:00 - 16:00  (00:00)
august   ssh:notty    123.207.241.223  Sat Nov 16 16:00 - 16:00  (00:00)
granet   ssh:notty    178.128.209.176  Sat Nov 16 15:59 - 15:59  (00:00)
granet   ssh:notty    178.128.209.176  Sat Nov 16 15:59 - 15:59  (00:00)
test     ssh:notty    104.50.8.212     Sat Nov 16 15:59 - 15:59  (00:00)
test     ssh:notty    104.50.8.212     Sat Nov 16 15:59 - 15:59  (00:00)
backup   ssh:notty    106.13.16.205    Sat Nov 16 15:58 - 15:58  (00:00)
passari  ssh:notty    106.13.65.18     Sat Nov 16 15:57 - 15:57  (00:00)
passari  ssh:notty    106.13.65.18     Sat Nov 16 15:57 - 15:57  (00:00)
root     ssh:notty    59.145.221.103   Sat Nov 16 15:57 - 15:57  (00:00)
root     ssh:notty    122.228.89.95    Sat Nov 16 15:57 - 15:57  (00:00)
root     ssh:notty    222.186.180.8    Sat Nov 16 15:57 - 15:57  (00:00)
root     ssh:notty    222.186.180.8    Sat Nov 16 15:57 - 15:57  (00:00)
root     ssh:notty    222.186.180.8    Sat Nov 16 15:57 - 15:57  (00:00)
root     ssh:notty    222.186.180.8    Sat Nov 16 15:57 - 15:57  (00:00)
root     ssh:notty    222.186.180.8    Sat Nov 16 15:57 - 15:57  (00:00)
root     ssh:notty    124.238.116.155  Sat Nov 16 15:56 - 15:56  (00:00)
test1234 ssh:notty    123.207.241.223  Sat Nov 16 15:55 - 15:55  (00:00)
test1234 ssh:notty    123.207.241.223  Sat Nov 16 15:55 - 15:55  (00:00)
bruzzese ssh:notty    104.50.8.212     Sat Nov 16 15:55 - 15:55  (00:00)
bruzzese ssh:notty    104.50.8.212     Sat Nov 16 15:54 - 15:54  (00:00)
odroid   ssh:notty    106.13.16.205    Sat Nov 16 15:53 - 15:53  (00:00)
odroid   ssh:notty    106.13.16.205    Sat Nov 16 15:53 - 15:53  (00:00)
gilares  ssh:notty    122.228.89.95    Sat Nov 16 15:53 - 15:53  (00:00)
gilares  ssh:notty    122.228.89.95    Sat Nov 16 15:53 - 15:53  (00:00)
jorgus   ssh:notty    106.13.65.18     Sat Nov 16 15:52 - 15:52  (00:00)
jorgus   ssh:notty    106.13.65.18     Sat Nov 16 15:52 - 15:52  (00:00)
1234     ssh:notty    92.222.72.234    Sat Nov 16 15:51 - 15:51  (00:00)
1234     ssh:notty    92.222.72.234    Sat Nov 16 15:51 - 15:51  (00:00)
muh      ssh:notty    106.13.93.161    Sat Nov 16 15:51 - 15:51  (00:00)
muh      ssh:notty    106.13.93.161    Sat Nov 16 15:51 - 15:51  (00:00)
duke!@#  ssh:notty    123.207.241.223  Sat Nov 16 15:51 - 15:51  (00:00)
duke!@#  ssh:notty    123.207.241.223  Sat Nov 16 15:51 - 15:51  (00:00)
bos      ssh:notty    124.238.116.155  Sat Nov 16 15:51 - 15:51  (00:00)
bos      ssh:notty    124.238.116.155  Sat Nov 16 15:51 - 15:51  (00:00)
ftpuser  ssh:notty    104.50.8.212     Sat Nov 16 15:50 - 15:50  (00:00)
ftpuser  ssh:notty    104.50.8.212     Sat Nov 16 15:50 - 15:50  (00:00)
root     ssh:notty    23.247.33.61     Sat Nov 16 15:50 - 15:50  (00:00)
lux      ssh:notty    122.228.89.95    Sat Nov 16 15:49 - 15:49  (00:00)
lux      ssh:notty    122.228.89.95    Sat Nov 16 15:49 - 15:49  (00:00)
soowon   ssh:notty    106.13.16.205    Sat Nov 16 15:48 - 15:48  (00:00)
soowon   ssh:notty    106.13.16.205    Sat Nov 16 15:48 - 15:48  (00:00)
champagn ssh:notty    106.13.65.18     Sat Nov 16 15:48 - 15:48  (00:00)
champagn ssh:notty    106.13.65.18     Sat Nov 16 15:47 - 15:47  (00:00)
hung     ssh:notty    178.128.209.176  Sat Nov 16 15:47 - 15:47  (00:00)
hung     ssh:notty    178.128.209.176  Sat Nov 16 15:46 - 15:46  (00:00)
hatim    ssh:notty    104.50.8.212     Sat Nov 16 15:46 - 15:46  (00:00)
root     ssh:notty    222.186.169.194  Sat Nov 16 15:46 - 15:46  (00:00)
hatim    ssh:notty    104.50.8.212     Sat Nov 16 15:46 - 15:46  (00:00)
root     ssh:notty    23.247.33.61     Sat Nov 16 15:46 - 15:46  (00:00)
root     ssh:notty    222.186.169.194  Sat Nov 16 15:46 - 15:46  (00:00)
root     ssh:notty    222.186.169.194  Sat Nov 16 15:46 - 15:46  (00:00)
weiyi    ssh:notty    123.207.241.223  Sat Nov 16 15:46 - 15:46  (00:00)
weiyi    ssh:notty    123.207.241.223  Sat Nov 16 15:46 - 15:46  (00:00)
root     ssh:notty    222.186.169.194  Sat Nov 16 15:46 - 15:46  (00:00)
abc123   ssh:notty    124.238.116.155  Sat Nov 16 15:46 - 15:46  (00:00)
root     ssh:notty    222.186.169.194  Sat Nov 16 15:46 - 15:46  (00:00)
abc123   ssh:notty    124.238.116.155  Sat Nov 16 15:46 - 15:46  (00:00)
root     ssh:notty    106.13.93.161    Sat Nov 16 15:46 - 15:46  (00:00)
beshai   ssh:notty    59.145.221.103   Sat Nov 16 15:46 - 15:46  (00:00)
beshai   ssh:notty    59.145.221.103   Sat Nov 16 15:45 - 15:45  (00:00)
root     ssh:notty    122.228.89.95    Sat Nov 16 15:44 - 15:44  (00:00)
root     ssh:notty    106.13.16.205    Sat Nov 16 15:43 - 15:43  (00:00)
lp       ssh:notty    23.247.33.61     Sat Nov 16 15:43 - 15:43  (00:00)
hoelzel  ssh:notty    106.13.65.18     Sat Nov 16 15:43 - 15:43  (00:00)
hoelzel  ssh:notty    106.13.65.18     Sat Nov 16 15:43 - 15:43  (00:00)
root     ssh:notty    104.50.8.212     Sat Nov 16 15:42 - 15:42  (00:00)
ts2      ssh:notty    124.238.116.155  Sat Nov 16 15:41 - 15:41  (00:00)
ts2      ssh:notty    124.238.116.155  Sat Nov 16 15:41 - 15:41  (00:00)
12345    ssh:notty    123.207.241.223  Sat Nov 16 15:41 - 15:41  (00:00)
12345    ssh:notty    123.207.241.223  Sat Nov 16 15:41 - 15:41  (00:00)
tigger   ssh:notty    106.13.93.161    Sat Nov 16 15:41 - 15:41  (00:00)
tigger   ssh:notty    106.13.93.161    Sat Nov 16 15:41 - 15:41  (00:00)
tomcat   ssh:notty    180.100.212.73   Sat Nov 16 15:41 - 15:41  (00:00)
tomcat   ssh:notty    180.100.212.73   Sat Nov 16 15:41 - 15:41  (00:00)
root     ssh:notty    59.145.221.103   Sat Nov 16 15:40 - 15:40  (00:00)
svn      ssh:notty    122.228.89.95    Sat Nov 16 15:40 - 15:40  (00:00)
svn      ssh:notty    122.228.89.95    Sat Nov 16 15:40 - 15:40  (00:00)
ortez    ssh:notty    23.247.33.61     Sat Nov 16 15:39 - 15:39  (00:00)
ortez    ssh:notty    23.247.33.61     Sat Nov 16 15:39 - 15:39  (00:00)
sorgan   ssh:notty    104.50.8.212     Sat Nov 16 15:38 - 15:38  (00:00)
sorgan   ssh:notty    104.50.8.212     Sat Nov 16 15:38 - 15:38  (00:00)
bollman  ssh:notty    106.13.65.18     Sat Nov 16 15:38 - 15:38  (00:00)
root     ssh:notty    106.13.16.205    Sat Nov 16 15:38 - 15:38  (00:00)
bollman  ssh:notty    106.13.65.18     Sat Nov 16 15:38 - 15:38  (00:00)
root     ssh:notty    124.238.116.155  Sat Nov 16 15:36 - 15:36  (00:00)
stamback ssh:notty    122.228.89.95    Sat Nov 16 15:36 - 15:36  (00:00)
reh      ssh:notty    23.247.33.61     Sat Nov 16 15:36 - 15:36  (00:00)
stamback ssh:notty    122.228.89.95    Sat Nov 16 15:36 - 15:36  (00:00)
reh      ssh:notty    23.247.33.61     Sat Nov 16 15:36 - 15:36  (00:00)
elms     ssh:notty    106.13.93.161    Sat Nov 16 15:35 - 15:35  (00:00)
elms     ssh:notty    106.13.93.161    Sat Nov 16 15:35 - 15:35  (00:00)
root     ssh:notty    59.145.221.103   Sat Nov 16 15:35 - 15:35  (00:00)
orse     ssh:notty    61.12.67.133     Sat Nov 16 15:35 - 15:35  (00:00)
orse     ssh:notty    61.12.67.133     Sat Nov 16 15:35 - 15:35  (00:00)
kuri     ssh:notty    104.50.8.212     Sat Nov 16 15:34 - 15:34  (00:00)
kuri     ssh:notty    104.50.8.212     Sat Nov 16 15:34 - 15:34  (00:00)
wwwrun   ssh:notty    23.247.33.61     Sat Nov 16 15:33 - 15:33  (00:00)
wwwrun   ssh:notty    23.247.33.61     Sat Nov 16 15:33 - 15:33  (00:00)
root     ssh:notty    106.13.65.18     Sat Nov 16 15:33 - 15:33  (00:00)
teste    ssh:notty    106.13.16.205    Sat Nov 16 15:32 - 15:32  (00:00)
teste    ssh:notty    106.13.16.205    Sat Nov 16 15:32 - 15:32  (00:00)
root     ssh:notty    122.228.89.95    Sat Nov 16 15:32 - 15:32  (00:00)
server   ssh:notty    124.238.116.155  Sat Nov 16 15:31 - 15:31  (00:00)
server   ssh:notty    124.238.116.155  Sat Nov 16 15:31 - 15:31  (00:00)
root     ssh:notty    106.13.93.161    Sat Nov 16 15:30 - 15:30  (00:00)
katoff   ssh:notty    123.207.241.223  Sat Nov 16 15:30 - 15:30  (00:00)
katoff   ssh:notty    123.207.241.223  Sat Nov 16 15:30 - 15:30  (00:00)
guest    ssh:notty    104.50.8.212     Sat Nov 16 15:30 - 15:30  (00:00)
guest    ssh:notty    104.50.8.212     Sat Nov 16 15:30 - 15:30  (00:00)
asseltin ssh:notty    59.145.221.103   Sat Nov 16 15:30 - 15:30  (00:00)
asseltin ssh:notty    59.145.221.103   Sat Nov 16 15:30 - 15:30  (00:00)
brynjar  ssh:notty    23.247.33.61     Sat Nov 16 15:29 - 15:29  (00:00)
brynjar  ssh:notty    23.247.33.61     Sat Nov 16 15:29 - 15:29  (00:00)
root     ssh:notty    61.12.67.133     Sat Nov 16 15:29 - 15:29  (00:00)
anu      ssh:notty    106.13.65.18     Sat Nov 16 15:27 - 15:27  (00:00)
anu      ssh:notty    106.13.65.18     Sat Nov 16 15:27 - 15:27  (00:00)
creature ssh:notty    106.13.16.205    Sat Nov 16 15:27 - 15:27  (00:00)
creature ssh:notty    106.13.16.205    Sat Nov 16 15:27 - 15:27  (00:00)
marpoah  ssh:notty    122.228.89.95    Sat Nov 16 15:27 - 15:27  (00:00)
marpoah  ssh:notty    122.228.89.95    Sat Nov 16 15:27 - 15:27  (00:00)
admin    ssh:notty    124.238.116.155  Sat Nov 16 15:27 - 15:27  (00:00)
admin    ssh:notty    124.238.116.155  Sat Nov 16 15:27 - 15:27  (00:00)
dump     ssh:notty    23.247.33.61     Sat Nov 16 15:26 - 15:26  (00:00)
dump     ssh:notty    23.247.33.61     Sat Nov 16 15:26 - 15:26  (00:00)
somada   ssh:notty    104.50.8.212     Sat Nov 16 15:25 - 15:25  (00:00)
somada   ssh:notty    104.50.8.212     Sat Nov 16 15:25 - 15:25  (00:00)
root     ssh:notty    222.186.175.182  Sat Nov 16 15:25 - 15:25  (00:00)
root     ssh:notty    106.13.93.161    Sat Nov 16 15:25 - 15:25  (00:00)
kaschig  ssh:notty    123.207.241.223  Sat Nov 16 15:25 - 15:25  (00:00)
root     ssh:notty    222.186.175.182  Sat Nov 16 15:25 - 15:25  (00:00)
kaschig  ssh:notty    123.207.241.223  Sat Nov 16 15:25 - 15:25  (00:00)
root     ssh:notty    222.186.175.182  Sat Nov 16 15:25 - 15:25  (00:00)
root     ssh:notty    222.186.175.182  Sat Nov 16 15:25 - 15:25  (00:00)
root     ssh:notty    222.186.175.182  Sat Nov 16 15:25 - 15:25  (00:00)
nobody   ssh:notty    59.145.221.103   Sat Nov 16 15:24 - 15:24  (00:00)
cobley   ssh:notty    106.13.65.18     Sat Nov 16 15:23 - 15:23  (00:00)
cobley   ssh:notty    106.13.65.18     Sat Nov 16 15:22 - 15:22  (00:00)
sverre   ssh:notty    23.247.33.61     Sat Nov 16 15:22 - 15:22  (00:00)
sverre   ssh:notty    23.247.33.61     Sat Nov 16 15:22 - 15:22  (00:00)
root     ssh:notty    106.13.16.205    Sat Nov 16 15:22 - 15:22  (00:00)
root     ssh:notty    122.228.89.95    Sat Nov 16 15:22 - 15:22  (00:00)
wenyu    ssh:notty    124.238.116.155  Sat Nov 16 15:22 - 15:22  (00:00)
wenyu    ssh:notty    124.238.116.155  Sat Nov 16 15:22 - 15:22  (00:00)
riis     ssh:notty    104.50.8.212     Sat Nov 16 15:21 - 15:21  (00:00)
riis     ssh:notty    104.50.8.212     Sat Nov 16 15:21 - 15:21  (00:00)
dsanchez ssh:notty    123.207.241.223  Sat Nov 16 15:20 - 15:20  (00:00)
dsanchez ssh:notty    123.207.241.223  Sat Nov 16 15:20 - 15:20  (00:00)
daniel   ssh:notty    106.13.93.161    Sat Nov 16 15:20 - 15:20  (00:00)
daniel   ssh:notty    106.13.93.161    Sat Nov 16 15:20 - 15:20  (00:00)
pydio    ssh:notty    23.247.33.61     Sat Nov 16 15:19 - 15:19  (00:00)
pydio    ssh:notty    23.247.33.61     Sat Nov 16 15:19 - 15:19  (00:00)
carolea  ssh:notty    106.13.65.18     Sat Nov 16 15:18 - 15:18  (00:00)
carolea  ssh:notty    106.13.65.18     Sat Nov 16 15:18 - 15:18  (00:00)
karmienk ssh:notty    122.228.89.95    Sat Nov 16 15:17 - 15:17  (00:00)
karmienk ssh:notty    122.228.89.95    Sat Nov 16 15:17 - 15:17  (00:00)
root     ssh:notty    106.13.16.205    Sat Nov 16 15:17 - 15:17  (00:00)
libbi    ssh:notty    104.50.8.212     Sat Nov 16 15:17 - 15:17  (00:00)
libbi    ssh:notty    104.50.8.212     Sat Nov 16 15:17 - 15:17  (00:00)
pathak   ssh:notty    124.238.116.155  Sat Nov 16 15:17 - 15:17  (00:00)
pathak   ssh:notty    124.238.116.155  Sat Nov 16 15:17 - 15:17  (00:00)
isabella ssh:notty    123.207.241.223  Sat Nov 16 15:16 - 15:16  (00:00)
isabella ssh:notty    123.207.241.223  Sat Nov 16 15:16 - 15:16  (00:00)
anoop    ssh:notty    23.247.33.61     Sat Nov 16 15:16 - 15:16  (00:00)
anoop    ssh:notty    23.247.33.61     Sat Nov 16 15:16 - 15:16  (00:00)
guest    ssh:notty    59.145.221.103   Sat Nov 16 15:15 - 15:15  (00:00)
guest    ssh:notty    59.145.221.103   Sat Nov 16 15:15 - 15:15  (00:00)
admin    ssh:notty    106.13.93.161    Sat Nov 16 15:15 - 15:15  (00:00)
admin    ssh:notty    106.13.93.161    Sat Nov 16 15:15 - 15:15  (00:00)
chavda   ssh:notty    114.67.76.63     Sat Nov 16 15:14 - 15:14  (00:00)
chavda   ssh:notty    114.67.76.63     Sat Nov 16 15:14 - 15:14  (00:00)
root     ssh:notty    104.50.8.212     Sat Nov 16 15:13 - 15:13  (00:00)
feedback ssh:notty    106.13.65.18     Sat Nov 16 15:13 - 15:13  (00:00)
feedback ssh:notty    106.13.65.18     Sat Nov 16 15:13 - 15:13  (00:00)
root     ssh:notty    106.13.16.205    Sat Nov 16 15:12 - 15:12  (00:00)
backup   ssh:notty    122.228.89.95    Sat Nov 16 15:12 - 15:12  (00:00)
admin    ssh:notty    124.238.116.155  Sat Nov 16 15:12 - 15:12  (00:00)
admin    ssh:notty    124.238.116.155  Sat Nov 16 15:12 - 15:12  (00:00)
root     ssh:notty    23.247.33.61     Sat Nov 16 15:12 - 15:12  (00:00)
nfs      ssh:notty    185.153.198.185  Sat Nov 16 15:11 - 15:11  (00:00)
nfs      ssh:notty    185.153.198.185  Sat Nov 16 15:11 - 15:11  (00:00)
user     ssh:notty    59.145.221.103   Sat Nov 16 15:10 - 15:10  (00:00)
user     ssh:notty    59.145.221.103   Sat Nov 16 15:10 - 15:10  (00:00)
grieken  ssh:notty    106.13.93.161    Sat Nov 16 15:10 - 15:10  (00:00)
grieken  ssh:notty    106.13.93.161    Sat Nov 16 15:10 - 15:10  (00:00)
uucp     ssh:notty    114.67.76.63     Sat Nov 16 15:09 - 15:09  (00:00)
root     ssh:notty    104.50.8.212     Sat Nov 16 15:09 - 15:09  (00:00)
root     ssh:notty    23.247.33.61     Sat Nov 16 15:09 - 15:09  (00:00)
svn      ssh:notty    122.228.89.95    Sat Nov 16 15:08 - 15:08  (00:00)
beaurega ssh:notty    106.13.65.18     Sat Nov 16 15:08 - 15:08  (00:00)
svn      ssh:notty    122.228.89.95    Sat Nov 16 15:08 - 15:08  (00:00)
beaurega ssh:notty    106.13.65.18     Sat Nov 16 15:08 - 15:08  (00:00)
root     ssh:notty    124.238.116.155  Sat Nov 16 15:08 - 15:08  (00:00)
cong     ssh:notty    106.13.16.205    Sat Nov 16 15:07 - 15:07  (00:00)
cong     ssh:notty    106.13.16.205    Sat Nov 16 15:07 - 15:07  (00:00)
bizzel   ssh:notty    185.153.198.185  Sat Nov 16 15:07 - 15:07  (00:00)
bizzel   ssh:notty    185.153.198.185  Sat Nov 16 15:07 - 15:07  (00:00)
alex03   ssh:notty    123.207.241.223  Sat Nov 16 15:06 - 15:06  (00:00)
alex03   ssh:notty    123.207.241.223  Sat Nov 16 15:06 - 15:06  (00:00)
buster   ssh:notty    23.247.33.61     Sat Nov 16 15:05 - 15:05  (00:00)
buster   ssh:notty    23.247.33.61     Sat Nov 16 15:05 - 15:05  (00:00)
root     ssh:notty    104.50.8.212     Sat Nov 16 15:05 - 15:05  (00:00)
bobbin   ssh:notty    59.145.221.103   Sat Nov 16 15:05 - 15:05  (00:00)
root     ssh:notty    106.13.93.161    Sat Nov 16 15:05 - 15:05  (00:00)
root     ssh:notty    114.67.76.63     Sat Nov 16 15:04 - 15:04  (00:00)
bobbin   ssh:notty    59.145.221.103   Sat Nov 16 15:04 - 15:04  (00:00)
Ohto     ssh:notty    122.228.89.95    Sat Nov 16 15:04 - 15:04  (00:00)
Ohto     ssh:notty    122.228.89.95    Sat Nov 16 15:03 - 15:03  (00:00)
sync     ssh:notty    185.153.198.185  Sat Nov 16 15:03 - 15:03  (00:00)
mazigian ssh:notty    124.238.116.155  Sat Nov 16 15:03 - 15:03  (00:00)
mazigian ssh:notty    124.238.116.155  Sat Nov 16 15:03 - 15:03  (00:00)
moussa   ssh:notty    106.13.65.18     Sat Nov 16 15:03 - 15:03  (00:00)
moussa   ssh:notty    106.13.65.18     Sat Nov 16 15:03 - 15:03  (00:00)
katelynn ssh:notty    106.13.16.205    Sat Nov 16 15:03 - 15:03  (00:00)
katelynn ssh:notty    106.13.16.205    Sat Nov 16 15:03 - 15:03  (00:00)
amssys   ssh:notty    23.247.33.61     Sat Nov 16 15:02 - 15:02  (00:00)
amssys   ssh:notty    23.247.33.61     Sat Nov 16 15:02 - 15:02  (00:00)
ufo      ssh:notty    123.207.241.223  Sat Nov 16 15:01 - 15:01  (00:00)
ufo      ssh:notty    123.207.241.223  Sat Nov 16 15:01 - 15:01  (00:00)
root     ssh:notty    104.50.8.212     Sat Nov 16 15:01 - 15:01  (00:00)
root     ssh:notty    114.67.76.63     Sat Nov 16 15:00 - 15:00  (00:00)
iwema    ssh:notty    122.228.89.95    Sat Nov 16 15:00 - 15:00  (00:00)
iwema    ssh:notty    122.228.89.95    Sat Nov 16 15:00 - 15:00  (00:00)
admin    ssh:notty    106.13.93.161    Sat Nov 16 15:00 - 15:00  (00:00)
admin    ssh:notty    106.13.93.161    Sat Nov 16 14:59 - 14:59  (00:00)
tomcat   ssh:notty    185.153.198.185  Sat Nov 16 14:59 - 14:59  (00:00)
tomcat   ssh:notty    185.153.198.185  Sat Nov 16 14:59 - 14:59  (00:00)
root     ssh:notty    59.145.221.103   Sat Nov 16 14:59 - 14:59  (00:00)
daemon   ssh:notty    23.247.33.61     Sat Nov 16 14:59 - 14:59  (00:00)
root     ssh:notty    124.238.116.155  Sat Nov 16 14:58 - 14:58  (00:00)
operator ssh:notty    106.13.65.18     Sat Nov 16 14:58 - 14:58  (00:00)
operator ssh:notty    106.13.65.18     Sat Nov 16 14:58 - 14:58  (00:00)
segreter ssh:notty    106.13.16.205    Sat Nov 16 14:58 - 14:58  (00:00)
segreter ssh:notty    106.13.16.205    Sat Nov 16 14:58 - 14:58  (00:00)
ume_kika ssh:notty    104.50.8.212     Sat Nov 16 14:57 - 14:57  (00:00)
ume_kika ssh:notty    104.50.8.212     Sat Nov 16 14:57 - 14:57  (00:00)
mlh      ssh:notty    123.207.241.223  Sat Nov 16 14:56 - 14:56  (00:00)
mlh      ssh:notty    123.207.241.223  Sat Nov 16 14:56 - 14:56  (00:00)
postmast ssh:notty    122.228.89.95    Sat Nov 16 14:56 - 14:56  (00:00)
postmast ssh:notty    122.228.89.95    Sat Nov 16 14:56 - 14:56  (00:00)
root     ssh:notty    23.247.33.61     Sat Nov 16 14:55 - 14:55  (00:00)
root     ssh:notty    114.67.76.63     Sat Nov 16 14:55 - 14:55  (00:00)
root     ssh:notty    185.153.198.185  Sat Nov 16 14:55 - 14:55  (00:00)
osmun    ssh:notty    106.13.93.161    Sat Nov 16 14:54 - 14:54  (00:00)
osmun    ssh:notty    106.13.93.161    Sat Nov 16 14:54 - 14:54  (00:00)
awinter  ssh:notty    59.145.221.103   Sat Nov 16 14:54 - 14:54  (00:00)
awinter  ssh:notty    59.145.221.103   Sat Nov 16 14:54 - 14:54  (00:00)
checchi  ssh:notty    124.238.116.155  Sat Nov 16 14:53 - 14:53  (00:00)
checchi  ssh:notty    124.238.116.155  Sat Nov 16 14:53 - 14:53  (00:00)
root     ssh:notty    106.13.65.18     Sat Nov 16 14:53 - 14:53  (00:00)
root     ssh:notty    104.50.8.212     Sat Nov 16 14:52 - 14:52  (00:00)
root     ssh:notty    106.13.16.205    Sat Nov 16 14:52 - 14:52  (00:00)
trendims ssh:notty    23.247.33.61     Sat Nov 16 14:52 - 14:52  (00:00)
trendims ssh:notty    23.247.33.61     Sat Nov 16 14:52 - 14:52  (00:00)
horder   ssh:notty    122.228.89.95    Sat Nov 16 14:51 - 14:51  (00:00)
horder   ssh:notty    122.228.89.95    Sat Nov 16 14:51 - 14:51  (00:00)
bkroeker ssh:notty    185.153.198.185  Sat Nov 16 14:51 - 14:51  (00:00)
bkroeker ssh:notty    185.153.198.185  Sat Nov 16 14:51 - 14:51  (00:00)
root     ssh:notty    114.67.76.63     Sat Nov 16 14:50 - 14:50  (00:00)
ryota    ssh:notty    123.207.241.223  Sat Nov 16 14:50 - 14:50  (00:00)
ryota    ssh:notty    123.207.241.223  Sat Nov 16 14:50 - 14:50  (00:00)
eggestad ssh:notty    106.13.93.161    Sat Nov 16 14:49 - 14:49  (00:00)
eggestad ssh:notty    106.13.93.161    Sat Nov 16 14:49 - 14:49  (00:00)
root     ssh:notty    124.238.116.155  Sat Nov 16 14:48 - 14:48  (00:00)
root     ssh:notty    23.247.33.61     Sat Nov 16 14:48 - 14:48  (00:00)
frank    ssh:notty    59.145.221.103   Sat Nov 16 14:48 - 14:48  (00:00)
frank    ssh:notty    59.145.221.103   Sat Nov 16 14:48 - 14:48  (00:00)
mellon   ssh:notty    104.50.8.212     Sat Nov 16 14:48 - 14:48  (00:00)
meara    ssh:notty    106.13.65.18     Sat Nov 16 14:48 - 14:48  (00:00)
mellon   ssh:notty    104.50.8.212     Sat Nov 16 14:48 - 14:48  (00:00)
meara    ssh:notty    106.13.65.18     Sat Nov 16 14:48 - 14:48  (00:00)
amanda   ssh:notty    122.228.89.95    Sat Nov 16 14:48 - 14:48  (00:00)
amanda   ssh:notty    122.228.89.95    Sat Nov 16 14:47 - 14:47  (00:00)
lauterba ssh:notty    185.153.198.185  Sat Nov 16 14:47 - 14:47  (00:00)
lauterba ssh:notty    185.153.198.185  Sat Nov 16 14:47 - 14:47  (00:00)
admin    ssh:notty    106.13.16.205    Sat Nov 16 14:47 - 14:47  (00:00)
admin    ssh:notty    106.13.16.205    Sat Nov 16 14:47 - 14:47  (00:00)
Anita    ssh:notty    114.67.76.63     Sat Nov 16 14:46 - 14:46  (00:00)
Anita    ssh:notty    114.67.76.63     Sat Nov 16 14:46 - 14:46  (00:00)
dorgan   ssh:notty    123.207.241.223  Sat Nov 16 14:45 - 14:45  (00:00)
stroker  ssh:notty    23.247.33.61     Sat Nov 16 14:45 - 14:45  (00:00)
dorgan   ssh:notty    123.207.241.223  Sat Nov 16 14:45 - 14:45  (00:00)
stroker  ssh:notty    23.247.33.61     Sat Nov 16 14:45 - 14:45  (00:00)
twetie   ssh:notty    106.13.93.161    Sat Nov 16 14:44 - 14:44  (00:00)
twetie   ssh:notty    106.13.93.161    Sat Nov 16 14:44 - 14:44  (00:00)
iesvold  ssh:notty    104.50.8.212     Sat Nov 16 14:44 - 14:44  (00:00)
iesvold  ssh:notty    104.50.8.212     Sat Nov 16 14:44 - 14:44  (00:00)
pi       ssh:notty    124.238.116.155  Sat Nov 16 14:44 - 14:44  (00:00)
root     ssh:notty    122.228.89.95    Sat Nov 16 14:44 - 14:44  (00:00)
pi       ssh:notty    124.238.116.155  Sat Nov 16 14:44 - 14:44  (00:00)
root     ssh:notty    106.13.65.18     Sat Nov 16 14:43 - 14:43  (00:00)
root     ssh:notty    185.153.198.185  Sat Nov 16 14:43 - 14:43  (00:00)
chunyen  ssh:notty    59.145.221.103   Sat Nov 16 14:43 - 14:43  (00:00)
chunyen  ssh:notty    59.145.221.103   Sat Nov 16 14:43 - 14:43  (00:00)
hkn      ssh:notty    18.215.220.11    Sat Nov 16 14:42 - 14:42  (00:00)
hkn      ssh:notty    18.215.220.11    Sat Nov 16 14:42 - 14:42  (00:00)
ricker   ssh:notty    106.13.16.205    Sat Nov 16 14:42 - 14:42  (00:00)
ricker   ssh:notty    106.13.16.205    Sat Nov 16 14:42 - 14:42  (00:00)
root     ssh:notty    23.247.33.61     Sat Nov 16 14:42 - 14:42  (00:00)
tmp      ssh:notty    114.67.76.63     Sat Nov 16 14:42 - 14:42  (00:00)
SMSPASSW ssh:notty    223.220.159.78   Sat Nov 16 14:42 - 14:42  (00:00)
tmp      ssh:notty    114.67.76.63     Sat Nov 16 14:42 - 14:42  (00:00)
SMSPASSW ssh:notty    223.220.159.78   Sat Nov 16 14:42 - 14:42  (00:00)
66666666 ssh:notty    123.207.241.223  Sat Nov 16 14:40 - 14:40  (00:00)
66666666 ssh:notty    123.207.241.223  Sat Nov 16 14:40 - 14:40  (00:00)
hikiji   ssh:notty    104.50.8.212     Sat Nov 16 14:40 - 14:40  (00:00)
hikiji   ssh:notty    104.50.8.212     Sat Nov 16 14:40 - 14:40  (00:00)
pobiero  ssh:notty    122.228.89.95    Sat Nov 16 14:39 - 14:39  (00:00)
pobiero  ssh:notty    122.228.89.95    Sat Nov 16 14:39 - 14:39  (00:00)
fwdesign ssh:notty    106.13.93.161    Sat Nov 16 14:39 - 14:39  (00:00)
denine   ssh:notty    124.238.116.155  Sat Nov 16 14:39 - 14:39  (00:00)
admin    ssh:notty    18.215.220.11    Sat Nov 16 14:39 - 14:39  (00:00)
fwdesign ssh:notty    106.13.93.161    Sat Nov 16 14:39 - 14:39  (00:00)
denine   ssh:notty    124.238.116.155  Sat Nov 16 14:39 - 14:39  (00:00)
admin    ssh:notty    18.215.220.11    Sat Nov 16 14:39 - 14:39  (00:00)
constanc ssh:notty    185.153.198.185  Sat Nov 16 14:39 - 14:39  (00:00)
constanc ssh:notty    185.153.198.185  Sat Nov 16 14:39 - 14:39  (00:00)
brouste  ssh:notty    23.247.33.61     Sat Nov 16 14:39 - 14:39  (00:00)
brouste  ssh:notty    23.247.33.61     Sat Nov 16 14:39 - 14:39  (00:00)
root     ssh:notty    106.13.65.18     Sat Nov 16 14:38 - 14:38  (00:00)
root     ssh:notty    59.145.221.103   Sat Nov 16 14:37 - 14:37  (00:00)
test     ssh:notty    106.13.16.205    Sat Nov 16 14:37 - 14:37  (00:00)
test     ssh:notty    106.13.16.205    Sat Nov 16 14:37 - 14:37  (00:00)
!QAZXCFG ssh:notty    223.220.159.78   Sat Nov 16 14:37 - 14:37  (00:00)
!QAZXCFG ssh:notty    223.220.159.78   Sat Nov 16 14:37 - 14:37  (00:00)
root     ssh:notty    114.67.76.63     Sat Nov 16 14:37 - 14:37  (00:00)
nereida  ssh:notty    123.207.241.223  Sat Nov 16 14:36 - 14:36  (00:00)
nereida  ssh:notty    123.207.241.223  Sat Nov 16 14:36 - 14:36  (00:00)
root     ssh:notty    122.228.89.95    Sat Nov 16 14:36 - 14:36  (00:00)
pcap     ssh:notty    104.50.8.212     Sat Nov 16 14:36 - 14:36  (00:00)
pcap     ssh:notty    104.50.8.212     Sat Nov 16 14:35 - 14:35  (00:00)
squid    ssh:notty    23.247.33.61     Sat Nov 16 14:35 - 14:35  (00:00)
squid    ssh:notty    23.247.33.61     Sat Nov 16 14:35 - 14:35  (00:00)
develope ssh:notty    18.215.220.11    Sat Nov 16 14:35 - 14:35  (00:00)
develope ssh:notty    18.215.220.11    Sat Nov 16 14:35 - 14:35  (00:00)
conaway  ssh:notty    185.153.198.185  Sat Nov 16 14:35 - 14:35  (00:00)
conaway  ssh:notty    185.153.198.185  Sat Nov 16 14:35 - 14:35  (00:00)
wwwrun   ssh:notty    124.238.116.155  Sat Nov 16 14:34 - 14:34  (00:00)
wwwrun   ssh:notty    124.238.116.155  Sat Nov 16 14:34 - 14:34  (00:00)
root     ssh:notty    106.13.93.161    Sat Nov 16 14:34 - 14:34  (00:00)
villange ssh:notty    106.13.65.18     Sat Nov 16 14:33 - 14:33  (00:00)
villange ssh:notty    106.13.65.18     Sat Nov 16 14:33 - 14:33  (00:00)
org      ssh:notty    223.220.159.78   Sat Nov 16 14:32 - 14:32  (00:00)
org      ssh:notty    223.220.159.78   Sat Nov 16 14:32 - 14:32  (00:00)
barroso  ssh:notty    106.13.16.205    Sat Nov 16 14:32 - 14:32  (00:00)
hung     ssh:notty    114.67.76.63     Sat Nov 16 14:32 - 14:32  (00:00)
barroso  ssh:notty    106.13.16.205    Sat Nov 16 14:32 - 14:32  (00:00)
hung     ssh:notty    114.67.76.63     Sat Nov 16 14:32 - 14:32  (00:00)
greg     ssh:notty    23.247.33.61     Sat Nov 16 14:32 - 14:32  (00:00)
greg     ssh:notty    23.247.33.61     Sat Nov 16 14:32 - 14:32  (00:00)
root     ssh:notty    59.145.221.103   Sat Nov 16 14:32 - 14:32  (00:00)
root     ssh:notty    122.228.89.95    Sat Nov 16 14:32 - 14:32  (00:00)
root     ssh:notty    18.215.220.11    Sat Nov 16 14:31 - 14:31  (00:00)
barison  ssh:notty    104.50.8.212     Sat Nov 16 14:31 - 14:31  (00:00)
barison  ssh:notty    104.50.8.212     Sat Nov 16 14:31 - 14:31  (00:00)
support  ssh:notty    185.153.198.185  Sat Nov 16 14:31 - 14:31  (00:00)
support  ssh:notty    185.153.198.185  Sat Nov 16 14:31 - 14:31  (00:00)
root     ssh:notty    124.238.116.155  Sat Nov 16 14:29 - 14:29  (00:00)
mail     ssh:notty    23.247.33.61     Sat Nov 16 14:29 - 14:29  (00:00)
dovecot  ssh:notty    106.13.93.161    Sat Nov 16 14:29 - 14:29  (00:00)
dovecot  ssh:notty    106.13.93.161    Sat Nov 16 14:29 - 14:29  (00:00)
vinnell  ssh:notty    106.13.65.18     Sat Nov 16 14:28 - 14:28  (00:00)
vinnell  ssh:notty    106.13.65.18     Sat Nov 16 14:28 - 14:28  (00:00)
root     ssh:notty    18.215.220.11    Sat Nov 16 14:28 - 14:28  (00:00)
uucp     ssh:notty    114.67.76.63     Sat Nov 16 14:28 - 14:28  (00:00)
root     ssh:notty    122.228.89.95    Sat Nov 16 14:28 - 14:28  (00:00)
root     ssh:notty    223.220.159.78   Sat Nov 16 14:28 - 14:28  (00:00)
arietta  ssh:notty    106.13.16.205    Sat Nov 16 14:27 - 14:27  (00:00)
arietta  ssh:notty    106.13.16.205    Sat Nov 16 14:27 - 14:27  (00:00)
bergsand ssh:notty    104.50.8.212     Sat Nov 16 14:27 - 14:27  (00:00)
bergsand ssh:notty    104.50.8.212     Sat Nov 16 14:27 - 14:27  (00:00)
angell   ssh:notty    185.153.198.185  Sat Nov 16 14:27 - 14:27  (00:00)
angell   ssh:notty    185.153.198.185  Sat Nov 16 14:27 - 14:27  (00:00)
user     ssh:notty    59.145.221.103   Sat Nov 16 14:26 - 14:26  (00:00)
user     ssh:notty    59.145.221.103   Sat Nov 16 14:26 - 14:26  (00:00)
salon    ssh:notty    123.207.241.223  Sat Nov 16 14:26 - 14:26  (00:00)
salon    ssh:notty    123.207.241.223  Sat Nov 16 14:26 - 14:26  (00:00)
nahjir   ssh:notty    23.247.33.61     Sat Nov 16 14:26 - 14:26  (00:00)
nahjir   ssh:notty    23.247.33.61     Sat Nov 16 14:26 - 14:26  (00:00)
root     ssh:notty    124.238.116.155  Sat Nov 16 14:25 - 14:25  (00:00)
langendo ssh:notty    18.215.220.11    Sat Nov 16 14:24 - 14:24  (00:00)
langendo ssh:notty    18.215.220.11    Sat Nov 16 14:24 - 14:24  (00:00)
wwwrun   ssh:notty    122.228.89.95    Sat Nov 16 14:24 - 14:24  (00:00)
wwwrun   ssh:notty    122.228.89.95    Sat Nov 16 14:24 - 14:24  (00:00)
root     ssh:notty    106.13.93.161    Sat Nov 16 14:24 - 14:24  (00:00)
jzapata  ssh:notty    106.13.65.18     Sat Nov 16 14:23 - 14:23  (00:00)
jzapata  ssh:notty    106.13.65.18     Sat Nov 16 14:23 - 14:23  (00:00)
rpm      ssh:notty    114.67.76.63     Sat Nov 16 14:23 - 14:23  (00:00)
rpm      ssh:notty    114.67.76.63     Sat Nov 16 14:23 - 14:23  (00:00)
clazar   ssh:notty    223.220.159.78   Sat Nov 16 14:23 - 14:23  (00:00)
clazar   ssh:notty    223.220.159.78   Sat Nov 16 14:23 - 14:23  (00:00)
wwwrun   ssh:notty    185.153.198.185  Sat Nov 16 14:23 - 14:23  (00:00)
wwwrun   ssh:notty    185.153.198.185  Sat Nov 16 14:23 - 14:23  (00:00)
root     ssh:notty    106.13.16.205    Sat Nov 16 14:23 - 14:23  (00:00)
hood     ssh:notty    23.247.33.61     Sat Nov 16 14:22 - 14:22  (00:00)
hood     ssh:notty    23.247.33.61     Sat Nov 16 14:22 - 14:22  (00:00)
webmaste ssh:notty    123.207.241.223  Sat Nov 16 14:21 - 14:21  (00:00)
webmaste ssh:notty    123.207.241.223  Sat Nov 16 14:21 - 14:21  (00:00)
corina   ssh:notty    59.145.221.103   Sat Nov 16 14:21 - 14:21  (00:00)
corina   ssh:notty    59.145.221.103   Sat Nov 16 14:21 - 14:21  (00:00)
webmaste ssh:notty    18.215.220.11    Sat Nov 16 14:21 - 14:21  (00:00)
webmaste ssh:notty    18.215.220.11    Sat Nov 16 14:21 - 14:21  (00:00)
root     ssh:notty    122.228.89.95    Sat Nov 16 14:20 - 14:20  (00:00)
root     ssh:notty    124.238.116.155  Sat Nov 16 14:19 - 14:19  (00:00)
finger   ssh:notty    23.247.33.61     Sat Nov 16 14:19 - 14:19  (00:00)
finger   ssh:notty    23.247.33.61     Sat Nov 16 14:19 - 14:19  (00:00)
teamspea ssh:notty    114.67.76.63     Sat Nov 16 14:19 - 14:19  (00:00)
teamspea ssh:notty    114.67.76.63     Sat Nov 16 14:19 - 14:19  (00:00)
root     ssh:notty    185.153.198.185  Sat Nov 16 14:19 - 14:19  (00:00)
hynek    ssh:notty    106.13.93.161    Sat Nov 16 14:18 - 14:18  (00:00)
hynek    ssh:notty    106.13.93.161    Sat Nov 16 14:18 - 14:18  (00:00)
puff     ssh:notty    106.13.65.18     Sat Nov 16 14:18 - 14:18  (00:00)
puff     ssh:notty    106.13.65.18     Sat Nov 16 14:18 - 14:18  (00:00)
user     ssh:notty    223.220.159.78   Sat Nov 16 14:18 - 14:18  (00:00)
user     ssh:notty    223.220.159.78   Sat Nov 16 14:18 - 14:18  (00:00)
news     ssh:notty    106.13.16.205    Sat Nov 16 14:18 - 14:18  (00:00)
tccuser  ssh:notty    18.215.220.11    Sat Nov 16 14:17 - 14:17  (00:00)
tccuser  ssh:notty    18.215.220.11    Sat Nov 16 14:17 - 14:17  (00:00)
wiso2803 ssh:notty    123.207.241.223  Sat Nov 16 14:17 - 14:17  (00:00)
wiso2803 ssh:notty    123.207.241.223  Sat Nov 16 14:17 - 14:17  (00:00)
mangey   ssh:notty    122.228.89.95    Sat Nov 16 14:16 - 14:16  (00:00)
mangey   ssh:notty    122.228.89.95    Sat Nov 16 14:16 - 14:16  (00:00)
hata     ssh:notty    23.247.33.61     Sat Nov 16 14:16 - 14:16  (00:00)
hata     ssh:notty    23.247.33.61     Sat Nov 16 14:16 - 14:16  (00:00)
imprime  ssh:notty    124.238.116.155  Sat Nov 16 14:15 - 14:15  (00:00)
imprime  ssh:notty    124.238.116.155  Sat Nov 16 14:15 - 14:15  (00:00)
mingtien ssh:notty    185.153.198.185  Sat Nov 16 14:15 - 14:15  (00:00)
git      ssh:notty    114.67.76.63     Sat Nov 16 14:14 - 14:14  (00:00)
lautrido ssh:notty    106.13.65.18     Sat Nov 16 14:14 - 14:14  (00:00)
ching    ssh:notty    106.13.93.161    Sat Nov 16 14:13 - 14:13  (00:00)
pcap     ssh:notty    18.215.220.11    Sat Nov 16 14:13 - 14:13  (00:00)
ftp      ssh:notty    223.220.159.78   Sat Nov 16 14:13 - 14:13  (00:00)
meriann  ssh:notty    106.13.16.205    Sat Nov 16 14:13 - 14:13  (00:00)
root     ssh:notty    23.247.33.61     Sat Nov 16 14:12 - 14:12  (00:00)
smuda    ssh:notty    122.228.89.95    Sat Nov 16 14:12 - 14:12  (00:00)
qun      ssh:notty    185.153.198.185  Sat Nov 16 14:11 - 14:11  (00:00)
root     ssh:notty    124.238.116.155  Sat Nov 16 14:10 - 14:10  (00:00)
cyang    ssh:notty    18.215.220.11    Sat Nov 16 14:10 - 14:10  (00:00)
ojibwa   ssh:notty    104.50.8.212     Sat Nov 16 14:09 - 14:09  (00:00)
...
(remaining entries omitted; the same handful of source IPs cycles through dictionary usernames such as root, admin, guest, test and mysql every few minutes)
tk123    ssh:notty    174.138.58.149   Sat Nov 16 10:30 - 10:30  (00:00)
tk123    ssh:notty    174.138.58.149   Sat Nov 16 10:30 - 10:30  (00:00)
www      ssh:notty    209.97.161.46    Sat Nov 16 10:30 - 10:30  (00:00)
www      ssh:notty    209.97.161.46    Sat Nov 16 10:30 - 10:30  (00:00)
farrelly ssh:notty    168.181.104.30   Sat Nov 16 10:29 - 10:29  (00:00)
farrelly ssh:notty    168.181.104.30   Sat Nov 16 10:29 - 10:29  (00:00)
maxim    ssh:notty    188.166.109.87   Sat Nov 16 10:29 - 10:29  (00:00)
maxim    ssh:notty    188.166.109.87   Sat Nov 16 10:29 - 10:29  (00:00)
enger    ssh:notty    84.45.251.243    Sat Nov 16 10:28 - 10:28  (00:00)
enger    ssh:notty    84.45.251.243    Sat Nov 16 10:28 - 10:28  (00:00)
ahile    ssh:notty    180.68.177.15    Sat Nov 16 10:28 - 10:28  (00:00)
ahile    ssh:notty    180.68.177.15    Sat Nov 16 10:28 - 10:28  (00:00)
com      ssh:notty    174.138.58.149   Sat Nov 16 10:27 - 10:27  (00:00)
com      ssh:notty    174.138.58.149   Sat Nov 16 10:27 - 10:27  (00:00)
iubi     ssh:notty    122.154.241.134  Sat Nov 16 10:26 - 10:26  (00:00)
iubi     ssh:notty    122.154.241.134  Sat Nov 16 10:26 - 10:26  (00:00)
guest    ssh:notty    209.97.161.46    Sat Nov 16 10:26 - 10:26  (00:00)
guest    ssh:notty    209.97.161.46    Sat Nov 16 10:26 - 10:26  (00:00)
houle    ssh:notty    188.166.109.87   Sat Nov 16 10:25 - 10:25  (00:00)
houle    ssh:notty    188.166.109.87   Sat Nov 16 10:25 - 10:25  (00:00)
root1234 ssh:notty    168.181.104.30   Sat Nov 16 10:25 - 10:25  (00:00)
root1234 ssh:notty    168.181.104.30   Sat Nov 16 10:25 - 10:25  (00:00)
root     ssh:notty    62.80.164.18     Sat Nov 16 10:25 - 10:25  (00:00)
hertzog  ssh:notty    84.45.251.243    Sat Nov 16 10:24 - 10:24  (00:00)
hertzog  ssh:notty    84.45.251.243    Sat Nov 16 10:24 - 10:24  (00:00)
root     ssh:notty    174.138.58.149   Sat Nov 16 10:23 - 10:23  (00:00)
root     ssh:notty    122.154.241.134  Sat Nov 16 10:22 - 10:22  (00:00)
murat    ssh:notty    180.68.177.15    Sat Nov 16 10:22 - 10:22  (00:00)
smmsp    ssh:notty    209.97.161.46    Sat Nov 16 10:22 - 10:22  (00:00)
murat    ssh:notty    180.68.177.15    Sat Nov 16 10:22 - 10:22  (00:00)
smmsp    ssh:notty    209.97.161.46    Sat Nov 16 10:22 - 10:22  (00:00)
root     ssh:notty    188.166.109.87   Sat Nov 16 10:21 - 10:21  (00:00)
root     ssh:notty    222.186.190.2    Sat Nov 16 10:21 - 10:21  (00:00)
qwer1234 ssh:notty    168.181.104.30   Sat Nov 16 10:21 - 10:21  (00:00)
root     ssh:notty    222.186.190.2    Sat Nov 16 10:21 - 10:21  (00:00)
qwer1234 ssh:notty    168.181.104.30   Sat Nov 16 10:20 - 10:20  (00:00)
root     ssh:notty    222.186.190.2    Sat Nov 16 10:20 - 10:20  (00:00)
root     ssh:notty    222.186.190.2    Sat Nov 16 10:20 - 10:20  (00:00)
root     ssh:notty    222.186.190.2    Sat Nov 16 10:20 - 10:20  (00:00)
hurtes   ssh:notty    84.45.251.243    Sat Nov 16 10:20 - 10:20  (00:00)
hurtes   ssh:notty    84.45.251.243    Sat Nov 16 10:20 - 10:20  (00:00)
root     ssh:notty    174.138.58.149   Sat Nov 16 10:20 - 10:20  (00:00)
guest    ssh:notty    122.154.241.134  Sat Nov 16 10:18 - 10:18  (00:00)
guest    ssh:notty    122.154.241.134  Sat Nov 16 10:18 - 10:18  (00:00)
backup   ssh:notty    209.97.161.46    Sat Nov 16 10:18 - 10:18  (00:00)
root     ssh:notty    188.166.109.87   Sat Nov 16 10:18 - 10:18  (00:00)
root     ssh:notty    62.80.164.18     Sat Nov 16 10:17 - 10:17  (00:00)
glassfis ssh:notty    174.138.58.149   Sat Nov 16 10:16 - 10:16  (00:00)
glassfis ssh:notty    174.138.58.149   Sat Nov 16 10:16 - 10:16  (00:00)
root     ssh:notty    84.45.251.243    Sat Nov 16 10:16 - 10:16  (00:00)
bot      ssh:notty    168.181.104.30   Sat Nov 16 10:16 - 10:16  (00:00)
uucp     ssh:notty    180.68.177.15    Sat Nov 16 10:16 - 10:16  (00:00)
bot      ssh:notty    168.181.104.30   Sat Nov 16 10:16 - 10:16  (00:00)
snorre   ssh:notty    188.166.109.87   Sat Nov 16 10:14 - 10:14  (00:00)
snorre   ssh:notty    188.166.109.87   Sat Nov 16 10:14 - 10:14  (00:00)
senna    ssh:notty    209.97.161.46    Sat Nov 16 10:14 - 10:14  (00:00)
ftpuser  ssh:notty    122.154.241.134  Sat Nov 16 10:14 - 10:14  (00:00)
senna    ssh:notty    209.97.161.46    Sat Nov 16 10:14 - 10:14  (00:00)
ftpuser  ssh:notty    122.154.241.134  Sat Nov 16 10:14 - 10:14  (00:00)
root     ssh:notty    174.138.58.149   Sat Nov 16 10:13 - 10:13  (00:00)
console  ssh:notty    84.45.251.243    Sat Nov 16 10:13 - 10:13  (00:00)
console  ssh:notty    84.45.251.243    Sat Nov 16 10:13 - 10:13  (00:00)
ingebrig ssh:notty    51.38.237.214    Sat Nov 16 10:12 - 10:12  (00:00)
ingebrig ssh:notty    51.38.237.214    Sat Nov 16 10:12 - 10:12  (00:00)
slowik   ssh:notty    168.181.104.30   Sat Nov 16 10:12 - 10:12  (00:00)
slowik   ssh:notty    168.181.104.30   Sat Nov 16 10:12 - 10:12  (00:00)
daemon   ssh:notty    116.24.66.114    Sat Nov 16 10:11 - 10:11  (00:00)
james    ssh:notty    188.166.109.87   Sat Nov 16 10:11 - 10:11  (00:00)
james    ssh:notty    188.166.109.87   Sat Nov 16 10:11 - 10:11  (00:00)
root     ssh:notty    222.186.190.92   Sat Nov 16 10:11 - 10:11  (00:00)
root     ssh:notty    222.186.190.92   Sat Nov 16 10:11 - 10:11  (00:00)
root     ssh:notty    222.186.190.92   Sat Nov 16 10:10 - 10:10  (00:00)
root     ssh:notty    222.186.190.92   Sat Nov 16 10:10 - 10:10  (00:00)
root     ssh:notty    222.186.190.92   Sat Nov 16 10:10 - 10:10  (00:00)
root     ssh:notty    222.186.190.92   Sat Nov 16 10:10 - 10:10  (00:00)
home     ssh:notty    180.68.177.15    Sat Nov 16 10:10 - 10:10  (00:00)
root     ssh:notty    222.186.190.92   Sat Nov 16 10:10 - 10:10  (00:00)
home     ssh:notty    180.68.177.15    Sat Nov 16 10:10 - 10:10  (00:00)
root     ssh:notty    222.186.190.92   Sat Nov 16 10:10 - 10:10  (00:00)
root     ssh:notty    222.186.190.92   Sat Nov 16 10:10 - 10:10  (00:00)
root     ssh:notty    222.186.190.92   Sat Nov 16 10:10 - 10:10  (00:00)
root     ssh:notty    222.186.190.92   Sat Nov 16 10:10 - 10:10  (00:00)
backup   ssh:notty    209.97.161.46    Sat Nov 16 10:10 - 10:10  (00:00)
test     ssh:notty    122.154.241.134  Sat Nov 16 10:10 - 10:10  (00:00)
test     ssh:notty    122.154.241.134  Sat Nov 16 10:09 - 10:09  (00:00)
trussel  ssh:notty    174.138.58.149   Sat Nov 16 10:09 - 10:09  (00:00)
trussel  ssh:notty    174.138.58.149   Sat Nov 16 10:09 - 10:09  (00:00)
wwwrun   ssh:notty    62.80.164.18     Sat Nov 16 10:09 - 10:09  (00:00)
wwwrun   ssh:notty    62.80.164.18     Sat Nov 16 10:09 - 10:09  (00:00)
adm      ssh:notty    84.45.251.243    Sat Nov 16 10:09 - 10:09  (00:00)
adm      ssh:notty    84.45.251.243    Sat Nov 16 10:09 - 10:09  (00:00)
lp       ssh:notty    51.38.237.214    Sat Nov 16 10:09 - 10:09  (00:00)
jerijaer ssh:notty    188.166.109.87   Sat Nov 16 10:07 - 10:07  (00:00)
jerijaer ssh:notty    188.166.109.87   Sat Nov 16 10:07 - 10:07  (00:00)
rapport  ssh:notty    168.181.104.30   Sat Nov 16 10:07 - 10:07  (00:00)
rapport  ssh:notty    168.181.104.30   Sat Nov 16 10:07 - 10:07  (00:00)
guest    ssh:notty    174.138.58.149   Sat Nov 16 10:06 - 10:06  (00:00)
guest    ssh:notty    174.138.58.149   Sat Nov 16 10:06 - 10:06  (00:00)
carlisya ssh:notty    116.24.66.114    Sat Nov 16 10:06 - 10:06  (00:00)
carlisya ssh:notty    116.24.66.114    Sat Nov 16 10:06 - 10:06  (00:00)
root     ssh:notty    209.97.161.46    Sat Nov 16 10:06 - 10:06  (00:00)
root     ssh:notty    122.154.241.134  Sat Nov 16 10:05 - 10:05  (00:00)
cottrell ssh:notty    51.38.237.214    Sat Nov 16 10:05 - 10:05  (00:00)
cottrell ssh:notty    51.38.237.214    Sat Nov 16 10:05 - 10:05  (00:00)
balk     ssh:notty    84.45.251.243    Sat Nov 16 10:05 - 10:05  (00:00)
balk     ssh:notty    84.45.251.243    Sat Nov 16 10:05 - 10:05  (00:00)
root     ssh:notty    180.68.177.15    Sat Nov 16 10:04 - 10:04  (00:00)
heitfeld ssh:notty    188.166.109.87   Sat Nov 16 10:04 - 10:04  (00:00)
heitfeld ssh:notty    188.166.109.87   Sat Nov 16 10:04 - 10:04  (00:00)
lose     ssh:notty    168.181.104.30   Sat Nov 16 10:03 - 10:03  (00:00)
lose     ssh:notty    168.181.104.30   Sat Nov 16 10:03 - 10:03  (00:00)
yt       ssh:notty    174.138.58.149   Sat Nov 16 10:02 - 10:02  (00:00)
yt       ssh:notty    174.138.58.149   Sat Nov 16 10:02 - 10:02  (00:00)
root     ssh:notty    51.38.237.214    Sat Nov 16 10:02 - 10:02  (00:00)
limido   ssh:notty    209.97.161.46    Sat Nov 16 10:01 - 10:01  (00:00)
limido   ssh:notty    209.97.161.46    Sat Nov 16 10:01 - 10:01  (00:00)
root     ssh:notty    84.45.251.243    Sat Nov 16 10:01 - 10:01  (00:00)
creech   ssh:notty    122.154.241.134  Sat Nov 16 10:01 - 10:01  (00:00)
creech   ssh:notty    122.154.241.134  Sat Nov 16 10:01 - 10:01  (00:00)
test     ssh:notty    62.80.164.18     Sat Nov 16 10:01 - 10:01  (00:00)
test     ssh:notty    62.80.164.18     Sat Nov 16 10:01 - 10:01  (00:00)
nobody   ssh:notty    116.24.66.114    Sat Nov 16 10:00 - 10:00  (00:00)
test     ssh:notty    188.166.109.87   Sat Nov 16 10:00 - 10:00  (00:00)
test     ssh:notty    188.166.109.87   Sat Nov 16 10:00 - 10:00  (00:00)
root     ssh:notty    222.186.42.4     Sat Nov 16 10:00 - 10:00  (00:00)
root     ssh:notty    222.186.42.4     Sat Nov 16 10:00 - 10:00  (00:00)
root     ssh:notty    222.186.42.4     Sat Nov 16 10:00 - 10:00  (00:00)
root     ssh:notty    222.186.42.4     Sat Nov 16 09:59 - 09:59  (00:00)
root     ssh:notty    222.186.42.4     Sat Nov 16 09:59 - 09:59  (00:00)
jdobson  ssh:notty    174.138.58.149   Sat Nov 16 09:59 - 09:59  (00:00)
jdobson  ssh:notty    174.138.58.149   Sat Nov 16 09:59 - 09:59  (00:00)
games    ssh:notty    180.68.177.15    Sat Nov 16 09:59 - 09:59  (00:00)
8i9o0p   ssh:notty    168.181.104.30   Sat Nov 16 09:58 - 09:58  (00:00)
8i9o0p   ssh:notty    168.181.104.30   Sat Nov 16 09:58 - 09:58  (00:00)
root     ssh:notty    51.38.237.214    Sat Nov 16 09:58 - 09:58  (00:00)
root     ssh:notty    84.45.251.243    Sat Nov 16 09:57 - 09:57  (00:00)
root     ssh:notty    209.97.161.46    Sat Nov 16 09:57 - 09:57  (00:00)
tunnel   ssh:notty    188.166.109.87   Sat Nov 16 09:57 - 09:57  (00:00)
tunnel   ssh:notty    188.166.109.87   Sat Nov 16 09:57 - 09:57  (00:00)
temp     ssh:notty    122.154.241.134  Sat Nov 16 09:57 - 09:57  (00:00)
temp     ssh:notty    122.154.241.134  Sat Nov 16 09:57 - 09:57  (00:00)
         ssh:notty    40.117.129.28    Sat Nov 16 09:56 - 09:56  (00:00)
         ssh:notty    40.117.129.28    Sat Nov 16 09:56 - 09:56  (00:00)
wwwrun   ssh:notty    174.138.58.149   Sat Nov 16 09:56 - 09:56  (00:00)
wwwrun   ssh:notty    174.138.58.149   Sat Nov 16 09:56 - 09:56  (00:00)
kana     ssh:notty    40.117.129.28    Sat Nov 16 09:55 - 09:55  (00:00)
kana     ssh:notty    40.117.129.28    Sat Nov 16 09:55 - 09:55  (00:00)
root     ssh:notty    116.24.66.114    Sat Nov 16 09:55 - 09:55  (00:00)
prashant ssh:notty    40.117.129.28    Sat Nov 16 09:55 - 09:55  (00:00)
prashant ssh:notty    40.117.129.28    Sat Nov 16 09:55 - 09:55  (00:00)
nadmin   ssh:notty    49.235.240.21    Sat Nov 16 09:55 - 09:55  (00:00)
root     ssh:notty    51.38.237.214    Sat Nov 16 09:55 - 09:55  (00:00)
nadmin   ssh:notty    49.235.240.21    Sat Nov 16 09:55 - 09:55  (00:00)
kana1    ssh:notty    40.117.129.28    Sat Nov 16 09:55 - 09:55  (00:00)
kana1    ssh:notty    40.117.129.28    Sat Nov 16 09:55 - 09:55  (00:00)
peter    ssh:notty    40.117.129.28    Sat Nov 16 09:54 - 09:54  (00:00)
peter    ssh:notty    40.117.129.28    Sat Nov 16 09:54 - 09:54  (00:00)
atlbitbu ssh:notty    40.117.129.28    Sat Nov 16 09:54 - 09:54  (00:00)
atlbitbu ssh:notty    40.117.129.28    Sat Nov 16 09:54 - 09:54  (00:00)
yyy!@#$% ssh:notty    168.181.104.30   Sat Nov 16 09:54 - 09:54  (00:00)
yyy!@#$% ssh:notty    168.181.104.30   Sat Nov 16 09:54 - 09:54  (00:00)
furuichi ssh:notty    84.45.251.243    Sat Nov 16 09:54 - 09:54  (00:00)
furuichi ssh:notty    84.45.251.243    Sat Nov 16 09:54 - 09:54  (00:00)
siteguru ssh:notty    40.117.129.28    Sat Nov 16 09:54 - 09:54  (00:00)
ft       ssh:notty    188.166.109.87   Sat Nov 16 09:54 - 09:54  (00:00)
siteguru ssh:notty    40.117.129.28    Sat Nov 16 09:53 - 09:53  (00:00)
ft       ssh:notty    188.166.109.87   Sat Nov 16 09:53 - 09:53  (00:00)
sshd     ssh:notty    209.97.161.46    Sat Nov 16 09:53 - 09:53  (00:00)
Atlassof ssh:notty    40.117.129.28    Sat Nov 16 09:53 - 09:53  (00:00)
Atlassof ssh:notty    40.117.129.28    Sat Nov 16 09:53 - 09:53  (00:00)
walberg  ssh:notty    180.68.177.15    Sat Nov 16 09:53 - 09:53  (00:00)
walberg  ssh:notty    180.68.177.15    Sat Nov 16 09:53 - 09:53  (00:00)
ssm-user ssh:notty    40.117.129.28    Sat Nov 16 09:53 - 09:53  (00:00)
ssm-user ssh:notty    40.117.129.28    Sat Nov 16 09:53 - 09:53  (00:00)
reggello ssh:notty    122.154.241.134  Sat Nov 16 09:53 - 09:53  (00:00)
reggello ssh:notty    122.154.241.134  Sat Nov 16 09:53 - 09:53  (00:00)
cloud_us ssh:notty    40.117.129.28    Sat Nov 16 09:52 - 09:52  (00:00)
cloud_us ssh:notty    40.117.129.28    Sat Nov 16 09:52 - 09:52  (00:00)
letton   ssh:notty    174.138.58.149   Sat Nov 16 09:52 - 09:52  (00:00)
letton   ssh:notty    174.138.58.149   Sat Nov 16 09:52 - 09:52  (00:00)
admin    ssh:notty    40.117.129.28    Sat Nov 16 09:52 - 09:52  (00:00)
admin    ssh:notty    40.117.129.28    Sat Nov 16 09:52 - 09:52  (00:00)
ec2-user ssh:notty    40.117.129.28    Sat Nov 16 09:52 - 09:52  (00:00)
ec2-user ssh:notty    40.117.129.28    Sat Nov 16 09:51 - 09:51  (00:00)
mailer   ssh:notty    51.38.237.214    Sat Nov 16 09:51 - 09:51  (00:00)
mailer   ssh:notty    51.38.237.214    Sat Nov 16 09:51 - 09:51  (00:00)
bitcoin  ssh:notty    40.117.129.28    Sat Nov 16 09:51 - 09:51  (00:00)
bitcoin  ssh:notty    40.117.129.28    Sat Nov 16 09:51 - 09:51  (00:00)
arch     ssh:notty    40.117.129.28    Sat Nov 16 09:51 - 09:51  (00:00)
arch     ssh:notty    40.117.129.28    Sat Nov 16 09:51 - 09:51  (00:00)
docker   ssh:notty    40.117.129.28    Sat Nov 16 09:50 - 09:50  (00:00)
docker   ssh:notty    40.117.129.28    Sat Nov 16 09:50 - 09:50  (00:00)
root     ssh:notty    49.235.240.21    Sat Nov 16 09:50 - 09:50  (00:00)
rd       ssh:notty    188.166.109.87   Sat Nov 16 09:50 - 09:50  (00:00)
rd       ssh:notty    188.166.109.87   Sat Nov 16 09:50 - 09:50  (00:00)
ec2-user ssh:notty    84.45.251.243    Sat Nov 16 09:50 - 09:50  (00:00)
ec2-user ssh:notty    84.45.251.243    Sat Nov 16 09:50 - 09:50  (00:00)
ark      ssh:notty    40.117.129.28    Sat Nov 16 09:50 - 09:50  (00:00)
ark      ssh:notty    40.117.129.28    Sat Nov 16 09:50 - 09:50  (00:00)
root     ssh:notty    222.186.173.142  Sat Nov 16 09:50 - 09:50  (00:00)
root     ssh:notty    222.186.173.142  Sat Nov 16 09:50 - 09:50  (00:00)
sh-admin ssh:notty    40.117.129.28    Sat Nov 16 09:50 - 09:50  (00:00)
sh-admin ssh:notty    40.117.129.28    Sat Nov 16 09:50 - 09:50  (00:00)
root     ssh:notty    222.186.173.142  Sat Nov 16 09:50 - 09:50  (00:00)
root     ssh:notty    222.186.173.142  Sat Nov 16 09:49 - 09:49  (00:00)
server   ssh:notty    116.24.66.114    Sat Nov 16 09:49 - 09:49  (00:00)
server   ssh:notty    116.24.66.114    Sat Nov 16 09:49 - 09:49  (00:00)
root     ssh:notty    222.186.173.142  Sat Nov 16 09:49 - 09:49  (00:00)
q1w2     ssh:notty    168.181.104.30   Sat Nov 16 09:49 - 09:49  (00:00)
q1w2     ssh:notty    168.181.104.30   Sat Nov 16 09:49 - 09:49  (00:00)
s1-admin ssh:notty    40.117.129.28    Sat Nov 16 09:49 - 09:49  (00:00)
s1-admin ssh:notty    40.117.129.28    Sat Nov 16 09:49 - 09:49  (00:00)
backup   ssh:notty    209.97.161.46    Sat Nov 16 09:49 - 09:49  (00:00)
mondkalb ssh:notty    40.117.129.28    Sat Nov 16 09:49 - 09:49  (00:00)
mondkalb ssh:notty    40.117.129.28    Sat Nov 16 09:49 - 09:49  (00:00)
rigstad  ssh:notty    174.138.58.149   Sat Nov 16 09:49 - 09:49  (00:00)
rigstad  ssh:notty    174.138.58.149   Sat Nov 16 09:49 - 09:49  (00:00)
admin    ssh:notty    122.154.241.134  Sat Nov 16 09:48 - 09:48  (00:00)
admin    ssh:notty    122.154.241.134  Sat Nov 16 09:48 - 09:48  (00:00)
packer   ssh:notty    40.117.129.28    Sat Nov 16 09:48 - 09:48  (00:00)
packer   ssh:notty    40.117.129.28    Sat Nov 16 09:48 - 09:48  (00:00)
debian-s ssh:notty    40.117.129.28    Sat Nov 16 09:48 - 09:48  (00:00)
debian-s ssh:notty    40.117.129.28    Sat Nov 16 09:48 - 09:48  (00:00)
root     ssh:notty    51.38.237.214    Sat Nov 16 09:48 - 09:48  (00:00)
debian   ssh:notty    40.117.129.28    Sat Nov 16 09:48 - 09:48  (00:00)
debian   ssh:notty    40.117.129.28    Sat Nov 16 09:48 - 09:48  (00:00)
zaloni   ssh:notty    40.117.129.28    Sat Nov 16 09:47 - 09:47  (00:00)
sundholm ssh:notty    180.68.177.15    Sat Nov 16 09:47 - 09:47  (00:00)
zaloni   ssh:notty    40.117.129.28    Sat Nov 16 09:47 - 09:47  (00:00)
sundholm ssh:notty    180.68.177.15    Sat Nov 16 09:47 - 09:47  (00:00)
ftp      ssh:notty    40.117.129.28    Sat Nov 16 09:47 - 09:47  (00:00)
ericha   ssh:notty    188.166.109.87   Sat Nov 16 09:47 - 09:47  (00:00)
ftp      ssh:notty    40.117.129.28    Sat Nov 16 09:47 - 09:47  (00:00)
ericha   ssh:notty    188.166.109.87   Sat Nov 16 09:47 - 09:47  (00:00)
uftp     ssh:notty    40.117.129.28    Sat Nov 16 09:46 - 09:46  (00:00)
uftp     ssh:notty    40.117.129.28    Sat Nov 16 09:46 - 09:46  (00:00)
seiwa    ssh:notty    84.45.251.243    Sat Nov 16 09:46 - 09:46  (00:00)
seiwa    ssh:notty    84.45.251.243    Sat Nov 16 09:46 - 09:46  (00:00)
andreas  ssh:notty    40.117.129.28    Sat Nov 16 09:46 - 09:46  (00:00)
andreas  ssh:notty    40.117.129.28    Sat Nov 16 09:46 - 09:46  (00:00)
guest    ssh:notty    49.235.240.21    Sat Nov 16 09:46 - 09:46  (00:00)
guest    ssh:notty    49.235.240.21    Sat Nov 16 09:46 - 09:46  (00:00)
chbackup ssh:notty    40.117.129.28    Sat Nov 16 09:46 - 09:46  (00:00)
chbackup ssh:notty    40.117.129.28    Sat Nov 16 09:46 - 09:46  (00:00)
chilling ssh:notty    40.117.129.28    Sat Nov 16 09:45 - 09:45  (00:00)
chilling ssh:notty    40.117.129.28    Sat Nov 16 09:45 - 09:45  (00:00)
root     ssh:notty    174.138.58.149   Sat Nov 16 09:45 - 09:45  (00:00)
root     ssh:notty    209.97.161.46    Sat Nov 16 09:45 - 09:45  (00:00)
mail111  ssh:notty    168.181.104.30   Sat Nov 16 09:45 - 09:45  (00:00)
mail111  ssh:notty    168.181.104.30   Sat Nov 16 09:45 - 09:45  (00:00)
coin     ssh:notty    40.117.129.28    Sat Nov 16 09:45 - 09:45  (00:00)
coin     ssh:notty    40.117.129.28    Sat Nov 16 09:45 - 09:45  (00:00)
scanner  ssh:notty    40.117.129.28    Sat Nov 16 09:44 - 09:44  (00:00)
mail     ssh:notty    51.38.237.214    Sat Nov 16 09:44 - 09:44  (00:00)
scanner  ssh:notty    40.117.129.28    Sat Nov 16 09:44 - 09:44  (00:00)
wolf     ssh:notty    122.154.241.134  Sat Nov 16 09:44 - 09:44  (00:00)
wolf     ssh:notty    122.154.241.134  Sat Nov 16 09:44 - 09:44  (00:00)
cronjob  ssh:notty    40.117.129.28    Sat Nov 16 09:44 - 09:44  (00:00)
cronjob  ssh:notty    40.117.129.28    Sat Nov 16 09:44 - 09:44  (00:00)
hanson   ssh:notty    40.117.129.28    Sat Nov 16 09:44 - 09:44  (00:00)
hanson   ssh:notty    40.117.129.28    Sat Nov 16 09:44 - 09:44  (00:00)
webmaste ssh:notty    116.24.66.114    Sat Nov 16 09:43 - 09:43  (00:00)
webmaste ssh:notty    116.24.66.114    Sat Nov 16 09:43 - 09:43  (00:00)
root     ssh:notty    188.166.109.87   Sat Nov 16 09:43 - 09:43  (00:00)
node     ssh:notty    40.117.129.28    Sat Nov 16 09:43 - 09:43  (00:00)
node     ssh:notty    40.117.129.28    Sat Nov 16 09:43 - 09:43  (00:00)
sftp     ssh:notty    40.117.129.28    Sat Nov 16 09:43 - 09:43  (00:00)
sftp     ssh:notty    40.117.129.28    Sat Nov 16 09:43 - 09:43  (00:00)
ftpuser  ssh:notty    40.117.129.28    Sat Nov 16 09:43 - 09:43  (00:00)
petya    ssh:notty    84.45.251.243    Sat Nov 16 09:42 - 09:42  (00:00)
ftpuser  ssh:notty    40.117.129.28    Sat Nov 16 09:42 - 09:42  (00:00)
petya    ssh:notty    84.45.251.243    Sat Nov 16 09:42 - 09:42  (00:00)
odoo     ssh:notty    40.117.129.28    Sat Nov 16 09:42 - 09:42  (00:00)
odoo     ssh:notty    40.117.129.28    Sat Nov 16 09:42 - 09:42  (00:00)
wwwrun   ssh:notty    174.138.58.149   Sat Nov 16 09:42 - 09:42  (00:00)
wwwrun   ssh:notty    174.138.58.149   Sat Nov 16 09:42 - 09:42  (00:00)
centos   ssh:notty    40.117.129.28    Sat Nov 16 09:42 - 09:42  (00:00)
centos   ssh:notty    40.117.129.28    Sat Nov 16 09:42 - 09:42  (00:00)
sync     ssh:notty    180.68.177.15    Sat Nov 16 09:42 - 09:42  (00:00)
wildfly  ssh:notty    40.117.129.28    Sat Nov 16 09:41 - 09:41  (00:00)
wildfly  ssh:notty    40.117.129.28    Sat Nov 16 09:41 - 09:41  (00:00)
root     ssh:notty    49.235.240.21    Sat Nov 16 09:41 - 09:41  (00:00)
root     ssh:notty    51.38.237.214    Sat Nov 16 09:41 - 09:41  (00:00)
giusto   ssh:notty    209.97.161.46    Sat Nov 16 09:41 - 09:41  (00:00)
lldpd    ssh:notty    40.117.129.28    Sat Nov 16 09:41 - 09:41  (00:00)
giusto   ssh:notty    209.97.161.46    Sat Nov 16 09:41 - 09:41  (00:00)
lldpd    ssh:notty    40.117.129.28    Sat Nov 16 09:41 - 09:41  (00:00)
sonu     ssh:notty    168.181.104.30   Sat Nov 16 09:41 - 09:41  (00:00)
sonu     ssh:notty    168.181.104.30   Sat Nov 16 09:41 - 09:41  (00:00)
vagrant  ssh:notty    40.117.129.28    Sat Nov 16 09:41 - 09:41  (00:00)
vagrant  ssh:notty    40.117.129.28    Sat Nov 16 09:41 - 09:41  (00:00)
libuuid  ssh:notty    40.117.129.28    Sat Nov 16 09:40 - 09:40  (00:00)
libuuid  ssh:notty    40.117.129.28    Sat Nov 16 09:40 - 09:40  (00:00)
fucile   ssh:notty    122.154.241.134  Sat Nov 16 09:40 - 09:40  (00:00)
fucile   ssh:notty    122.154.241.134  Sat Nov 16 09:40 - 09:40  (00:00)
installa ssh:notty    188.166.109.87   Sat Nov 16 09:40 - 09:40  (00:00)
installa ssh:notty    188.166.109.87   Sat Nov 16 09:40 - 09:40  (00:00)
cisco    ssh:notty    40.117.129.28    Sat Nov 16 09:40 - 09:40  (00:00)
cisco    ssh:notty    40.117.129.28    Sat Nov 16 09:40 - 09:40  (00:00)
prueba   ssh:notty    40.117.129.28    Sat Nov 16 09:39 - 09:39  (00:00)
prueba   ssh:notty    40.117.129.28    Sat Nov 16 09:39 - 09:39  (00:00)
tajiriu  ssh:notty    40.117.129.28    Sat Nov 16 09:39 - 09:39  (00:00)
tajiriu  ssh:notty    40.117.129.28    Sat Nov 16 09:39 - 09:39  (00:00)
zikri    ssh:notty    84.45.251.243    Sat Nov 16 09:39 - 09:39  (00:00)
zikri    ssh:notty    84.45.251.243    Sat Nov 16 09:39 - 09:39  (00:00)
vncuser  ssh:notty    40.117.129.28    Sat Nov 16 09:39 - 09:39  (00:00)
vncuser  ssh:notty    40.117.129.28    Sat Nov 16 09:39 - 09:39  (00:00)
spencer  ssh:notty    174.138.58.149   Sat Nov 16 09:38 - 09:38  (00:00)
spencer  ssh:notty    174.138.58.149   Sat Nov 16 09:38 - 09:38  (00:00)
opc      ssh:notty    40.117.129.28    Sat Nov 16 09:38 - 09:38  (00:00)
opc      ssh:notty    40.117.129.28    Sat Nov 16 09:38 - 09:38  (00:00)
mc       ssh:notty    40.117.129.28    Sat Nov 16 09:38 - 09:38  (00:00)
mc       ssh:notty    40.117.129.28    Sat Nov 16 09:38 - 09:38  (00:00)
midgard  ssh:notty    116.24.66.114    Sat Nov 16 09:38 - 09:38  (00:00)
midgard  ssh:notty    116.24.66.114    Sat Nov 16 09:38 - 09:38  (00:00)
leytem   ssh:notty    51.38.237.214    Sat Nov 16 09:38 - 09:38  (00:00)
leytem   ssh:notty    51.38.237.214    Sat Nov 16 09:37 - 09:37  (00:00)
vnc      ssh:notty    40.117.129.28    Sat Nov 16 09:37 - 09:37  (00:00)
vnc      ssh:notty    40.117.129.28    Sat Nov 16 09:37 - 09:37  (00:00)
hadoop-u ssh:notty    40.117.129.28    Sat Nov 16 09:37 - 09:37  (00:00)
hadoop-u ssh:notty    40.117.129.28    Sat Nov 16 09:37 - 09:37  (00:00)
altvater ssh:notty    209.97.161.46    Sat Nov 16 09:37 - 09:37  (00:00)
altvater ssh:notty    209.97.161.46    Sat Nov 16 09:37 - 09:37  (00:00)
debian   ssh:notty    49.235.240.21    Sat Nov 16 09:37 - 09:37  (00:00)
debian   ssh:notty    49.235.240.21    Sat Nov 16 09:37 - 09:37  (00:00)
root     ssh:notty    40.117.129.28    Sat Nov 16 09:37 - 09:37  (00:00)
boarder  ssh:notty    188.166.109.87   Sat Nov 16 09:37 - 09:37  (00:00)
biofys   ssh:notty    62.80.164.18     Sat Nov 16 09:37 - 09:37  (00:00)
boarder  ssh:notty    188.166.109.87   Sat Nov 16 09:37 - 09:37  (00:00)
biofys   ssh:notty    62.80.164.18     Sat Nov 16 09:37 - 09:37  (00:00)
qihw     ssh:notty    40.117.129.28    Sat Nov 16 09:36 - 09:36  (00:00)
qihw     ssh:notty    40.117.129.28    Sat Nov 16 09:36 - 09:36  (00:00)
12345678 ssh:notty    168.181.104.30   Sat Nov 16 09:36 - 09:36  (00:00)
12345678 ssh:notty    168.181.104.30   Sat Nov 16 09:36 - 09:36  (00:00)
nfs      ssh:notty    122.154.241.134  Sat Nov 16 09:36 - 09:36  (00:00)
nfs      ssh:notty    122.154.241.134  Sat Nov 16 09:36 - 09:36  (00:00)
root     ssh:notty    180.68.177.15    Sat Nov 16 09:36 - 09:36  (00:00)
mc_admin ssh:notty    40.117.129.28    Sat Nov 16 09:36 - 09:36  (00:00)
mc_admin ssh:notty    40.117.129.28    Sat Nov 16 09:36 - 09:36  (00:00)
manish   ssh:notty    40.117.129.28    Sat Nov 16 09:36 - 09:36  (00:00)
manish   ssh:notty    40.117.129.28    Sat Nov 16 09:35 - 09:35  (00:00)
nitai    ssh:notty    40.117.129.28    Sat Nov 16 09:35 - 09:35  (00:00)
ned      ssh:notty    84.45.251.243    Sat Nov 16 09:35 - 09:35  (00:00)
ned      ssh:notty    84.45.251.243    Sat Nov 16 09:35 - 09:35  (00:00)
nitai    ssh:notty    40.117.129.28    Sat Nov 16 09:35 - 09:35  (00:00)
adrianna ssh:notty    174.138.58.149   Sat Nov 16 09:35 - 09:35  (00:00)
adrianna ssh:notty    174.138.58.149   Sat Nov 16 09:35 - 09:35  (00:00)
qenawy   ssh:notty    40.117.129.28    Sat Nov 16 09:35 - 09:35  (00:00)
qenawy   ssh:notty    40.117.129.28    Sat Nov 16 09:35 - 09:35  (00:00)
shakeel  ssh:notty    40.117.129.28    Sat Nov 16 09:34 - 09:34  (00:00)
shakeel  ssh:notty    40.117.129.28    Sat Nov 16 09:34 - 09:34  (00:00)
root     ssh:notty    51.38.237.214    Sat Nov 16 09:34 - 09:34  (00:00)
aakirti  ssh:notty    40.117.129.28    Sat Nov 16 09:34 - 09:34  (00:00)
aakirti  ssh:notty    40.117.129.28    Sat Nov 16 09:34 - 09:34  (00:00)
temp     ssh:notty    40.117.129.28    Sat Nov 16 09:33 - 09:33  (00:00)
temp     ssh:notty    40.117.129.28    Sat Nov 16 09:33 - 09:33  (00:00)
root     ssh:notty    188.166.109.87   Sat Nov 16 09:33 - 09:33  (00:00)
steam    ssh:notty    40.117.129.28    Sat Nov 16 09:33 - 09:33  (00:00)
steam    ssh:notty    40.117.129.28    Sat Nov 16 09:33 - 09:33  (00:00)
barbeau  ssh:notty    209.97.161.46    Sat Nov 16 09:33 - 09:33  (00:00)
barbeau  ssh:notty    209.97.161.46    Sat Nov 16 09:33 - 09:33  (00:00)
minecraf ssh:notty    40.117.129.28    Sat Nov 16 09:33 - 09:33  (00:00)
minecraf ssh:notty    40.117.129.28    Sat Nov 16 09:33 - 09:33  (00:00)
root     ssh:notty    40.117.129.28    Sat Nov 16 09:32 - 09:32  (00:00)
ff       ssh:notty    49.235.240.21    Sat Nov 16 09:32 - 09:32  (00:00)
ff       ssh:notty    49.235.240.21    Sat Nov 16 09:32 - 09:32  (00:00)
root     ssh:notty    40.117.129.28    Sat Nov 16 09:32 - 09:32  (00:00)
server   ssh:notty    122.154.241.134  Sat Nov 16 09:32 - 09:32  (00:00)
server   ssh:notty    122.154.241.134  Sat Nov 16 09:32 - 09:32  (00:00)
shima    ssh:notty    116.24.66.114    Sat Nov 16 09:32 - 09:32  (00:00)
shima    ssh:notty    116.24.66.114    Sat Nov 16 09:32 - 09:32  (00:00)
audy     ssh:notty    168.181.104.30   Sat Nov 16 09:32 - 09:32  (00:00)
audy     ssh:notty    168.181.104.30   Sat Nov 16 09:32 - 09:32  (00:00)
vyatta   ssh:notty    40.117.129.28    Sat Nov 16 09:32 - 09:32  (00:00)
vyatta   ssh:notty    40.117.129.28    Sat Nov 16 09:32 - 09:32  (00:00)
root     ssh:notty    174.138.58.149   Sat Nov 16 09:31 - 09:31  (00:00)
root     ssh:notty    84.45.251.243    Sat Nov 16 09:31 - 09:31  (00:00)
vpn      ssh:notty    40.117.129.28    Sat Nov 16 09:31 - 09:31  (00:00)
vpn      ssh:notty    40.117.129.28    Sat Nov 16 09:31 - 09:31  (00:00)
zero     ssh:notty    40.117.129.28    Sat Nov 16 09:31 - 09:31  (00:00)
zero     ssh:notty    40.117.129.28    Sat Nov 16 09:31 - 09:31  (00:00)
zerudhy  ssh:notty    51.38.237.214    Sat Nov 16 09:31 - 09:31  (00:00)
zerudhy  ssh:notty    51.38.237.214    Sat Nov 16 09:30 - 09:30  (00:00)
www-data ssh:notty    40.117.129.28    Sat Nov 16 09:30 - 09:30  (00:00)
guest    ssh:notty    180.68.177.15    Sat Nov 16 09:30 - 09:30  (00:00)
guest    ssh:notty    180.68.177.15    Sat Nov 16 09:30 - 09:30  (00:00)
redis    ssh:notty    40.117.129.28    Sat Nov 16 09:30 - 09:30  (00:00)
redis    ssh:notty    40.117.129.28    Sat Nov 16 09:30 - 09:30  (00:00)
guest    ssh:notty    188.166.109.87   Sat Nov 16 09:30 - 09:30  (00:00)
guest    ssh:notty    188.166.109.87   Sat Nov 16 09:30 - 09:30  (00:00)
sell     ssh:notty    40.117.129.28    Sat Nov 16 09:30 - 09:30  (00:00)
sell     ssh:notty    40.117.129.28    Sat Nov 16 09:30 - 09:30  (00:00)
jiqun    ssh:notty    40.117.129.28    Sat Nov 16 09:29 - 09:29  (00:00)
jiqun    ssh:notty    40.117.129.28    Sat Nov 16 09:29 - 09:29  (00:00)
www      ssh:notty    40.117.129.28    Sat Nov 16 09:29 - 09:29  (00:00)
www      ssh:notty    40.117.129.28    Sat Nov 16 09:29 - 09:29  (00:00)
wilie    ssh:notty    209.97.161.46    Sat Nov 16 09:29 - 09:29  (00:00)
wilie    ssh:notty    209.97.161.46    Sat Nov 16 09:29 - 09:29  (00:00)
kienle   ssh:notty    62.80.164.18     Sat Nov 16 09:29 - 09:29  (00:00)
kienle   ssh:notty    62.80.164.18     Sat Nov 16 09:29 - 09:29  (00:00)
webcupus ssh:notty    40.117.129.28    Sat Nov 16 09:29 - 09:29  (00:00)
webcupus ssh:notty    40.117.129.28    Sat Nov 16 09:28 - 09:28  (00:00)
postgres ssh:notty    40.117.129.28    Sat Nov 16 09:28 - 09:28  (00:00)
postgres ssh:notty    40.117.129.28    Sat Nov 16 09:28 - 09:28  (00:00)
mysql    ssh:notty    174.138.58.149   Sat Nov 16 09:28 - 09:28  (00:00)
mysql    ssh:notty    174.138.58.149   Sat Nov 16 09:28 - 09:28  (00:00)
www-data ssh:notty    122.154.241.134  Sat Nov 16 09:28 - 09:28  (00:00)
helayne  ssh:notty    49.235.240.21    Sat Nov 16 09:28 - 09:28  (00:00)
helayne  ssh:notty    49.235.240.21    Sat Nov 16 09:28 - 09:28  (00:00)
postgres ssh:notty    40.117.129.28    Sat Nov 16 09:28 - 09:28  (00:00)
postgres ssh:notty    40.117.129.28    Sat Nov 16 09:28 - 09:28  (00:00)
daemon   ssh:notty    84.45.251.243    Sat Nov 16 09:28 - 09:28  (00:00)
nobody88 ssh:notty    168.181.104.30   Sat Nov 16 09:27 - 09:27  (00:00)
git      ssh:notty    40.117.129.28    Sat Nov 16 09:27 - 09:27  (00:00)
nobody88 ssh:notty    168.181.104.30   Sat Nov 16 09:27 - 09:27  (00:00)
git      ssh:notty    40.117.129.28    Sat Nov 16 09:27 - 09:27  (00:00)
feld     ssh:notty    51.38.237.214    Sat Nov 16 09:27 - 09:27  (00:00)
feld     ssh:notty    51.38.237.214    Sat Nov 16 09:27 - 09:27  (00:00)
git      ssh:notty    40.117.129.28    Sat Nov 16 09:27 - 09:27  (00:00)
git      ssh:notty    40.117.129.28    Sat Nov 16 09:27 - 09:27  (00:00)
syslog   ssh:notty    40.117.129.28    Sat Nov 16 09:27 - 09:27  (00:00)
hoplite  ssh:notty    188.166.109.87   Sat Nov 16 09:26 - 09:26  (00:00)
hoplite  ssh:notty    188.166.109.87   Sat Nov 16 09:26 - 09:26  (00:00)
telegraf ssh:notty    40.117.129.28    Sat Nov 16 09:26 - 09:26  (00:00)
telegraf ssh:notty    40.117.129.28    Sat Nov 16 09:26 - 09:26  (00:00)
pagsisih ssh:notty    116.24.66.114    Sat Nov 16 09:26 - 09:26  (00:00)
pagsisih ssh:notty    116.24.66.114    Sat Nov 16 09:26 - 09:26  (00:00)
zabbix   ssh:notty    40.117.129.28    Sat Nov 16 09:26 - 09:26  (00:00)
zabbix   ssh:notty    40.117.129.28    Sat Nov 16 09:26 - 09:26  (00:00)
liveopt  ssh:notty    40.117.129.28    Sat Nov 16 09:25 - 09:25  (00:00)
liveopt  ssh:notty    40.117.129.28    Sat Nov 16 09:25 - 09:25  (00:00)
optuser  ssh:notty    40.117.129.28    Sat Nov 16 09:25 - 09:25  (00:00)
optuser  ssh:notty    40.117.129.28    Sat Nov 16 09:25 - 09:25  (00:00)
leroy    ssh:notty    209.97.161.46    Sat Nov 16 09:25 - 09:25  (00:00)
leroy    ssh:notty    209.97.161.46    Sat Nov 16 09:25 - 09:25  (00:00)
rube     ssh:notty    174.138.58.149   Sat Nov 16 09:25 - 09:25  (00:00)
cut      ssh:notty    180.68.177.15    Sat Nov 16 09:25 - 09:25  (00:00)
rube     ssh:notty    174.138.58.149   Sat Nov 16 09:25 - 09:25  (00:00)
cut      ssh:notty    180.68.177.15    Sat Nov 16 09:25 - 09:25  (00:00)
halfkin  ssh:notty    40.117.129.28    Sat Nov 16 09:25 - 09:25  (00:00)
halfkin  ssh:notty    40.117.129.28    Sat Nov 16 09:25 - 09:25  (00:00)
zeta     ssh:notty    40.117.129.28    Sat Nov 16 09:24 - 09:24  (00:00)
zeta     ssh:notty    40.117.129.28    Sat Nov 16 09:24 - 09:24  (00:00)
yuanwd   ssh:notty    138.68.50.18     Sat Nov 16 09:24 - 09:24  (00:00)
yuanwd   ssh:notty    138.68.50.18     Sat Nov 16 09:24 - 09:24  (00:00)
home     ssh:notty    84.45.251.243    Sat Nov 16 09:24 - 09:24  (00:00)
home     ssh:notty    84.45.251.243    Sat Nov 16 09:24 - 09:24  (00:00)
wxnd     ssh:notty    40.117.129.28    Sat Nov 16 09:24 - 09:24  (00:00)
wxnd     ssh:notty    40.117.129.28    Sat Nov 16 09:24 - 09:24  (00:00)
root     ssh:notty    51.38.237.214    Sat Nov 16 09:24 - 09:24  (00:00)
yinthu   ssh:notty    40.117.129.28    Sat Nov 16 09:23 - 09:23  (00:00)
yinthu   ssh:notty    40.117.129.28    Sat Nov 16 09:23 - 09:23  (00:00)
calendar ssh:notty    49.235.240.21    Sat Nov 16 09:23 - 09:23  (00:00)
calendar ssh:notty    49.235.240.21    Sat Nov 16 09:23 - 09:23  (00:00)
root     ssh:notty    188.166.109.87   Sat Nov 16 09:23 - 09:23  (00:00)
qadir    ssh:notty    168.181.104.30   Sat Nov 16 09:23 - 09:23  (00:00)
qadir    ssh:notty    168.181.104.30   Sat Nov 16 09:23 - 09:23  (00:00)
jumpuser ssh:notty    40.117.129.28    Sat Nov 16 09:23 - 09:23  (00:00)
jumpuser ssh:notty    40.117.129.28    Sat Nov 16 09:23 - 09:23  (00:00)
elk      ssh:notty    40.117.129.28    Sat Nov 16 09:23 - 09:23  (00:00)
elk      ssh:notty    40.117.129.28    Sat Nov 16 09:23 - 09:23  (00:00)
elk      ssh:notty    40.117.129.28    Sat Nov 16 09:22 - 09:22  (00:00)
elk      ssh:notty    40.117.129.28    Sat Nov 16 09:22 - 09:22  (00:00)
ubuntu   ssh:notty    40.117.129.28    Sat Nov 16 09:22 - 09:22  (00:00)
ubuntu   ssh:notty    40.117.129.28    Sat Nov 16 09:22 - 09:22  (00:00)
ts4      ssh:notty    40.117.129.28    Sat Nov 16 09:21 - 09:21  (00:00)
ts4      ssh:notty    40.117.129.28    Sat Nov 16 09:21 - 09:21  (00:00)
root     ssh:notty    174.138.58.149   Sat Nov 16 09:21 - 09:21  (00:00)
test1    ssh:notty    40.117.129.28    Sat Nov 16 09:21 - 09:21  (00:00)
test1    ssh:notty    40.117.129.28    Sat Nov 16 09:21 - 09:21  (00:00)
root     ssh:notty    209.97.161.46    Sat Nov 16 09:21 - 09:21  (00:00)
testuser ssh:notty    40.117.129.28    Sat Nov 16 09:21 - 09:21  (00:00)
testuser ssh:notty    40.117.129.28    Sat Nov 16 09:21 - 09:21  (00:00)
muranami ssh:notty    62.80.164.18     Sat Nov 16 09:20 - 09:20  (00:00)
muranami ssh:notty    62.80.164.18     Sat Nov 16 09:20 - 09:20  (00:00)
testing  ssh:notty    40.117.129.28    Sat Nov 16 09:20 - 09:20  (00:00)
testing  ssh:notty    40.117.129.28    Sat Nov 16 09:20 - 09:20  (00:00)
cherri   ssh:notty    84.45.251.243    Sat Nov 16 09:20 - 09:20  (00:00)
cherri   ssh:notty    84.45.251.243    Sat Nov 16 09:20 - 09:20  (00:00)
lylette  ssh:notty    116.24.66.114    Sat Nov 16 09:20 - 09:20  (00:00)
root     ssh:notty    51.38.237.214    Sat Nov 16 09:20 - 09:20  (00:00)
lylette  ssh:notty    116.24.66.114    Sat Nov 16 09:20 - 09:20  (00:00)
tester   ssh:notty    40.117.129.28    Sat Nov 16 09:20 - 09:20  (00:00)
tdh      ssh:notty    138.68.50.18     Sat Nov 16 09:20 - 09:20  (00:00)
tester   ssh:notty    40.117.129.28    Sat Nov 16 09:20 - 09:20  (00:00)
tdh      ssh:notty    138.68.50.18     Sat Nov 16 09:20 - 09:20  (00:00)
bin      ssh:notty    188.166.109.87   Sat Nov 16 09:20 - 09:20  (00:00)
test     ssh:notty    40.117.129.28    Sat Nov 16 09:20 - 09:20  (00:00)
test     ssh:notty    40.117.129.28    Sat Nov 16 09:19 - 09:19  (00:00)
bot1     ssh:notty    40.117.129.28    Sat Nov 16 09:19 - 09:19  (00:00)
bot1     ssh:notty    40.117.129.28    Sat Nov 16 09:19 - 09:19  (00:00)
landfald ssh:notty    49.235.240.21    Sat Nov 16 09:19 - 09:19  (00:00)
landfald ssh:notty    49.235.240.21    Sat Nov 16 09:19 - 09:19  (00:00)
bot      ssh:notty    40.117.129.28    Sat Nov 16 09:19 - 09:19  (00:00)
bot      ssh:notty    40.117.129.28    Sat Nov 16 09:19 - 09:19  (00:00)
tirsa    ssh:notty    180.68.177.15    Sat Nov 16 09:19 - 09:19  (00:00)
belkin   ssh:notty    168.181.104.30   Sat Nov 16 09:19 - 09:19  (00:00)
tirsa    ssh:notty    180.68.177.15    Sat Nov 16 09:19 - 09:19  (00:00)
belkin   ssh:notty    168.181.104.30   Sat Nov 16 09:19 - 09:19  (00:00)
droplet  ssh:notty    40.117.129.28    Sat Nov 16 09:18 - 09:18  (00:00)
droplet  ssh:notty    40.117.129.28    Sat Nov 16 09:18 - 09:18  (00:00)
dropbox  ssh:notty    40.117.129.28    Sat Nov 16 09:18 - 09:18  (00:00)
dropbox  ssh:notty    40.117.129.28    Sat Nov 16 09:18 - 09:18  (00:00)
backup   ssh:notty    174.138.58.149   Sat Nov 16 09:18 - 09:18  (00:00)
radio    ssh:notty    40.117.129.28    Sat Nov 16 09:18 - 09:18  (00:00)
radio    ssh:notty    40.117.129.28    Sat Nov 16 09:18 - 09:18  (00:00)
sinusbot ssh:notty    40.117.129.28    Sat Nov 16 09:17 - 09:17  (00:00)
sinusbot ssh:notty    40.117.129.28    Sat Nov 16 09:17 - 09:17  (00:00)
teamspea ssh:notty    40.117.129.28    Sat Nov 16 09:17 - 09:17  (00:00)
teamspea ssh:notty    40.117.129.28    Sat Nov 16 09:17 - 09:17  (00:00)
root     ssh:notty    209.97.161.46    Sat Nov 16 09:17 - 09:17  (00:00)
root     ssh:notty    51.38.237.214    Sat Nov 16 09:17 - 09:17  (00:00)
sync     ssh:notty    84.45.251.243    Sat Nov 16 09:16 - 09:16  (00:00)
ts       ssh:notty    40.117.129.28    Sat Nov 16 09:16 - 09:16  (00:00)
ts       ssh:notty    40.117.129.28    Sat Nov 16 09:16 - 09:16  (00:00)
ar       ssh:notty    188.166.109.87   Sat Nov 16 09:16 - 09:16  (00:00)
ar       ssh:notty    188.166.109.87   Sat Nov 16 09:16 - 09:16  (00:00)
teamspea ssh:notty    40.117.129.28    Sat Nov 16 09:16 - 09:16  (00:00)
teamspea ssh:notty    40.117.129.28    Sat Nov 16 09:16 - 09:16  (00:00)
cayuga   ssh:notty    138.68.50.18     Sat Nov 16 09:16 - 09:16  (00:00)
cayuga   ssh:notty    138.68.50.18     Sat Nov 16 09:16 - 09:16  (00:00)
cloud    ssh:notty    40.117.129.28    Sat Nov 16 09:16 - 09:16  (00:00)
cloud    ssh:notty    40.117.129.28    Sat Nov 16 09:16 - 09:16  (00:00)
progres  ssh:notty    40.117.129.28    Sat Nov 16 09:15 - 09:15  (00:00)
progres  ssh:notty    40.117.129.28    Sat Nov 16 09:15 - 09:15  (00:00)
mssql    ssh:notty    40.117.129.28    Sat Nov 16 09:15 - 09:15  (00:00)
mssql    ssh:notty    40.117.129.28    Sat Nov 16 09:15 - 09:15  (00:00)
mysql    ssh:notty    40.117.129.28    Sat Nov 16 09:14 - 09:14  (00:00)
root     ssh:notty    174.138.58.149   Sat Nov 16 09:14 - 09:14  (00:00)
mysql    ssh:notty    40.117.129.28    Sat Nov 16 09:14 - 09:14  (00:00)
test     ssh:notty    49.235.240.21    Sat Nov 16 09:14 - 09:14  (00:00)
test     ssh:notty    49.235.240.21    Sat Nov 16 09:14 - 09:14  (00:00)
mackanic ssh:notty    116.24.66.114    Sat Nov 16 09:14 - 09:14  (00:00)
mackanic ssh:notty    116.24.66.114    Sat Nov 16 09:14 - 09:14  (00:00)
eeeeeeee ssh:notty    168.181.104.30   Sat Nov 16 09:14 - 09:14  (00:00)
eeeeeeee ssh:notty    168.181.104.30   Sat Nov 16 09:14 - 09:14  (00:00)
data     ssh:notty    40.117.129.28    Sat Nov 16 09:14 - 09:14  (00:00)
data     ssh:notty    40.117.129.28    Sat Nov 16 09:14 - 09:14  (00:00)
etl      ssh:notty    40.117.129.28    Sat Nov 16 09:14 - 09:14  (00:00)
etl      ssh:notty    40.117.129.28    Sat Nov 16 09:14 - 09:14  (00:00)
tomcat   ssh:notty    40.117.129.28    Sat Nov 16 09:13 - 09:13  (00:00)
tomcat   ssh:notty    40.117.129.28    Sat Nov 16 09:13 - 09:13  (00:00)
mysql    ssh:notty    51.38.237.214    Sat Nov 16 09:13 - 09:13  (00:00)
mysql    ssh:notty    51.38.237.214    Sat Nov 16 09:13 - 09:13  (00:00)
ident    ssh:notty    188.166.109.87   Sat Nov 16 09:13 - 09:13  (00:00)
ident    ssh:notty    188.166.109.87   Sat Nov 16 09:13 - 09:13  (00:00)
nginx    ssh:notty    40.117.129.28    Sat Nov 16 09:13 - 09:13  (00:00)
root     ssh:notty    180.68.177.15    Sat Nov 16 09:13 - 09:13  (00:00)
nginx    ssh:notty    40.117.129.28    Sat Nov 16 09:13 - 09:13  (00:00)
test     ssh:notty    84.45.251.243    Sat Nov 16 09:13 - 09:13  (00:00)
test     ssh:notty    84.45.251.243    Sat Nov 16 09:13 - 09:13  (00:00)
nexus    ssh:notty    40.117.129.28    Sat Nov 16 09:13 - 09:13  (00:00)
nexus    ssh:notty    40.117.129.28    Sat Nov 16 09:13 - 09:13  (00:00)
hoesing  ssh:notty    209.97.161.46    Sat Nov 16 09:12 - 09:12  (00:00)
hoesing  ssh:notty    209.97.161.46    Sat Nov 16 09:12 - 09:12  (00:00)
nagios   ssh:notty    40.117.129.28    Sat Nov 16 09:12 - 09:12  (00:00)
nagios   ssh:notty    40.117.129.28    Sat Nov 16 09:12 - 09:12  (00:00)
admin    ssh:notty    116.97.178.163   Sat Nov 16 09:12 - 09:12  (00:00)
backup   ssh:notty    40.117.129.28    Sat Nov 16 09:12 - 09:12  (00:00)
admin    ssh:notty    116.97.178.163   Sat Nov 16 09:12 - 09:12  (00:00)
admin    ssh:notty    190.114.171.124  Sat Nov 16 09:12 - 09:12  (00:00)
admin    ssh:notty    190.114.171.124  Sat Nov 16 09:12 - 09:12  (00:00)
vetter   ssh:notty    138.68.50.18     Sat Nov 16 09:12 - 09:12  (00:00)
vetter   ssh:notty    138.68.50.18     Sat Nov 16 09:12 - 09:12  (00:00)
client   ssh:notty    40.117.129.28    Sat Nov 16 09:11 - 09:11  (00:00)
client   ssh:notty    40.117.129.28    Sat Nov 16 09:11 - 09:11  (00:00)
appuser  ssh:notty    40.117.129.28    Sat Nov 16 09:11 - 09:11  (00:00)
appuser  ssh:notty    40.117.129.28    Sat Nov 16 09:11 - 09:11  (00:00)
app      ssh:notty    40.117.129.28    Sat Nov 16 09:11 - 09:11  (00:00)
app      ssh:notty    40.117.129.28    Sat Nov 16 09:11 - 09:11  (00:00)
appadmin ssh:notty    40.117.129.28    Sat Nov 16 09:10 - 09:10  (00:00)
appadmin ssh:notty    40.117.129.28    Sat Nov 16 09:10 - 09:10  (00:00)
yochanan ssh:notty    174.138.58.149   Sat Nov 16 09:10 - 09:10  (00:00)
yochanan ssh:notty    174.138.58.149   Sat Nov 16 09:10 - 09:10  (00:00)
gpadmin  ssh:notty    40.117.129.28    Sat Nov 16 09:10 - 09:10  (00:00)
enda     ssh:notty    168.181.104.30   Sat Nov 16 09:10 - 09:10  (00:00)
gpadmin  ssh:notty    40.117.129.28    Sat Nov 16 09:10 - 09:10  (00:00)
sites    ssh:notty    49.235.240.21    Sat Nov 16 09:10 - 09:10  (00:00)
enda     ssh:notty    168.181.104.30   Sat Nov 16 09:10 - 09:10  (00:00)
sites    ssh:notty    49.235.240.21    Sat Nov 16 09:10 - 09:10  (00:00)
byerly   ssh:notty    51.38.237.214    Sat Nov 16 09:10 - 09:10  (00:00)
byerly   ssh:notty    51.38.237.214    Sat Nov 16 09:10 - 09:10  (00:00)
manage   ssh:notty    188.166.109.87   Sat Nov 16 09:10 - 09:10  (00:00)
manage   ssh:notty    188.166.109.87   Sat Nov 16 09:10 - 09:10  (00:00)
web      ssh:notty    40.117.129.28    Sat Nov 16 09:10 - 09:10  (00:00)
web      ssh:notty    40.117.129.28    Sat Nov 16 09:09 - 09:09  (00:00)
elastics ssh:notty    40.117.129.28    Sat Nov 16 09:09 - 09:09  (00:00)
elastics ssh:notty    40.117.129.28    Sat Nov 16 09:09 - 09:09  (00:00)
hasu     ssh:notty    84.45.251.243    Sat Nov 16 09:09 - 09:09  (00:00)
hasu     ssh:notty    84.45.251.243    Sat Nov 16 09:09 - 09:09  (00:00)
worker   ssh:notty    40.117.129.28    Sat Nov 16 09:09 - 09:09  (00:00)
worker   ssh:notty    40.117.129.28    Sat Nov 16 09:09 - 09:09  (00:00)
root     ssh:notty    209.97.161.46    Sat Nov 16 09:08 - 09:08  (00:00)
root     ssh:notty    222.186.173.238  Sat Nov 16 09:08 - 09:08  (00:00)
datascie ssh:notty    40.117.129.28    Sat Nov 16 09:08 - 09:08  (00:00)
datascie ssh:notty    40.117.129.28    Sat Nov 16 09:08 - 09:08  (00:00)
root     ssh:notty    222.186.173.238  Sat Nov 16 09:08 - 09:08  (00:00)
root     ssh:notty    222.186.173.238  Sat Nov 16 09:08 - 09:08  (00:00)
root     ssh:notty    222.186.173.238  Sat Nov 16 09:08 - 09:08  (00:00)
root     ssh:notty    222.186.173.238  Sat Nov 16 09:08 - 09:08  (00:00)
support  ssh:notty    40.117.129.28    Sat Nov 16 09:08 - 09:08  (00:00)
support  ssh:notty    40.117.129.28    Sat Nov 16 09:08 - 09:08  (00:00)
service  ssh:notty    40.117.129.28    Sat Nov 16 09:08 - 09:08  (00:00)
service  ssh:notty    40.117.129.28    Sat Nov 16 09:08 - 09:08  (00:00)
finstuen ssh:notty    138.68.50.18     Sat Nov 16 09:07 - 09:07  (00:00)
finstuen ssh:notty    138.68.50.18     Sat Nov 16 09:07 - 09:07  (00:00)
news     ssh:notty    180.68.177.15    Sat Nov 16 09:07 - 09:07  (00:00)
dev      ssh:notty    40.117.129.28    Sat Nov 16 09:07 - 09:07  (00:00)
dev      ssh:notty    40.117.129.28    Sat Nov 16 09:07 - 09:07  (00:00)
ident    ssh:notty    122.154.241.134  Sat Nov 16 09:07 - 09:07  (00:00)
devel    ssh:notty    40.117.129.28    Sat Nov 16 09:07 - 09:07  (00:00)
root     ssh:notty    174.138.58.149   Sat Nov 16 09:07 - 09:07  (00:00)
ident    ssh:notty    122.154.241.134  Sat Nov 16 09:07 - 09:07  (00:00)
devel    ssh:notty    40.117.129.28    Sat Nov 16 09:07 - 09:07  (00:00)
develope ssh:notty    40.117.129.28    Sat Nov 16 09:06 - 09:06  (00:00)
develope ssh:notty    40.117.129.28    Sat Nov 16 09:06 - 09:06  (00:00)
jail     ssh:notty    51.38.237.214    Sat Nov 16 09:06 - 09:06  (00:00)
jail     ssh:notty    51.38.237.214    Sat Nov 16 09:06 - 09:06  (00:00)
Julia    ssh:notty    188.166.109.87   Sat Nov 16 09:06 - 09:06  (00:00)
Julia    ssh:notty    188.166.109.87   Sat Nov 16 09:06 - 09:06  (00:00)
redhat   ssh:notty    40.117.129.28    Sat Nov 16 09:06 - 09:06  (00:00)
redhat   ssh:notty    40.117.129.28    Sat Nov 16 09:06 - 09:06  (00:00)
tianyong ssh:notty    40.117.129.28    Sat Nov 16 09:06 - 09:06  (00:00)
tianyong ssh:notty    40.117.129.28    Sat Nov 16 09:06 - 09:06  (00:00)
password ssh:notty    168.181.104.30   Sat Nov 16 09:06 - 09:06  (00:00)
password ssh:notty    168.181.104.30   Sat Nov 16 09:06 - 09:06  (00:00)
grid     ssh:notty    40.117.129.28    Sat Nov 16 09:05 - 09:05  (00:00)
root     ssh:notty    84.45.251.243    Sat Nov 16 09:05 - 09:05  (00:00)
grid     ssh:notty    40.117.129.28    Sat Nov 16 09:05 - 09:05  (00:00)
rpm      ssh:notty    49.235.240.21    Sat Nov 16 09:05 - 09:05  (00:00)
rpm      ssh:notty    49.235.240.21    Sat Nov 16 09:05 - 09:05  (00:00)
redmine  ssh:notty    40.117.129.28    Sat Nov 16 09:05 - 09:05  (00:00)
redmine  ssh:notty    40.117.129.28    Sat Nov 16 09:05 - 09:05  (00:00)
oracle   ssh:notty    40.117.129.28    Sat Nov 16 09:05 - 09:05  (00:00)
oracle   ssh:notty    40.117.129.28    Sat Nov 16 09:05 - 09:05  (00:00)
nessus   ssh:notty    209.97.161.46    Sat Nov 16 09:05 - 09:05  (00:00)
nessus   ssh:notty    209.97.161.46    Sat Nov 16 09:04 - 09:04  (00:00)
root     ssh:notty    62.80.164.18     Sat Nov 16 09:04 - 09:04  (00:00)
plesk    ssh:notty    40.117.129.28    Sat Nov 16 09:04 - 09:04  (00:00)
plesk    ssh:notty    40.117.129.28    Sat Nov 16 09:04 - 09:04  (00:00)
zamralik ssh:notty    40.117.129.28    Sat Nov 16 09:04 - 09:04  (00:00)
zamralik ssh:notty    40.117.129.28    Sat Nov 16 09:04 - 09:04  (00:00)
rlory    ssh:notty    40.117.129.28    Sat Nov 16 09:03 - 09:03  (00:00)
rlory    ssh:notty    40.117.129.28    Sat Nov 16 09:03 - 09:03  (00:00)
root     ssh:notty    138.68.50.18     Sat Nov 16 09:03 - 09:03  (00:00)
admin    ssh:notty    116.24.66.114    Sat Nov 16 09:03 - 09:03  (00:00)
admin    ssh:notty    116.24.66.114    Sat Nov 16 09:03 - 09:03  (00:00)
confluen ssh:notty    40.117.129.28    Sat Nov 16 09:03 - 09:03  (00:00)
roy      ssh:notty    188.166.109.87   Sat Nov 16 09:03 - 09:03  (00:00)
confluen ssh:notty    40.117.129.28    Sat Nov 16 09:03 - 09:03  (00:00)
roy      ssh:notty    188.166.109.87   Sat Nov 16 09:03 - 09:03  (00:00)
root     ssh:notty    51.38.237.214    Sat Nov 16 09:03 - 09:03  (00:00)
jirauser ssh:notty    40.117.129.28    Sat Nov 16 09:03 - 09:03  (00:00)
jirauser ssh:notty    40.117.129.28    Sat Nov 16 09:03 - 09:03  (00:00)
bamboous ssh:notty    40.117.129.28    Sat Nov 16 09:02 - 09:02  (00:00)
bamboous ssh:notty    40.117.129.28    Sat Nov 16 09:02 - 09:02  (00:00)
root     ssh:notty    180.68.177.15    Sat Nov 16 09:02 - 09:02  (00:00)
apache   ssh:notty    40.117.129.28    Sat Nov 16 09:02 - 09:02  (00:00)
apache   ssh:notty    40.117.129.28    Sat Nov 16 09:02 - 09:02  (00:00)
fugleber ssh:notty    84.45.251.243    Sat Nov 16 09:02 - 09:02  (00:00)
fugleber ssh:notty    84.45.251.243    Sat Nov 16 09:02 - 09:02  (00:00)
bitbucke ssh:notty    40.117.129.28    Sat Nov 16 09:02 - 09:02  (00:00)
bitbucke ssh:notty    40.117.129.28    Sat Nov 16 09:01 - 09:01  (00:00)
loud     ssh:notty    168.181.104.30   Sat Nov 16 09:01 - 09:01  (00:00)
loud     ssh:notty    168.181.104.30   Sat Nov 16 09:01 - 09:01  (00:00)
mahdi    ssh:notty    40.117.129.28    Sat Nov 16 09:01 - 09:01  (00:00)
mahdi    ssh:notty    40.117.129.28    Sat Nov 16 09:01 - 09:01  (00:00)
timy     ssh:notty    40.117.129.28    Sat Nov 16 09:01 - 09:01  (00:00)
timy     ssh:notty    40.117.129.28    Sat Nov 16 09:01 - 09:01  (00:00)
root     ssh:notty    209.97.161.46    Sat Nov 16 09:00 - 09:00  (00:00)
kyle     ssh:notty    40.117.129.28    Sat Nov 16 09:00 - 09:00  (00:00)
kyle     ssh:notty    40.117.129.28    Sat Nov 16 09:00 - 09:00  (00:00)
root     ssh:notty    49.235.240.21    Sat Nov 16 09:00 - 09:00  (00:00)
server   ssh:notty    40.117.129.28    Sat Nov 16 09:00 - 09:00  (00:00)
server   ssh:notty    40.117.129.28    Sat Nov 16 09:00 - 09:00  (00:00)
root     ssh:notty    174.138.58.149   Sat Nov 16 09:00 - 09:00  (00:00)
yvonne   ssh:notty    40.117.129.28    Sat Nov 16 09:00 - 09:00  (00:00)
root     ssh:notty    188.166.109.87   Sat Nov 16 09:00 - 09:00  (00:00)
yvonne   ssh:notty    40.117.129.28    Sat Nov 16 09:00 - 09:00  (00:00)
root     ssh:notty    51.38.237.214    Sat Nov 16 09:00 - 09:00  (00:00)
user1    ssh:notty    40.117.129.28    Sat Nov 16 08:59 - 08:59  (00:00)
user1    ssh:notty    40.117.129.28    Sat Nov 16 08:59 - 08:59  (00:00)
daemon   ssh:notty    138.68.50.18     Sat Nov 16 08:59 - 08:59  (00:00)
user     ssh:notty    40.117.129.28    Sat Nov 16 08:59 - 08:59  (00:00)
user     ssh:notty    40.117.129.28    Sat Nov 16 08:59 - 08:59  (00:00)
lel      ssh:notty    40.117.129.28    Sat Nov 16 08:59 - 08:59  (00:00)
lel      ssh:notty    40.117.129.28    Sat Nov 16 08:59 - 08:59  (00:00)
bob      ssh:notty    40.117.129.28    Sat Nov 16 08:58 - 08:58  (00:00)
root     ssh:notty    84.45.251.243    Sat Nov 16 08:58 - 08:58  (00:00)
bob      ssh:notty    40.117.129.28    Sat Nov 16 08:58 - 08:58  (00:00)
bentson  ssh:notty    116.24.66.114    Sat Nov 16 08:58 - 08:58  (00:00)
bentson  ssh:notty    116.24.66.114    Sat Nov 16 08:58 - 08:58  (00:00)
student  ssh:notty    40.117.129.28    Sat Nov 16 08:58 - 08:58  (00:00)
student  ssh:notty    40.117.129.28    Sat Nov 16 08:58 - 08:58  (00:00)
demo     ssh:notty    40.117.129.28    Sat Nov 16 08:57 - 08:57  (00:00)
demo     ssh:notty    40.117.129.28    Sat Nov 16 08:57 - 08:57  (00:00)
dspace   ssh:notty    40.117.129.28    Sat Nov 16 08:57 - 08:57  (00:00)
dspace   ssh:notty    40.117.129.28    Sat Nov 16 08:57 - 08:57  (00:00)
holiness ssh:notty    168.181.104.30   Sat Nov 16 08:57 - 08:57  (00:00)
holiness ssh:notty    168.181.104.30   Sat Nov 16 08:57 - 08:57  (00:00)
root     ssh:notty    180.68.177.15    Sat Nov 16 08:57 - 08:57  (00:00)
butter   ssh:notty    40.117.129.28    Sat Nov 16 08:57 - 08:57  (00:00)
butter   ssh:notty    40.117.129.28    Sat Nov 16 08:57 - 08:57  (00:00)
radio    ssh:notty    209.97.161.46    Sat Nov 16 08:56 - 08:56  (00:00)
alex     ssh:notty    40.117.129.28    Sat Nov 16 08:56 - 08:56  (00:00)
radio    ssh:notty    209.97.161.46    Sat Nov 16 08:56 - 08:56  (00:00)
root     ssh:notty    188.166.109.87   Sat Nov 16 08:56 - 08:56  (00:00)
alex     ssh:notty    40.117.129.28    Sat Nov 16 08:56 - 08:56  (00:00)
umountfs ssh:notty    51.38.237.214    Sat Nov 16 08:56 - 08:56  (00:00)
umountfs ssh:notty    51.38.237.214    Sat Nov 16 08:56 - 08:56  (00:00)
pelumi   ssh:notty    40.117.129.28    Sat Nov 16 08:56 - 08:56  (00:00)
pelumi   ssh:notty    40.117.129.28    Sat Nov 16 08:56 - 08:56  (00:00)
oracle   ssh:notty    49.235.240.21    Sat Nov 16 08:56 - 08:56  (00:00)
oracle   ssh:notty    49.235.240.21    Sat Nov 16 08:56 - 08:56  (00:00)
adam     ssh:notty    40.117.129.28    Sat Nov 16 08:56 - 08:56  (00:00)
adam     ssh:notty    40.117.129.28    Sat Nov 16 08:56 - 08:56  (00:00)
dare     ssh:notty    40.117.129.28    Sat Nov 16 08:55 - 08:55  (00:00)
dare     ssh:notty    40.117.129.28    Sat Nov 16 08:55 - 08:55  (00:00)
betcher  ssh:notty    138.68.50.18     Sat Nov 16 08:55 - 08:55  (00:00)
betcher  ssh:notty    138.68.50.18     Sat Nov 16 08:55 - 08:55  (00:00)
adewale  ssh:notty    40.117.129.28    Sat Nov 16 08:55 - 08:55  (00:00)
adewale  ssh:notty    40.117.129.28    Sat Nov 16 08:55 - 08:55  (00:00)
root     ssh:notty    84.45.251.243    Sat Nov 16 08:55 - 08:55  (00:00)
osamor   ssh:notty    40.117.129.28    Sat Nov 16 08:54 - 08:54  (00:00)
osamor   ssh:notty    40.117.129.28    Sat Nov 16 08:54 - 08:54  (00:00)
cynthia  ssh:notty    40.117.129.28    Sat Nov 16 08:54 - 08:54  (00:00)
cynthia  ssh:notty    40.117.129.28    Sat Nov 16 08:54 - 08:54  (00:00)
abdul    ssh:notty    40.117.129.28    Sat Nov 16 08:54 - 08:54  (00:00)
abdul    ssh:notty    40.117.129.28    Sat Nov 16 08:54 - 08:54  (00:00)
damilola ssh:notty    40.117.129.28    Sat Nov 16 08:53 - 08:53  (00:00)
damilola ssh:notty    40.117.129.28    Sat Nov 16 08:53 - 08:53  (00:00)
rufus    ssh:notty    40.117.129.28    Sat Nov 16 08:53 - 08:53  (00:00)
rufus    ssh:notty    40.117.129.28    Sat Nov 16 08:53 - 08:53  (00:00)
sween    ssh:notty    188.166.109.87   Sat Nov 16 08:53 - 08:53  (00:00)
sween    ssh:notty    188.166.109.87   Sat Nov 16 08:53 - 08:53  (00:00)
guest    ssh:notty    51.38.237.214    Sat Nov 16 08:53 - 08:53  (00:00)
guest    ssh:notty    51.38.237.214    Sat Nov 16 08:53 - 08:53  (00:00)
eunice   ssh:notty    40.117.129.28    Sat Nov 16 08:53 - 08:53  (00:00)
root     ssh:notty    116.24.66.114    Sat Nov 16 08:53 - 08:53  (00:00)
eunice   ssh:notty    40.117.129.28    Sat Nov 16 08:53 - 08:53  (00:00)
ruehle   ssh:notty    106.12.114.173   Sat Nov 16 08:53 - 08:53  (00:00)
ruehle   ssh:notty    106.12.114.173   Sat Nov 16 08:53 - 08:53  (00:00)
facturac ssh:notty    168.181.104.30   Sat Nov 16 08:52 - 08:52  (00:00)
facturac ssh:notty    168.181.104.30   Sat Nov 16 08:52 - 08:52  (00:00)
gbenga   ssh:notty    40.117.129.28    Sat Nov 16 08:52 - 08:52  (00:00)
federiko ssh:notty    209.97.161.46    Sat Nov 16 08:52 - 08:52  (00:00)
gbenga   ssh:notty    40.117.129.28    Sat Nov 16 08:52 - 08:52  (00:00)
federiko ssh:notty    209.97.161.46    Sat Nov 16 08:52 - 08:52  (00:00)
tayo     ssh:notty    40.117.129.28    Sat Nov 16 08:52 - 08:52  (00:00)
tayo     ssh:notty    40.117.129.28    Sat Nov 16 08:52 - 08:52  (00:00)
gbolahan ssh:notty    40.117.129.28    Sat Nov 16 08:52 - 08:52  (00:00)
gbolahan ssh:notty    40.117.129.28    Sat Nov 16 08:51 - 08:51  (00:00)
visa     ssh:notty    180.68.177.15    Sat Nov 16 08:51 - 08:51  (00:00)
visa     ssh:notty    180.68.177.15    Sat Nov 16 08:51 - 08:51  (00:00)
wenhann  ssh:notty    84.45.251.243    Sat Nov 16 08:51 - 08:51  (00:00)
wenhann  ssh:notty    84.45.251.243    Sat Nov 16 08:51 - 08:51  (00:00)
damilare ssh:notty    40.117.129.28    Sat Nov 16 08:51 - 08:51  (00:00)
damilare ssh:notty    40.117.129.28    Sat Nov 16 08:51 - 08:51  (00:00)
root     ssh:notty    49.235.240.21    Sat Nov 16 08:51 - 08:51  (00:00)
avilion  ssh:notty    138.68.50.18     Sat Nov 16 08:51 - 08:51  (00:00)
oyelade  ssh:notty    40.117.129.28    Sat Nov 16 08:51 - 08:51  (00:00)
avilion  ssh:notty    138.68.50.18     Sat Nov 16 08:51 - 08:51  (00:00)
oyelade  ssh:notty    40.117.129.28    Sat Nov 16 08:51 - 08:51  (00:00)
itunu    ssh:notty    40.117.129.28    Sat Nov 16 08:50 - 08:50  (00:00)
itunu    ssh:notty    40.117.129.28    Sat Nov 16 08:50 - 08:50  (00:00)
marion   ssh:notty    40.117.129.28    Sat Nov 16 08:50 - 08:50  (00:00)
marion   ssh:notty    40.117.129.28    Sat Nov 16 08:50 - 08:50  (00:00)
femi     ssh:notty    40.117.129.28    Sat Nov 16 08:50 - 08:50  (00:00)
femi     ssh:notty    40.117.129.28    Sat Nov 16 08:50 - 08:50  (00:00)
delgross ssh:notty    188.166.109.87   Sat Nov 16 08:50 - 08:50  (00:00)
delgross ssh:notty    188.166.109.87   Sat Nov 16 08:50 - 08:50  (00:00)
bin      ssh:notty    51.38.237.214    Sat Nov 16 08:49 - 08:49  (00:00)
bola     ssh:notty    40.117.129.28    Sat Nov 16 08:49 - 08:49  (00:00)
bola     ssh:notty    40.117.129.28    Sat Nov 16 08:49 - 08:49  (00:00)
share    ssh:notty    40.117.129.28    Sat Nov 16 08:49 - 08:49  (00:00)
share    ssh:notty    40.117.129.28    Sat Nov 16 08:49 - 08:49  (00:00)
zookeepe ssh:notty    40.117.129.28    Sat Nov 16 08:49 - 08:49  (00:00)
zookeepe ssh:notty    40.117.129.28    Sat Nov 16 08:49 - 08:49  (00:00)
root     ssh:notty    209.97.161.46    Sat Nov 16 08:48 - 08:48  (00:00)
uptime   ssh:notty    62.80.164.18     Sat Nov 16 08:48 - 08:48  (00:00)
httpfs   ssh:notty    40.117.129.28    Sat Nov 16 08:48 - 08:48  (00:00)
uptime   ssh:notty    62.80.164.18     Sat Nov 16 08:48 - 08:48  (00:00)
httpfs   ssh:notty    40.117.129.28    Sat Nov 16 08:48 - 08:48  (00:00)
ramzi    ssh:notty    168.181.104.30   Sat Nov 16 08:48 - 08:48  (00:00)
ramzi    ssh:notty    168.181.104.30   Sat Nov 16 08:48 - 08:48  (00:00)
nicolabc ssh:notty    40.117.129.28    Sat Nov 16 08:48 - 08:48  (00:00)
nicolabc ssh:notty    40.117.129.28    Sat Nov 16 08:48 - 08:48  (00:00)
celente  ssh:notty    84.45.251.243    Sat Nov 16 08:48 - 08:48  (00:00)
celente  ssh:notty    84.45.251.243    Sat Nov 16 08:48 - 08:48  (00:00)
mas_dest ssh:notty    40.117.129.28    Sat Nov 16 08:48 - 08:48  (00:00)
mas_dest ssh:notty    40.117.129.28    Sat Nov 16 08:47 - 08:47  (00:00)
clark    ssh:notty    106.12.114.173   Sat Nov 16 08:47 - 08:47  (00:00)
clark    ssh:notty    106.12.114.173   Sat Nov 16 08:47 - 08:47  (00:00)
faze     ssh:notty    40.117.129.28    Sat Nov 16 08:47 - 08:47  (00:00)
faze     ssh:notty    40.117.129.28    Sat Nov 16 08:47 - 08:47  (00:00)
dulcie   ssh:notty    116.24.66.114    Sat Nov 16 08:47 - 08:47  (00:00)
dulcie   ssh:notty    116.24.66.114    Sat Nov 16 08:47 - 08:47  (00:00)
amoly    ssh:notty    40.117.129.28    Sat Nov 16 08:47 - 08:47  (00:00)
amoly    ssh:notty    40.117.129.28    Sat Nov 16 08:47 - 08:47  (00:00)
ui       ssh:notty    138.68.50.18     Sat Nov 16 08:47 - 08:47  (00:00)
ui       ssh:notty    138.68.50.18     Sat Nov 16 08:47 - 08:47  (00:00)
pufferd  ssh:notty    40.117.129.28    Sat Nov 16 08:46 - 08:46  (00:00)
pufferd  ssh:notty    40.117.129.28    Sat Nov 16 08:46 - 08:46  (00:00)
pcap     ssh:notty    188.166.109.87   Sat Nov 16 08:46 - 08:46  (00:00)
pcap     ssh:notty    188.166.109.87   Sat Nov 16 08:46 - 08:46  (00:00)
admin    ssh:notty    49.235.240.21    Sat Nov 16 08:46 - 08:46  (00:00)
admin    ssh:notty    49.235.240.21    Sat Nov 16 08:46 - 08:46  (00:00)
llama    ssh:notty    40.117.129.28    Sat Nov 16 08:46 - 08:46  (00:00)
cvega    ssh:notty    51.38.237.214    Sat Nov 16 08:46 - 08:46  (00:00)
llama    ssh:notty    40.117.129.28    Sat Nov 16 08:46 - 08:46  (00:00)
cvega    ssh:notty    51.38.237.214    Sat Nov 16 08:46 - 08:46  (00:00)
hdfs     ssh:notty    40.117.129.28    Sat Nov 16 08:46 - 08:46  (00:00)
czanik   ssh:notty    180.68.177.15    Sat Nov 16 08:46 - 08:46  (00:00)
hdfs     ssh:notty    40.117.129.28    Sat Nov 16 08:46 - 08:46  (00:00)
czanik   ssh:notty    180.68.177.15    Sat Nov 16 08:46 - 08:46  (00:00)
yarn     ssh:notty    40.117.129.28    Sat Nov 16 08:45 - 08:45  (00:00)
yarn     ssh:notty    40.117.129.28    Sat Nov 16 08:45 - 08:45  (00:00)
earth    ssh:notty    40.117.129.28    Sat Nov 16 08:45 - 08:45  (00:00)
earth    ssh:notty    40.117.129.28    Sat Nov 16 08:45 - 08:45  (00:00)
ciborn   ssh:notty    40.117.129.28    Sat Nov 16 08:45 - 08:45  (00:00)
ciborn   ssh:notty    40.117.129.28    Sat Nov 16 08:45 - 08:45  (00:00)
mapsusa  ssh:notty    84.45.251.243    Sat Nov 16 08:44 - 08:44  (00:00)
kangoo   ssh:notty    40.117.129.28    Sat Nov 16 08:44 - 08:44  (00:00)
info     ssh:notty    209.97.161.46    Sat Nov 16 08:44 - 08:44  (00:00)
mapsusa  ssh:notty    84.45.251.243    Sat Nov 16 08:44 - 08:44  (00:00)
kangoo   ssh:notty    40.117.129.28    Sat Nov 16 08:44 - 08:44  (00:00)
info     ssh:notty    209.97.161.46    Sat Nov 16 08:44 - 08:44  (00:00)
hive     ssh:notty    40.117.129.28    Sat Nov 16 08:44 - 08:44  (00:00)
hive     ssh:notty    40.117.129.28    Sat Nov 16 08:44 - 08:44  (00:00)
123456   ssh:notty    168.181.104.30   Sat Nov 16 08:44 - 08:44  (00:00)
123456   ssh:notty    168.181.104.30   Sat Nov 16 08:44 - 08:44  (00:00)
mapred   ssh:notty    40.117.129.28    Sat Nov 16 08:44 - 08:44  (00:00)
mapred   ssh:notty    40.117.129.28    Sat Nov 16 08:44 - 08:44  (00:00)
kms      ssh:notty    40.117.129.28    Sat Nov 16 08:43 - 08:43  (00:00)
kms      ssh:notty    40.117.129.28    Sat Nov 16 08:43 - 08:43  (00:00)
root     ssh:notty    188.166.109.87   Sat Nov 16 08:43 - 08:43  (00:00)
uucp     ssh:notty    40.117.129.28    Sat Nov 16 08:43 - 08:43  (00:00)
root     ssh:notty    51.38.237.214    Sat Nov 16 08:43 - 08:43  (00:00)
rp       ssh:notty    138.68.50.18     Sat Nov 16 08:43 - 08:43  (00:00)
rp       ssh:notty    138.68.50.18     Sat Nov 16 08:43 - 08:43  (00:00)
root     ssh:notty    40.117.129.28    Sat Nov 16 08:43 - 08:43  (00:00)
root     ssh:notty    40.117.129.28    Sat Nov 16 08:42 - 08:42  (00:00)
erminia  ssh:notty    116.24.66.114    Sat Nov 16 08:42 - 08:42  (00:00)
erminia  ssh:notty    116.24.66.114    Sat Nov 16 08:42 - 08:42  (00:00)
root     ssh:notty    40.117.129.28    Sat Nov 16 08:42 - 08:42  (00:00)
root     ssh:notty    49.235.240.21    Sat Nov 16 08:42 - 08:42  (00:00)
test123  ssh:notty    40.117.129.28    Sat Nov 16 08:41 - 08:41  (00:00)
test123  ssh:notty    40.117.129.28    Sat Nov 16 08:41 - 08:41  (00:00)
test3    ssh:notty    40.117.129.28    Sat Nov 16 08:41 - 08:41  (00:00)
test3    ssh:notty    40.117.129.28    Sat Nov 16 08:41 - 08:41  (00:00)
root     ssh:notty    84.45.251.243    Sat Nov 16 08:41 - 08:41  (00:00)
kafka    ssh:notty    40.117.129.28    Sat Nov 16 08:41 - 08:41  (00:00)
kafka    ssh:notty    40.117.129.28    Sat Nov 16 08:41 - 08:41  (00:00)
test2    ssh:notty    40.117.129.28    Sat Nov 16 08:40 - 08:40  (00:00)
test2    ssh:notty    40.117.129.28    Sat Nov 16 08:40 - 08:40  (00:00)
root     ssh:notty    180.68.177.15    Sat Nov 16 08:40 - 08:40  (00:00)
yoyo     ssh:notty    62.80.164.18     Sat Nov 16 08:40 - 08:40  (00:00)
user     ssh:notty    209.97.161.46    Sat Nov 16 08:40 - 08:40  (00:00)
yoyo     ssh:notty    62.80.164.18     Sat Nov 16 08:40 - 08:40  (00:00)
user     ssh:notty    209.97.161.46    Sat Nov 16 08:40 - 08:40  (00:00)
zhangcj  ssh:notty    40.117.129.28    Sat Nov 16 08:40 - 08:40  (00:00)
zhangcj  ssh:notty    40.117.129.28    Sat Nov 16 08:40 - 08:40  (00:00)
c1       ssh:notty    40.117.129.28    Sat Nov 16 08:40 - 08:40  (00:00)
root     ssh:notty    188.166.109.87   Sat Nov 16 08:40 - 08:40  (00:00)
c1       ssh:notty    40.117.129.28    Sat Nov 16 08:40 - 08:40  (00:00)
root     ssh:notty    51.38.237.214    Sat Nov 16 08:39 - 08:39  (00:00)
hadoop   ssh:notty    40.117.129.28    Sat Nov 16 08:39 - 08:39  (00:00)
hadoop   ssh:notty    40.117.129.28    Sat Nov 16 08:39 - 08:39  (00:00)
nimda    ssh:notty    168.181.104.30   Sat Nov 16 08:39 - 08:39  (00:00)
nimda    ssh:notty    168.181.104.30   Sat Nov 16 08:39 - 08:39  (00:00)
ela      ssh:notty    40.117.129.28    Sat Nov 16 08:39 - 08:39  (00:00)
ela      ssh:notty    40.117.129.28    Sat Nov 16 08:39 - 08:39  (00:00)
jenkins  ssh:notty    40.117.129.28    Sat Nov 16 08:39 - 08:39  (00:00)
jenkins  ssh:notty    40.117.129.28    Sat Nov 16 08:39 - 08:39  (00:00)
sandvin  ssh:notty    138.68.50.18     Sat Nov 16 08:38 - 08:38  (00:00)
sandvin  ssh:notty    138.68.50.18     Sat Nov 16 08:38 - 08:38  (00:00)
impala   ssh:notty    40.117.129.28    Sat Nov 16 08:38 - 08:38  (00:00)
impala   ssh:notty    40.117.129.28    Sat Nov 16 08:38 - 08:38  (00:00)
ts3serve ssh:notty    40.117.129.28    Sat Nov 16 08:38 - 08:38  (00:00)
ts3serve ssh:notty    40.117.129.28    Sat Nov 16 08:38 - 08:38  (00:00)
vmail    ssh:notty    40.117.129.28    Sat Nov 16 08:38 - 08:38  (00:00)
vmail    ssh:notty    40.117.129.28    Sat Nov 16 08:38 - 08:38  (00:00)
db2inst1 ssh:notty    84.45.251.243    Sat Nov 16 08:37 - 08:37  (00:00)
db2inst1 ssh:notty    84.45.251.243    Sat Nov 16 08:37 - 08:37  (00:00)
bjoernsu ssh:notty    49.235.240.21    Sat Nov 16 08:37 - 08:37  (00:00)
bjoernsu ssh:notty    49.235.240.21    Sat Nov 16 08:37 - 08:37  (00:00)
sadmin   ssh:notty    40.117.129.28    Sat Nov 16 08:37 - 08:37  (00:00)
sadmin   ssh:notty    40.117.129.28    Sat Nov 16 08:37 - 08:37  (00:00)
test     ssh:notty    116.24.66.114    Sat Nov 16 08:37 - 08:37  (00:00)
test     ssh:notty    116.24.66.114    Sat Nov 16 08:37 - 08:37  (00:00)
frappe   ssh:notty    40.117.129.28    Sat Nov 16 08:37 - 08:37  (00:00)
frappe   ssh:notty    40.117.129.28    Sat Nov 16 08:37 - 08:37  (00:00)
teamspea ssh:notty    106.12.114.173   Sat Nov 16 08:37 - 08:37  (00:00)
teamspea ssh:notty    106.12.114.173   Sat Nov 16 08:37 - 08:37  (00:00)
postal   ssh:notty    40.117.129.28    Sat Nov 16 08:36 - 08:36  (00:00)
postal   ssh:notty    40.117.129.28    Sat Nov 16 08:36 - 08:36  (00:00)
lisa     ssh:notty    188.166.109.87   Sat Nov 16 08:36 - 08:36  (00:00)
lisa     ssh:notty    188.166.109.87   Sat Nov 16 08:36 - 08:36  (00:00)
root     ssh:notty    209.97.161.46    Sat Nov 16 08:36 - 08:36  (00:00)
sapach   ssh:notty    40.117.129.28    Sat Nov 16 08:36 - 08:36  (00:00)
sapach   ssh:notty    40.117.129.28    Sat Nov 16 08:36 - 08:36  (00:00)
root     ssh:notty    51.38.237.214    Sat Nov 16 08:36 - 08:36  (00:00)
yw       ssh:notty    40.117.129.28    Sat Nov 16 08:36 - 08:36  (00:00)
yw       ssh:notty    40.117.129.28    Sat Nov 16 08:36 - 08:36  (00:00)
drcomadm ssh:notty    40.117.129.28    Sat Nov 16 08:35 - 08:35  (00:00)
drcomadm ssh:notty    40.117.129.28    Sat Nov 16 08:35 - 08:35  (00:00)
stack    ssh:notty    40.117.129.28    Sat Nov 16 08:35 - 08:35  (00:00)
stack    ssh:notty    40.117.129.28    Sat Nov 16 08:35 - 08:35  (00:00)
lindback ssh:notty    168.181.104.30   Sat Nov 16 08:35 - 08:35  (00:00)
lindback ssh:notty    168.181.104.30   Sat Nov 16 08:35 - 08:35  (00:00)
vidalenc ssh:notty    180.68.177.15    Sat Nov 16 08:35 - 08:35  (00:00)
vidalenc ssh:notty    180.68.177.15    Sat Nov 16 08:35 - 08:35  (00:00)
russ     ssh:notty    40.117.129.28    Sat Nov 16 08:35 - 08:35  (00:00)
russ     ssh:notty    40.117.129.28    Sat Nov 16 08:35 - 08:35  (00:00)
root     ssh:notty    138.68.50.18     Sat Nov 16 08:34 - 08:34  (00:00)
shuting  ssh:notty    40.117.129.28    Sat Nov 16 08:34 - 08:34  (00:00)
shuting  ssh:notty    40.117.129.28    Sat Nov 16 08:34 - 08:34  (00:00)
xfl      ssh:notty    40.117.129.28    Sat Nov 16 08:34 - 08:34  (00:00)
xfl      ssh:notty    40.117.129.28    Sat Nov 16 08:34 - 08:34  (00:00)
loperena ssh:notty    84.45.251.243    Sat Nov 16 08:34 - 08:34  (00:00)
loperena ssh:notty    84.45.251.243    Sat Nov 16 08:34 - 08:34  (00:00)
jmago    ssh:notty    40.117.129.28    Sat Nov 16 08:34 - 08:34  (00:00)
jmago    ssh:notty    40.117.129.28    Sat Nov 16 08:34 - 08:34  (00:00)
dottie   ssh:notty    40.117.129.28    Sat Nov 16 08:33 - 08:33  (00:00)
dottie   ssh:notty    40.117.129.28    Sat Nov 16 08:33 - 08:33  (00:00)
liuziyua ssh:notty    40.117.129.28    Sat Nov 16 08:33 - 08:33  (00:00)
liuziyua ssh:notty    40.117.129.28    Sat Nov 16 08:33 - 08:33  (00:00)
jaquann  ssh:notty    49.235.240.21    Sat Nov 16 08:33 - 08:33  (00:00)
jaquann  ssh:notty    49.235.240.21    Sat Nov 16 08:33 - 08:33  (00:00)
michaelm ssh:notty    51.38.237.214    Sat Nov 16 08:33 - 08:33  (00:00)
michaelm ssh:notty    51.38.237.214    Sat Nov 16 08:33 - 08:33  (00:00)
hjt      ssh:notty    40.117.129.28    Sat Nov 16 08:33 - 08:33  (00:00)
hjt      ssh:notty    40.117.129.28    Sat Nov 16 08:33 - 08:33  (00:00)
root     ssh:notty    209.97.161.46    Sat Nov 16 08:32 - 08:32  (00:00)
zsp      ssh:notty    40.117.129.28    Sat Nov 16 08:32 - 08:32  (00:00)
zsp      ssh:notty    40.117.129.28    Sat Nov 16 08:32 - 08:32  (00:00)
basic    ssh:notty    116.24.66.114    Sat Nov 16 08:32 - 08:32  (00:00)
basic    ssh:notty    116.24.66.114    Sat Nov 16 08:32 - 08:32  (00:00)
root     ssh:notty    40.117.129.28    Sat Nov 16 08:32 - 08:32  (00:00)
apetroae ssh:notty    188.166.109.87   Sat Nov 16 08:32 - 08:32  (00:00)
guimao   ssh:notty    40.117.129.28    Sat Nov 16 08:32 - 08:32  (00:00)
apetroae ssh:notty    188.166.109.87   Sat Nov 16 08:32 - 08:32  (00:00)
guimao   ssh:notty    40.117.129.28    Sat Nov 16 08:32 - 08:32  (00:00)
mysql    ssh:notty    106.12.114.173   Sat Nov 16 08:31 - 08:31  (00:00)
mysql    ssh:notty    106.12.114.173   Sat Nov 16 08:31 - 08:31  (00:00)
zimbra   ssh:notty    40.117.129.28    Sat Nov 16 08:31 - 08:31  (00:00)
zimbra   ssh:notty    40.117.129.28    Sat Nov 16 08:31 - 08:31  (00:00)
drcom    ssh:notty    40.117.129.28    Sat Nov 16 08:31 - 08:31  (00:00)
drcom    ssh:notty    40.117.129.28    Sat Nov 16 08:31 - 08:31  (00:00)
mahani   ssh:notty    168.181.104.30   Sat Nov 16 08:31 - 08:31  (00:00)
mahani   ssh:notty    168.181.104.30   Sat Nov 16 08:31 - 08:31  (00:00)
minerhub ssh:notty    40.117.129.28    Sat Nov 16 08:31 - 08:31  (00:00)
minerhub ssh:notty    40.117.129.28    Sat Nov 16 08:31 - 08:31  (00:00)
rosaleen ssh:notty    138.68.50.18     Sat Nov 16 08:30 - 08:30  (00:00)
rosaleen ssh:notty    138.68.50.18     Sat Nov 16 08:30 - 08:30  (00:00)
judge    ssh:notty    40.117.129.28    Sat Nov 16 08:30 - 08:30  (00:00)
judge    ssh:notty    40.117.129.28    Sat Nov 16 08:30 - 08:30  (00:00)
ts3      ssh:notty    40.117.129.28    Sat Nov 16 08:30 - 08:30  (00:00)
ts3      ssh:notty    40.117.129.28    Sat Nov 16 08:30 - 08:30  (00:00)
roeser   ssh:notty    180.68.177.15    Sat Nov 16 08:30 - 08:30  (00:00)
roeser   ssh:notty    180.68.177.15    Sat Nov 16 08:30 - 08:30  (00:00)
root     ssh:notty    51.38.237.214    Sat Nov 16 08:29 - 08:29  (00:00)
root     ssh:notty    84.45.251.243    Sat Nov 16 08:29 - 08:29  (00:00)
root     ssh:notty    222.186.175.167  Sat Nov 16 08:29 - 08:29  (00:00)
root     ssh:notty    222.186.175.167  Sat Nov 16 08:28 - 08:28  (00:00)
root     ssh:notty    188.166.109.87   Sat Nov 16 08:28 - 08:28  (00:00)
angers   ssh:notty    209.97.161.46    Sat Nov 16 08:28 - 08:28  (00:00)
angers   ssh:notty    209.97.161.46    Sat Nov 16 08:28 - 08:28  (00:00)
root     ssh:notty    222.186.175.167  Sat Nov 16 08:28 - 08:28  (00:00)
root     ssh:notty    222.186.175.167  Sat Nov 16 08:28 - 08:28  (00:00)
root     ssh:notty    222.186.175.167  Sat Nov 16 08:28 - 08:28  (00:00)
andy     ssh:notty    49.235.240.21    Sat Nov 16 08:28 - 08:28  (00:00)
andy     ssh:notty    49.235.240.21    Sat Nov 16 08:28 - 08:28  (00:00)
root     ssh:notty    222.186.175.167  Sat Nov 16 08:28 - 08:28  (00:00)
root     ssh:notty    222.186.175.167  Sat Nov 16 08:28 - 08:28  (00:00)
root     ssh:notty    222.186.175.167  Sat Nov 16 08:28 - 08:28  (00:00)
root     ssh:notty    222.186.175.167  Sat Nov 16 08:28 - 08:28  (00:00)
root     ssh:notty    222.186.175.167  Sat Nov 16 08:28 - 08:28  (00:00)
root     ssh:notty    222.186.175.167  Sat Nov 16 08:28 - 08:28  (00:00)
root     ssh:notty    222.186.175.167  Sat Nov 16 08:28 - 08:28  (00:00)
markland ssh:notty    116.24.66.114    Sat Nov 16 08:27 - 08:27  (00:00)
markland ssh:notty    116.24.66.114    Sat Nov 16 08:27 - 08:27  (00:00)
groven   ssh:notty    138.68.50.18     Sat Nov 16 08:26 - 08:26  (00:00)
groven   ssh:notty    138.68.50.18     Sat Nov 16 08:26 - 08:26  (00:00)
winston  ssh:notty    168.181.104.30   Sat Nov 16 08:26 - 08:26  (00:00)
bmw      ssh:notty    106.12.114.173   Sat Nov 16 08:26 - 08:26  (00:00)
winston  ssh:notty    168.181.104.30   Sat Nov 16 08:26 - 08:26  (00:00)
bmw      ssh:notty    106.12.114.173   Sat Nov 16 08:26 - 08:26  (00:00)
melger   ssh:notty    51.38.237.214    Sat Nov 16 08:26 - 08:26  (00:00)
melger   ssh:notty    51.38.237.214    Sat Nov 16 08:26 - 08:26  (00:00)
irc      ssh:notty    84.45.251.243    Sat Nov 16 08:26 - 08:26  (00:00)
cocuzzo  ssh:notty    188.166.109.87   Sat Nov 16 08:25 - 08:25  (00:00)
cocuzzo  ssh:notty    188.166.109.87   Sat Nov 16 08:25 - 08:25  (00:00)
test     ssh:notty    180.68.177.15    Sat Nov 16 08:25 - 08:25  (00:00)
test     ssh:notty    180.68.177.15    Sat Nov 16 08:25 - 08:25  (00:00)
root     ssh:notty    62.80.164.18     Sat Nov 16 08:24 - 08:24  (00:00)
chun-she ssh:notty    49.235.240.21    Sat Nov 16 08:24 - 08:24  (00:00)
chun-she ssh:notty    49.235.240.21    Sat Nov 16 08:24 - 08:24  (00:00)
ftp      ssh:notty    114.67.80.39     Sat Nov 16 08:23 - 08:23  (00:00)
ftp      ssh:notty    114.67.80.39     Sat Nov 16 08:23 - 08:23  (00:00)
hirakawa ssh:notty    51.38.237.214    Sat Nov 16 08:23 - 08:23  (00:00)
hirakawa ssh:notty    51.38.237.214    Sat Nov 16 08:23 - 08:23  (00:00)
hubregs  ssh:notty    138.68.50.18     Sat Nov 16 08:22 - 08:22  (00:00)
hubregs  ssh:notty    138.68.50.18     Sat Nov 16 08:22 - 08:22  (00:00)
root     ssh:notty    116.24.66.114    Sat Nov 16 08:22 - 08:22  (00:00)
rusty    ssh:notty    168.181.104.30   Sat Nov 16 08:22 - 08:22  (00:00)
rusty    ssh:notty    168.181.104.30   Sat Nov 16 08:22 - 08:22  (00:00)
mt       ssh:notty    106.12.114.173   Sat Nov 16 08:21 - 08:21  (00:00)
mt       ssh:notty    106.12.114.173   Sat Nov 16 08:21 - 08:21  (00:00)
lh       ssh:notty    209.97.161.46    Sat Nov 16 08:20 - 08:20  (00:00)
lh       ssh:notty    209.97.161.46    Sat Nov 16 08:20 - 08:20  (00:00)
codeawip ssh:notty    180.68.177.15    Sat Nov 16 08:20 - 08:20  (00:00)
rpm      ssh:notty    49.235.240.21    Sat Nov 16 08:19 - 08:19  (00:00)
codeawip ssh:notty    180.68.177.15    Sat Nov 16 08:19 - 08:19  (00:00)
rpm      ssh:notty    49.235.240.21    Sat Nov 16 08:19 - 08:19  (00:00)
sys      ssh:notty    51.38.237.214    Sat Nov 16 08:19 - 08:19  (00:00)
gregorz  ssh:notty    180.97.31.28     Sat Nov 16 08:19 - 08:19  (00:00)
gregorz  ssh:notty    180.97.31.28     Sat Nov 16 08:19 - 08:19  (00:00)
root     ssh:notty    114.67.80.39     Sat Nov 16 08:19 - 08:19  (00:00)
merridie ssh:notty    138.68.50.18     Sat Nov 16 08:18 - 08:18  (00:00)
merridie ssh:notty    138.68.50.18     Sat Nov 16 08:18 - 08:18  (00:00)
seeger   ssh:notty    168.181.104.30   Sat Nov 16 08:18 - 08:18  (00:00)
seeger   ssh:notty    168.181.104.30   Sat Nov 16 08:18 - 08:18  (00:00)
root     ssh:notty    222.186.190.92   Sat Nov 16 08:17 - 08:17  (00:00)
waals    ssh:notty    116.24.66.114    Sat Nov 16 08:17 - 08:17  (00:00)
waals    ssh:notty    116.24.66.114    Sat Nov 16 08:17 - 08:17  (00:00)
root     ssh:notty    222.186.190.92   Sat Nov 16 08:17 - 08:17  (00:00)
root     ssh:notty    222.186.190.92   Sat Nov 16 08:17 - 08:17  (00:00)
root     ssh:notty    222.186.190.92   Sat Nov 16 08:17 - 08:17  (00:00)
root     ssh:notty    222.186.190.92   Sat Nov 16 08:17 - 08:17  (00:00)
laiman   ssh:notty    106.12.114.173   Sat Nov 16 08:16 - 08:16  (00:00)
laiman   ssh:notty    106.12.114.173   Sat Nov 16 08:16 - 08:16  (00:00)
saywers  ssh:notty    51.38.237.214    Sat Nov 16 08:16 - 08:16  (00:00)
saywers  ssh:notty    51.38.237.214    Sat Nov 16 08:16 - 08:16  (00:00)
chiet    ssh:notty    49.235.240.21    Sat Nov 16 08:15 - 08:15  (00:00)
chiet    ssh:notty    49.235.240.21    Sat Nov 16 08:15 - 08:15  (00:00)
server   ssh:notty    180.97.31.28     Sat Nov 16 08:15 - 08:15  (00:00)
hjk      ssh:notty    114.67.80.39     Sat Nov 16 08:15 - 08:15  (00:00)
server   ssh:notty    180.97.31.28     Sat Nov 16 08:15 - 08:15  (00:00)
hjk      ssh:notty    114.67.80.39     Sat Nov 16 08:15 - 08:15  (00:00)
bremer   ssh:notty    84.45.251.243    Sat Nov 16 08:14 - 08:14  (00:00)
bremer   ssh:notty    84.45.251.243    Sat Nov 16 08:14 - 08:14  (00:00)
allison  ssh:notty    180.68.177.15    Sat Nov 16 08:14 - 08:14  (00:00)
ftp_test ssh:notty    217.61.121.48    Sat Nov 16 08:14 - 08:14  (00:00)
allison  ssh:notty    180.68.177.15    Sat Nov 16 08:14 - 08:14  (00:00)
ftp_test ssh:notty    217.61.121.48    Sat Nov 16 08:14 - 08:14  (00:00)
root     ssh:notty    138.68.50.18     Sat Nov 16 08:14 - 08:14  (00:00)
test321  ssh:notty    168.181.104.30   Sat Nov 16 08:13 - 08:13  (00:00)
test321  ssh:notty    168.181.104.30   Sat Nov 16 08:13 - 08:13  (00:00)
root     ssh:notty    116.24.66.114    Sat Nov 16 08:13 - 08:13  (00:00)
huelvasp ssh:notty    51.38.237.214    Sat Nov 16 08:12 - 08:12  (00:00)
huelvasp ssh:notty    51.38.237.214    Sat Nov 16 08:12 - 08:12  (00:00)
support  ssh:notty    65.153.45.34     Sat Nov 16 08:11 - 08:11  (00:00)
support  ssh:notty    65.153.45.34     Sat Nov 16 08:11 - 08:11  (00:00)
rpm      ssh:notty    106.12.114.173   Sat Nov 16 08:11 - 08:11  (00:00)
rpm      ssh:notty    106.12.114.173   Sat Nov 16 08:11 - 08:11  (00:00)
sinus    ssh:notty    165.227.96.190   Sat Nov 16 08:11 - 08:11  (00:00)
clish    ssh:notty    188.166.109.87   Sat Nov 16 08:11 - 08:11  (00:00)
sinus    ssh:notty    165.227.96.190   Sat Nov 16 08:11 - 08:11  (00:00)
clish    ssh:notty    188.166.109.87   Sat Nov 16 08:11 - 08:11  (00:00)
root     ssh:notty    114.67.80.39     Sat Nov 16 08:10 - 08:10  (00:00)
root     ssh:notty    49.235.240.21    Sat Nov 16 08:10 - 08:10  (00:00)
miau     ssh:notty    180.97.31.28     Sat Nov 16 08:10 - 08:10  (00:00)
miau     ssh:notty    180.97.31.28     Sat Nov 16 08:10 - 08:10  (00:00)
lauria   ssh:notty    138.68.50.18     Sat Nov 16 08:10 - 08:10  (00:00)
lauria   ssh:notty    138.68.50.18     Sat Nov 16 08:10 - 08:10  (00:00)
wwwrun   ssh:notty    51.38.237.214    Sat Nov 16 08:09 - 08:09  (00:00)
wwwrun   ssh:notty    51.38.237.214    Sat Nov 16 08:09 - 08:09  (00:00)
koki     ssh:notty    180.68.177.15    Sat Nov 16 08:09 - 08:09  (00:00)
koki     ssh:notty    180.68.177.15    Sat Nov 16 08:09 - 08:09  (00:00)
nobie    ssh:notty    168.181.104.30   Sat Nov 16 08:09 - 08:09  (00:00)
nobie    ssh:notty    168.181.104.30   Sat Nov 16 08:09 - 08:09  (00:00)
admin    ssh:notty    51.77.147.95     Sat Nov 16 08:09 - 08:09  (00:00)
admin    ssh:notty    51.77.147.95     Sat Nov 16 08:09 - 08:09  (00:00)
root     ssh:notty    116.24.66.114    Sat Nov 16 08:08 - 08:08  (00:00)
kiem     ssh:notty    165.227.96.190   Sat Nov 16 08:08 - 08:08  (00:00)
root     ssh:notty    222.186.175.202  Sat Nov 16 08:08 - 08:08  (00:00)
kiem     ssh:notty    165.227.96.190   Sat Nov 16 08:08 - 08:08  (00:00)
root     ssh:notty    222.186.175.202  Sat Nov 16 08:08 - 08:08  (00:00)
root     ssh:notty    222.186.175.202  Sat Nov 16 08:07 - 08:07  (00:00)
root     ssh:notty    222.186.175.202  Sat Nov 16 08:07 - 08:07  (00:00)
yoyo     ssh:notty    65.153.45.34     Sat Nov 16 08:07 - 08:07  (00:00)
yoyo     ssh:notty    65.153.45.34     Sat Nov 16 08:07 - 08:07  (00:00)
root     ssh:notty    222.186.175.202  Sat Nov 16 08:07 - 08:07  (00:00)
root     ssh:notty    114.67.80.39     Sat Nov 16 08:06 - 08:06  (00:00)
test     ssh:notty    49.235.240.21    Sat Nov 16 08:06 - 08:06  (00:00)
silva    ssh:notty    180.97.31.28     Sat Nov 16 08:06 - 08:06  (00:00)
test     ssh:notty    49.235.240.21    Sat Nov 16 08:06 - 08:06  (00:00)
silva    ssh:notty    180.97.31.28     Sat Nov 16 08:06 - 08:06  (00:00)
root     ssh:notty    51.38.237.214    Sat Nov 16 08:06 - 08:06  (00:00)
test     ssh:notty    138.68.50.18     Sat Nov 16 08:06 - 08:06  (00:00)
test     ssh:notty    138.68.50.18     Sat Nov 16 08:06 - 08:06  (00:00)
root     ssh:notty    106.12.114.173   Sat Nov 16 08:06 - 08:06  (00:00)
carter   ssh:notty    118.89.35.251    Sat Nov 16 08:06 - 08:06  (00:00)
carter   ssh:notty    118.89.35.251    Sat Nov 16 08:06 - 08:06  (00:00)
gdm      ssh:notty    51.77.147.95     Sat Nov 16 08:05 - 08:05  (00:00)
gdm      ssh:notty    51.77.147.95     Sat Nov 16 08:05 - 08:05  (00:00)
rudy     ssh:notty    165.227.96.190   Sat Nov 16 08:05 - 08:05  (00:00)
rudy     ssh:notty    165.227.96.190   Sat Nov 16 08:04 - 08:04  (00:00)
misery   ssh:notty    168.181.104.30   Sat Nov 16 08:04 - 08:04  (00:00)
misery   ssh:notty    168.181.104.30   Sat Nov 16 08:04 - 08:04  (00:00)
giffer   ssh:notty    180.68.177.15    Sat Nov 16 08:04 - 08:04  (00:00)
giffer   ssh:notty    180.68.177.15    Sat Nov 16 08:04 - 08:04  (00:00)
ol       ssh:notty    65.153.45.34     Sat Nov 16 08:03 - 08:03  (00:00)
ol       ssh:notty    65.153.45.34     Sat Nov 16 08:03 - 08:03  (00:00)
sampless ssh:notty    116.24.66.114    Sat Nov 16 08:03 - 08:03  (00:00)
sampless ssh:notty    116.24.66.114    Sat Nov 16 08:03 - 08:03  (00:00)
pcap     ssh:notty    51.38.237.214    Sat Nov 16 08:02 - 08:02  (00:00)
pcap     ssh:notty    51.38.237.214    Sat Nov 16 08:02 - 08:02  (00:00)
guest    ssh:notty    114.67.80.39     Sat Nov 16 08:02 - 08:02  (00:00)
guest    ssh:notty    114.67.80.39     Sat Nov 16 08:02 - 08:02  (00:00)
yoyo     ssh:notty    51.77.147.95     Sat Nov 16 08:02 - 08:02  (00:00)
yoyo     ssh:notty    51.77.147.95     Sat Nov 16 08:02 - 08:02  (00:00)
root     ssh:notty    138.68.50.18     Sat Nov 16 08:02 - 08:02  (00:00)
administ ssh:notty    49.235.240.21    Sat Nov 16 08:02 - 08:02  (00:00)
administ ssh:notty    49.235.240.21    Sat Nov 16 08:02 - 08:02  (00:00)
sissy    ssh:notty    180.97.31.28     Sat Nov 16 08:01 - 08:01  (00:00)
sissy    ssh:notty    180.97.31.28     Sat Nov 16 08:01 - 08:01  (00:00)
root     ssh:notty    165.227.96.190   Sat Nov 16 08:01 - 08:01  (00:00)
root     ssh:notty    118.89.35.251    Sat Nov 16 08:01 - 08:01  (00:00)
marlon   ssh:notty    106.12.114.173   Sat Nov 16 08:01 - 08:01  (00:00)
marlon   ssh:notty    106.12.114.173   Sat Nov 16 08:01 - 08:01  (00:00)
1        ssh:notty    168.181.104.30   Sat Nov 16 08:00 - 08:00  (00:00)
1        ssh:notty    168.181.104.30   Sat Nov 16 08:00 - 08:00  (00:00)
willhoff ssh:notty    65.153.45.34     Sat Nov 16 08:00 - 08:00  (00:00)
willhoff ssh:notty    65.153.45.34     Sat Nov 16 08:00 - 08:00  (00:00)
test     ssh:notty    51.38.237.214    Sat Nov 16 07:59 - 07:59  (00:00)
test     ssh:notty    51.38.237.214    Sat Nov 16 07:59 - 07:59  (00:00)
named    ssh:notty    51.77.147.95     Sat Nov 16 07:59 - 07:59  (00:00)
named    ssh:notty    51.77.147.95     Sat Nov 16 07:59 - 07:59  (00:00)
bnq_ops  ssh:notty    165.227.96.190   Sat Nov 16 07:58 - 07:58  (00:00)
bnq_ops  ssh:notty    165.227.96.190   Sat Nov 16 07:58 - 07:58  (00:00)
admin    ssh:notty    180.68.177.15    Sat Nov 16 07:58 - 07:58  (00:00)
admin    ssh:notty    180.68.177.15    Sat Nov 16 07:58 - 07:58  (00:00)
ciuser   ssh:notty    116.24.66.114    Sat Nov 16 07:58 - 07:58  (00:00)
ciuser   ssh:notty    116.24.66.114    Sat Nov 16 07:58 - 07:58  (00:00)
mavis    ssh:notty    114.67.80.39     Sat Nov 16 07:58 - 07:58  (00:00)
mavis    ssh:notty    114.67.80.39     Sat Nov 16 07:58 - 07:58  (00:00)
root     ssh:notty    138.68.50.18     Sat Nov 16 07:58 - 07:58  (00:00)
asterisk ssh:notty    118.89.35.251    Sat Nov 16 07:57 - 07:57  (00:00)
root     ssh:notty    222.186.173.154  Sat Nov 16 07:57 - 07:57  (00:00)
asterisk ssh:notty    118.89.35.251    Sat Nov 16 07:57 - 07:57  (00:00)
root     ssh:notty    222.186.173.154  Sat Nov 16 07:57 - 07:57  (00:00)
host     ssh:notty    49.235.240.21    Sat Nov 16 07:57 - 07:57  (00:00)
root     ssh:notty    222.186.173.154  Sat Nov 16 07:57 - 07:57  (00:00)
host     ssh:notty    49.235.240.21    Sat Nov 16 07:57 - 07:57  (00:00)
root     ssh:notty    180.97.31.28     Sat Nov 16 07:57 - 07:57  (00:00)
root     ssh:notty    222.186.173.154  Sat Nov 16 07:57 - 07:57  (00:00)
root     ssh:notty    222.186.173.154  Sat Nov 16 07:57 - 07:57  (00:00)
root     ssh:notty    104.236.244.98   Sat Nov 16 07:56 - 07:56  (00:00)
bwadmin  ssh:notty    106.12.114.173   Sat Nov 16 07:56 - 07:56  (00:00)
bwadmin  ssh:notty    106.12.114.173   Sat Nov 16 07:56 - 07:56  (00:00)
yyyyyyyy ssh:notty    168.181.104.30   Sat Nov 16 07:56 - 07:56  (00:00)
yyyyyyyy ssh:notty    168.181.104.30   Sat Nov 16 07:56 - 07:56  (00:00)
root     ssh:notty    51.38.237.214    Sat Nov 16 07:56 - 07:56  (00:00)
ftp      ssh:notty    65.153.45.34     Sat Nov 16 07:56 - 07:56  (00:00)
ftp      ssh:notty    65.153.45.34     Sat Nov 16 07:56 - 07:56  (00:00)
carol    ssh:notty    51.77.147.95     Sat Nov 16 07:55 - 07:55  (00:00)
carol    ssh:notty    51.77.147.95     Sat Nov 16 07:55 - 07:55  (00:00)
admin    ssh:notty    165.227.96.190   Sat Nov 16 07:54 - 07:54  (00:00)
admin    ssh:notty    165.227.96.190   Sat Nov 16 07:54 - 07:54  (00:00)
lp       ssh:notty    138.68.50.18     Sat Nov 16 07:54 - 07:54  (00:00)
root     ssh:notty    114.67.80.39     Sat Nov 16 07:54 - 07:54  (00:00)
kd       ssh:notty    116.24.66.114    Sat Nov 16 07:54 - 07:54  (00:00)
kd       ssh:notty    116.24.66.114    Sat Nov 16 07:54 - 07:54  (00:00)
ggg      ssh:notty    118.89.35.251    Sat Nov 16 07:53 - 07:53  (00:00)
ggg      ssh:notty    118.89.35.251    Sat Nov 16 07:53 - 07:53  (00:00)
krisi    ssh:notty    104.236.244.98   Sat Nov 16 07:53 - 07:53  (00:00)
krisi    ssh:notty    104.236.244.98   Sat Nov 16 07:53 - 07:53  (00:00)
root     ssh:notty    49.235.240.21    Sat Nov 16 07:53 - 07:53  (00:00)
guest    ssh:notty    180.97.31.28     Sat Nov 16 07:53 - 07:53  (00:00)
guest    ssh:notty    180.97.31.28     Sat Nov 16 07:53 - 07:53  (00:00)
webadmin ssh:notty    51.38.237.214    Sat Nov 16 07:53 - 07:53  (00:00)
webadmin ssh:notty    51.38.237.214    Sat Nov 16 07:53 - 07:53  (00:00)
mortense ssh:notty    51.77.147.95     Sat Nov 16 07:52 - 07:52  (00:00)
mortense ssh:notty    51.77.147.95     Sat Nov 16 07:52 - 07:52  (00:00)
lp       ssh:notty    65.153.45.34     Sat Nov 16 07:52 - 07:52  (00:00)
password ssh:notty    168.181.104.30   Sat Nov 16 07:51 - 07:51  (00:00)
password ssh:notty    168.181.104.30   Sat Nov 16 07:51 - 07:51  (00:00)
guest    ssh:notty    106.12.114.173   Sat Nov 16 07:51 - 07:51  (00:00)
guest    ssh:notty    106.12.114.173   Sat Nov 16 07:51 - 07:51  (00:00)
root     ssh:notty    138.68.50.18     Sat Nov 16 07:50 - 07:50  (00:00)
root     ssh:notty    114.67.80.39     Sat Nov 16 07:50 - 07:50  (00:00)
graig    ssh:notty    51.38.237.214    Sat Nov 16 07:49 - 07:49  (00:00)
graig    ssh:notty    51.38.237.214    Sat Nov 16 07:49 - 07:49  (00:00)
mysql    ssh:notty    104.236.244.98   Sat Nov 16 07:49 - 07:49  (00:00)
admin    ssh:notty    165.227.96.190   Sat Nov 16 07:49 - 07:49  (00:00)
mysql    ssh:notty    104.236.244.98   Sat Nov 16 07:49 - 07:49  (00:00)
admin    ssh:notty    165.227.96.190   Sat Nov 16 07:49 - 07:49  (00:00)
fusco    ssh:notty    51.77.147.95     Sat Nov 16 07:49 - 07:49  (00:00)
fusco    ssh:notty    51.77.147.95     Sat Nov 16 07:49 - 07:49  (00:00)
youngblo ssh:notty    118.89.35.251    Sat Nov 16 07:49 - 07:49  (00:00)
guest    ssh:notty    116.24.66.114    Sat Nov 16 07:49 - 07:49  (00:00)
youngblo ssh:notty    118.89.35.251    Sat Nov 16 07:49 - 07:49  (00:00)
guest    ssh:notty    116.24.66.114    Sat Nov 16 07:49 - 07:49  (00:00)
root     ssh:notty    180.97.31.28     Sat Nov 16 07:48 - 07:48  (00:00)
root     ssh:notty    49.235.240.21    Sat Nov 16 07:48 - 07:48  (00:00)
named    ssh:notty    65.153.45.34     Sat Nov 16 07:48 - 07:48  (00:00)
named    ssh:notty    65.153.45.34     Sat Nov 16 07:48 - 07:48  (00:00)
admin123 ssh:notty    168.181.104.30   Sat Nov 16 07:47 - 07:47  (00:00)
admin123 ssh:notty    168.181.104.30   Sat Nov 16 07:47 - 07:47  (00:00)
pfaendle ssh:notty    180.68.177.15    Sat Nov 16 07:47 - 07:47  (00:00)
pfaendle ssh:notty    180.68.177.15    Sat Nov 16 07:47 - 07:47  (00:00)
root     ssh:notty    106.12.114.173   Sat Nov 16 07:46 - 07:46  (00:00)
root     ssh:notty    138.68.50.18     Sat Nov 16 07:46 - 07:46  (00:00)
sbignami ssh:notty    51.38.237.214    Sat Nov 16 07:46 - 07:46  (00:00)
sbignami ssh:notty    51.38.237.214    Sat Nov 16 07:46 - 07:46  (00:00)
server   ssh:notty    51.77.147.95     Sat Nov 16 07:46 - 07:46  (00:00)
games    ssh:notty    114.67.80.39     Sat Nov 16 07:46 - 07:46  (00:00)
server   ssh:notty    51.77.147.95     Sat Nov 16 07:46 - 07:46  (00:00)
tallett  ssh:notty    104.236.244.98   Sat Nov 16 07:45 - 07:45  (00:00)
tallett  ssh:notty    104.236.244.98   Sat Nov 16 07:45 - 07:45  (00:00)
test     ssh:notty    62.80.164.18     Sat Nov 16 07:45 - 07:45  (00:00)
test     ssh:notty    62.80.164.18     Sat Nov 16 07:45 - 07:45  (00:00)
lily     ssh:notty    118.89.35.251    Sat Nov 16 07:45 - 07:45  (00:00)
lily     ssh:notty    118.89.35.251    Sat Nov 16 07:45 - 07:45  (00:00)
ching    ssh:notty    180.97.31.28     Sat Nov 16 07:44 - 07:44  (00:00)
ching    ssh:notty    180.97.31.28     Sat Nov 16 07:44 - 07:44  (00:00)
starzins ssh:notty    49.235.240.21    Sat Nov 16 07:44 - 07:44  (00:00)
starzins ssh:notty    49.235.240.21    Sat Nov 16 07:44 - 07:44  (00:00)
sulai    ssh:notty    65.153.45.34     Sat Nov 16 07:44 - 07:44  (00:00)
root     ssh:notty    116.24.66.114    Sat Nov 16 07:44 - 07:44  (00:00)
sulai    ssh:notty    65.153.45.34     Sat Nov 16 07:44 - 07:44  (00:00)
jopling  ssh:notty    168.181.104.30   Sat Nov 16 07:43 - 07:43  (00:00)
jopling  ssh:notty    168.181.104.30   Sat Nov 16 07:43 - 07:43  (00:00)
vogelman ssh:notty    51.38.237.214    Sat Nov 16 07:43 - 07:43  (00:00)
vogelman ssh:notty    51.38.237.214    Sat Nov 16 07:43 - 07:43  (00:00)
corte    ssh:notty    51.77.147.95     Sat Nov 16 07:42 - 07:42  (00:00)
corte    ssh:notty    51.77.147.95     Sat Nov 16 07:42 - 07:42  (00:00)
root     ssh:notty    138.68.50.18     Sat Nov 16 07:42 - 07:42  (00:00)
root     ssh:notty    104.236.244.98   Sat Nov 16 07:42 - 07:42  (00:00)
nobody   ssh:notty    114.67.80.39     Sat Nov 16 07:42 - 07:42  (00:00)
gerhards ssh:notty    106.12.114.173   Sat Nov 16 07:41 - 07:41  (00:00)
gerhards ssh:notty    106.12.114.173   Sat Nov 16 07:41 - 07:41  (00:00)
root     ssh:notty    180.68.177.15    Sat Nov 16 07:41 - 07:41  (00:00)
root     ssh:notty    118.89.35.251    Sat Nov 16 07:41 - 07:41  (00:00)
admin    ssh:notty    65.153.45.34     Sat Nov 16 07:40 - 07:40  (00:00)
admin    ssh:notty    65.153.45.34     Sat Nov 16 07:40 - 07:40  (00:00)
ftpuser  ssh:notty    180.97.31.28     Sat Nov 16 07:40 - 07:40  (00:00)
ftpuser  ssh:notty    180.97.31.28     Sat Nov 16 07:40 - 07:40  (00:00)
sshd     ssh:notty    49.235.240.21    Sat Nov 16 07:40 - 07:40  (00:00)
skachenk ssh:notty    116.24.66.114    Sat Nov 16 07:39 - 07:39  (00:00)
skachenk ssh:notty    116.24.66.114    Sat Nov 16 07:39 - 07:39  (00:00)
root     ssh:notty    51.77.147.95     Sat Nov 16 07:39 - 07:39  (00:00)
tayfur   ssh:notty    168.181.104.30   Sat Nov 16 07:39 - 07:39  (00:00)
tayfur   ssh:notty    168.181.104.30   Sat Nov 16 07:39 - 07:39  (00:00)
root     ssh:notty    51.38.237.214    Sat Nov 16 07:38 - 07:38  (00:00)
royce    ssh:notty    138.68.50.18     Sat Nov 16 07:38 - 07:38  (00:00)
royce    ssh:notty    138.68.50.18     Sat Nov 16 07:38 - 07:38  (00:00)
backup   ssh:notty    104.236.244.98   Sat Nov 16 07:38 - 07:38  (00:00)
aqibur   ssh:notty    114.67.80.39     Sat Nov 16 07:37 - 07:37  (00:00)
aqibur   ssh:notty    114.67.80.39     Sat Nov 16 07:37 - 07:37  (00:00)
root     ssh:notty    222.186.173.238  Sat Nov 16 07:37 - 07:37  (00:00)
root     ssh:notty    222.186.173.238  Sat Nov 16 07:37 - 07:37  (00:00)
root     ssh:notty    222.186.173.238  Sat Nov 16 07:37 - 07:37  (00:00)
root     ssh:notty    222.186.173.238  Sat Nov 16 07:37 - 07:37  (00:00)
root     ssh:notty    222.186.173.238  Sat Nov 16 07:37 - 07:37  (00:00)
home     ssh:notty    106.12.114.173   Sat Nov 16 07:37 - 07:37  (00:00)
home     ssh:notty    106.12.114.173   Sat Nov 16 07:37 - 07:37  (00:00)
artoo    ssh:notty    118.89.35.251    Sat Nov 16 07:37 - 07:37  (00:00)
artoo    ssh:notty    118.89.35.251    Sat Nov 16 07:36 - 07:36  (00:00)
chatard  ssh:notty    65.153.45.34     Sat Nov 16 07:36 - 07:36  (00:00)
chatard  ssh:notty    65.153.45.34     Sat Nov 16 07:36 - 07:36  (00:00)
subzero  ssh:notty    180.97.31.28     Sat Nov 16 07:36 - 07:36  (00:00)
subzero  ssh:notty    180.97.31.28     Sat Nov 16 07:36 - 07:36  (00:00)
guest    ssh:notty    51.77.147.95     Sat Nov 16 07:36 - 07:36  (00:00)
guest    ssh:notty    51.77.147.95     Sat Nov 16 07:36 - 07:36  (00:00)
http     ssh:notty    49.235.240.21    Sat Nov 16 07:35 - 07:35  (00:00)
http     ssh:notty    49.235.240.21    Sat Nov 16 07:35 - 07:35  (00:00)
root     ssh:notty    51.38.237.214    Sat Nov 16 07:35 - 07:35  (00:00)
fred     ssh:notty    116.24.66.114    Sat Nov 16 07:35 - 07:35  (00:00)
fred     ssh:notty    116.24.66.114    Sat Nov 16 07:35 - 07:35  (00:00)
root     ssh:notty    180.68.177.15    Sat Nov 16 07:35 - 07:35  (00:00)
root     ssh:notty    104.236.244.98   Sat Nov 16 07:34 - 07:34  (00:00)
recabarr ssh:notty    168.181.104.30   Sat Nov 16 07:34 - 07:34  (00:00)
recabarr ssh:notty    168.181.104.30   Sat Nov 16 07:34 - 07:34  (00:00)
pcap     ssh:notty    138.68.50.18     Sat Nov 16 07:34 - 07:34  (00:00)
pcap     ssh:notty    138.68.50.18     Sat Nov 16 07:34 - 07:34  (00:00)
claudia  ssh:notty    114.67.80.39     Sat Nov 16 07:33 - 07:33  (00:00)
claudia  ssh:notty    114.67.80.39     Sat Nov 16 07:33 - 07:33  (00:00)
root     ssh:notty    51.77.147.95     Sat Nov 16 07:33 - 07:33  (00:00)
marymarg ssh:notty    178.62.117.106   Sat Nov 16 07:32 - 07:32  (00:00)
marymarg ssh:notty    178.62.117.106   Sat Nov 16 07:32 - 07:32  (00:00)
fantoni  ssh:notty    118.89.35.251    Sat Nov 16 07:32 - 07:32  (00:00)
fantoni  ssh:notty    118.89.35.251    Sat Nov 16 07:32 - 07:32  (00:00)
server   ssh:notty    65.153.45.34     Sat Nov 16 07:32 - 07:32  (00:00)
server   ssh:notty    65.153.45.34     Sat Nov 16 07:32 - 07:32  (00:00)
chihara  ssh:notty    51.38.237.214    Sat Nov 16 07:32 - 07:32  (00:00)
chihara  ssh:notty    51.38.237.214    Sat Nov 16 07:32 - 07:32  (00:00)
root     ssh:notty    106.12.114.173   Sat Nov 16 07:32 - 07:32  (00:00)
sgamer   ssh:notty    180.97.31.28     Sat Nov 16 07:32 - 07:32  (00:00)
sgamer   ssh:notty    180.97.31.28     Sat Nov 16 07:32 - 07:32  (00:00)
root     ssh:notty    49.235.240.21    Sat Nov 16 07:31 - 07:31  (00:00)
ident    ssh:notty    104.236.244.98   Sat Nov 16 07:31 - 07:31  (00:00)
ident    ssh:notty    104.236.244.98   Sat Nov 16 07:31 - 07:31  (00:00)
alberghi ssh:notty    116.24.66.114    Sat Nov 16 07:30 - 07:30  (00:00)
alberghi ssh:notty    116.24.66.114    Sat Nov 16 07:30 - 07:30  (00:00)
operator ssh:notty    138.68.50.18     Sat Nov 16 07:30 - 07:30  (00:00)
operator ssh:notty    138.68.50.18     Sat Nov 16 07:30 - 07:30  (00:00)
manol    ssh:notty    168.181.104.30   Sat Nov 16 07:30 - 07:30  (00:00)
manol    ssh:notty    168.181.104.30   Sat Nov 16 07:30 - 07:30  (00:00)
root     ssh:notty    62.80.164.18     Sat Nov 16 07:30 - 07:30  (00:00)
test     ssh:notty    51.77.147.95     Sat Nov 16 07:29 - 07:29  (00:00)
test     ssh:notty    51.77.147.95     Sat Nov 16 07:29 - 07:29  (00:00)
root     ssh:notty    114.67.80.39     Sat Nov 16 07:29 - 07:29  (00:00)
millward ssh:notty    180.68.177.15    Sat Nov 16 07:29 - 07:29  (00:00)
millward ssh:notty    180.68.177.15    Sat Nov 16 07:29 - 07:29  (00:00)
majordom ssh:notty    51.38.237.214    Sat Nov 16 07:29 - 07:29  (00:00)
majordom ssh:notty    51.38.237.214    Sat Nov 16 07:29 - 07:29  (00:00)
root     ssh:notty    65.153.45.34     Sat Nov 16 07:28 - 07:28  (00:00)
root     ssh:notty    118.89.35.251    Sat Nov 16 07:28 - 07:28  (00:00)
mascella ssh:notty    180.97.31.28     Sat Nov 16 07:27 - 07:27  (00:00)
mascella ssh:notty    180.97.31.28     Sat Nov 16 07:27 - 07:27  (00:00)
kodama   ssh:notty    106.12.114.173   Sat Nov 16 07:27 - 07:27  (00:00)
backup   ssh:notty    104.236.244.98   Sat Nov 16 07:27 - 07:27  (00:00)
root     ssh:notty    49.235.240.21    Sat Nov 16 07:27 - 07:27  (00:00)
kodama   ssh:notty    106.12.114.173   Sat Nov 16 07:27 - 07:27  (00:00)
dumpm    ssh:notty    138.68.50.18     Sat Nov 16 07:26 - 07:26  (00:00)
dumpm    ssh:notty    138.68.50.18     Sat Nov 16 07:26 - 07:26  (00:00)
root     ssh:notty    51.77.147.95     Sat Nov 16 07:26 - 07:26  (00:00)
dian     ssh:notty    116.24.66.114    Sat Nov 16 07:26 - 07:26  (00:00)
dian     ssh:notty    116.24.66.114    Sat Nov 16 07:26 - 07:26  (00:00)
root     ssh:notty    114.67.80.39     Sat Nov 16 07:25 - 07:25  (00:00)
root     ssh:notty    65.153.45.34     Sat Nov 16 07:24 - 07:24  (00:00)
randhawa ssh:notty    118.89.35.251    Sat Nov 16 07:24 - 07:24  (00:00)
randhawa ssh:notty    118.89.35.251    Sat Nov 16 07:24 - 07:24  (00:00)
sshd     ssh:notty    104.236.244.98   Sat Nov 16 07:23 - 07:23  (00:00)
siri     ssh:notty    168.181.104.30   Sat Nov 16 07:23 - 07:23  (00:00)
siri     ssh:notty    168.181.104.30   Sat Nov 16 07:23 - 07:23  (00:00)
oracle   ssh:notty    180.68.177.15    Sat Nov 16 07:23 - 07:23  (00:00)
oliverio ssh:notty    180.97.31.28     Sat Nov 16 07:23 - 07:23  (00:00)
oracle   ssh:notty    180.68.177.15    Sat Nov 16 07:23 - 07:23  (00:00)
oliverio ssh:notty    180.97.31.28     Sat Nov 16 07:23 - 07:23  (00:00)
ident    ssh:notty    49.235.240.21    Sat Nov 16 07:23 - 07:23  (00:00)
admin    ssh:notty    51.77.147.95     Sat Nov 16 07:23 - 07:23  (00:00)
administ ssh:notty    138.68.50.18     Sat Nov 16 07:23 - 07:23  (00:00)
ident    ssh:notty    49.235.240.21    Sat Nov 16 07:23 - 07:23  (00:00)
admin    ssh:notty    51.77.147.95     Sat Nov 16 07:23 - 07:23  (00:00)
administ ssh:notty    138.68.50.18     Sat Nov 16 07:23 - 07:23  (00:00)
mario    ssh:notty    106.12.114.173   Sat Nov 16 07:22 - 07:22  (00:00)
mario    ssh:notty    106.12.114.173   Sat Nov 16 07:22 - 07:22  (00:00)
admin    ssh:notty    62.80.164.18     Sat Nov 16 07:22 - 07:22  (00:00)
admin    ssh:notty    62.80.164.18     Sat Nov 16 07:21 - 07:21  (00:00)
koyoto   ssh:notty    116.24.66.114    Sat Nov 16 07:21 - 07:21  (00:00)
koyoto   ssh:notty    116.24.66.114    Sat Nov 16 07:21 - 07:21  (00:00)
guest    ssh:notty    51.38.237.214    Sat Nov 16 07:21 - 07:21  (00:00)
guest    ssh:notty    51.38.237.214    Sat Nov 16 07:21 - 07:21  (00:00)
wwwadmin ssh:notty    114.67.80.39     Sat Nov 16 07:21 - 07:21  (00:00)
wwwadmin ssh:notty    114.67.80.39     Sat Nov 16 07:21 - 07:21  (00:00)
root     ssh:notty    65.153.45.34     Sat Nov 16 07:20 - 07:20  (00:00)
root     ssh:notty    118.89.35.251    Sat Nov 16 07:20 - 07:20  (00:00)
root     ssh:notty    104.236.244.98   Sat Nov 16 07:20 - 07:20  (00:00)
root     ssh:notty    51.77.147.95     Sat Nov 16 07:19 - 07:19  (00:00)
22       ssh:notty    168.181.104.30   Sat Nov 16 07:19 - 07:19  (00:00)
22       ssh:notty    168.181.104.30   Sat Nov 16 07:19 - 07:19  (00:00)
daemon   ssh:notty    180.97.31.28     Sat Nov 16 07:19 - 07:19  (00:00)
witney   ssh:notty    138.68.50.18     Sat Nov 16 07:19 - 07:19  (00:00)
witney   ssh:notty    138.68.50.18     Sat Nov 16 07:19 - 07:19  (00:00)
root     ssh:notty    49.235.240.21    Sat Nov 16 07:18 - 07:18  (00:00)
ponder   ssh:notty    106.12.114.173   Sat Nov 16 07:18 - 07:18  (00:00)
ponder   ssh:notty    106.12.114.173   Sat Nov 16 07:18 - 07:18  (00:00)
rocheste ssh:notty    180.68.177.15    Sat Nov 16 07:17 - 07:17  (00:00)
rocheste ssh:notty    180.68.177.15    Sat Nov 16 07:17 - 07:17  (00:00)
mhang    ssh:notty    114.67.80.39     Sat Nov 16 07:17 - 07:17  (00:00)
mhang    ssh:notty    114.67.80.39     Sat Nov 16 07:17 - 07:17  (00:00)
pcap     ssh:notty    116.24.66.114    Sat Nov 16 07:17 - 07:17  (00:00)
pcap     ssh:notty    116.24.66.114    Sat Nov 16 07:17 - 07:17  (00:00)
root     ssh:notty    65.153.45.34     Sat Nov 16 07:17 - 07:17  (00:00)
root     ssh:notty    222.186.180.8    Sat Nov 16 07:16 - 07:16  (00:00)
root     ssh:notty    222.186.180.8    Sat Nov 16 07:16 - 07:16  (00:00)
root     ssh:notty    222.186.180.8    Sat Nov 16 07:16 - 07:16  (00:00)
root     ssh:notty    222.186.180.8    Sat Nov 16 07:16 - 07:16  (00:00)
root     ssh:notty    222.186.180.8    Sat Nov 16 07:16 - 07:16  (00:00)
lr       ssh:notty    51.77.147.95     Sat Nov 16 07:16 - 07:16  (00:00)
lr       ssh:notty    51.77.147.95     Sat Nov 16 07:16 - 07:16  (00:00)
operator ssh:notty    118.89.35.251    Sat Nov 16 07:16 - 07:16  (00:00)
operator ssh:notty    118.89.35.251    Sat Nov 16 07:16 - 07:16  (00:00)
root     ssh:notty    104.236.244.98   Sat Nov 16 07:16 - 07:16  (00:00)
root     ssh:notty    138.68.50.18     Sat Nov 16 07:15 - 07:15  (00:00)
org      ssh:notty    180.97.31.28     Sat Nov 16 07:14 - 07:14  (00:00)
org      ssh:notty    180.97.31.28     Sat Nov 16 07:14 - 07:14  (00:00)
root     ssh:notty    49.235.240.21    Sat Nov 16 07:14 - 07:14  (00:00)
server   ssh:notty    106.12.114.173   Sat Nov 16 07:13 - 07:13  (00:00)
server   ssh:notty    106.12.114.173   Sat Nov 16 07:13 - 07:13  (00:00)
root     ssh:notty    51.77.147.95     Sat Nov 16 07:13 - 07:13  (00:00)
thoris   ssh:notty    65.153.45.34     Sat Nov 16 07:13 - 07:13  (00:00)
thoris   ssh:notty    65.153.45.34     Sat Nov 16 07:13 - 07:13  (00:00)
daemon   ssh:notty    114.67.80.39     Sat Nov 16 07:13 - 07:13  (00:00)
backup   ssh:notty    116.24.66.114    Sat Nov 16 07:12 - 07:12  (00:00)
kelsay   ssh:notty    104.236.244.98   Sat Nov 16 07:12 - 07:12  (00:00)
kelsay   ssh:notty    104.236.244.98   Sat Nov 16 07:12 - 07:12  (00:00)
root     ssh:notty    118.89.35.251    Sat Nov 16 07:12 - 07:12  (00:00)
root     ssh:notty    168.181.104.30   Sat Nov 16 07:12 - 07:12  (00:00)
rosebud  ssh:notty    180.68.177.15    Sat Nov 16 07:11 - 07:11  (00:00)
rosebud  ssh:notty    180.68.177.15    Sat Nov 16 07:11 - 07:11  (00:00)
web      ssh:notty    138.68.50.18     Sat Nov 16 07:11 - 07:11  (00:00)
web      ssh:notty    138.68.50.18     Sat Nov 16 07:11 - 07:11  (00:00)
guest    ssh:notty    180.97.31.28     Sat Nov 16 07:10 - 07:10  (00:00)
guest    ssh:notty    180.97.31.28     Sat Nov 16 07:10 - 07:10  (00:00)
terjesen ssh:notty    49.235.240.21    Sat Nov 16 07:10 - 07:10  (00:00)
terjesen ssh:notty    49.235.240.21    Sat Nov 16 07:10 - 07:10  (00:00)
root     ssh:notty    51.77.147.95     Sat Nov 16 07:10 - 07:10  (00:00)
zelonka  ssh:notty    65.153.45.34     Sat Nov 16 07:09 - 07:09  (00:00)
zelonka  ssh:notty    65.153.45.34     Sat Nov 16 07:09 - 07:09  (00:00)
idcwang  ssh:notty    114.67.80.39     Sat Nov 16 07:09 - 07:09  (00:00)
idcwang  ssh:notty    114.67.80.39     Sat Nov 16 07:09 - 07:09  (00:00)
root     ssh:notty    104.236.244.98   Sat Nov 16 07:08 - 07:08  (00:00)
bezhan   ssh:notty    106.12.114.173   Sat Nov 16 07:08 - 07:08  (00:00)
bezhan   ssh:notty    106.12.114.173   Sat Nov 16 07:08 - 07:08  (00:00)
proteu   ssh:notty    118.89.35.251    Sat Nov 16 07:08 - 07:08  (00:00)
proteu   ssh:notty    118.89.35.251    Sat Nov 16 07:08 - 07:08  (00:00)
m1       ssh:notty    116.24.66.114    Sat Nov 16 07:08 - 07:08  (00:00)
m1       ssh:notty    116.24.66.114    Sat Nov 16 07:08 - 07:08  (00:00)
backup   ssh:notty    138.68.50.18     Sat Nov 16 07:07 - 07:07  (00:00)
root     ssh:notty    222.186.175.161  Sat Nov 16 07:06 - 07:06  (00:00)
root     ssh:notty    222.186.175.161  Sat Nov 16 07:06 - 07:06  (00:00)
root     ssh:notty    222.186.175.161  Sat Nov 16 07:06 - 07:06  (00:00)
root     ssh:notty    222.186.175.161  Sat Nov 16 07:06 - 07:06  (00:00)
root     ssh:notty    51.77.147.95     Sat Nov 16 07:06 - 07:06  (00:00)
root     ssh:notty    222.186.175.161  Sat Nov 16 07:06 - 07:06  (00:00)
root     ssh:notty    222.186.175.161  Sat Nov 16 07:06 - 07:06  (00:00)
root     ssh:notty    222.186.175.161  Sat Nov 16 07:06 - 07:06  (00:00)
uucp     ssh:notty    180.97.31.28     Sat Nov 16 07:06 - 07:06  (00:00)
thor     ssh:notty    180.68.177.15    Sat Nov 16 07:06 - 07:06  (00:00)
thor     ssh:notty    180.68.177.15    Sat Nov 16 07:06 - 07:06  (00:00)
faq      ssh:notty    49.235.240.21    Sat Nov 16 07:05 - 07:05  (00:00)
faq      ssh:notty    49.235.240.21    Sat Nov 16 07:05 - 07:05  (00:00)
root     ssh:notty    65.153.45.34     Sat Nov 16 07:05 - 07:05  (00:00)
guest    ssh:notty    104.236.244.98   Sat Nov 16 07:05 - 07:05  (00:00)
guest    ssh:notty    104.236.244.98   Sat Nov 16 07:05 - 07:05  (00:00)
root     ssh:notty    114.67.80.39     Sat Nov 16 07:05 - 07:05  (00:00)
root     ssh:notty    106.12.114.173   Sat Nov 16 07:04 - 07:04  (00:00)
guest    ssh:notty    118.89.35.251    Sat Nov 16 07:04 - 07:04  (00:00)
guest    ssh:notty    118.89.35.251    Sat Nov 16 07:04 - 07:04  (00:00)
lisa     ssh:notty    138.68.50.18     Sat Nov 16 07:03 - 07:03  (00:00)
guest    ssh:notty    116.24.66.114    Sat Nov 16 07:03 - 07:03  (00:00)
lisa     ssh:notty    138.68.50.18     Sat Nov 16 07:03 - 07:03  (00:00)
guest    ssh:notty    116.24.66.114    Sat Nov 16 07:03 - 07:03  (00:00)
root     ssh:notty    51.77.147.95     Sat Nov 16 07:03 - 07:03  (00:00)
mymryk   ssh:notty    180.97.31.28     Sat Nov 16 07:02 - 07:02  (00:00)
mymryk   ssh:notty    180.97.31.28     Sat Nov 16 07:02 - 07:02  (00:00)
admin    ssh:notty    65.153.45.34     Sat Nov 16 07:01 - 07:01  (00:00)
admin    ssh:notty    65.153.45.34     Sat Nov 16 07:01 - 07:01  (00:00)
akiba    ssh:notty    104.236.244.98   Sat Nov 16 07:01 - 07:01  (00:00)
akiba    ssh:notty    104.236.244.98   Sat Nov 16 07:01 - 07:01  (00:00)
tadevich ssh:notty    49.235.240.21    Sat Nov 16 07:01 - 07:01  (00:00)
tadevich ssh:notty    49.235.240.21    Sat Nov 16 07:01 - 07:01  (00:00)
root     ssh:notty    114.67.80.39     Sat Nov 16 07:01 - 07:01  (00:00)
soma     ssh:notty    118.89.35.251    Sat Nov 16 07:00 - 07:00  (00:00)
login    ssh:notty    180.68.177.15    Sat Nov 16 07:00 - 07:00  (00:00)
soma     ssh:notty    118.89.35.251    Sat Nov 16 07:00 - 07:00  (00:00)
login    ssh:notty    180.68.177.15    Sat Nov 16 07:00 - 07:00  (00:00)
washi    ssh:notty    51.77.147.95     Sat Nov 16 07:00 - 07:00  (00:00)
washi    ssh:notty    51.77.147.95     Sat Nov 16 07:00 - 07:00  (00:00)
crash    ssh:notty    106.12.114.173   Sat Nov 16 07:00 - 07:00  (00:00)
crash    ssh:notty    106.12.114.173   Sat Nov 16 06:59 - 06:59  (00:00)
brock    ssh:notty    138.68.50.18     Sat Nov 16 06:59 - 06:59  (00:00)
brock    ssh:notty    138.68.50.18     Sat Nov 16 06:59 - 06:59  (00:00)
root     ssh:notty    116.24.66.114    Sat Nov 16 06:59 - 06:59  (00:00)
root     ssh:notty    54.38.184.235    Sat Nov 16 06:58 - 06:58  (00:00)
sannet   ssh:notty    180.97.31.28     Sat Nov 16 06:58 - 06:58  (00:00)
sannet   ssh:notty    180.97.31.28     Sat Nov 16 06:58 - 06:58  (00:00)
root     ssh:notty    104.236.244.98   Sat Nov 16 06:57 - 06:57  (00:00)
ftpuser  ssh:notty    65.153.45.34     Sat Nov 16 06:57 - 06:57  (00:00)
ftpuser  ssh:notty    65.153.45.34     Sat Nov 16 06:57 - 06:57  (00:00)
wwwrun   ssh:notty    62.80.164.18     Sat Nov 16 06:57 - 06:57  (00:00)
wwwrun   ssh:notty    62.80.164.18     Sat Nov 16 06:57 - 06:57  (00:00)
cardinal ssh:notty    49.235.240.21    Sat Nov 16 06:57 - 06:57  (00:00)
cardinal ssh:notty    49.235.240.21    Sat Nov 16 06:57 - 06:57  (00:00)
keirn    ssh:notty    51.77.147.95     Sat Nov 16 06:57 - 06:57  (00:00)
keirn    ssh:notty    51.77.147.95     Sat Nov 16 06:57 - 06:57  (00:00)
root     ssh:notty    114.67.80.39     Sat Nov 16 06:57 - 06:57  (00:00)
root     ssh:notty    118.89.35.251    Sat Nov 16 06:56 - 06:56  (00:00)
root     ssh:notty    138.68.50.18     Sat Nov 16 06:56 - 06:56  (00:00)
f015     ssh:notty    106.12.114.173   Sat Nov 16 06:55 - 06:55  (00:00)
f015     ssh:notty    106.12.114.173   Sat Nov 16 06:55 - 06:55  (00:00)
vl       ssh:notty    116.24.66.114    Sat Nov 16 06:54 - 06:54  (00:00)
vl       ssh:notty    116.24.66.114    Sat Nov 16 06:54 - 06:54  (00:00)
sqlsrv   ssh:notty    180.68.177.15    Sat Nov 16 06:54 - 06:54  (00:00)
sqlsrv   ssh:notty    180.68.177.15    Sat Nov 16 06:54 - 06:54  (00:00)
root     ssh:notty    104.236.244.98   Sat Nov 16 06:54 - 06:54  (00:00)
root     ssh:notty    65.153.45.34     Sat Nov 16 06:54 - 06:54  (00:00)
... (hundreds of similar failed SSH login records omitted; the same handful of source IPs keeps cycling through common/dictionary usernames for hours)
tioman   ssh:notty    144.91.77.116    Sat Nov 16 02:38 - 02:38  (00:00)
juntosin ssh:notty    144.91.77.116    Sat Nov 16 02:38 - 02:38  (00:00)
juntosin ssh:notty    144.91.77.116    Sat Nov 16 02:38 - 02:38  (00:00)
winmatel ssh:notty    144.91.77.116    Sat Nov 16 02:37 - 02:37  (00:00)
winmatel ssh:notty    144.91.77.116    Sat Nov 16 02:37 - 02:37  (00:00)
root     ssh:notty    154.66.196.32    Sat Nov 16 02:37 - 02:37  (00:00)
danpearc ssh:notty    144.91.77.116    Sat Nov 16 02:37 - 02:37  (00:00)
danpearc ssh:notty    144.91.77.116    Sat Nov 16 02:37 - 02:37  (00:00)
barmstro ssh:notty    144.91.77.116    Sat Nov 16 02:36 - 02:36  (00:00)
barmstro ssh:notty    144.91.77.116    Sat Nov 16 02:36 - 02:36  (00:00)
accuoss  ssh:notty    144.91.77.116    Sat Nov 16 02:36 - 02:36  (00:00)
accuoss  ssh:notty    144.91.77.116    Sat Nov 16 02:36 - 02:36  (00:00)
surveil  ssh:notty    144.91.77.116    Sat Nov 16 02:36 - 02:36  (00:00)
surveil  ssh:notty    144.91.77.116    Sat Nov 16 02:36 - 02:36  (00:00)
dasusr   ssh:notty    144.91.77.116    Sat Nov 16 02:35 - 02:35  (00:00)
dasusr   ssh:notty    144.91.77.116    Sat Nov 16 02:35 - 02:35  (00:00)
rrindels ssh:notty    144.91.77.116    Sat Nov 16 02:35 - 02:35  (00:00)
rrindels ssh:notty    144.91.77.116    Sat Nov 16 02:35 - 02:35  (00:00)
icosuser ssh:notty    144.91.77.116    Sat Nov 16 02:35 - 02:35  (00:00)
icosuser ssh:notty    144.91.77.116    Sat Nov 16 02:35 - 02:35  (00:00)
mimic    ssh:notty    144.91.77.116    Sat Nov 16 02:34 - 02:34  (00:00)
mimic    ssh:notty    144.91.77.116    Sat Nov 16 02:34 - 02:34  (00:00)
mysql    ssh:notty    138.68.99.46     Sat Nov 16 02:34 - 02:34  (00:00)
mysql    ssh:notty    138.68.99.46     Sat Nov 16 02:34 - 02:34  (00:00)
Style    ssh:notty    144.91.77.116    Sat Nov 16 02:34 - 02:34  (00:00)
Style    ssh:notty    144.91.77.116    Sat Nov 16 02:34 - 02:34  (00:00)
mysql    ssh:notty    154.66.196.32    Sat Nov 16 02:32 - 02:32  (00:00)
mysql    ssh:notty    154.66.196.32    Sat Nov 16 02:32 - 02:32  (00:00)
spfhqltm ssh:notty    138.68.99.46     Sat Nov 16 02:30 - 02:30  (00:00)
spfhqltm ssh:notty    138.68.99.46     Sat Nov 16 02:30 - 02:30  (00:00)
libuuid  ssh:notty    27.155.99.173    Sat Nov 16 02:26 - 02:26  (00:00)
libuuid  ssh:notty    27.155.99.173    Sat Nov 16 02:26 - 02:26  (00:00)
liesie   ssh:notty    154.66.196.32    Sat Nov 16 02:22 - 02:22  (00:00)
liesie   ssh:notty    154.66.196.32    Sat Nov 16 02:22 - 02:22  (00:00)
warchol  ssh:notty    138.68.99.46     Sat Nov 16 02:21 - 02:21  (00:00)
warchol  ssh:notty    138.68.99.46     Sat Nov 16 02:21 - 02:21  (00:00)

btmp begins Sat Nov 16 02:21:57 2019

ldd

ldd lists the shared libraries an executable depends on. You can copy a binary together with its libraries to a new machine and run it there by pointing LD_LIBRARY_PATH at them; see the reference here and the sketch below.
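A minimal sketch of that workflow, using /usr/bin/curl as a stand-in binary and a bundle/ directory name of my own choosing:

# collect the binary and the libraries ldd reports into ./bundle
mkdir -p bundle/libs
cp /usr/bin/curl bundle/
ldd /usr/bin/curl | awk '/=> \// {print $3}' | xargs -I{} cp {} bundle/libs/
# after copying bundle/ to the target machine, point the loader at the bundled libraries
LD_LIBRARY_PATH=./bundle/libs ./bundle/curl --version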

linux

Query device information

  • Quickly list the server's hardware
sudo lshw -short        # short-form summary of the server's hardware
sudo lshw -c network    # NIC model and the mapping between interfaces and IPs; tells which card a port belongs to
  • Server model, BIOS, mainboard, slots, CPU, memory, etc.
sudo dmidecode -t  bios         # vendor, version, etc.
sudo dmidecode -t  system       # server model, vendor, release date, etc.
sudo dmidecode -t  baseboard    # vendor, serial number, etc.
sudo dmidecode -t  chassis      # chassis slots; the maximum supported PCI slots, not the slots actually present
sudo dmidecode -t  processor    # CPU count, type (x86/ARM), clock frequency, L1/L2/L3 cache, etc.
sudo dmidecode -t  memory       # all memory slots, size of each DIMM, width, type (e.g. DDR4)
sudo dmidecode -t  cache        # all L1/L2/L3 cache information
sudo dmidecode -t  connector    # not investigated yet
sudo dmidecode -t  slot         # slot information

See the dmidecode examples for the output on different machines.

Query CPU information

Total logical cores = number of physical CPUs × cores per physical CPU × threads per core
  • Number of physical CPUs
cat /proc/cpuinfo| grep "physical id"| sort| uniq| wc -l
  • Cores per physical CPU
cat /proc/cpuinfo| grep "cpu cores"| uniq
  • Number of logical CPUs
cat /proc/cpuinfo| grep "processor"| wc -l
Alternatively, lscpu gives a CPU overview directly; a combined sketch follows below.
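A small sketch that combines the three greps above (the shell variable names are mine):

# count sockets, cores per socket, and logical CPUs from /proc/cpuinfo
sockets=$(grep "physical id" /proc/cpuinfo | sort -u | wc -l)
cores=$(grep "cpu cores" /proc/cpuinfo | sort -u | awk '{print $NF}')
logical=$(grep -c "^processor" /proc/cpuinfo)
echo "sockets=$sockets cores_per_socket=$cores logical=$logical"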
See "Interpreting CPU information" for the output on different machines.

Query memory information

  • Check memory usage
me@ubuntu:~$ free -mh
              total        used        free      shared  buff/cache   available
Mem:           125G         20G        1.2G        3.1M        103G        103G
Swap:          2.0G         20M        2.0G
me@ubuntu:~$

Query disk information

1. lsblk

Shows physical disks, logical partitions, and their mount points.

me@ubuntu:~$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  3.7T  0 disk
├─sda1   8:1    0  512M  0 part /boot/efi
└─sda2   8:2    0  3.7T  0 part /
sdb      8:16   0  3.7T  0 disk
└─sdb1   8:17   0  3.7T  0 part /home/data
2. fdisk

The disk tool shipped with the system; it can partition and format disks, among other things.

fdisk -l
# lists all physical disks; with hardware RAID only one disk is visible
3. smartctl

smartctl drives the self-monitoring, analysis and reporting (SMART) features of ATA-3 and later, IDE and SCSI-3 drives. It shows the drive's own information: model, serial number, vendor, rotation rate, size, etc.
smartctl -a /dev/sdb
me@ubuntu:~$ sudo smartctl -a /dev/sdb
smartctl 6.6 2016-05-31 r4324 [aarch64-linux-4.15.0-20-generic] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     HUS726040ALA610
Serial Number:    K4JGB1DB
LU WWN Device Id: 5 000cca 25de2b5aa
Firmware Version: T7R4
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ACS-2, ATA8-ACS T13/1699-D revision 4
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Fri Jan 18 17:26:44 2019 CST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
...........
4. hdparm

hdparm is a Linux command-line tool for getting and setting hardware parameters of ATA drives and for benchmarking them. It can configure the drive cache, sleep mode, power management, acoustic management, DMA settings and so on.

hdparm -I /dev/sdb
me@ubuntu:~$ sudo hdparm -I /dev/sdb

/dev/sdb:

ATA device, with non-removable media
        Model Number:       HUS726040ALA610
        Serial Number:      K4JGB1DB
        Firmware Revision:  T7R4
        Transport:          Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0; Revision: ATA8-AST T13 Project D1697 Revision 0b
Standards:
        Used: unknown (minor revision code 0x0029)
        Supported: 9 8 7 6 5
        Likely used: 9
Configuration:
        Logical         max     current
        cylinders       16383   16383
        heads           16      16
        sectors/track   63      63
        --
        CHS current addressable sectors:    16514064
        LBA    user addressable sectors:   268435455
        LBA48  user addressable sectors:  7814037168
        Logical  Sector size:                   512 bytes
        Physical Sector size:                   512 bytes
        device size with M = 1024*1024:     3815447 MBytes
        device size with M = 1000*1000:     4000787 MBytes (4000 GB)
        cache/buffer size  = unknown
        Form Factor: 3.5 inch
        Nominal Media Rotation Rate: 7200
Capabilities:

Network operations

Set an IP address

The ip address command is recommended. ifconfig can do the same thing, but Ubuntu tends to use ifconfig while RHEL relies on ifcfg scripts; ip address works the same way on both.

ip address add 10.0.0.3/24 dev eth0
ip address add 192.168.2.223/24 dev eth1
ip address add 192.168.4.223/24 dev eth1
dhcp
Sometimes you don't want per-interface config files and would rather let every interface get its IP via DHCP.
redhat7 8
dhclient

This obtains an IP automatically on the available interfaces.

Network configuration files

ubuntu

me@ceph-client:~$ cat /etc/netplan/01-netcfg.yaml
# This file describes the network interfaces available on your system
# For more information, see netplan(5).
network:
  version: 2
  renderer: networkd
  ethernets:
    enp1s0:
      dhcp4: yes
me@ceph-client:~$

[See the DNS setup guide] After editing the configuration file, apply it for the change to take effect; a static-address sketch follows after the command below.

sudo netplan apply
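For reference, a minimal static-address sketch for the same netplan file (the interface name, addresses and DNS server below are placeholders, not taken from the setup above):

# /etc/netplan/01-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp1s0:
      dhcp4: no
      addresses: [192.168.1.100/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8]
# then: sudo netplan apply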

redhat7.5 redhat8.0

[me@localhost ~]$ cat /etc/sysconfig/network-scripts/ifcfg-enp1s0
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp1s0
UUID=8d5bd07f-3342-424c-9a18-ef91be6cf514
DEVICE=enp1s0
ONBOOT=yes
[me@localhost ~]$

The key options to change are BOOTPROTO=dhcp and ONBOOT=yes.

Add a gateway

GATEWAY=10.6.1.1

After editing the config file, make the new IP settings take effect:

ip addr flush dev eno1
systemctl restart NetworkManager
# or restart the network service as shown in the next section

Restart the network

ubuntu18.04

sudo systemctl restart systemd-networkd.service

redhat7.5 redhat8.0

sudo systemctl restart NetworkManager

suse 15

systemctl restart network

The command differs on other systems, and even on Ubuntu it varies by version, so look it up for other distributions.

  • Packet capture

Capture ICMP (ping) packets on eth0 to see whether pings are arriving:

tcpdump -v icmp -i eth0
Find the PCI device behind a network interface
ls -la /sys/class/net/
total 0
drwxr-xr-x  2 root root 0 May 31 23:57 .
drwxr-xr-x 52 root root 0 Apr 14  2015 ..
lrwxrwxrwx  1 root root 0 Apr 14  2015 eno1 -> ../../devices/pci0000:7c/0000:7c:00.0/0000:7d:00.0/net/eno1
lrwxrwxrwx  1 root root 0 Apr 14  2015 eno2 -> ../../devices/pci0000:7c/0000:7c:00.0/0000:7d:00.1/net/eno2
lrwxrwxrwx  1 root root 0 Apr 14  2015 eno3 -> ../../devices/pci0000:7c/0000:7c:00.0/0000:7d:00.2/net/eno3
lrwxrwxrwx  1 root root 0 Apr 14  2015 eno4 -> ../../devices/pci0000:7c/0000:7c:00.0/0000:7d:00.3/net/eno4
lrwxrwxrwx  1 root root 0 Apr 14  2015 enp189s0f0 -> ../../devices/pci0000:bc/0000:bc:00.0/0000:bd:00.0/net/enp189s0f0
lrwxrwxrwx  1 root root 0 Apr 14  2015 enp189s0f1 -> ../../devices/pci0000:bc/0000:bc:00.0/0000:bd:00.1/net/enp189s0f1
lrwxrwxrwx  1 root root 0 Apr 14  2015 enp189s0f2 -> ../../devices/pci0000:bc/0000:bc:00.0/0000:bd:00.2/net/enp189s0f2
lrwxrwxrwx  1 root root 0 Apr 14  2015 enp189s0f3 -> ../../devices/pci0000:bc/0000:bc:00.0/0000:bd:00.3/net/enp189s0f3
lrwxrwxrwx  1 root root 0 Apr 14  2015 lo -> ../../devices/virtual/net/lo
http proxy

Sometimes a server has to reach the network through a proxy.

export http_proxy=http://192.168.1.212:8118

This only affects the current shell; it is lost when the terminal is closed or the machine reboots. It works for wget and curl, but not for yum.

yum's proxy has to be set in /etc/yum.conf, for example:
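A sketch of the relevant /etc/yum.conf lines, reusing the same proxy address as above (the username/password lines are only needed if the proxy requires authentication):

# /etc/yum.conf
[main]
proxy=http://192.168.1.212:8118
# proxy_username=user
# proxy_password=pass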

File operations

  • Change a file's owner and group
chgrp   group file  -R
chown   user  file  -R

sudo chown -R me:me .[^.]*  # change the owner/group of everything in the current directory, including hidden files, to me:me
sudo chown -R me:me /home/me/code/linux/.[^.]*  # same, but for everything under the linux directory
  • Recursively search all .h files under the current directory for the string linux_binfmt
grep "linux_binfmt" -Ril --include=\*.h
  • Find the definition of Elf64_Sym in the system headers
grep Elf64_Sym /usr/include/*.h | grep typedef
find /etc/httpd/ -name httpd.conf
  • Find all *.h under the linux directory and search them for SYSCALL_VECTOR
find linux -name "*.h" | xargs grep "SYSCALL_VECTOR"
  • Starting from /, find all .log files and the lines containing "ERROR"
find / -type f -name "*.log" | xargs grep "ERROR"
  • Starting from the current directory, find all .in files and the lines containing "thermcontact"
find . -name "*.in" | xargs grep "thermcontact"
  • Find the header under the system library path that contains "getopt_long"
find /usr/lib/ -name "*.h" | xargs grep "getopt_long"
  • Find entries of a given file type
find . -type d -name "debug*"

b   block (buffered) special
c   character (unbuffered) special
d   directory
p   named pipe (FIFO)
f   regular file
l   symbolic link
s   socket
D   door (Solaris)
  • Limit the search depth
find . -maxdepth 2
  • Find which files (and lines) under the current directory contain ibv_open_device
grep ibv_open_device -rn .
  • Ignore .java and .js files while searching
grep -E "http"  . -R --exclude=*.{java,js}
  • Ignore tag files while searching
grep show_interrupts . -rn --exclude-dir={.git} --exclude=tags --binary-files=without-match
grep ibv_context -rn --exclude={GPATH,GRTAGS,GTAGS,tags}
  • Ignore the directories .git, res and bin while searching
grep -E "http"  . -R --exclude-dir={.git,res,bin}
  • Exclude directories or files via an environment variable
export GREP_OPTIONS="--exclude-dir=\.svn --exclude-dir=\.git --exclude=tags --exclude=cscope\.out"
  • Ignore binary files while searching
grep rtc_init . -rn --exclude-dir={.git} --binary-files=without-match
  • Find files and run ls on each of them
find . -name verbs.h | xargs -n 1 ls -l
  • Show a few lines after each match with grep -A
dmidecode|grep "System Information" -A9
  • Copy a file
scp /home/a.txt root@192.168.1.199:/home/code/b.c
  • Copy a directory
scp -r /home/code-project root@192.168.1.199:/home/code-project
  • Sync files
rsync -avzP /path/to/source/ user@192.168.1.5:/path/to/dest/

Software installation

  1. Search for a package: yum search ~

  2. List all installable packages
    >yum list
  3. List all packages with available updates >yum list updates

  4. List all installed packages >yum list installed

  5. List installed packages that are not in any yum repository >yum list extras

  6. List a specified package >yum list~

  7. Get package information with yum >yum info~

  8. List information for all packages >yum info

  9. List information for all packages with available updates >yum info updates

  10. List information for all installed packages >yum info installed

  11. List information for installed packages that are not in any yum repository >yum info extras

  12. List the files a package provides >yum provides~

  13. Look up uncommon packages >rmadison -S

User management

If the user was not added to the administrators during installation, sudo fails and the system reports:

[me@redhat75 ~]$ sudo vim /etc/sysconfig/network-scripts/ifcfg-eth0
[sudo] password for me:
me is not in the sudoers file.  This incident will be reported.

Add the user to the sudo group. Method 1:

[root@redhat75 me]# usermod -a -G sudo me
usermod: group 'sudo' does not exist

This fails because there is no sudo group by default; during installation accounts are put in the wheel group, which also has sudo rights.

[root@redhat75 me]# usermod -a -G wheel me

Method 2:

visudo
sudo update-alternatives --config editor     # visudo defaults to nano; this changes the default editor
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
me      ALL=(ALL)       ALL

Allow user1 to run sudo without a password

sudo visudo

## Same thing without a password
# %wheel        ALL=(ALL)       NOPASSWD: ALL
user1   ALL=(ALL)   NOPASSWD: ALL

Change a user's home directory and move its contents (-m)

usermod -m -s /bin/bash -d /newhome/username username

Rename a user

usermod -l new_name old_name

Install the Linux kernel source

sudo apt-get install linux-4.4-source-4.4
sudo xz -d linux-4.4-source-4.4.tar.xz
sudo tar -xvf linux-4.4-source-4.4.tar
Ubuntu

sudo apt-get update
sudo apt-get install linux-source

# installs the source for the currently running kernel under /usr/src
me@ubuntu:~$ ls /usr/src/
linux-headers-4.15.0-29  linux-headers-4.15.0-29-generic  linux-source-4.15.0  linux-source-4.15.0.tar.bz2
me@ubuntu:~$ uname -a
Linux ubuntu 4.15.0-29-generic #31-Ubuntu SMP Tue Jul 17 15:41:03 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux

Redhat、CentOS

yum install kernel-devel kernel-headers

Verify md5

Compute a file's md5 checksum:
me@ubuntu:~$ md5sum shrc
5d17293b5f05e123c50b04e1cd1b9ff7  shrc
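A small sketch of verifying files against a saved checksum list with md5sum -c (sums.md5 is a file name I picked):

md5sum shrc > sums.md5        # save the checksum
md5sum -c sums.md5            # later, verify the file against it; prints "shrc: OK" on success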

Change the keyboard layout

Sometimes the keyboard layout is wrong and keys produce the wrong characters. It can be reconfigured with the command below; a generic 101/104-key PC layout is usually the right choice.

sudo dpkg-reconfigure keyboard-configuration
me@ubuntufio:~$ sudo dpkg-reconfigure keyboard-configuration
Package configuration

         ┌──────────┤ Configuring keyboard-configuration ├───────────┐
         │ Please select the model of the keyboard of this machine.  │
         │                                                           │
         │ Keyboard model:                                           │
         │                                                           │
         │     DTK2000                                            ↑  │
         │     eMachines m6800 laptop                             ▒  │
         │     Ennyah DKB-1008                                    ▒  │
         │     Everex STEPnote                                    ▮  │
         │     FL90                                               ▒  │
         │     Fujitsu-Siemens Amilo laptop                       ▒  │
         │     Generic 101-key PC                                 ▒  │
         │     Generic 101-key PC (intl.)                         ▒  │
         │     Generic 104-key PC                                 ▒  │
         │     Generic 105-key PC (intl.)                         ↓  │
         │                                                           │
         │                                                           │
         │              <Ok>                  <Cancel>               │
         │                                                           │
         └───────────────────────────────────────────────────────────┘

lm-sensors

lm-sensors does not currently support ARM

ls

Set an alias named rp

Usage

rp file.name

Warning

Paths containing "." are not supported

lsblk

List the disks on a machine.

How to tell whether a disk is an SSD or a spinning (mechanical) disk

#!/bin/bash
echo "lsblk" | tee -a $hardware_software_conf
lsblk -o name,maj:min,rm,size,ro,type,rota,mountpoint >> $hardware_software_conf
wait
printf "\n\n****************\n" | tee -a $hardware_software_conf

Use the -o option to customize the output columns; the ROTA column answers the SSD-vs-HDD question above, as sketched below.
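A minimal sketch (sda is just an example device):

lsblk -d -o NAME,SIZE,ROTA            # ROTA=1 means rotational (HDD), ROTA=0 usually means SSD/NVMe
cat /sys/block/sda/queue/rotational   # same flag straight from sysfs: 1 = HDD, 0 = SSD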

lspci

lspci lists PCI devices

lspci -tv       # show PCI devices as a tree
lspci -vvv      # show detailed information for all PCI devices
lspci -s 0002:e8:00.0 -vvv  # show detailed information for one PCI device

Remove a PCI device and bring it back

echo 1 > /sys/bus/pci/devices/000c:21:00.0/remove   # remove the NIC device
echo 1 > /sys/bus/pci/devices/000c:20:00.0/rescan   # rescanning under its host bridge finds the device again
echo 1 > /sys/bus/pci/rescan                        # a rescan of the whole PCI bus also works
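A sketch tying the two views together: find the PCI address behind an interface via sysfs, then remove and rescan it (eth0 is a placeholder interface name):

readlink -f /sys/class/net/eth0/device          # shows the PCI address backing eth0
echo 1 > /sys/class/net/eth0/device/remove      # remove that PCI device
echo 1 > /sys/bus/pci/rescan                    # rescan the bus to bring it back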

lstopo

Show the NUMA topology diagram

lstopo -                    # write to standard output
lstopo --of txt > a.txt     # render the diagram as text
lstopo --of png > a.png     # render the diagram as a png image

See [compare intel and kunpeng]

lustre

Build and install Lustre

Installation history on taishan-arm-cpu08:

[root@taishan-arm-cpu08 ~]# history
    1  vi /etc/sysconfig/network-scripts/ifcfg-enp189s0f0
    2  vi /etc/hostname
    3  vi /etc/selinux/config
    4  systemctl disable firewalld
    5  reboot
    6  getenforce
    7  systemctl disable firewalld
    8  systemctl stop firewalld
    9  systemctl status firewalld
   10  exit
   11  ./arm_install.sh
   12   yum install ntpdate -y
   13  /usr/sbin/ntpdate 192.168.6.30
   14  exit
   15  cd kernel/
   16  yum localinstall ./*
   17  grub2-editenv list
   18  reboot
   19  ping 192.168.6.30
   20  mount /root/CentOS-7-aarch64-Everything-1810.iso /var/ftp/pub/
   21  yum install lsof gtk2 atk cairo tcl tcsh tk -y
   22  rpm -e chess-monitor-gmond-python-modules
   23  tar xf MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.6alternate-aarch64-ext.tgz
   24  cd MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.6alternate-aarch64-ext
   25  ./mlnxofedinstall
   26  reboot
   27  cat  > /etc/sysconfig/network-scripts/ifcfg-ib0 << EOF
   28  TYPE=InfiniBand
   29  BOOTPROTO=none
   30  NAME=ib0
   31  UUID=04237ab5-2ac9-4ca0-90ae-15ac3cbe09e5
   32  DEVICE=ib0
   33  ONBOOT=yes
   34  IPADDR=192.168.11.20
   35  NETMASK=255.255.255.0
   36  EOF
   37   vi /etc/sysconfig/network-scripts/ifcfg-ib0
   38  systemctl restart network
   39  exit
   40  rpm -ivh libaec-1.0.4-1.el7.aarch64.rpm  munge-libs-0.5.11-3.el7.aarch64.rpm hdf5-1.8.12-11.el7.aarch64.rpm
   41  yum install munge -y
   42  df -h
   43  mount /root/CentOS-7-aarch64-Everything-1810.iso /var/ftp/pub/
   44  yum install munge -y
   45   yum install slurm-slurmd slurm slurm-pam_slurm slurm-contribs slurm-perlapi -y
   46  exit
   47  rpm -qa|grep kernnel
   48  rpm -qa|grep kernel
   49  exit
   50  rpm -ivh  munge-libs-0.5.11-3.el7.aarch64.rpm hdf5-1.8.12-11.el7.aarch64.rpm  libaec-1.0.4-1.el7.aarch64.rpm
   51  mount /root/CentOS-7-aarch64-Everything-1810.iso /var/ftp/pub/
   52  exit
   53  rpm -ivh  munge-libs-0.5.11-3.el7.aarch64.rpm hdf5-1.8.12-11.el7.aarch64.rpm  libaec-1.0.4-1.el7.aarch64.rpm
   54  df -h
   55  exit
   56  cd MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.6alternate-aarch64-ext
   57  ./uninstall.sh
   58   rpm -e chess-monitor-gmond-python-modules-5.3.0-release.el7.aarch64
   59  ./uninstall.sh
   60  rpm -e kernel-debuginfo-4.14.0-115.el7a.aarch64 kernel-debuginfo-common-aarch64-4.14.0-115.el7a.aarch64 kernel-4.14.0-115.el7a.aarch64 kernel-devel-4.14.0-115.el7a.aarch64
   61  tar xf MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.6alternate-aarch64-ext.tgz && cd MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.6alternate-aarch64-ext && ./mlnxofedinstall
   62   reboot
   63  ip a
   64  cd lustre-arm/ &&  rpm -ivh --nodeps kmod-lustre-client-2.12.2-1.el7.aarch64.rpm  lustre-client-2.12.2-1.el7.aarch64.rpm lustre-iokit-2.12.2-1.el7.aarch64.rpm  lustre-client-debuginfo-2.12.2-1.el7.aarch64.rpm
   65   lustre_rmmod
   66  cat  >  /etc/modprobe.d/lnet.conf  << EOF
   67  options lnet networks="o2ib0(ib0)"
   68  EOF
   69  modprobe lustre && modprobe lnet
   70  lctl network up
   71  lctl ping 192.168.11.21@o2ib0
   72  umount /home/
   73  mount.lustre 192.168.11.21@o2ib0:192.168.11.22@o2ib0:/lustre /home/
   74  exit
   75  vi /etc/sysconfig/network-scripts/ifcfg-enp189s0f0
   76  systemctl restart network
   77   mount -a
   78  df -h
   79  yum -y install bison  cppunit-devel flex git gsl-devel htop  libffi-devel log4cxx-devel  openblas-devel  openssl-devel   patch readline-devel svn  xerces-c-devel
   80  exit
   81  cpupower frequency-set -g performance
   82  exit
   83  vi /etc/rc.local
   84  reboot
   85   rpm --import /etc/pki/rpm-gpg/*
   86   yum install -y epel-release iotop tmux htop perf iostat dstat netstat tree nload
   87  yum install -y epel-release iotop tmux htop perf iostat dstat netstat tree nload
   88  yum -y install wget
   89   wget http://mirrors.sohu.com/fedora-epel/epel-release-latest-7.noarch.rpm
   90  rpm -ivh epel-release-latest-7.noarch.rpm
   91  rpm --import /etc/pki/rpm-gpg/*
   92  yum install -y epel-release wget iotop tmux htop perf sysstat dstat net-tools tree nload
   93  visudo
   94  exit
   95  history | grep configure
   96  history
[root@taishan-arm-cpu08 ~]#

Below is a script I use, named 8021q.modules, to auto-load the 802.1Q module on my CentOS 5.3 machine:

#! /bin/sh

/sbin/modinfo -F filename 8021q > /dev/null 2>&1
if [ $? -eq 0 ]; then
    /sbin/modprobe 8021q
fi

lz4

Reportedly one of the fastest algorithms for both compression and decompression.

Get a test data set

http://sun.aei.polsl.pl/~sdeor/corpus/silesia.zip

lzbench

A benchmark suite for compression/decompression performance. See the accelerator introduction for details [1]

[1]https://compare-intel-kunpeng.readthedocs.io/zh_CN/latest/accelerator.html

mathjax

When $a \ne 0$, there are two solutions to $ax^2 + bx + c = 0$ and they are

$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.$$

Mellanox ib 100G driver

HPC deployments often use a 100G InfiniBand network. This section shows how to build and install the Mellanox driver.

A Mellanox NIC is installed in a PCIe slot:

[root@taishan-arm-cpu02 ~]# lspci
04:00.0 Infiniband controller: Mellanox Technologies MT28908 Family [ConnectX-6]

Unpack MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.6alternate-aarch64.tgz

tar -zxf MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.6alternate-aarch64.tgz
cd MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.6alternate-aarch64

Generate an installer package for the current kernel

[root@taishan-arm-cpu02 MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.6alternate-aarch64]# ./mlnx_add_kernel_support.sh -m ./ --make-tgz
Note: This program will create MLNX_OFED_LINUX TGZ for rhel7.6alternate under /tmp directory.
Do you want to continue?[y/N]:
[root@taishan-arm-cpu02 MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.6alternate-aarch64]# ./mlnx_add_kernel_support.sh -m ./ --make-tgz
Note: This program will create MLNX_OFED_LINUX TGZ for rhel7.6alternate under /tmp directory.
Do you want to continue?[y/N]:y
See log file /tmp/mlnx_iso.17701_logs/mlnx_ofed_iso.17701.log

Checking if all needed packages are installed...
Building MLNX_OFED_LINUX RPMS . Please wait...
Creating metadata-rpms for 4.14.0-115.el7a.0.1.aarch64 ...
WARNING: If you are going to configure this package as a repository, then please note
WARNING: that it contains unsigned rpms, therefore, you need to disable the gpgcheck
WARNING: by setting 'gpgcheck=0' in the repository conf file.
Created /tmp/MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.6alternate-aarch64-ext.tgz
[root@taishan-arm-cpu02 MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.6alternate-aarch64]#

If this step fails, it is most likely a missing build dependency; install whatever the error message asks for.

Copy the generated tgz from /tmp over here and unpack it:

[root@taishan-arm-cpu02 ~]# tar -zxf MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.6alternate-aarch64-ext.tgz
[root@taishan-arm-cpu02 ~]# cd MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.6alternate-aarch64-ext
[root@taishan-arm-cpu02 MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.6alternate-aarch64-ext]#

Run the installation:

[root@taishan-arm-cpu02 MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.6alternate-aarch64-ext]# ./mlnxofedinstall
Detected rhel7u6alternate aarch64. Disabling installing 32bit rpms...
Logs dir: /tmp/MLNX_OFED_LINUX.47126.logs
General log file: /tmp/MLNX_OFED_LINUX.47126.logs/general.log
This program will install the MLNX_OFED_LINUX package on your machine.
Note that all other Mellanox, OEM, OFED, RDMA or Distribution IB packages will be removed.
Those packages are removed due to conflicts with MLNX_OFED_LINUX, do not reinstall them.

Do you want to continue?[y/N]:y

Uninstalling the previous version of MLNX_OFED_LINUX

rpm --nosignature -e --allmatches --nodeps mft mft.

Starting MLNX_OFED_LINUX-4.5-1.0.1.0 installation ...

Installing mlnx-ofa_kernel 4.5 RPM
Preparing...                          ########################################
Updating / installing...
mlnx-ofa_kernel-4.5-OFED.4.5.1.0.1.1.g########################################
Installing mlnx-ofa_kernel-modules 4.5 RPM
Preparing...                          ########################################
Updating / installing...
mlnx-ofa_kernel-modules-4.5-OFED.4.5.1########################################
Installing mlnx-ofa_kernel-devel 4.5 RPM
Preparing...                          ########################################
Updating / installing...
mlnx-ofa_kernel-devel-4.5-OFED.4.5.1.0########################################
Installing kernel-mft 4.11.0 RPM
Preparing...                          ########################################
Updating / installing...
kernel-mft-4.11.0-103.kver.4.14.0_115.########################################
Installing knem 1.1.3.90mlnx1 RPM
Preparing...                          ########################################
Updating / installing...
knem-1.1.3.90mlnx1-OFED.4.4.2.5.2.1.g9########################################
Installing knem-modules 1.1.3.90mlnx1 RPM
Preparing...                          ########################################
Updating / installing...
knem-modules-1.1.3.90mlnx1-OFED.4.4.2.########################################
Installing iser 4.5 RPM
Preparing...                          ########################################
Updating / installing...
iser-4.5-OFED.4.5.1.0.1.1.gb4fdfac.kve########################################
Installing srp 4.5 RPM
Preparing...                          ########################################
Updating / installing...
srp-4.5-OFED.4.5.1.0.1.1.gb4fdfac.kver########################################
Installing isert 4.5 RPM
Preparing...                          ########################################
Updating / installing...
isert-4.5-OFED.4.5.1.0.1.1.gb4fdfac.kv########################################
Installing mlnx-rdma-rxe 4.5 RPM
Preparing...                          ########################################
Updating / installing...
mlnx-rdma-rxe-4.5-OFED.4.5.1.0.1.1.gb4########################################
Installing mpi-selector RPM
Preparing...                          ########################################
Updating / installing...
mpi-selector-1.0.3-1.45101            ########################################
Installing user level RPMs:
Preparing...                          ########################################
ofed-scripts-4.5-OFED.4.5.1.0.1       ########################################
Preparing...                          ########################################
libibverbs-41mlnx1-OFED.4.5.0.1.0.4510########################################
Preparing...                          ########################################
libibverbs-devel-41mlnx1-OFED.4.5.0.1.########################################
Preparing...                          ########################################
libibverbs-devel-static-41mlnx1-OFED.4########################################
Preparing...                          ########################################
libibverbs-utils-41mlnx1-OFED.4.5.0.1.########################################
Preparing...                          ########################################
libmlx4-41mlnx1-OFED.4.5.0.0.3.45101  ########################################
Preparing...                          ########################################
libmlx4-devel-41mlnx1-OFED.4.5.0.0.3.4########################################
Preparing...                          ########################################
libmlx5-41mlnx1-OFED.4.5.0.3.8.45101  ########################################
Preparing...                          ########################################
libmlx5-devel-41mlnx1-OFED.4.5.0.3.8.4########################################
Preparing...                          ########################################
librxe-41mlnx1-OFED.4.4.2.4.6.45101   ########################################
Preparing...                          ########################################
librxe-devel-static-41mlnx1-OFED.4.4.2########################################
Preparing...                          ########################################
libibcm-41mlnx1-OFED.4.1.0.1.0.45101  ########################################
Preparing...                          ########################################
libibcm-devel-41mlnx1-OFED.4.1.0.1.0.4########################################
Preparing...                          ########################################
libibumad-43.1.1.MLNX20180612.87b4d9b-########################################
Preparing...                          ########################################
libibumad-devel-43.1.1.MLNX20180612.87########################################
Preparing...                          ########################################
libibumad-static-43.1.1.MLNX20180612.8########################################
Preparing...                          ########################################
libibmad-5.0.0.MLNX20181022.0361c15-0.########################################
Preparing...                          ########################################
libibmad-devel-5.0.0.MLNX20181022.0361########################################
Preparing...                          ########################################
libibmad-static-5.0.0.MLNX20181022.036########################################
Preparing...                          ########################################
ibsim-0.7mlnx1-0.11.g85c342b.45101    ########################################
Preparing...                          ########################################
ibacm-41mlnx1-OFED.4.3.3.0.0.45101    ########################################
Preparing...                          ########################################
librdmacm-41mlnx1-OFED.4.2.0.1.3.45101########################################
Preparing...                          ########################################
librdmacm-utils-41mlnx1-OFED.4.2.0.1.3########################################
Preparing...                          ########################################
librdmacm-devel-41mlnx1-OFED.4.2.0.1.3########################################
Preparing...                          ########################################
opensm-libs-5.3.0.MLNX20181108.33944a2########################################
Preparing...                          ########################################
opensm-5.3.0.MLNX20181108.33944a2-0.1.########################################
Preparing...                          ########################################
opensm-devel-5.3.0.MLNX20181108.33944a########################################
Preparing...                          ########################################
opensm-static-5.3.0.MLNX20181108.33944########################################
Preparing...                          ########################################
perftest-4.4-0.5.g1ceab48.45101       ########################################
Preparing...                          ########################################
mstflint-4.11.0-1.5.g264ffeb.45101    ########################################
Preparing...                          ########################################
mft-4.11.0-103                        ########################################
Preparing...                          ########################################
srptools-41mlnx1-5.45101              ########################################
Preparing...                          ########################################
ibutils2-2.1.1-0.100.MLNX20181114.g83a########################################
Preparing...                          ########################################
ibutils-1.5.7.1-0.12.gdcaeae2.45101   ########################################
Preparing...                          ########################################
cc_mgr-1.0-0.39.g32c9c85.45101        ########################################
Preparing...                          ########################################
dump_pr-1.0-0.35.g32c9c85.45101       ########################################
Preparing...                          ########################################
ar_mgr-1.0-0.40.g32c9c85.45101        ########################################
Preparing...                          ########################################
ibdump-5.0.0-1.45101                  ########################################
Preparing...                          ########################################
infiniband-diags-5.0.0.MLNX20181101.2a########################################
Preparing...                          ########################################
infiniband-diags-compat-5.0.0.MLNX2018########################################
Preparing...                          ########################################
qperf-0.4.9-9.45101                   ########################################
Preparing...                          ########################################
ucx-1.5.0-1.45101                     ########################################
Preparing...                          ########################################
ucx-devel-1.5.0-1.45101               ########################################
Preparing...                          ########################################
ucx-static-1.5.0-1.45101              ########################################
Preparing...                          ########################################
sharp-1.7.2.MLNX20181122.e5da787-1.451########################################
Preparing...                          ########################################
hcoll-4.2.2543-1.45101                ########################################
Preparing...                          ########################################
openmpi-4.0.0rc5-1.45101              ########################################
Preparing...                          ########################################
mlnx-ethtool-4.2-1.45101              ########################################
Preparing...                          ########################################
mlnx-iproute2-4.7.0-1.45101           ########################################
Preparing...                          ########################################
mlnxofed-docs-4.5-1.0.1.0             ########################################
Preparing...                          ########################################
mpitests_openmpi-3.2.20-e1a0676.45101 ########################################
Device (04:00.0):
        04:00.0 Infiniband controller: Mellanox Technologies MT28908 Family [ConnectX-6]
        Link Width: x8
        PCI Link Speed: 16GT/s


Installation finished successfully.


Preparing...                          ################################# [100%]
Updating / installing...
   1:mlnx-fw-updater-4.5-1.0.1.0      ################################# [100%]

Added 'RUN_FW_UPDATER_ONBOOT=no to /etc/infiniband/openib.conf

Attempting to perform Firmware update...
Querying Mellanox devices firmware ...

Device #1:
----------

  Device Type:      ConnectX6
  Part Number:      MCX653105A-EFA_Ax
  Description:      ConnectX-6 VPI adapter card; 100Gb/s (HDR100; EDR IB and 100GbE); single-port QSFP56; PCIe3.0/4.0 Socket Direct 2x8 in a row; ROHS R6
  PSID:             MT_0000000237
  PCI Device Name:  04:00.0
  Port1 MAC:        98039bcc40b8
  Port1 GUID:       98039b0300cc40b8
  Port2 MAC:        N/A
  Port2 GUID:
  Versions:         Current        Available
     FW             20.25.0262     20.24.1000
     PXE            3.5.0603       3.5.0603
     UEFI           14.18.0012     14.17.0013

  Status:           Up to date


Log File: /tmp/MLNX_OFED_LINUX.47126.logs/fw_update.log
To load the new driver, run:
/etc/init.d/openibd restart
[root@taishan-arm-cpu02 MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.6alternate-aarch64-ext]#

Start the IB driver; at this point the ib0 interface shows up, which means the installation succeeded.

[root@taishan-arm-cpu02 MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.6alternate-aarch64-ext]# /etc/init.d/openibd restart
Unloading HCA driver:                                      [  OK  ]
Loading HCA driver and Access Layer:                       [  OK  ]
[root@taishan-arm-cpu02 MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.6alternate-aarch64-ext]#
[root@taishan-arm-cpu02 MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.6alternate-aarch64-ext]# ip a

5: ib0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 4092 qdisc mq state DOWN group default qlen 256
    link/infiniband 20:00:18:1e:fe:80:00:00:00:00:00:00:98:03:9b:03:00:cc:40:ba brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
    inet 192.168.11.11/24 brd 192.168.11.255 scope global noprefixroute ib0
       valid_lft forever preferred_lft forever
[root@taishan-arm-cpu02 MLNX_OFED_LINUX-4.5-1.0.1.0-rhel7.6alternate-aarch64-ext]#

memory information

dmidecode

dmidecode reports complete memory information: slots, maximum capacity, type (e.g. DDR4), frequency, voltage, and so on.

me@ubuntu:~/stream$ sudo dmidecode -t memory
[sudo] password for me:
# dmidecode 3.1
Getting SMBIOS data from sysfs.
SMBIOS 3.0.0 present.

Handle 0x0007, DMI type 16, 23 bytes
Physical Memory Array
        Location: System Board Or Motherboard
        Use: System Memory
        Error Correction Type: None
        Maximum Capacity: 512 GB
        Error Information Handle: Not Provided
        Number Of Devices: 16

Memory hardware summary for this machine: up to 512 GB supported, 16 DIMM slots, 4 DIMMs currently installed, 32 GB each.

DIMM 1, handle 0x0009

Handle 0x0009, DMI type 17, 40 bytes
Memory Device
        Array Handle: 0x0007
        Error Information Handle: Not Provided
        Total Width: 72 bits
        Data Width: 64 bits
        Size: 32 GB
        Form Factor: DIMM
        Set: None
        Locator: DIMM000 J11
        Bank Locator: SOCKET 0 CHANNEL 0 DIMM 0
        Type: DDR4
        Type Detail: Synchronous Registered (Buffered)
        Speed: 2400 MT/s
        Manufacturer: Samsung
        Serial Number: 0x351254BC
        Asset Tag: 1709
        Part Number: M393A4K40BB1-CRC
        Rank: 2
        Configured Clock Speed: 2400 MT/s
        Minimum Voltage: 1.2 V
        Maximum Voltage: 2.0 V
        Configured Voltage: 1.2 V

DIMM 2, handle 0x000D

Handle 0x000D, DMI type 17, 40 bytes
Memory Device
        Array Handle: 0x0007
        Error Information Handle: Not Provided
        Total Width: 72 bits
        Data Width: 64 bits
        Size: 32 GB
        Form Factor: DIMM
        Set: None
        Locator: DIMM020 J5
        Bank Locator: SOCKET 0 CHANNEL 2 DIMM 0
        Type: DDR4
        Type Detail: Synchronous Registered (Buffered)
        Speed: 2400 MT/s
        Manufacturer: Samsung
        Serial Number: 0x35125985
        Asset Tag: 1709
        Part Number: M393A4K40BB1-CRC
        Rank: 2
        Configured Clock Speed: 2400 MT/s
        Minimum Voltage: 1.2 V
        Maximum Voltage: 2.0 V
        Configured Voltage: 1.2 V

DIMM 3, handle 0x0011

Handle 0x0011, DMI type 17, 40 bytes
Memory Device
        Array Handle: 0x0007
        Error Information Handle: Not Provided
        Total Width: 72 bits
        Data Width: 64 bits
        Size: 32 GB
        Form Factor: DIMM
        Set: None
        Locator: DIMM100 J23
        Bank Locator: SOCKET 1 CHANNEL 0 DIMM 0
        Type: DDR4
        Type Detail: Synchronous Registered (Buffered)
        Speed: 2400 MT/s
        Manufacturer: Samsung
        Serial Number: 0x351258E0
        Asset Tag: 1709
        Part Number: M393A4K40BB1-CRC
        Rank: 2
        Configured Clock Speed: 2400 MT/s
        Minimum Voltage: 1.2 V
        Maximum Voltage: 2.0 V
        Configured Voltage: 1.2 V

DIMM 4, handle 0x0015

Handle 0x0015, DMI type 17, 40 bytes
Memory Device
        Array Handle: 0x0007
        Error Information Handle: Not Provided
        Total Width: 72 bits
        Data Width: 64 bits
        Size: 32 GB
        Form Factor: DIMM
        Set: None
        Locator: DIMM120 J17
        Bank Locator: SOCKET 1 CHANNEL 2 DIMM 0
        Type: DDR4
        Type Detail: Synchronous Registered (Buffered)
        Speed: 2400 MT/s
        Manufacturer: Samsung
        Serial Number: 0x35125924
        Asset Tag: 1709
        Part Number: M393A4K40BB1-CRC
        Rank: 2
        Configured Clock Speed: 2400 MT/s
        Minimum Voltage: 1.2 V
        Maximum Voltage: 2.0 V
        Configured Voltage: 1.2 V

free

free shows how much memory the system has available and how it is being used.

root@ubuntu:~# free -h
              total        used        free      shared  buff/cache   available
Mem:           125G        810M        105G        1.1M         19G        123G
Swap:          2.0G          0B        2.0G
root@ubuntu:~# free -m
              total        used        free      shared  buff/cache   available
Mem:         128665         810      108301           1       19554      126911
Swap:          2047           0        2047
root@ubuntu:~# free -b
              total        used        free      shared  buff/cache   available
Mem:    134915833856   849604608 113562103808     1134592 20504125440 133076762624
Swap:    2147479552           0  2147479552
root@ubuntu:~#

The usable memory shown is 125 GB, which is less than the 128 GB total of the four 32 GB DIMMs.

Data comparison

Byte-size conversion in computing (each unit is 1024 of the next smaller one):

GB    MB        KB           B
1     1024^1    1024^2       1024^3
1     2^10      2^20         2^30
1     1024      1048576      1073741824

Physical memory size:

128G = 128*2^30 B = 137438953472 B

Actual usable size: the memory available to applications, as reported by free, is

134915833856 B ≈ 125G

The difference is

137438953472 - 134915833856 = 2523119616 B ≈ 2.35 GB

From what I can find, the BIOS takes part of this and the kernel reserves another part; it needs further analysis.

Memory speed

All four DIMMs are rated 2400 MT/s. MT/s means MegaTransfers per second, i.e. millions of transfers per second; it is not the same thing as the clock frequency, because a transfer can happen on both edges of a clock cycle. Each DIMM has a 64-bit data bus, so its theoretical bandwidth is:

2400M * 64bit = 153600 Mbit/s = 19200 MB/s = 18.75 GB/s

The bandwidth measured with stream is 11416.0 MB/s, which is the sustainable bandwidth an application actually gets; it is still well below the theoretical bandwidth of a single DIMM. DIMMs can also be arranged in multiple channels, so the achievable bandwidth should exceed that of a single DIMM.

DDR bandwidth capability

Intel Xeon 6148 1P:

2666MT/s * 64bit ÷ 8 * 6 channels * 0.9 ≈ 112.4 GB/s

Kunpeng 920 4826 1P:

2933MT/s * 64bit ÷ 8 * 8 channels * 0.9 ≈ 164.9 GB/s

Note

0.9 is the assumed DDR controller efficiency; a sketch of the arithmetic follows below.
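A quick sketch of the arithmetic with bc, using the figures above; the results land on roughly the 112 GB/s and 165 GB/s values quoted, with small rounding differences:

# rate(MT/s) * 64 bit / 8 * channels * 0.9, converted to GiB/s
echo "2666 * 64 / 8 * 6 * 0.9 / 1024" | bc -l    # Intel Xeon 6148, 6 channels
echo "2933 * 64 / 8 * 8 * 0.9 / 1024" | bc -l    # Kunpeng 920 4826, 8 channels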

memtester

A memory stress-test tool

Install

wget http://pyropus.ca/software/memtester/old-versions/memtester-4.3.0.tar.gz
tar -zxf memtester-4.3.0.tar.gz && cd memtester-4.3.0
make
# if you want to install it system-wide
make install

Usage

# loop forever
./memtester 10G
# run two passes
./memtester 10G 2

minikube

The very simple way [2] to start a kubernetes cluster

Install on Kunpeng

On x86:

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
rm -rf ~/.minikube
# set the proxy if needed
minikube start

The official docs do not yet describe downloading the ARM64 build; here is how:

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-arm64
sudo install minikube-linux-arm64  /usr/local/bin/
sudo ln -s /usr/local/bin/minikube-linux-arm64 /usr/local/bin/minikube      # for convenience

Issues encountered

Cannot pull images from https://gcr.io/v2/
user1@Arm64-server:~/opensoftware/minikube/out$ docker pull gcr.io/k8s-minikube/kicbase:v0.0.10
Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

Fix: configure a proxy for docker [1]

sudo mkdir -p /etc/systemd/system/docker.service.d
sudo vim /etc/systemd/system/docker.service.d/http-proxy.conf

[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/" "NO_PROXY=localhost,127.0.0.1,docker-registry.example.com,.corp"

Or use a CN mirror [3]

minikube delete
minikube start --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers
[1]https://docs.docker.com/config/daemon/systemd/
[2]https://minikube.sigs.k8s.io/docs/start/
[3]https://github.com/kubernetes/minikube/issues/3860

modprobe

modprobe loads and unloads kernel modules

modprobe hello
modprobe -r hello

insmod can load a module from any path. modprobe only looks in the standard module directory, normally /lib/modules/(kernel version)/. modprobe resolves a module's dependencies automatically and loads them first if needed, whereas insmod simply fails when a dependency is missing.
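A short sketch of that dependency handling (8021q, from the script earlier, is just an example module):

modprobe --show-depends 8021q    # list the .ko files modprobe would load, dependencies first
modinfo -F filename 8021q        # where the module file lives
modinfo -F depends 8021q         # what it declares as dependencies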

mpstat

Shows per-processor statistics

mpstat -P ALL 1

mysql

Run MySQL in docker [1]

The MySQL Server Docker image includes mysqld, the mysql client, mysqladmin and mysqldump [2]

Run mysql in the simplest way

docker run --name=mysql80 -d mysql/mysql-server:8.0

Run mysql with a custom config file and data directory

docker run --name=mysql80 \
--mount type=bind,src=/path-on-host-machine/my.cnf,dst=/etc/my.cnf \
--mount type=bind,src=/path-on-host-machine/datadir,dst=/var/lib/mysql \
-d mysql/mysql-server:8.0

Get the initial password

See mysql-docker [1]

Change the password:

ALTER USER 'root'@'localhost' IDENTIFIED BY 'password'

Inspect the MySQL data inside the docker container [3]

docker exec -it mysql1 bash
ls /var/lib/mysql

Group replication tutorial [4]

No match for argument: mysql-community-server

When installing MySQL 8 on CentOS 8:

Last metadata expiration check: 0:08:03 ago on Mon 13 Jul 2020 04:21:47 PM CST.
No match for argument: mysql-community-server
Error: Unable to find a match: mysql-community-server

Fix

sudo yum module disable mysql
sudo yum install mysql-community-server

This member has more executed transactions than those present in the group

mysql> START GROUP_REPLICATION USER='rpl_user', PASSWORD='Huawei12#$';
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.
mysql> exit
Bye
[root@s2 ~]# tail -f /var/log/mysqld.log
2020-07-21T03:00:35.741951Z 31 [System] [MY-011566] [Repl] Plugin group_replication reported: 'Setting super_read_only=OFF.'
2020-07-21T03:24:50.791249Z 30 [System] [MY-010597] [Repl] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''.
2020-07-21T03:25:25.119886Z 30 [System] [MY-013587] [Repl] Plugin group_replication reported: 'Plugin 'group_replication' is starting.'
2020-07-21T03:25:25.124377Z 38 [System] [MY-011565] [Repl] Plugin group_replication reported: 'Setting super_read_only=ON.'
2020-07-21T03:25:25.150122Z 39 [System] [MY-010597] [Repl] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2020-07-21T03:25:27.235240Z 0 [ERROR] [MY-011526] [Repl] Plugin group_replication reported: 'This member has more executed transactions than those present in the group. Local transactions: f73f5131-c736-11ea-b750-5254009f4811:1 > Group transactions: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-4'
2020-07-21T03:25:27.235409Z 0 [ERROR] [MY-011522] [Repl] Plugin group_replication reported: 'The member contains transactions not present in the group. The member will now exit the group.'
2020-07-21T03:25:27.235543Z 0 [System] [MY-011503] [Repl] Plugin group_replication reported: 'Group membership changed to s1:3306, s2:3306 on view 15953009400073079:2.'
2020-07-21T03:25:30.773866Z 0 [System] [MY-011504] [Repl] Plugin group_replication reported: 'Group membership changed: This member has left the group.'
2020-07-21T03:25:30.780621Z 38 [System] [MY-011566] [Repl] Plugin group_replication reported: 'Setting super_read_only=OFF.'
^C

Fix

mysql > reset master;
mysql > START GROUP_REPLICATION USER='rpl_user', PASSWORD='Huawei12#$';
[1](1, 2) https://github.com/mysql/mysql-docker
[2]https://github.com/mysql/mysql-docker#user-content-products-included-in-the-container:~:text=A%20number%20of%20MySQL
[3]https://dev.mysql.com/doc/refman/8.0/en/docker-mysql-getting-started.html#docs-body:~:text=in%20the-,server’s%20data%20directory
[4]https://dev.mysql.com/doc/refman/8.0/en/group-replication-getting-started-deploying-instances.html

nc

Use nc to check whether TCP ports are reachable

TCP server

nc -l 5000      # listen on port 5000
nc -k -l 5000   # keep listening after a client disconnects

TCP client

# nc -v  192.168.10.12 5000     # whatever you type shows up on the server side

Example of successful connection:
# nc -z -v 192.168.10.12 22
Connection to 192.118.20.95 22 port [tcp/ssh] succeeded!
Example of unsuccessful connection:

# nc -z -v 192.168.10.12 22
nc: connect to 192.118.20.95 port 22 (tcp) failed: No route to host

Example of successful connection:

# nc -z -v -u 192.168.10.12 123
Connection to 192.118.20.95 123 port [udp/ntp] succeeded!

UDP server

nc -u -l 7778

UDP client

nc -u 192.168.1.201 7778
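A small sketch that probes several TCP ports in one go, reusing the IP from the examples above (-w sets a connect timeout in seconds):

for p in 22 80 443 5000; do
    nc -z -v -w 2 192.168.10.12 $p
done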

nethogs

Often we want to see which programs are using the network and what their instantaneous bandwidth is. nethogs does exactly that; just run it from the command line:

nethogs

Below you can see that the browser generates the most traffic

PID     USER  PROGRAM                                       DEV         SENT        RECEIVED
22596    pi    ..sr/lib/chromium-browser/chromium-browser   eth0        9.148       271.128 KB/sec
22528   xrdp  /usr/sbin/xrdp                                eth0        792.582     8.918 KB/sec
  ?     root  192.168.2.168:59446-112.90.240.132:443                    0.331       0.895 KB/sec
  ?     root  192.168.2.168:59452-112.90.240.132:443                    0.297       0.194 KB/sec
1266    pi    /home/pi/frp/frpc                             eth0        0.000       0.000 KB/sec
22992    pi    sshd: pi@pts/2                               eth0        0.000       0.000 KB/sec
  ?     root  unknown TCP                                               0.000       0.000 KB/sec

TOTAL                                                                   802.359     281.135 KB/sec

netstat

Use netstat to monitor network status

List active tcp and udp ports

This generally means established connections. Add -p to include the PID and program name:

netstat -tup

List all tcp and udp ports

Connections in every state. Add -p to include the PID and program name:

netstat -atup

Show ports and hosts numerically

Use the -n option:

netstat -atupn
root@ubuntu:~# netstat -atupn
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      18023/mysqld
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      17722/nginx
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      17590/sshd
tcp        0      0 xxx.xxx.xxx.xxx:991    0.0.0.0:*               LISTEN      21397/python
tcp        0      0 xxx.xxx.xxx.xxx:992    0.0.0.0:*               LISTEN      21397/python
tcp        0      0 127.0.0.1:6011          0.0.0.0:*               LISTEN      19062/1
tcp6       0      0 :::7500                 :::*                    LISTEN      1832/frps
tcp6       0      0 :::8080                 :::*                    LISTEN      1832/frps
tcp6       0      0 :::80                   :::*                    LISTEN      17722/nginx
tcp6       0      0 :::7000                 :::*                    LISTEN      1832/frps
tcp6       0      0 ::1:6011                :::*                    LISTEN      19062/1
udp    42368      0 0.0.0.0:44810           0.0.0.0:*                           21397/python
udp    27648      0 0.0.0.0:50484           0.0.0.0:*                           1207/miredo
udp        0      0 127.0.0.1:4500          0.0.0.0:*                           1120/pluto
udp     4608      0 xxx.xxx.xxx.xxx:4500     0.0.0.0:*                           1120/pluto
udp        0      0 127.0.0.1:500           0.0.0.0:*                           1120/pluto
udp    15360      0 xxx.xxx.xxx.xxx:500      0.0.0.0:*                           1120/pluto
udp        0      0 xxx.xxx.xxx.xxx:991    0.0.0.0:*                           21397/python
udp    13056      0 xxx.xxx.xxx.xxx:992    0.0.0.0:*                           21397/python
udp6       0      0 ::1:500                 :::*                                1120/pluto
udp6       0      0 :::9910                 :::*                                1824/server_linux_a
root@ubuntu:~#

Note that showing the PID and program name may require root privileges; otherwise processes owned by root appear without a name.

Show all network interfaces

[root@ubuntu:]~# netstat -i
Kernel Interface table
Iface   MTU Met   RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0       1500 0    797518      0      0 0        677533      0      0      0 BMRU
lo        65536 0       340      0      0 0           340      0      0      0 LRU
teredo     1280 0         8      0      0 0            63      0      0      0 MOPRU
root@ubuntu:~#
netstat -ie
#like ifconfig

Find the process on a given port

netstat -anp | grep ":80"
root@ubuntu:~# netstat -anp | grep ":80"
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      17722/nginx
tcp6       0      0 :::8080                 :::*                    LISTEN      1832/frps
tcp6       0      0 :::80                   :::*                    LISTEN      17722/nginx

Show only IPv4 results with the -4 option

root@ubuntu:~# netstat -4anp | grep ":80"
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      17722/nginx

lsof can do something similar

lsof -i :80

Show the host's routing table

netstat -r
netstat -rn

Look up a hostname from an IP address

nslookup 139.159.243.11
root@ubuntu:~# nslookup 139.159.243.11
Server:         8.8.8.8
Address:        8.8.8.8#53

Non-authoritative answer:
11.243.159.139.in-addr.arpa     name = ecs-139-159-243-11.compute.hwclouds-dns.com.

Authoritative answers can be found from:

Look up an IP address from a hostname

ping ecs-139-159-243-11.compute.hwclouds-dns.com
root@ubuntu:~# ping ecs-139-159-243-11.compute.hwclouds-dns.com
PING ecs-139-159-243-11.compute.hwclouds-dns.com (139.159.243.11) 56(84) bytes of data.
64 bytes from ecs-139-159-243-11.compute.hwclouds-dns.com (139.159.243.11): icmp_seq=1 ttl=44 time=160 ms
64 bytes from ecs-139-159-243-11.compute.hwclouds-dns.com (139.159.243.11): icmp_seq=2 ttl=44 time=161 ms
64 bytes from ecs-139-159-243-11.compute.hwclouds-dns.com (139.159.243.11): icmp_seq=3 ttl=44 time=160 ms
64 bytes from ecs-139-159-243-11.compute.hwclouds-dns.com (139.159.243.11): icmp_seq=4 ttl=44 time=160 ms

NFS(Network File System)

NFS (Network File System) lets different systems share files or directories. The benefit: each host needs less disk space, because the same files can be shared over NFS, and working on a remote directory feels just like working locally.

Install

# ubuntu server side
apt install nfs-kernel-server
# ubuntu client side
apt install nfs-common

Red Hat (see the official guide):

yum install nfs-utils

Configure

The shared paths are configured in

/etc/exports

Example contents:

[root@readhat76 ~]# cat /etc/exports
/ubuntu *(ro,sync,no_root_squash)
/home   *(rw,sync,no_root_squash)
/root/nfs-test-dir *(rw,sync,no_root_squash)

After editing the config file, you may need to run this for it to take effect:

exportfs -r

Restart the service

# redhat
systemctl restart nfs-server
# ubuntu
systemctl start nfs-kernel-server.service

Check the exports

showmount -e ip
# on the server, use showmount to confirm the exports worked
showmount -e localhost

systemctl can be used to find the exact service name.

Note: on Red Hat you must stop or configure the firewall before the mount will work.

Mount

Mount on the client

mount -o vers=3 192.168.1.227:/root/nfs-test-dir ./1620-mount-point/

# -o passes mount options
# vers=3 selects NFSv3
# 192.168.1.227:/root/nfs-test-dir is the server directory exported above
# ./1620-mount-point/ is the local directory; operations on it act on the remote directory
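If the mount should come back after a reboot, a sketch of the matching /etc/fstab line (same paths as above; _netdev delays the mount until the network is up):

# /etc/fstab
192.168.1.227:/root/nfs-test-dir  /root/1620-mount-point  nfs  defaults,vers=3,_netdev  0  0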

Unmount on the client

umount /root/1620-mount-point/
# if you get "umount.nfs: /root/1620-mount-point: device is busy", use -f
umount -f /root/1620-mount-point/

Check the NFS services

pi@raspberrypi:/usr/lib/systemd/system $ rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    1   udp  55205  mountd
    100005    1   tcp  52029  mountd
    100005    2   udp  54228  mountd
    100005    2   tcp  42297  mountd
    100005    3   udp  45438  mountd
    100005    3   tcp  56119  mountd
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    3   udp   2049
    100021    1   udp  46797  nlockmgr
    100021    3   udp  46797  nlockmgr
    100021    4   udp  46797  nlockmgr
    100021    1   tcp  42021  nlockmgr
    100021    3   tcp  42021  nlockmgr
    100021    4   tcp  42021  nlockmgr
设置静态端口

有时候希望nfs服务能运行在指定端口,可以观察到原来使用的端口号如下:

pi@raspberrypi:/etc/default $ rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    1   udp  41487  mountd
    100005    1   tcp  41073  mountd
    100005    2   udp  53337  mountd
    100005    2   tcp  43843  mountd
    100005    3   udp  59561  mountd
    100005    3   tcp  37855  mountd
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    3   udp   2049
    100021    1   udp  47977  nlockmgr
    100021    3   udp  47977  nlockmgr
    100021    4   udp  47977  nlockmgr
    100021    1   tcp  41839  nlockmgr
    100021    3   tcp  41839  nlockmgr
    100021    4   tcp  41839  nlockmgr
For Ubuntu or a Raspberry Pi, follow the Debian guide: https://wiki.debian.org/SecuringNFS; a sketch of the typical changes follows.
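
A minimal sketch of what that guide configures on Debian-style systems (/etc/default/nfs-kernel-server plus a lockd module option). The port numbers 4002 and 32768 are just examples chosen to match the rpcinfo output below, and a reboot or module reload may be needed for the lockd ports to take effect:

# pin rpc.mountd to a fixed port
sudo sed -i 's/^RPCMOUNTDOPTS=.*/RPCMOUNTDOPTS="--manage-gids --port 4002"/' /etc/default/nfs-kernel-server
# pin the NFS lock manager (nlockmgr) ports via the lockd kernel module (file name is arbitrary)
echo 'options lockd nlm_udpport=32768 nlm_tcpport=32768' | sudo tee /etc/modprobe.d/nfs-static-ports.conf
sudo systemctl restart nfs-kernel-server
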
After the change, rpcinfo shows the fixed ports:
pi@raspberrypi:/media/pi $ rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    1   udp   4002  mountd
    100005    1   tcp   4002  mountd
    100005    2   udp   4002  mountd
    100005    2   tcp   4002  mountd
    100005    3   udp   4002  mountd
    100005    3   tcp   4002  mountd
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    3   udp   2049
    100021    1   udp  32768  nlockmgr
    100021    3   udp  32768  nlockmgr
    100021    4   udp  32768  nlockmgr
    100021    1   tcp  32768  nlockmgr
    100021    3   tcp  32768  nlockmgr
    100021    4   tcp  32768  nlockmgr
只启用NFSv4

有时候希望只启用NFSv4

vim /etc/default/nfs-kernel-server
#修改
RPCMOUNTDOPTS="--manage-gids"
#变为
RPCMOUNTDOPTS="--manage-gids -N 2 -N 3"
#重启服务
sudo systemctl restart nfs-kernel-server

设置之后在客户端可以观察到只有v4成功

ubuntu@ubuntu:~$ sudo mount -t nfs -o vers=3 192.168.1.201:/home/me/syncfile dir_name
mount.nfs: requested NFS version or transport protocol is not supported
ubuntu@ubuntu:~$ sudo mount -t nfs -o vers=2 192.168.1.201:/home/me/syncfile dir_name
mount.nfs: Protocol not supported
ubuntu@ubuntu:~$ sudo mount -t nfs -o vers=4 192.168.1.201:/home/me/syncfile dir_name
问题1 Stale file handle
[root@redhat76 fio-test-dir]# rm config
-bash: cannot create temp file for here-document: Stale file handle
^C

可能原因有多个,我遇到的情况是因为在之前使用

mount -t nfs -o vers=3 localhost:/root/test-dir /tmp

然后没有卸载,导致系统认为/tmp满了,解决办法是

umount /tmp

nginx

优秀的反向代理服务器

端口转发

Requests to http://localhost:8080 are forwarded to http://localhost:1234. The upstream block named cutomed_http defines the server that actually serves the content. proxy_set_header Host passes the original Host header to the upstream, proxy_set_header X-Forwarded-For appends the client address, and proxy_pass just has to reference the upstream name (cutomed_http) defined above. The basic-auth part at the end can be ignored.

upstream cutomed_http{
    server 127.0.0.1:1234;
}

# another virtual host using mix of IP-, name-, and port-based configuration
#
server {
    listen       8080;
    server_name  localhost;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://cutomed_http;

        auth_basic             "admin";
        auth_basic_user_file    htpasswd;
    }
}

Running services on two ports

Define two server blocks:

server {
       listen       8080;
       server_name  localhost;

       #charset koi8-r;

       #access_log  logs/host.access.log  main;

       location / {
           root   "D:\doc\GoodCommand\build\html";
           index  index.html index.htm;
           autoindex   on;
           autoindex_localtime on;
           charset utf-8;
           auth_basic             "admin";
           auth_basic_user_file    htpasswd;
       }
}
server {
   listen       8088;
   server_name  localhost;

   location / {
       proxy_set_header Host $host;
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_pass http://moba_http;
       auth_basic             "admin";
       auth_basic_user_file    htpasswd;
   }

}

Add password protection (htpasswd is provided by the apache2-utils / httpd-tools package)

htpasswd -bc publishpdf a pdf

Point nginx at the generated file in nginx.conf via the auth_basic_user_file directive, as in the server blocks above.

nmap

Network Mapper,开源的网络工具,用于网络探测和安全审计。可以扫描大规模网络

安装

sudo apt install nmap
nmap -A -T4 scanme.nmap.org     #扫描主机
nmap -sP 192.168.1.*            #ping扫描
nmap -sP 10.0-255.0-255.1-254   #ping扫描

扫描局域网

nmap -sP 192.168.1.*

出现如下结果,可以知道一共扫描了256个IP,有69个主机在线

Starting Nmap 7.60 ( https://nmap.org ) at 2019-04-03 14:26 CST
Nmap scan report for 192.168.1.1
Host is up (0.034s latency).
Nmap scan report for 192.168.1.4
Host is up (0.020s latency).
Host is up (0.00016s latency).
Nmap scan report for test-compute-1 (192.168.1.94)
Host is up (0.00029s latency).
Nmap scan report for 192.168.1.95
......
Nmap done: 256 IP addresses (69 hosts up) scanned in 1.90 seconds

扫描某台主机打开的tcp端口,猜测主机OS版本

me@ubuntu:$ sudo nmap -O -sV 192.168.1.211      #-O 操作系统探测, -sV 版本扫描
[sudo] password for me:

Starting Nmap 7.60 ( https://nmap.org ) at 2019-04-03 14:56 CST
Nmap scan report for ubuntu (192.168.1.201)
Host is up (0.000010s latency).
Not shown: 993 closed ports
PORT     STATE SERVICE       VERSION
22/tcp   open  ssh           OpenSSH 7.6p1 Ubuntu 4 (Ubuntu Linux; protocol 2.0)
25/tcp   open  smtp          Postfix smtpd
111/tcp  open  rpcbind       2-4 (RPC #100000)
139/tcp  open  netbios-ssn   Samba smbd 3.X - 4.X (workgroup: WORKGROUP)
445/tcp  open  netbios-ssn   Samba smbd 3.X - 4.X (workgroup: WORKGROUP)
2049/tcp open  nfs_acl       3 (RPC #100227)
3389/tcp open  ms-wbt-server xrdp
Device type: general purpose
Running: Linux 3.X|4.X
OS CPE: cpe:/o:linux:linux_kernel:3 cpe:/o:linux:linux_kernel:4
OS details: Linux 3.8 - 4.9
Network Distance: 0 hops
Service Info: Host:  ubuntu; OS: Linux; CPE: cpe:/o:linux:linux_kernel

OS and Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 15.80 seconds
me@ubuntu:$

nmon

系统性能监测工具
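
Typical usage, assuming the packaged nmon: run it interactively, or record samples to a file for later analysis (the interval/count values below are just examples):

nmon                      # interactive: press c (CPU), m (memory), d (disks), n (network)
nmon -f -s 10 -c 360      # record to a .nmon file: one sample every 10 s, 360 samples (1 hour)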

ntfs

External hard drives and USB sticks used for copying data around are often formatted as NTFS and fail to mount directly. The extra package ntfs-3g adds support for NTFS devices.

下载安装

wget https://tuxera.com/opensource/ntfs-3g_ntfsprogs-2017.3.23.tgz
tar -xzf ntfs-3g_ntfsprogs-2017.3.23.tgz
cd ntfs-3g_ntfsprogs-2017.3.23
./configure
make
make install # or 'sudo make install' if you aren't root

Mount the external drive

#find the drive that is not yet mounted
lsblk

#挂载移动硬盘
mount -t ntfs-3g /dev/sda1 /mnt/windows

设置开启时自动挂载

vim /etc/fstab

# 后面添加
/dev/sda1 /mnt/windows ntfs-3g defaults 0 0
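
Since device names such as /dev/sda1 can change between boots, it is often safer to reference the partition by UUID in fstab; a small sketch (the UUID shown is a placeholder):

blkid /dev/sda1           # print the partition's UUID
# then in /etc/fstab use something like:
# UUID=XXXX-XXXX  /mnt/windows  ntfs-3g  defaults  0  0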

ntp

时间同步

查看时间同步过程数据包:

ntpdate -vd 192.168.1.22

可能遇到的问题:

numactl

绑定numa节点

numactl --physcpubind 0 --membind=0 ./lat_mem_rd -N 1 -P  1 10240M 512

1616 节点信息

(figure: DIMM layout) 4 DIMMs installed (DIMM000, DIMM020, DIMM100, DIMM120); each of the 4 NUMA nodes ends up with exactly one DIMM.

me@ubuntu:/sys/class$ lscpu
Architecture:        aarch64
Byte Order:          Little Endian
CPU(s):              64
On-line CPU(s) list: 0-63
Thread(s) per core:  1
Core(s) per socket:  32
Socket(s):           2
NUMA node(s):        4
Vendor ID:           ARM
Model:               2
Model name:          Cortex-A72
Stepping:            r0p2
BogoMIPS:            100.00
L1d cache:           32K
L1i cache:           48K
L2 cache:            1024K
L3 cache:            16384K
NUMA node0 CPU(s):   0-15
NUMA node1 CPU(s):   16-31
NUMA node2 CPU(s):   32-47
NUMA node3 CPU(s):   48-63
Flags:               fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid

me@ubuntu:/sys/class$ numactl --hardware
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
node 0 size: 32097 MB
node 0 free: 25097 MB
node 1 cpus: 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
node 1 size: 32190 MB
node 1 free: 30674 MB
node 2 cpus: 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
node 2 size: 32190 MB
node 2 free: 27855 MB
node 3 cpus: 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63
node 3 size: 32187 MB
node 3 free: 22125 MB
node distances:
node   0   1   2   3
  0:  10  15  20  20
  1:  15  10  20  20
  2:  20  20  10  15
  3:  20  20  15  10

1620 numa节点信息

(figure: DIMM layout) 16 DIMMs installed; the DIMMs are distributed evenly across the NUMA nodes.

[root@CS home]# lscpu
Architecture:          aarch64
Byte Order:            Little Endian
CPU(s):                128
On-line CPU(s) list:   0-127
Thread(s) per core:    1
Core(s) per socket:    64
Socket(s):             2
NUMA node(s):          4
Model:                 0
CPU max MHz:           2600.0000
CPU min MHz:           200.0000
BogoMIPS:              200.00
L1d cache:             64K
L1i cache:             64K
L2 cache:              512K
L3 cache:              32768K
NUMA node0 CPU(s):     0-31
NUMA node1 CPU(s):     32-63
NUMA node2 CPU(s):     64-95
NUMA node3 CPU(s):     96-127
Flags:                 fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop
[root@CS home]# numactl --hardware
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
node 0 size: 130059 MB
node 0 free: 125156 MB
node 1 cpus: 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63
node 1 size: 130937 MB
node 1 free: 127130 MB
node 2 cpus: 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
node 2 size: 130937 MB
node 2 free: 113833 MB
node 3 cpus: 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127
node 3 size: 130935 MB
node 3 free: 130438 MB
node distances:
node   0   1   2   3
  0:  10  16  32  33
  1:  16  10  25  32
  2:  32  25  10  16
  3:  33  32  16  10

For NUMA memory allocation and scheduling policy, and for how to observe numa_miss, the article [1] is excellent. taskset and numactl can be used to pin threads to specific cores.
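
A quick sketch of the bind-and-check workflow (./your_app is a placeholder for whatever program is being pinned):

numactl --cpunodebind=0 --membind=0 ./your_app   # run on node-0 CPUs with node-0 memory only
numastat                                         # per-node numa_hit / numa_miss counters
taskset -c 0-15 ./your_app                       # alternative: pin to cores 0-15 without a memory policy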

[1]https://queue.acm.org/detail.cfm?id=2513149

offsetof

由结构体类型, 结构体成员,获取成员的偏移量。

Implementation reference: https://en.wikipedia.org/wiki/Offsetof#Implementation

kernel中的实现

#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)

解析如下:

          (TYPE *)0            # cast 0 to a pointer to the struct type
         ((TYPE *)0)->MEMBER   # refer to the member through this pointer
        &((TYPE *)0)->MEMBER   # take the member's address; since the base address is 0, this address equals the member's offset
(size_t)&((TYPE *)0)->MEMBER   # cast that address to size_t

C库中使用

引用头文件

#include <stddef.h>

参考代码:

#include <stddef.h>
#include <stdio.h>

struct address {
   char name[50];
   char street[50];
   int phone;
};

int main () {
   printf("name offset = %zu byte in address structure.\n",
   offsetof(struct address, name));

   printf("street offset = %zu byte in address structure.\n",
   offsetof(struct address, street));

   printf("phone offset = %zu byte in address structure.\n",
   offsetof(struct address, phone));

   return(0);
}

代码来自:https://www.tutorialspoint.com/c_standard_library/c_macro_offsetof.htm
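
To try it, save the code and build it with gcc; the offsets below assume the usual layout for this struct (the file name offsetof_demo.c is arbitrary):

gcc -o offsetof_demo offsetof_demo.c && ./offsetof_demo
# name offset = 0 byte in address structure.
# street offset = 50 byte in address structure.
# phone offset = 100 byte in address structure.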

openMP

If too many for loops in the code are parallelized with OpenMP (here one such #pragma omp parallel for is disabled in the diff):

diff --git a/deconvolution/clarkloop.cpp b/deconvolution/clarkloop.cpp
index 96fb91f..e017150 100644
--- a/deconvolution/clarkloop.cpp
+++ b/deconvolution/clarkloop.cpp
@@ -86,7 +86,7 @@ boost::optional<double> ClarkLoop::Run(ImageSet& convolvedResidual, const ao::uv
                        double* image = _clarkModel.Residual()[imgIndex];
                        const double* psf = doubleConvolvedPsfs[_clarkModel.Residual().PSFIndex(imgIndex)];
                        double psfFactor = componentValues[imgIndex];
-                       #pragma omp parallel for
+                       //#pragma omp parallel for
                        for(size_t px=0; px <_clarkModel.size(); ++px)
                        {
                                int psfX = _clarkModel.X(px) - x + _width/2;

Because of data dependencies the loop cannot actually be parallelized, yet the CPUs are kept almost fully busy (mostly spinning in OpenMP barriers):

[user1@taishan-arm-cpu03 perf]$ htop

  1  [|||||||||||||  60.5%]    25 [|||||||||||||  58.6%]   49 [|||||||||||||| 60.3%]    73 [|||||||||||||  60.5%]
  2  [||||||||||||   56.6%]    26 [|||||||||||||  57.6%]   50 [|||||||||||||  61.7%]    74 [|||||||||||||  59.2%]
  3  [|||||||||||||| 64.8%]    27 [|||||||||||||  57.9%]   51 [|||||||||||||  59.9%]    75 [|||||||||||||  61.2%]
  4  [|||||||||||||  57.1%]    28 [|||||||||||||  59.2%]   52 [|||||||||||||| 61.7%]    76 [||||||||||||   58.9%]
  5  [|||||||||||||  56.2%]    29 [|||||||||||||  60.0%]   53 [|||||||||||||  60.3%]    77 [|||||||||||||  62.2%]
  6  [|||||||||||||  55.9%]    30 [|||||||||||||  58.7%]   54 [|||||||||||||  58.9%]    78 [|||||||||||||  61.0%]
  7  [|||||||||||||  57.9%]    31 [|||||||||||||| 60.8%]   55 [|||||||||||||  60.0%]    79 [|||||||||||||  60.3%]
  8  [|||||||||||||  56.8%]    32 [|||||||||||||  58.4%]   56 [||||||||||||   58.9%]    80 [||              2.0%]
  9  [|||||||||||||  59.7%]    33 [|||||||||||||  60.5%]   57 [|||||||||||||  61.2%]    81 [|||||||||||||  60.0%]
  10 [|||||||||||||  58.1%]    34 [|||||||||||||| 61.0%]   58 [|||||||||||||  60.0%]    82 [|||||||||||||  58.4%]
  11 [|||||||||||||  57.0%]    35 [|||||||||||||  59.7%]   59 [|||||||||||||  59.7%]    83 [|||||||||||||  60.5%]
  12 [|||||||||||||  56.2%]    36 [||||||||||||   58.6%]   60 [|||||||||||||  59.2%]    84 [|||||||||||||  60.8%]
  13 [|||||||||||||||69.8%]    37 [|||||||||||||  59.7%]   61 [||||||||||||   59.5%]    85 [|||||||||||||  58.7%]
  14 [||||||||||||   56.3%]    38 [|||||||||||||  59.9%]   62 [|||||||||||||| 60.3%]    86 [                0.0%]
  15 [||||||||||||   56.2%]    39 [|||||||||||||  59.7%]   63 [|||||||||||||  59.7%]    87 [|||||||||||||  60.0%]
  16 [|||||||||||||  56.2%]    40 [||||||||||||   58.2%]   64 [|||||||||||||  59.7%]    88 [|||||||||||||  59.5%]
  17 [|||||||||||||  56.2%]    41 [|||||||||||||  58.4%]   65 [|||||||||||||  59.2%]    89 [|||||||||||||  58.7%]
  18 [|||||||||||||  60.9%]    42 [|||||||||||||  59.6%]   66 [||||||||||||   57.7%]    90 [|||||||||||||  60.5%]
  19 [||||||||||||   56.5%]    43 [|||||||||||||  59.9%]   67 [|||||||||||||  60.0%]    91 [                0.0%]
  20 [||||||||||||   59.6%]    44 [|||||||||||||  57.2%]   68 [|||||||||||||  60.0%]    92 [|||||||||||||  58.9%]
  21 [||||||||||||   57.1%]    45 [|||||||||||||  59.4%]   69 [|||||||||||||  57.8%]    93 [|||||||||||||  59.5%]
  22 [||||||||||||   54.3%]    46 [|||||||||||||  60.1%]   70 [|||||||||||||  60.3%]    94 [||||||||||||   58.9%]
  23 [|||||||||||||  58.4%]    47 [|||||||||||||  60.5%]   71 [|||||||||||||  60.0%]    95 [|||||||||||||  58.1%]
  24 [||||||||||||   55.2%]    48 [|||||||||||||  59.6%]   72 [|||||||||||||  60.5%]    96 [|||||||||||||  58.3%]
  Mem[|||||                                 28.8G/1021G]   Tasks: 45, 319 thr; 57 running
  Swp[                                         0K/16.0G]   Load average: 37.59 39.73 30.82
                                                           Uptime: 5 days, 18:12:00

  PID CPU USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
52123  13 sjtu_chif  20   0 16.4G 9551M 32064 R 5575  0.9 18h33:20 /home/user1/sourcecode/wsclean-2.7/build/wscle
57611  63 sjtu_chif  20   0 16.4G 9551M 32064 S 59.7  0.9  9:26.69 /home/user1/sourcecode/wsclean-2.7/build/wscle
57632  76 sjtu_chif  20   0 16.4G 9551M 32064 S 59.7  0.9  9:35.22 /home/user1/sourcecode/wsclean-2.7/build/wscle
57643  70 sjtu_chif  20   0 16.4G 9551M 32064 R 59.7  0.9  9:34.92 /home/user1/sourcecode/wsclean-2.7/build/wscle
57677  77 sjtu_chif  20   0 16.4G 9551M 32064 R 59.7  0.9  9:21.89 /home/user1/sourcecode/wsclean-2.7/build/wscle
57678  92 sjtu_chif  20   0 16.4G 9551M 32064 R 60.3  0.9  9:33.56 /home/user1/sourcecode/wsclean-2.7/build/wscle
57658  67 sjtu_chif  20   0 16.4G 9551M 32064 R 59.7  0.9  9:38.84 /home/user1/sourcecode/wsclean-2.7/build/wscle
57651  90 sjtu_chif  20   0 16.4G 9551M 32064 S 59.0  0.9  9:22.89 /home/user1/sourcecode/wsclean-2.7/build/wscle
57668  57 sjtu_chif  20   0 16.4G 9551M 32064 S 60.3  0.9  9:18.94 /home/user1/sourcecode/wsclean-2.7/build/wscle
F1Help  F2Setup F3SearchF4FilterF5Tree  F6SortByF7Nice -F8Nice +F9Kill  F10Quit

在热点中也可以看到

Samples: 2M of event 'cycles:ppp', 4000 Hz, Event count (approx.): 805718621249
Overhead  Shared Object               Symbol
  44.99%  libgomp.so.1.0.0            [.] gomp_barrier_wait_end
  43.95%  libgomp.so.1.0.0            [.] gomp_team_barrier_wait_end
   5.59%  [kernel]                    [k] queued_spin_lock_slowpath
   0.80%  libgomp.so.1.0.0            [.] gomp_barrier_wait
   0.75%  libgomp.so.1.0.0            [.] gomp_team_barrier_wait_final
   0.65%  [kernel]                    [k] arch_cpu_idle
   0.54%  [kernel]                    [k] finish_task_switch

The call stacks: thread 1 spawned many worker threads

Thread 2 (Thread 0xfffcba85f050 (LWP 57701)):
#0  0x0000ffff8632a6f0 in syscall () from /lib64/libc.so.6
#1  0x0000ffff8643abe4 in futex_wait (val=6569840, addr=<optimized out>) at ../.././libgomp/config/linux/futex.h:45
#2  do_wait (val=6569840, addr=<optimized out>) at ../.././libgomp/config/linux/wait.h:67
#3  gomp_barrier_wait_end (bar=<optimized out>, state=6569840) at ../.././libgomp/config/linux/bar.c:48
#4  0x0000ffff864382d8 in gomp_simple_barrier_wait (bar=<optimized out>) at ../.././libgomp/config/posix/simple-bar.h:60
#5  gomp_thread_start (xdata=<optimized out>) at ../.././libgomp/team.c:127
#6  0x0000ffff86677c48 in start_thread () from /lib64/libpthread.so.0
#7  0x0000ffff8632f600 in thread_start () from /lib64/libc.so.6
Thread 1 (Thread 0xffff85a19020 (LWP 52123)):
#0  0x0000ffff8632a6f0 in syscall () from /lib64/libc.so.6
#1  0x0000ffff8643ae74 in futex_wait (val=6569832, addr=<optimized out>) at ../.././libgomp/config/linux/futex.h:45
#2  do_wait (val=6569832, addr=<optimized out>) at ../.././libgomp/config/linux/wait.h:67
#3  gomp_team_barrier_wait_end (bar=<optimized out>, state=6569832) at ../.././libgomp/config/linux/bar.c:112
#4  0x0000ffff8643afe4 in gomp_team_barrier_wait_final (bar=<optimized out>) at ../.././libgomp/config/linux/bar.c:136
#5  0x0000ffff8643949c in gomp_team_end () at ../.././libgomp/team.c:934
#6  0x00000000005bea8c in ClarkLoop::Run (this=this@entry=0xffffc9191190, convolvedResidual=..., doubleConvolvedPsfs=...) at /home/user1/sourcecode/wsclean-2.7/deconvolution/clarkloop.cpp:89
#7  0x00000000004de618 in GenericClean::ExecuteMajorIteration (this=<optimized out>, dirtySet=..., modelSet=..., psfs=..., width=4000, height=4000, reachedMajorThreshold=@0xffffc9191ef0: true) at /home/user1/sourcecode/wsclean-2.7/deconvolution/genericclean.cpp:81
#8  0x00000000004f8d54 in ParallelDeconvolution::ExecuteMajorIteration (this=this@entry=0xffffc91936e8, dataImage=..., modelImage=..., psfImages=..., reachedMajorThreshold=@0xffffc9191ef0: true) at /home/user1/sourcecode/wsclean-2.7/deconvolution/paralleldeconvolution.cpp:164
#9  0x00000000004cdc4c in Deconvolution::Perform (this=this@entry=0xffffc91936e0, groupTable=..., reachedMajorThreshold=@0xffffc9191ef0: true, majorIterationNr=4) at /home/user1/sourcecode/wsclean-2.7/deconvolution/deconvolution.cpp:142
#10 0x0000000000482408 in WSClean::runIndependentGroup (this=this@entry=0xffffc91927f0, groupTable=..., primaryBeam=...) at /home/user1/sourcecode/wsclean-2.7/wsclean/wsclean.cpp:727
#11 0x000000000048afb0 in WSClean::RunClean (this=0xffffc91927f0) at /home/user1/sourcecode/wsclean-2.7/wsclean/wsclean.cpp:472
#12 0x0000000000461ff8 in CommandLine::Run (wsclean=...) at /home/user1/sourcecode/wsclean-2.7/wsclean/commandline.cpp:1308
#13 0x0000000000454aac in main (argc=32, argv=0xffffc9193a08) at /home/user1/sourcecode/wsclean-2.7/wscleanmain.cpp:13

完整的栈区情况请查看 52123

opencl

  • CPU: 鲲鹏920 ARM64

使用OpenCL一般包括:编写OpenCL应用程序,链接到OpenCL ICD loader,调用平台的OpenCL实现。

https://raw.githubusercontent.com/bashbaug/OpenCLPapers/markdown/images/OpenCL-ICDs.png

目前Kunpeng上可用的组合是: OpenCL ICD loader + POCL

  • 在Ubuntu上验证通过
root@0598642de616:~# clinfo
Number of platforms                               1
Platform Name                                   Portable Computing Language
Platform Vendor                                 The pocl project
Platform Version                                OpenCL 1.2 pocl 1.1 None+Asserts, LLVM 6.0.0, SLEEF, POCL_DEBUG, FP16
Platform Profile                                FULL_PROFILE
Platform Extensions                             cl_khr_icd
Platform Extensions function suffix             POCL
root@c698c179d2d2:~/opencl-book-samples/build/src/Chapter_2/HelloWorld# ./HelloWorld
Could not create GPU context, trying CPU...
0 3 6 9 12 15 18 21 24 27 30 33 36 39 42 45 48 51 54 57 60 63 66 69 72 75 78 81 84 87
90 93 96 99 102 105 108 111 114 117 120 123 126 129 132 135 138 141 144 147 150 153
156 159 162 165 168 171 174 177 180 183 186 189 192 195 198 201 204 207 210 213 216
219 222 225 228 231 234 237 240 243 246 249 252 255 258 261 264 267 270 273 276 279
  • 在CentOS上未验证通过
[1]OpenCL ICD loader https://github.com/KhronosGroup/OpenCL-ICD-Loader
[2]portable open source implementation of the OpenCL standard http://portablecl.org/

其他问题记录:

When building a project, the linker reports that it cannot find the OpenCL library (-lOpenCL):

# github.com/filecoin-project/filecoin-ffi
/opt/rh/devtoolset-8/root/usr/libexec/gcc/aarch64-redhat-linux/8/ld: cannot find -lOpenCL
collect2: error: ld returned 1 exit status

解决办法

sudo dnf install -y ocl-icd-devel.aarch64

openssl

签发证书流程

#CA中心生成自己的证书
# Generate private key::
openssl genrsa -out rootCA.key 2048
# Generate root certificate::
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 365 -out rootCA.pem

#向CA申请签发证书

NAME=localhost
# Generate a private key
openssl genrsa -out $NAME.key 2048
# Create a certificate-signing request
openssl req -new -key $NAME.key -out $NAME.csr

# Create a config file for the extensions
>$NAME.ext cat <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = $NAME # Be sure to include the domain name here because Common Name is not so commonly honoured by itself
DNS.2 = bar.$NAME # Optionally, add additional domains (I've added a subdomain here)
IP.1 = 127.0.0.1 # Optionally, add an IP address (if the connection which you have planned requires it)
EOF

# CA中心签发证书
# Create the signed certificate
openssl x509 -req -in $NAME.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial \
-out $NAME.crt -days 30 -sha256 -extfile $NAME.ext

在浏览器中信任根证书rootCA.pem

在服务器中安装 $NAME.crt 和 $NAME.key

https://stackoverflow.com/questions/7580508/getting-chrome-to-accept-self-signed-localhost-certificate?page=1&tab=active#tab-top

查看证书的指纹

openssl x509 -fingerprint -in server.crt
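
Two more commands that are handy for checking the result, using the same file names as the steps above:

openssl verify -CAfile rootCA.pem $NAME.crt    # confirm the certificate chains to the root CA
openssl x509 -in $NAME.crt -noout -text        # dump issuer, validity period and SANs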

ovs

OVS(Open vSwitch) 开源虚拟交换机,支持openflow, 可以建立vm或者容器间的VxLAN网络

OVS主要有两个服务 ovsdb-server ovs-vswitchd

For a quick look at OVS, see the short introductory video [4] and the OVS OpenFlow experiment [5].

ovs常用命令

ovs commands fall into several groups: switch commands start with ovs-vsctl, container-related commands with ovs-docker, and OpenFlow commands with ovs-ofctl [2]

ovs-vsctl list-br               # 列出host上的所有交换机
ovs-vsctl list-ifaces ovs-br2   # 列出ovs-br2上的所有接口

ovs-ofctl show ovs-br2          # openflow操作,查看虚拟交换机ovs-br2信息,端口、速率等

ovs-docker add-port ovs-br1 eth1 containerA --ipaddress=173.16.1.2/24   # 添加接口到docker中
ovs-docker del-port ovs-br2 eth1 container_overlay                      # 删除docker中的接口
ovs-ofctl add-flow ovs-br2 "priority=1,in_port=1,actions=output:4"      # 根据端口添加转发规则
ovs-ofctl add-flow ovs-br2 "dl_src=<mac/mask>,actions=<action>"         #
ovs-ofctl add-flow ovs-br2 "dl_dst=66:54:7a:62:b6:10,actions=output:4"
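
To inspect or remove the flows added above, the matching ovs-ofctl commands are (same bridge name as above):

ovs-ofctl dump-flows ovs-br2                 # list installed OpenFlow rules with packet counters
ovs-ofctl del-flows ovs-br2                  # delete all flows on the bridge
ovs-ofctl del-flows ovs-br2 "in_port=1"      # delete only the flows matching in_port=1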

ovs安装

make install 和 ovs-ctl start 需要root用户执行 [1]

wget https://www.openvswitch.org/releases/openvswitch-2.5.9.tar.gz
tar -xf openvswitch-2.5.9.tar.gz
cd openvswitch-2.5.9/
su - root
./boot.sh
./configure
make
make install
export PATH=$PATH:/usr/share/openvswitch/scripts
ovs-ctl -V
mkdir -p /usr/local/etc/openvswitch
ovsdb-tool create /usr/local/etc/openvswitch/conf.db vswitchd/vswitch.ovsschema
mkdir -p /usr/local/var/run/openvswitch
ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock     \
             --remote=db:Open_vSwitch,Open_vSwitch,manager_options     \
             --private-key=db:Open_vSwitch,SSL,private_key             \
             --certificate=db:Open_vSwitch,SSL,certificate             \
             --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert           \
             --pidfile --detach --log-file
ovs-vsctl --no-wait init
ovs-vswitchd --pidfile --detach --log-file
ovs-vsctl show

OVS上添加docker容器

向docker容器添加接口

ovs-docker add-port ovs-br1 eth1 container1 --ipaddress=173.16.1.2/24

性能测试请查看 docker network

OVS建立VXLAN overlay网络

在host1上执行操作

docker run -itd --name container_overlay ubuntu /bin/bash

ovs-vsctl add-br ovs-br2
ovs-vsctl add-port ovs-br2 vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=192.168.1.180

ovs-docker add-port ovs-br2 eth1 container_overlay --ipaddress=10.10.10.2/24 --mtu=1450
[root@centos86 ~]# ovs-vsctl show
4c77c506-329c-4c46-9f73-1fcbddcd37f4
    Bridge "ovs-br2"
        Port "ovs-br2"
            Interface "ovs-br2"
                type: internal
        Port "vxlan0"
            Interface "vxlan0"
                type: vxlan
                options: {remote_ip="192.168.1.180"}
[root@centos86 ~]#

在host2上执行操作

docker run -itd --name container_overlay ubuntu /bin/bash

ovs-vsctl add-br ovs-br2
ovs-vsctl add-port ovs-br2 vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=192.168.1.203

ovs-docker add-port ovs-br2 eth1 container_overlay --ipaddress=100.1.1.3/24
[root@localhost ~]# ovs-vsctl show
ecfa0606-a9fe-45c5-a00b-79dbc1afe918
    Bridge "ovs-br2"
        Port "vxlan0"
            Interface "vxlan0"
                type: vxlan
                options: {remote_ip="192.168.1.203"}
        Port "ovs-br2"
            Interface "ovs-br2"
                type: internal
[root@localhost ~]#

警告

VXLAN communicates on UDP port 4789 by default. A firewall may drop these packets, so either disable the firewall or open the port; for setup see the firewall section.

在host2的容器中测试ping

root@b5590303e704:/# ping 100.1.1.2
PING 100.1.1.2 (100.1.1.2) 56(84) bytes of data.
64 bytes from 100.1.1.2: icmp_seq=1 ttl=64 time=0.249 ms
64 bytes from 100.1.1.2: icmp_seq=2 ttl=64 time=0.191 ms
64 bytes from 100.1.1.2: icmp_seq=3 ttl=64 time=0.148 ms
64 bytes from 100.1.1.2: icmp_seq=4 ttl=64 time=0.136 ms

更加详细的性能测试实验 docker network

ovs问题记录:

ovsdb-server nice: cannot set niceness: Permission denied
[user1@centos86 openvswitch-2.5.9]$ ovs-ctl start
Starting ovsdb-server nice: cannot set niceness: Permission denied
ovsdb-server: /var/run/openvswitch/ovsdb-server.pid.tmp: create failed (Permission denied)
                                                        [FAILED]
ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
[user1@centos86 openvswitch-2.5.9]$ sudo ovs-ctl start

[user1@centos86 ~]$ ovs-vsctl add-br vovs-br0
ovs-vsctl: unix:/usr/local/var/run/openvswitch/db.sock: database connection failed (No such file or directory

解决办法: 参考安装步骤创建db.sock, 并且以root用户启动

[root@centos86 openvswitch-2.5.9]# ovs-ctl start
system ID not configured, please use --system-id
[root@centos86 openvswitch-2.5.9]# ovs-ctl start
Starting ovsdb-server                                      [  OK  ]
system ID not configured, please use --system-id ... failed!
Configuring Open vSwitch system IDs                        [  OK  ]
Inserting openvswitch module                               [  OK  ]
Starting ovs-vswitchd                                      [  OK  ]
Enabling remote OVSDB managers                             [  OK  ]

解决办法: 随机分配一个id [#ovs-ctl]_
[root@centos86 openvswitch-2.5.9]# ovs-ctl --system-id=random start

pandoc

下载地址

https://github.com/jgm/pandoc/releases/download/2.7.3/pandoc-2.7.3-windows-x86_64.msi

pandoc --from markdown --to rst

# 在docker中运行,转换html到rst文档
docker run --rm -v "$(pwd):/data" pandoc/core -f html -t rst -o /data/eipa.rst /data/eipa.html

parted

parted /dev/sdv mkpart primary 0.0GB 30.0GB
parted /dev/sdv mkpart primary 30.0GB 60.0GB
parted /dev/sdv mkpart primary 60.0GB 75.0GB
parted /dev/sdv mkpart primary 75.0GB 90.0GB
parted /dev/sdv mkpart primary 90.0GB 490.0GB
parted /dev/sdv mkpart primary 490.0GB 890.0GB

warning the resulting partition is not properly aligned for best performance

如何获得扇区最好性能

# cat /sys/block/sdb/queue/optimal_io_size
1048576
# cat /sys/block/sdb/queue/minimum_io_size
262144
# cat /sys/block/sdb/alignment_offset
0
# cat /sys/block/sdb/queue/physical_block_size
512

Add optimal_io_size to alignment_offset and divide the result by physical_block_size. In my case this was (1048576 + 0) / 512 = 2048. This number is the sector at which the partition should start. Your new parted command should look like: mkpart primary 2048s 100%

https://rainbow.chard.org/2013/01/30/how-to-align-partitions-for-best-performance-using-parted/

parted -a optimal /dev/sda mkpart primary 0% 4096MB
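
After creating a partition, parted can confirm the alignment (partition number 1 is assumed here):

parted /dev/sda align-check optimal 1        # prints "1 aligned" if partition 1 meets optimal alignment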

pdsh

多台服务器同时执行命令。

pdsh -w ^host.txt "uptime"

使用前提:

  1. 运行pdsh的设备可以免密登录到被控机

    See the ssh section on passwordless login for how to set this up.

  2. 指定好主机列表

[root@ceph4 bringup]# cat host.txt
128.5.65.117
128.5.65.118
128.5.65.119
128.5.65.120
  3. Set the remote command type to ssh

原因是pdsh默认采用的是rsh登录,修改成ssh登录即可,在环境变量/etc/profile里加入:

export PDSH_RCMD_TYPE=ssh

否则会出现:

[root@ceph4 bringup]# pdsh -w ^host.txt "uptime"
pdsh@ceph4: 128.5.65.120: connect: Connection refused
pdsh@ceph4: 128.5.65.119: connect: Connection refused
pdsh@ceph4: 128.5.65.118: connect: Connection refused
pdsh@ceph4: 128.5.65.117: connect: Connection refused

perf

The kernel source contains hooks called tracepoints scattered throughout; when the kernel reaches one, it emits an event. perf collects these events and produces reports from which you can understand kernel-level details of how a program runs.

安装

ubuntu

sudo apt install linux-tools-common
sudo apt install linux-tools-4.15.0-46-generic

常用命令

生成火焰图步骤

生成SVG三个步骤:

sudo perf record -F 99 -a -g -p 66350 -o ClarkLoop_x86.data -- sleep 60

perf script -i ClarkLoop_x86.data > ClarkLoop_x86.perf
../FlameGraph/stackcollapse-perf.pl ClarkLoop_x86.perf > ClarkLoop_x86.folded
../FlameGraph/flamegraph.pl ClarkLoop_x86.folded > ClarkLoop_x86.svg

复制粘贴执行

flame_graph_path=../FlameGraph/
perf_pid=${1:-all}    # the PID being profiled, passed as the first argument; only used in the file name below

perf_file="$(hostnamectl --static)-${perf_pid}-$(date +%Y-%m-%d-%H-%M-%S)"

sudo perf record -F 99 -a -g -o "$perf_file".data -- sleep 60

sudo perf script -i "$perf_file".data > "$perf_file".perf
sudo ${flame_graph_path}/stackcollapse-perf.pl "$perf_file".perf > "$perf_file".folded
sudo ${flame_graph_path}/flamegraph.pl "$perf_file".folded > "$perf_file".svg


if [ -e "$perf_file".svg ]; then
    sudo rm "$perf_file".perf "$perf_file".folded
fi

如果要去除cpu_idle

grep -v cpu_idle out.folded | ./flamegraph.pl > nonidle.svg

常用命令

perf record -o result.perf <command>     # profile <command>; add -a instead to profile the whole system
perf stat -ddd   -a -- sleep 2

资料

design.txt describes the implementation of perf: https://elixir.bootlin.com/linux/latest/source/tools/perf/design.txt
See also: http://taozj.net/201703/linux-perf-intro.html

perf record -e block:block_rq_issue -ag
ctrl+c
perf report
perf report -i file
block:block_rq_issue    块设备IO请求发出时触发的事件
-a                      追踪所有CPU
-g                      捕获调用图(stack traces)

快捷键停止程序后,捕获的数据会保存在perf.data中,使用perf report可以打印出保存的数据。 perf report 可以打印堆栈, 公共路径,以及每个路径的百分比。

Samples: 81  of event 'block:block_rq_issue', Event count (approx.): 81
  Children      Self  Trace output
-    2.47%     2.47%  8,0 FF 0 () 18446744073709551615 + 0 [jbd2/sda2-8]
     ret_from_fork
     kthread
     kjournald2
     jbd2_journal_commit_transaction
     journal_submit_commit_record
     submit_bh
     submit_bh_wbc
     submit_bio
     generic_make_request
     blk_queue_bio
     __blk_run_queue
     scsi_request_fn
     blk_peek_request
     blk_peek_request
+    1.23%     1.23%  8,0 FF 0 () 18446744073709551615 + 0 [swapper/0]
+    1.23%     1.23%  8,0 FF 0 () 18446744073709551615 + 0 [swapper/37]
+    1.23%     1.23%  8,0 W 4096 () 1050624 + 8 [kworker/u129:1]
+    1.23%     1.23%  8,0 W 4096 () 5327136 + 8 [kworker/u129:1]
+    1.23%     1.23%  8,0 W 12288 () 1287264 + 24 [kworker/u129:1]
+    1.23%     1.23%  8,0 W 12288 () 5334608 + 24 [kworker/u129:1]
+    1.23%     1.23%  8,0 W 4096 () 1280136 + 8 [kworker/u129:1]
+    1.23%     1.23%  8,0 W 4096 () 1282984 + 8 [kworker/u129:1]
+    1.23%     1.23%  8,0 W 4096 () 1285440 + 8 [kworker/u129:1]
+    1.23%     1.23%  8,0 W 4096 () 1287392 + 8 [kworker/u129:1]
+    1.23%     1.23%  8,0 W 4096 () 1287448 + 8 [kworker/u129:1]
+    1.23%     1.23%  8,0 W 4096 () 1287480 + 8 [kworker/u129:1]
+    1.23%     1.23%  8,0 W 4096 () 1287912 + 8 [kworker/u129:1]
+    1.23%     1.23%  8,0 W 4096 () 1291360 + 8 [kworker/u129:1]
+    1.23%     1.23%  8,0 W 4096 () 1291456 + 8 [kworker/u129:1]
+    1.23%     1.23%  8,0 W 4096 () 1291560 + 8 [swapper/0]
+    1.23%     1.23%  8,0 W 4096 () 1291656 + 8 [swapper/0]
+    1.23%     1.23%  8,0 W 4096 () 1291760 + 8 [swapper/0]
+    1.23%     1.23%  8,0 W 4096 () 1292360 + 8 [swapper/0]
+    1.23%     1.23%  8,0 W 4096 () 1292456 + 8 [swapper/0]
+    1.23%     1.23%  8,0 W 4096 () 1292568 + 8 [swapper/0]
+    1.23%     1.23%  8,0 W 4096 () 1294896 + 8 [swapper/0]
+    1.23%     1.23%  8,0 W 4096 () 1295416 + 8 [swapper/0]
+    1.23%     1.23%  8,0 W 4096 () 1295536 + 8 [swapper/0]
+    1.23%     1.23%  8,0 W 4096 () 1295568 + 8 [swapper/0]
+    1.23%     1.23%  8,0 W 4096 () 1295616 + 8 [swapper/0]
+    1.23%     1.23%  8,0 W 4096 () 1295808 + 8 [swapper/0]
+    1.23%     1.23%  8,0 W 4096 () 1295848 + 8 [swapper/0]
+    1.23%     1.23%  8,0 W 4096 () 15747672 + 8 [swapper/0]
+    1.23%     1.23%  8,0 WM 4096 () 1050640 + 8 [kworker/u129:1]

perf list

perf list [--no-desc] [--long-desc]
            [hw|sw|cache|tracepoint|pmu|sdt|metric|metricgroup|event_glob]
cache-misses                                       [Hardware event]
cache-references                                   [Hardware event]
..........
cpu-clock                                          [Software event]
cpu-migrations OR migrations                       [Software event]
..........
bpf-output                                         [Software event]
context-switches OR cs                             [Software event]
cpu-clock                                          [Software event]
cpu-migrations OR migrations                       [Software event]
..........
armv8_pmuv3_0/br_mis_pred/                         [Kernel PMU event]
armv8_pmuv3_0/br_pred/                             [Kernel PMU event]
..........
rNNN                                               [Raw hardware event descriptor]
cpu/t1=v1[,t2=v2,t3 ...]/modifier                  [Raw hardware event descriptor]
..........
block:block_bio_backmerge                          [Tracepoint event]
block:block_bio_bounce                             [Tracepoint event]
block:block_bio_complete                           [Tracepoint event]
block:block_bio_frontmerge                         [Tracepoint event]
block:block_bio_queue                              [Tracepoint event]
block:block_bio_remap                              [Tracepoint event]
dma_fence:dma_fence_emit                           [Tracepoint event]
ext4:ext4_allocate_blocks                          [Tracepoint event]
iommu:add_device_to_group                          [Tracepoint event]
kvm:kvm_entry                                      [Tracepoint event]
...........
syscalls:sys_enter_fchmod                          [Tracepoint event]
syscalls:sys_enter_fchmodat                        [Tracepoint event]
syscalls:sys_enter_fchown                          [Tracepoint event]
syscalls:sys_enter_fchownat                        [Tracepoint event]
syscalls:sys_enter_fcntl                           [Tracepoint event]

常用事件

cpu-cycles          : CPU clock cycles consumed
instructions        : number of machine instructions retired
cache-references    : number of cache accesses (references)
cache-misses        : number of cache misses
branch-instructions : number of branch instructions
branch-misses       : number of branch mispredictions
alignment-faults    : unaligned memory accesses fixed up by the kernel; handled transparently, but they cost performance
context-switches    : number of context switches
cpu-clock           : per-CPU clock time (each CPU has a high-resolution timer)
task-clock          : CPU clock time during which the task was actually running
cpu-migrations      : number of times the process migrated from one CPU to another
page-faults         : number of page faults
major-faults        : page faults where the page had been swapped out and needs I/O to bring it back
minor-faults        : page faults where the page is in physical memory but not yet mapped

Count tracepoint events per subsystem

perf list | awk -F: '/Tracepoint event/ { lib[$1]++ } END {
    for (l in lib) { printf "  %-16.16s %d\n", l, lib[l] } }' | sort | column

perf record 出现错误

[root@localhost perf_data]# perf record -ag fio --ramp_time=5 --runtime=60 --size=10g --ioengine=libaio --filename=/dev/sda --name=4k_read --numjobs=1 --iodepth=128 --rw=randread --bs=4k --direct=1
failed to mmap with 12 (Cannot allocate memory)

解决办法

[root@localhost perf_data]# sysctl -w vm.max_map_count=1048576
vm.max_map_count = 1048576
[root@localhost perf_data]#

最优编译选项下对比x86和ARM的差别

gcc -mcmodel=medium -O -DSTREAM_ARRAY_SIZE=100000000 stream.c -o option_O_100M_stream

ARM不支持perf mem

arm不支持

root@ubuntu:~/app/stream# perf mem record ls
failed: memory events not supported
root@ubuntu:~/app/stream#
root@ubuntu:~/app/stream# perf mem record -e list
failed: memory events not supported
root@ubuntu:~/app/stream#

x86支持

[root@localhost stream]# perf mem record -e list
ldlat-loads  : available
ldlat-stores : available
[root@localhost stream]#

perf 的cache-misses 是统计哪一层的

perf 支持下面cache相关的事件:

cache-misses            [Hardware event]        cache misses: memory accesses not served by the cache
cache-references        [Hardware event]        cache accesses (references)
L1-dcache-load-misses   [Hardware cache event]  L1 data-cache load misses
L1-dcache-loads         [Hardware cache event]  L1 data-cache loads
L1-dcache-store-misses  [Hardware cache event]  L1 data-cache store misses
L1-dcache-stores        [Hardware cache event]  L1 data-cache stores
L1-icache-load-misses   [Hardware cache event]  L1 instruction-cache misses
L1-icache-loads         [Hardware cache event]  L1 instruction-cache accesses

cache-misses counts memory accesses that are not served by the cache at all, i.e. that miss in L1, L2 and L3.

为什么perf统计的LDR指令比STR指令耗时更多

      :              for (j=0; j<STREAM_ARRAY_SIZE; j++)
 0.00 :        1054:       mov     x0, #0x0                        // #0
      :                  b[j] = scalar*c[j];
19.14 :        1058:       ldr     d0, [x19, x0, lsl #3]
 0.00 :        105c:       fmul    d0, d0, d8
 0.10 :        1060:       str     d0, [x21, x0, lsl #3]

可能的原因:

  1. According to the Cortex-A57 optimization guide, the LDR in the stream code needs at least 4 (or 2) cycles, while the STR completes in 1 (or 2) cycles (no equivalent document was found for the A72)
  2. An STR can complete into the cache/store buffer, whereas the LDR has to read from memory; because stream's arrays are large, those loads miss in the cache.

Instruction Group                      AArch64 Instructions   Exec Latency
Load, scaled register, post-indexed    LDR, LDRSW, PRFM       4(2)
Store, scaled register, post-indexed   STR{T}, STRB{T}        1(2)

pfring

pfring is a powerful tool for replaying network packets

pfring发包

/home/PF_RING-6.0.2/userland/examples/pfsend -i dna1 -f /data1/rawdata110   -r4 -n100
/home/PF_RING-6.0.2/userland/examples/pfsend -i dna0 -f /data1/rawdata002 -r 5
/home/PF_RING-6.0.2/userland/examples/pfsend -i dna0 -n 0 -r 5

#更多参数
-a              Active send retry
-f <.pcap file> Send packets as read from a pcap file
-g <core_id>    Bind this app to a core
-h              Print this help
-i <device>     Device name. Use device
-l <length>     Packet length to send. Ignored with -f
-n <num>        Num pkts to send (use 0 for infinite)
-r <rate>       Rate to send (example -r 2.5 sends 2.5 Gbit/sec, -r -1 pcap capture rate)
-m <dst MAC>    Reforge destination MAC (format AA:BB:CC:DD:EE:FF)
-b <num>        Number of different IPs (balanced traffic)
-w <watermark>  TX watermark (low value=low latency) [not effective on DNA]
-z              Disable zero-copy, if supported [DNA only]
-x <if index>   Send to the selected interface, if supported
-d              Daemon mode
-P <pid file>   Write pid to the specified file (daemon mode only)
-v              Verbose
watch -d -n 1 IPNetStat 0

测试线速

134
/home/jiuzhou/bin/jz_dpdk

206
/home/PF_RING-6.0.2/userland/examples/pfsend -i dna0 -f rawdata100 -r10 -n0
/home/PF_RING-6.0.2/userland/examples/pfsend -i dna0 -r10 -n0               # specify an IP packet capture to send; otherwise the self-generated packets may not be IP packets and the results are poor

pip

python 包管理工具

pip install SomePackage==1.0.4     # specific version
pip install Somepackage --user

python -m pdb xxx.py               #使用pdb调试代码
pip install -i https://mirrors.huaweicloud.com/repository/pypi/simple  <some-package>    #使用国内加速
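
Two related commands that come up constantly when reproducing an environment:

pip freeze > requirements.txt      # record the currently installed packages and versions
pip install -r requirements.txt    # reinstall the same set somewhere else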

porting advisor

X86移植arm平台检查工具 [1]

[1]https://www.huaweicloud.com/kunpeng/software/portingadvisor.html

PowerShell

获取当前文件夹的文件名,最后更新时间,创建时间,所有者

Get-ChildItem -path . | select name,lastwritetime,CreationTime,@{Name="Owner";Expression={(Get-ACL $_.Fullname).Owner}}

proc

man 手册的内容 [1]

NAME

proc - process information pseudo-filesystem

proc - 进程信息伪文件系统

DESCRIPTION

The proc filesystem is a pseudo-filesystem which provides an interface to kernel data structures. It is commonly mounted at /proc. Typically, it is mounted automatically by the system, but it can also be mounted manually using a command such as:

proc文件系统是一个伪文件系统, 它提供了一个到内核数据结构的接口。 它通常挂载在 /proc。通常情况下系统会自动挂载它。 但是用以下命令也能挂载

mount -t proc proc /proc

Most of the files in the proc filesystem are read-only, but some files are writable, allowing kernel variables to be changed.

实际尝试了一下:

user1@intel6248:~$ mkdir fakeproc
user1@intel6248:~$ sudo mount -t proc proc ./fakeproc

这个ls fakeproc目录就可以看到进程的信息了

user1@intel6248:~$ ls fakeproc/
1      1124   1201   133    1435   153    166    1871   204    2213   24324  270    305    336    366    399    432    464    5001   586    64320  73937  81     874   9719         locks
10     1125   1202   135    1436   154    167    18731  2044   222    244    272    3053   338    3677   4      434    465    501    587    65     74     816    88    98           mdstat
100    1127   1203   1350   1437   155    168    1877   2049   22283  24430  273    306    339    368    40     435    466    502    5897   652    74081  81692  8814  99           meminfo
[1]https://man7.org/linux/man-pages/man5/proc.5.html

ps

Four ways to view threads:

  1. 使用top命令,具体用法是 top -H

    加上这个选项,top的每一行就不是显示一个进程,而是一个线程。

  2. 使用ps命令,具体用法是 ps -xH

    这样可以查看所有存在的线程,也可以使用grep作进一步的过滤。

  3. 使用ps命令,具体用法是 ps -mq PID

    这样可以看到指定的进程产生的线程数目。

  4. ps -e -T | grep ffmpeg | wc

    -e 显示所有进程 -T 显示所有线程

pssh

常用命令如下

pssh -i -h client_hosts.txt "cat /sys/class/net/bond0/mtu"
pssh -h client_hosts.txt -i -P "iostat | grep sd* "
pssh -h hosts.txt -A -l ben -P -I<./uptime.sh
-i 指的是把每一个远程主机的输出合并后输出 对应 -P 参数
-P 远程主机有输出时马上打印
-A 提示输入密码
-h 指定主机列表文件
-l 指定用户名
-I 读取标准输入

client_hosts.txt的格式如下

root@client1:22
root@client2:22
root@client3:22
root@client4:22

其它常用选项:

pstack

Print the stack of a running process on Linux; a gdb alternative is sketched below.

pstack pid-xxx
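
If pstack is not available, gdb can print the same information (replace <PID> with the process you are inspecting):

gdb -p <PID> -batch -ex "thread apply all bt"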

pv

pv查看数据传输过程

[user1@centos /]$
[user1@centos /]$ pv -pra /mnt/CentOS-7-aarch64-Minimal-1810.iso > /home/user1/centos.iso
[6.23MiB/s] [5.79MiB/s] [======================>                                         ] 30%

python

pdb使用 https://www.cnblogs.com/xiaohai2003ly/p/8529472.html

主要使用的文档:python 官方中文文档 [1] Python 非官方中文翻译 [2]

[1]python 官方文档 https://docs.python.org/zh-cn/3/tutorial/index.html
[2]Python 非官方翻译 https://learnku.com/docs/tutorial/3.7.0/modules/3508

使用虚拟环境

使用虚拟环境可以保证工程依赖包完全独立于另一个工程。

安装工具包:

sudo apt install python3-virtualenv
sudo apt install python3-venv

创建虚拟环境

python3 -m venv tutorial-env

激活虚拟环境:

source tutorial-env/bin/activate

这个时候可以使用PIP安装所需要的软件包了。

退出虚拟环境:

deactivate

使用国内软件源代理安装软件包

pip install --trusted-host https://repo.huaweicloud.com -i https://repo.huaweicloud.com/repository/pypi/simple -r common/dockerfiles/requirements.txt
pip install --trusted-host https://repo.huaweicloud.com -i https://repo.huaweicloud.com/repository/pypi/simple wheel

rdtsc

rdtsc reads the x86 time-stamp counter; ARM has no such register, so it has to be replaced with an equivalent implementation.

在C/C++源文件当中,嵌入了汇编代码,使用rdtsc(X86特有的汇编指令)获取时间戳计数器的值。

#include "util/tc_monitor.h"
#include "util/tc_thread.h"
#include "util/tc_autoptr.h"

#define rdtsc(low,high) \
     __asm__ __volatile__("rdtsc" : "=a" (low), "=d" (high))

#define TNOW     tars::TC_TimeProvider::getInstance()->getNow()
#define TNOWMS   tars::TC_TimeProvider::getInstance()->getNowMs()

我们把它替换掉,使用内联函数即可:

#include "util/tc_monitor.h"
#include "util/tc_thread.h"
#include "util/tc_autoptr.h"

__inline __attribute__((always_inline)) uint64_t rdtsc() {
#if defined(__i386__)
    int64_t ret;
    __asm__ volatile ("rdtsc" : "=A" (ret) );
    return ret;
#elif defined(__x86_64__) || defined(__amd64__)
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a" (lo), "=d" (hi));
    return (((uint64_t)hi << 32) | lo);
#elif defined(__aarch64__)
uint64_t cntvct;
    asm volatile ("isb; mrs %0, cntvct_el0; isb; " : "=r" (cntvct) :: "memory");
    return cntvct;
#else
#warning No high-precision counter available for your OS/arch
    return 0;
#endif
}

#define TNOW     tars::TC_TimeProvider::getInstance()->getNow()
#define TNOWMS   tars::TC_TimeProvider::getInstance()->getNowMs()

原调用方法

void TC_TimeProvider::setTsc(timeval& tt)
{
    uint32_t low    = 0;
    uint32_t high   = 0;
    rdtsc(low,high);
    uint64_t current_tsc    = ((uint64_t)high << 32) | low;

    uint64_t& last_tsc      = _tsc[!_buf_idx];
    timeval& last_tt        = _t[_buf_idx];
    //.....
}

修改为新调用方法:

void TC_TimeProvider::setTsc(timeval& tt)
{
    uint64_t current_tsc    = rdtsc();

    uint64_t& last_tsc      = _tsc[!_buf_idx];
    timeval& last_tt        = _t[_buf_idx];
    //....
}

linux 获取时间的办法:

man 3 clock_gettime

int clock_gettime(clockid_t clk_id, struct timespec *tp);

clk_id可以是以下的值

  • CLOCK_REALTIME 系统实时时间,随系统实时时间改变而改变
  • CLOCK_REALTIME_COARSE 低精度,更快的CLOCK_REALTIME版本
  • CLOCK_MONOTONIC monotonic count since boot; not affected by jumps in the system wall-clock time, but affected by adjtime() and ntp
  • CLOCK_MONOTONIC_COARSE 低精度,更快的CLOCK_MONOTONIC版本
  • CLOCK_MONOTONIC_RAW 和CLOCK_MONOTONIC类似,但是提供基于硬件的时间,不受ntp影响
  • CLOCK_BOOTTIME 和CLOCK_MONOTONIC一样,除了记录系统休眠时间。
  • CLOCK_PROCESS_CPUTIME_ID,本进程到当前代码系统CPU花费的时间
  • CLOCK_THREAD_CPUTIME_ID,本线程到当前代码系统CPU花费的时间

示例程序:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <time.h>

int main()
{
        int i;
        struct timespec t1 = {0, 0};
        struct timespec t2 = {0, 0};

        for(i=0; i<5; i++)
        {
                if ( clock_gettime(CLOCK_REALTIME, &t1) == -1)
                {
                        perror("clock gettime");
                        exit(EXIT_FAILURE);
                }

                sleep(1);

                if ( clock_gettime(CLOCK_REALTIME, &t2) == -1)
                {
                        perror("clock gettime");
                        exit(EXIT_FAILURE);
                }
                printf("time pass:%ld ms\n", (t2.tv_sec-t1.tv_sec)*1000+
                                (t2.tv_nsec-t1.tv_nsec)/1000000);
                //t2.tv_nsec - t1.tv_nsec can be negative
        }
}

确认机器支持哪些时钟寄存器

cat /proc/cpuinfo | grep -i tsc
flags : ... tsc  rdtscp constant_tsc nonstop_tsc ...

RDTSC is not a serializing instruction, so the instructions you want to measure can be reordered outside the RDTSC window. Traditionally cpuid was used before it to enforce ordering; later CPUs provide RDTSCP, which is already ordered, so prefer RDTSCP over plain RDTSC when it is available.

Latency test of the different timestamp methods

在我的x86服务器上:

ClockBench.cpp
                   Method       samples     min     max     avg  median   stdev
           CLOCK_REALTIME       1023      21.00   25.00   22.37   23.00    0.88
    CLOCK_REALTIME_COARSE       1023       0.00    0.00    0.00    0.00    0.00
          CLOCK_MONOTONIC       1023      21.00 2173.00   24.37 1097.00   67.33
      CLOCK_MONOTONIC_RAW       1023     385.00  415.00  388.77  400.00    5.80
   CLOCK_MONOTONIC_COARSE       1023       0.00    0.00    0.00    0.00    0.00
              cpuid+rdtsc       1023     112.00  136.00  113.02  124.00    1.88
                   rdtscp       1023      32.00   32.00   32.00   32.00    0.00
                    rdtsc       1023      24.00   28.00   24.50   26.00    1.32
Using CPU frequency = 1.000000

reposync

启动容器用于同步

docker run -itd --rm --name reposync -v /mnt/repo/:/mnt/repo/ centos8-reposync

centos8-reposync 是已经配置好软件源的centos8镜像

reposync -p /mnt/repo --download-metadata --repo=epel

这样会在/mnt/repo/下面生成一个子目录epel

https://www.jianshu.com/p/6c3090968d71

restructuredtext

[1]语法学习资料:https://learn-rst.readthedocs.io/zh_CN/latest/rst%E6%8C%87%E4%BB%A4.html
[2]https://www.sphinx-doc.org/en/1.5/markup/inline.html
[3]https://hawkmoth.readthedocs.io/en/latest/syntax.html#syntax

英文版: https://runawayhorse001.github.io/SphinxGithub/rtxt.html 支持的高亮类型: pygments.org

文档介绍地址

https://docutils-zh-cn.readthedocs.io/zh_CN/latest/ref/rst/restructuredtext.html# https://tech.silverrainz.me/2017/03/29/use-sphinx-and-rst-to-manage-your-notes.html

Chinese word segmentation: https://docs.huihoo.com/scipy/scipy-zh-cn/pydoc_write_tools.html#html
Option lists: http://docutils.sourceforge.net/docs/user/rst/quickref.html
How to add hyperlink targets at the end of the document: https://docutils-zh-cn.readthedocs.io/zh_CN/latest/ref/rst/restructuredtext.html#rst-hyperlink-references
How to manage URLs uniformly at the end: :doc: vdbench

生成单个文件html
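
With Sphinx this is normally done with the singlehtml builder; either of the following works, assuming the standard Makefile layout (the source/build directory names depend on the project):

make singlehtml
sphinx-build -b singlehtml source build/singlehtml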

交叉引用例子 [2]

在图像,或者标题前,使用下划线开始设置标签, 可以在整个文档的任意地方使用 :ref: 引用这个标签

.. _my-reference-label:

Section to cross-reference
--------------------------

This is the text of the section.

It refers to the section itself, see :ref:`my-reference-label`.

分栏,或者所示边栏例子

Simple tables

Simple tables are preceded and ended with a sequence of “=” to indicate the columns, e.g:

== ==
aA bB
cC dD
== ==

Headers are indicated by another sequence of “=”, e.g:

===== ======
Vokal Umlaut
===== ======
aA    äÄ
oO    öÖ
===== ======

Column spans are followed by a sequence of “-” (except for the last header or last row of the table where we must have “=”), e.g:

=====  =====  ======
   Inputs     Output
------------  ------
  A      B    A or B
=====  =====  ======
False  False  False
True   False  True
False  True   True
True   True   True
=====  =====  ======

https://rest-sphinx-memo.readthedocs.io/en/latest/ReST.html#epigraph-and-highlights

Field lists: [4]

what:

Field lists map field names to field bodies, like database records. They are often part of an extension syntax.

how:

The field marker is a colon, the field name, and a colon.

The field body may contain one or more body elements, indented relative to the field marker.

kernel svg

如何编写C语言文档 [3]

使用kerneldoc: https://return42.github.io/linuxdoc/

[4]https://sphinx-rtd-theme.readthedocs.io/en/stable/demo/lists_tables.html

rpmbuild

编译rpm包,打包rpm包

rpmbuild --rebuild httpd-2.4.6-90.el7.centos.src.rpm

问题: warning: user mockbuild does not exist

[user1@centos ~]$ rpm -ivh kernel-4.18.0-80.7.2.el7.src.rpm
Updating / installing...
   1:kernel-4.18.0-80.7.2.el7         ################################# [100%]
warning: user mockbuild does not exist - using root
warning: group mockbuild does not exist - using root
warning: user mockbuild does not exist - using root
warning: group mockbuild does not exist - using root

解决办法

sudo useradd mockbuild

rsync

rsync -Pav -e "ssh -i $HOME/.ssh/somekey" username@hostname:/from/dir/ /to/dir/

http://www.lining0806.com/%e6%96%87%e4%bb%b6%e5%90%8c%e6%ad%a5%e5%88%a9%e5%99%a8%ef%bc%9arsync/

samba

在linux下挂载samba共享

sudo mount -t cifs //192.168.2.1/sda /mnt/disk

Here sda is the share name. To access the Samba share from Windows:

\\192.168.1.1\sda

If Windows 10 cannot discover/access the Samba share

今天遇到了同样的问题,查了很久资料,试了几种方法。
最后通过改注册表的方法解决了

参考文章:https://www.getnas.com/2015/11/2090.html

方法
注册表定位到
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters
右边新建 → DWORD (32位) 值,新建项命名为 AllowInsecureGuestAuth ,将该项的值设置为 1。

实测改完后正常访问,可以无密码访问,win10可以微软账户,也可以pin,都不影响

sar

查看网络性能

sar -n DEV 1
Average:        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
Average:     enp139s0    531.83    543.08    701.08    695.39      0.00      0.00      0.00      0.06
Average:    enp125s0f2      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:    enp125s0f3      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:    enp125s0f1      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:    enp125s0f0     27.57     48.99      1.96      5.88      0.00      0.00      0.00      0.00
Average:     enp131s0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:     enp138s0  32810.45  19189.66   3207.88 685787.70      0.00      0.00      0.00     56.18
Average:     enp140s0    500.32    513.04    667.00    663.98      0.00      0.00      0.00      0.05
Average:        bond1   1032.14   1056.12   1368.08   1359.36      0.00      0.00      0.00      0.06
Average:           lo     98.53     98.53    244.92    244.92      0.00      0.00      0.00      0.00
Average:        bond0  66661.68  39426.31   6489.19 1411346.36      0.00      0.00      0.00     57.81
Average:     enp134s0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:     enp137s0  33851.24  20236.65   3281.31 725558.67      0.00      0.00      0.00     59.44
Average:     enp133s0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:     enp132s0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00

查看缺页中断

sar -B 1
12:44:19 AM  pgpgin/s pgpgout/s   fault/s  majflt/s  pgfree/s pgscank/s pgscand/s pgsteal/s    %vmeff
12:44:21 AM      0.00      0.00   1567.50      0.00  10067.50      0.00      0.00      0.00      0.00
12:44:23 AM      0.00      0.00    308.00      0.00  57089.50      0.00      0.00      0.00      0.00
12:44:25 AM      0.00      0.00   1854.00      0.00  58106.00      0.00      0.00      0.00      0.00
12:44:27 AM      0.00      0.00    681.50      0.00 136089.50      0.00      0.00      0.00      0.00
12:44:29 AM      0.00      0.00    395.00      0.00  84721.00      0.00      0.00      0.00      0.00
12:44:31 AM      0.00      0.00   1826.00      0.00  92157.00      0.00      0.00      0.00      0.00
12:44:33 AM      0.00      0.00    307.00      0.00   9526.50      0.00      0.00      0.00      0.00
12:44:35 AM      0.00      0.00   1136.50      0.00   9094.00      0.00      0.00      0.00      0.00
12:44:37 AM      0.00     12.00    684.50      0.00   7098.00      0.00      0.00      0.00      0.00
12:44:39 AM      0.00      0.00   1980.50      0.00  59208.00      0.00      0.00      0.00      0.00

In the examples below, 3 = interval (seconds) and 10 = count.

To view process creation statistics, enter: # sar -c 3 10

To view I/O and transfer rate statistics, enter: # sar -b 3 10

To view paging statistics, enter: # sar -B 3 10

To view block device statistics, enter: # sar -d 3 10

To view statistics for all interrupt statistics, enter: # sar -I XALL 3 10

To view device specific network statistics, enter: # sar -n DEV 3 10 # sar -n EDEV 3 10

To view CPU specific statistics, enter: # sar -P ALL # Only 1st CPU stats # sar -P 1 3 10

To view queue length and load averages statistics, enter: # sar -q 3 10

To view memory and swap space utilization statistics, enter: # sar -r 3 10 # sar -R 3 10

To view status of inode, file and other kernel tables statistics, enter: # sar -v 3 10

To view system switching activity statistics, enter: # sar -w 3 10

To view swapping statistics, enter: # sar -W 3 10

To view statistics for a given process called Apache with PID # 3256, enter: # sar -x 3256 3 10

[1]https://www.cyberciti.biz/tips/identifying-linux-bottlenecks-sar-graphs-with-ksar.html

script

平时执行命令,希望保留命令以及命令的执行结果为log,复制粘贴的手工方式太繁琐了, 是时候自动记录log了

After running the command you are returned to a shell prompt, and from then on all output is saved to the file typescript

script

也可以手动指定文件

script -f program_all.log

Exit script; the recording also ends automatically when the shell exits

ctrl+d
exit
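
script can also record timing data so a session can be replayed later (the file names here are arbitrary):

script -t 2> session.tm session.log          # record the session plus timing information
scriptreplay session.tm session.log          # replay the session at its original speed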

sftp

get     #下载命令
lls     #显示客户端本地文件
lpwd    #显示客户端本地文件存储路径
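
A few more sftp commands in the same spirit (localfile/localdir are placeholders):

put localfile      # upload a file
put -r localdir    # upload a directory recursively (OpenSSH sftp)
ls                 # list files on the remote side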

ShellCheck

ShellCheck 是一个shell脚本静态分析工具

With a shell script you usually only find out what is wrong at run time, after who knows how many debugging runs; along the way intermediate files get created, and a wrong command can mangle a lot of files. So run a check right after writing the script:

me@ubuntu:~/virtual_machine$ shellcheck get_vm_ip.sh

In get_vm_ip.sh line 8:
                mac=$(virsh domiflist $vm | awk 'NR !=1 {print $5}')
                                      ^-- SC2086: Double quote to prevent globbing and word splitting.


In get_vm_ip.sh line 9:
                ip_match=$(arp -na | grep $mac | awk '{print $2}')
                                          ^-- SC2086: Double quote to prevent globbing and word splitting.
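
The SC2086 warnings above are fixed simply by quoting the expansions:

mac=$(virsh domiflist "$vm" | awk 'NR !=1 {print $5}')
ip_match=$(arp -na | grep "$mac" | awk '{print $2}')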

slurm

slurm申请集群资源

salloc -N 1 -w taishan-arm-cpu03 -p arm -n 96
salloc -N 1 -w taishan-arm-cpu02 -p arm -n 96

cal_year1.sh    patch: sjtu's optimization
image.sh        patch: sjtu's optimization
fits_warp.py    patch: sjtu's optimization

socat

Supports UDP and TCP port forwarding
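
Two minimal forwarding sketches (all addresses and ports are made-up examples):

socat TCP-LISTEN:8080,fork,reuseaddr TCP:192.168.1.100:80    # forward local TCP port 8080 to 192.168.1.100:80
socat UDP-LISTEN:5353,fork UDP:8.8.8.8:53                    # forward local UDP port 5353 to 8.8.8.8:53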

speccpu

步骤

#执行编译
cd tools/src/
./buildtools
#安装,可选,不安装也可以执行,需要引用bin目录的路径
./install.sh

#设置环境
su root
. ./shrc
#执行
./runspec -c ../config/lemon-2cpu.cfg 450 --rate 1 -noreportable
./runspec -c ../config/lemon-2cpu.cfg 450 --rate 1 -noreportable
./runspec -c ../config/lemon-2cpu.cfg 400 --rate 1 -noreportable
./runspec -c ../config/lemon-2cpu.cfg all

在ARM上编译speccpu2006

合入更改:

方法1:切换到cpu2006 ISO文件解压的根目录执行

patch -R -p1 < all_in_one.patch

You may be told you need write permission on the corresponding file or directory.

chmod +w 目录名/文件名

patch下载地址:[all_in_one]

方法2:切换到cpu2006 git目录执行

git am --abort #保证上次合入操作停止
git am 0001-modify-to-make-compile-success.patch

If it complains about insufficient permissions, fix the file permissions or use sudo.

编译

执行编译前,可能需要修改某些目录和文件的权限

sudo chmod +w tools/src -R
sudo chmod +w tools
sudo chmod +w config
sudo chmod +w MANIFEST

执行编译

cd tools/src/
./buildtools

执行成功的log

命令在根目录下执行

./bin/runspec -c d05-2cpu.cfg all --rate 64
Success: 3x400.perlbench 3x401.bzip2 3x403.gcc 3x410.bwaves 3x416.gamess 3x429.mcf 3x433.milc 3x434.zeusmp 3x435.gromacs 3x436.cactusADM 3x437.leslie3d 3x444.namd 3x445.g
obmk 3x447.dealII 3x450.soplex 3x453.povray 3x454.calculix 3x456.hmmer 3x458.sjeng 3x459.GemsFDTD 3x462.libquantum 3x464.h264ref 3x465.tonto 3x470.lbm 3x471.omnetpp 3x473
.astar 3x481.wrf 3x482.sphinx3 3x483.xalancbmk 3x998.specrand 3x999.specrand
Producing Raw Reports
mach: default
  ext: gcc43-64bit
    size: ref
      set: int
        format: raw -> /home/me/syncfile/cputool/speccpu2006/result/CINT2006.001.ref.rsf
Parsing flags for 400.perlbench base: done
Parsing flags for 401.bzip2 base: done
Parsing flags for 403.gcc base: done
Parsing flags for 429.mcf base: done
Parsing flags for 445.gobmk base: done
Parsing flags for 456.hmmer base: done
Parsing flags for 458.sjeng base: done
Parsing flags for 462.libquantum base: done
Parsing flags for 464.h264ref base: done
Parsing flags for 471.omnetpp base: done
Parsing flags for 473.astar base: done
Parsing flags for 483.xalancbmk base: done
Doing flag reduction: done
        format: flags -> /home/me/syncfile/cputool/speccpu2006/result/CINT2006.001.ref.flags.html
        format: ASCII -> /home/me/syncfile/cputool/speccpu2006/result/CINT2006.001.ref.txt
        format: CSV -> /home/me/syncfile/cputool/speccpu2006/result/CINT2006.001.ref.csv
        format: HTML -> /home/me/syncfile/cputool/speccpu2006/result/CINT2006.001.ref.html, /home/me/syncfile/cputool/speccpu2006/result/invalid.gif, /home/me/syncfile/c$
utool/speccpu2006/result/CINT2006.001.ref.gif
      set: fp
        format: raw -> /home/me/syncfile/cputool/speccpu2006/result/CFP2006.001.ref.rsf
Parsing flags for 410.bwaves base: done
Parsing flags for 416.gamess base: done
Parsing flags for 433.milc base: done
Parsing flags for 434.zeusmp base: done
Parsing flags for 435.gromacs base: done
Parsing flags for 436.cactusADM base: done
Parsing flags for 437.leslie3d base: done
Parsing flags for 444.namd base: done
Parsing flags for 447.dealII base: done
Parsing flags for 450.soplex base: done
Parsing flags for 453.povray base: done
Parsing flags for 454.calculix base: done
Parsing flags for 459.GemsFDTD base: done
Parsing flags for 465.tonto base: done
Parsing flags for 470.lbm base: done
Parsing flags for 481.wrf base: done
Parsing flags for 482.sphinx3 base: done
Doing flag reduction: done
        format: flags -> /home/me/syncfile/cputool/speccpu2006/result/CFP2006.001.ref.flags.html
        format: ASCII -> /home/me/syncfile/cputool/speccpu2006/result/CFP2006.001.ref.txt
        format: CSV -> /home/me/syncfile/cputool/speccpu2006/result/CFP2006.001.ref.csv
        format: HTML -> /home/me/syncfile/cputool/speccpu2006/result/CFP2006.001.ref.html, /home/me/syncfile/cputool/speccpu2006/result/CFP2006.001.ref.gif

The log for this run is in /home/me/syncfile/cputool/speccpu2006/result/CPU2006.001.log

runspec finished at Sat May 18 05:04:05 2019; 187651 total seconds elapsed

Results for reference:

case                score
[1616 int result]   421
[1616 fp result]    383
[1620 int result]   394
[1620 fp result]    283

Scores depend strongly on the hardware and software, so expect differences.

For all error reports, see [spec cpu2006 build errors].

speccpu patch

Apply the ARM-specific changes so that speccpu builds successfully.

Apply the changes:

Method 1: change to the root directory where the cpu2006 ISO was extracted and run

git apply -p1 all_in_one.patch

You may be prompted that write permission is needed on the corresponding file or directory.

chmod +w <directory or file>

Patch download: ../resources/all_in_one.patch

Method 2: change to the cpu2006 git directory and run

git am --abort   # make sure any previous am operation is aborted
git am 0001-modify-to-make-compile-success.patch

If a permission error is reported, change the file permissions or use sudo. Patch download: ../resources/0001-modify-to-make-compile-success.patch

Build

Before building, you may need to change the permissions on some directories and files.

sudo chmod +w tools/src -R
sudo chmod +w tools
sudo chmod +w config
sudo chmod +w MANIFEST

Run the build

cd tools/src/
./buildtools

ssh

ssh is the most common way to log in to a remote machine.

By default Ubuntu does not allow the root user to log in over ssh. Edit /etc/ssh/sshd_config:
PermitRootLogin yes
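
The change takes effect after the ssh service is restarted (assuming a systemd-based system):

# the unit is usually called ssh on Ubuntu and sshd on RHEL/CentOS
sudo systemctl restart ssh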

scp across a bastion host

A host often sits behind a firewall or bastion: you first log in to the gateway machine and then jump to the device on the internal network, and uploads and downloads need the same detour, which is tedious. With ssh tunnel forwarding you can ssh to the internal device, or copy files to it, directly.

                        +-----------------+
                        | gate_user       |
+---------------+       | gate.machine.net|      +-------------------+
|               |       | :8080           |      | target_user       |
|  host 9999    +--------------------------------> target server     |
|               |       |                 |      | 192.168.2.182:22  |
+---------------+       |                 |      +-------------------+
                        |  Gate Server    |
                        |                 |
                        +-----------------+

Single-command form

scp -o "ProxyCommand ssh gate_user@gate.machine.net -p 8080 -W %h:%p" target_user@target.machine:/home/file.png .

Multi-command form

ssh -f -N -L 19999:192.168.2.182:22 gate_user@gate.machine.net -p 8080
ssh target-user@localhost -p 19999          #login into target server
scp -P 19999 target-user@localhost:/remote/file  .
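
On newer OpenSSH (7.3 and later) the same result can be had with the ProxyJump (-J) option; a sketch using the names from the diagram above:

# jump through the gate host in a single step
ssh -J gate_user@gate.machine.net:8080 target_user@192.168.2.182
scp -o 'ProxyJump=gate_user@gate.machine.net:8080' target_user@192.168.2.182:/remote/file .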

proxy web

ssh -N -D 127.0.0.1:3128 xxx@xx.x.xx.xx -p 23231

The command runs in the foreground; to run it in the background, add the -f option.

ssh -f -N -D 127.0.0.1:3128 xxx@xx.x.xx.xx -p 23231

Then point the browser at the local proxy 127.0.0.1:3128.

Test with the following commands; if the page content comes back, the proxy works.

curl --socks5 127.0.0.1:3128 --verbose www.baidu.com
curl -x socks5://127.0.0.1:3128 cip.cc

Terminal proxy settings:

export http_proxy=socks5://127.0.0.1:7777
export https_proxy=socks5://127.0.0.1:7777

proxy yum

1. Make sure your socks5 server is reachable.

telnet your_socks5_server port

2. Modify the /etc/yum.conf file.

Add one line at the end of the file:

proxy=socks5://your_ip:port

proxy=socks5://192.168.0.47:3333

Running commands remotely

ssh nick@xxx.xxx.xxx.xxx "df -h"                          # run an ordinary command
ssh nick@xxx.xxx.xxx.xxx -t "top"                         # run an interactive command
ssh nick@xxx.xxx.xxx.xxx < test.sh                        # run a local script on the remote host
ssh nick@xxx.xxx.xxx.xxx 'bash -s' < test.sh helloworld   # run a local script with an argument on the remote host
ssh nick@xxx.xxx.xxx.xxx "/home/nick/test.sh"             # run a script that lives on the remote host

Running tasks on many hosts

Often you want to run the same task on many hosts at once.

pdsh -w ^all.txt -R ssh "uptime"

all.txt holds the list of host IPs:

192.168.100.101
192.168.100.102
192.168.100.103
192.168.100.104
192.168.100.105
192.168.100.106
192.168.100.107
192.168.100.108

All of the hosts should have passwordless login set up, i.e. ssh can log straight in:

ssh 192.168.100.101

Setting up passwordless login

ssh-keygen
ssh-copy-id 192.168.100.101
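
To push the key to every host in all.txt in one pass, a loop like the following can be used (a sketch; each host will still prompt for its password once):

# copy the public key to every host listed in all.txt
for host in $(cat all.txt); do
    ssh-copy-id -i ~/.ssh/id_rsa.pub "$host"
done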

Disabling password login

diff --git a/etc/ssh/sshd_config b/sshd_config
index 3194915..12a0d77 100644
--- a/etc/ssh/sshd_config
+++ b/sshd_config
@@ -62,7 +62,7 @@ AuthorizedKeysFile    .ssh/authorized_keys
 # To disable tunneled clear text passwords, change to no here!
 #PasswordAuthentication yes
 #PermitEmptyPasswords no
-PasswordAuthentication yes
+PasswordAuthentication no

 # Change to no to disable s/key passwords
 #ChallengeResponseAuthentication yes

Effect after disabling:

──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Session stopped
    - Press <return> to exit tab
    - Press R to restart session
    - Press S to save terminal output to file

Disconnected: No supported authentication methods available (server sent: publickey,gssapi-keyex,gssapi-with-mic)

Listening on multiple ports

Add multiple Port options in /etc/ssh/sshd_config:

Port 22
Port 2222

Background

In asymmetric cryptography the public key is used for encryption and the private key for decryption.

You can hand the public key to anyone; the encrypted data can only be decrypted with the private key you hold, so a third party intercepting it gains nothing.

1. How HTTPS transport security works


1. The client sends a request to the server.

2. The server returns the public key used for data encryption, cert_pub, to the client.

3. The client validates the public key (not expired, issued by a legitimate authority, and so on).

4. The client generates a key for encrypting this session's data, sess_key, encrypts it with the cert_pub returned by the server and sends it over (safe in transit: only the holder of cert_pri can decrypt it).

5. The server decrypts it with the private key cert_pri and obtains sess_key; at this point both server and client hold the sess_key used to encrypt the session traffic.

6. From then on data is encrypted with sess_key before sending and decrypted with sess_key after receiving.

2. ssh password login

1. The client sends a login request: ssh user@hostname

2. The server accepts the request and sends its public key ser_rsa.pub to the client.

3. The client enters the password, encrypts it with ser_rsa.pub and sends it to the server (the sensitive data travels safely).

4. The server receives the encrypted password, decrypts it with its private key ser_rsa, and checks whether the password is valid.

5. The client generates a session key sess_key, encrypts it with ser_rsa.pub and sends it to the server (again safe in transit).

6. The server decrypts it with ser_rsa; client and server then use sess_key to encrypt the session traffic.

3. ssh public key authentication

So-called key authentication uses a pair of strings: one is the public key, whose contents anyone may see and which is used for encryption; the other is the private key, visible only to its owner and used for decryption. Ciphertext produced with the public key is easy to decrypt with the matching private key, but guessing the private key from the public key is extremely hard.

ssh key authentication relies on exactly this property. The server and the client each have their own public and private key. For convenience the following symbols are used:

  • cli_pub  client public key
  • cli_pri  client private key
  • ser_pub  server public key
  • ser_pri  server private key

Before authentication, the client must somehow register its public key cli_pub on the server (see the sketch after the list below).

The authentication process then works as follows.

  1. Session key generation
    1. The client requests a connection; the server sends ser_pub to the client.
    2. The server generates a session id, sess_id, and sends it to the client.
    3. The client generates a session key, sess_key, and computes sess_xor = sess_id xor sess_key.
    4. The client encrypts sess_xor with ser_pub and sends the result to the server (the sensitive value travels encrypted).
    5. The server decrypts it with ser_pri and obtains sess_xor.
    6. The server computes sess_xor xor sess_id and recovers sess_key.
    7. Now both server and client know the session key sess_key; all later traffic is encrypted with it.
  2. Authentication
    1. The server generates a random string random_str, encrypts it with cli_pub to get ency(random_str), and sends it to the client.
    2. The client decrypts ency(random_str) with cli_pri and recovers random_str.
    3. The client computes the md5 of sess_key+random_str, cli_md5(sess_key+random_str), where sess_key is the session key from the previous phase.
    4. The server computes the md5 of sess_key+random_str, ser_md5(sess_key+random_str).
    5. The client sends cli_md5(sess_key+random_str) to the server.
    6. The server compares ser_md5(sess_key+random_str) with cli_md5(sess_key+random_str); if they match, the client is authenticated.
  3. Transport
    1. Session traffic is then encrypted and decrypted with the session key sess_key.
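
A minimal sketch of the registration step with OpenSSH defaults (user and host names are examples):

# generate the client key pair (cli_pri -> ~/.ssh/id_rsa, cli_pub -> ~/.ssh/id_rsa.pub)
ssh-keygen -t rsa -b 4096
# register cli_pub on the server; the challenge/response above then replaces the password
ssh-copy-id user@server
# ...or do the same thing manually
cat ~/.ssh/id_rsa.pub | ssh user@server 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'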

sshfs

Mount a directory of a remote host onto the local host over ssh.
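
A minimal sketch, assuming the sshfs package is installed and the mount point exists (paths and host are examples):

# mount the remote directory locally
sshfs nick@192.168.100.101:/data /mnt/remote_data
# unmount it again when done
fusermount -u /mnt/remote_data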

storcli64

Creating RAID from within the OS

storcli64 usage:

Show all FW information:  storcli64 /c0 show all
Show controller status:   storcli64.exe /c0 show
Create RAID 1:            storcli64.exe /c0 add vd r1 drives=33:0-1
Create RAID 10:           storcli64 /c0 add vd type=raid10 size=2gb,3gb,4gb names=tmp1,tmp2,tmp3 drives=252:2-3,5,7 pdperarray=2 (reference)
storcli64 /c0 add vd r10 drives=0,1,2,3 pdperarray=2
storcli64 /c1 add vd r1 size=30GB drives=2,3

Delete all VDs:  storcli64.exe /c0/vall del

Full initialization of VD*:           storcli64 /c0/v* start init full
Fast (forced) initialization of VD*:  storcli64 /c0/v* start init force
Check initialization progress:        storcli64 /c0/vall show init

Turn off the disk cache:              storcli64 /c0/vall set pdcache=off

Check error counters:  storcli64 /c0/pall show all

Clear error counters
echo -e "\n8\n2\n10\n1\nPHYID\n05\n7\n0\n11\n08\n0\n0\n" | storelibtest -expert

for((i=0;i<=23;i++)); do echo -e "\n8\n2\n10\n1\n$i\n05\n7\n0\n11\n08\n0\n0\n" | storelibtest -expert; done
Clears the error counters on phy 0-23; adjust 23 as needed.

stoponerror

strace

strace traces the system calls a program makes.

Useful options:

-t  prefix each call with the time of day at which it was made
-T  show the time spent inside each call
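
For example (a sketch; the traced command and output file are arbitrary):

# trace ls with timestamps and per-call durations, writing the trace to a file
strace -t -T -o ls.trace ls /tmp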

strace can also be restricted to specific syscalls [1]:

strace -e open,unshare,setns,mount,umount2 ip netns exec ns1 cat /etc/whatever 2>&1
[1]https://unix.stackexchange.com/questions/471122/namespace-management-with-ip-netns-iproute2/471214#471214

stream

STREAM is one of the industry-standard memory bandwidth benchmarks; the tool is maintained by the computer science department of the University of Virginia.

Official guide: tutorial

Download the source

The C source is used here as an example.

wget https://www.cs.virginia.edu/stream/FTP/Code/stream.c

For the complete project code, see the link.

Compile

gcc -O2 -mcmodel=large -fopenmp -DSTREAM_ARRAY_SIZE=10000000 -DNTIMES=30 -DOFFSET=4096 stream.c -o stream

-mcmodel=large                option for servers with large memory
-DSTREAM_ARRAY_SIZE=10000000  choose the number of array elements from the L3 cache size, so the memory taken by each array exceeds the L3 cache
-DNTIMES=30                   number of times each kernel is executed; the best result is reported
-DOFFSET=4096                 may change how the arrays are aligned in memory

Run

./stream
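
Because stream is built with -fopenmp, the thread count can be controlled with the standard OpenMP environment variable before running (64 is an example value):

# pin the benchmark to a fixed number of OpenMP threads
export OMP_NUM_THREADS=64
./stream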

1616 server

me@ubuntu:~/code/stream$ ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 100000000 (elements), Offset = 4096 (elements)
Memory per array = 762.9 MiB (= 0.7 GiB).
Total memory required = 2288.8 MiB (= 2.2 GiB).
Each kernel will be executed 30 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Number of Threads requested = 64
Number of Threads counted = 64
-------------------------------------------------------------
Your clock granularity/precision appears to be 1 microseconds.
Each test below will take on the order of 37823 microseconds.
   (= 37823 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:           58415.5     0.033761     0.027390     0.073813
Scale:          58925.3     0.031476     0.027153     0.074888
Add:            56900.2     0.047931     0.042179     0.076715
Triad:          57035.6     0.049256     0.042079     0.089866
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------

1620 server

[me@centos stream]$ ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 10000000 (elements), Offset = 4096 (elements)
Memory per array = 76.3 MiB (= 0.1 GiB).
Total memory required = 228.9 MiB (= 0.2 GiB).
Each kernel will be executed 30 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Number of Threads requested = 128
Number of Threads counted = 128
-------------------------------------------------------------
Your clock granularity/precision appears to be 1 microseconds.
Each test below will take on the order of 3460 microseconds.
   (= 3460 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:          103292.1     0.002324     0.001549     0.004953
Scale:          89145.7     0.002493     0.001795     0.004599
Add:           101608.3     0.003173     0.002362     0.004439
Triad:         105318.4     0.003154     0.002279     0.005893
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------

Raspberry Pi (ARM) results

The Raspberry Pi has 1 GB of memory in total; the memory frequency is not documented.

pi@raspberrypi:~/app/stream $ ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 10000000 (elements), Offset = 0 (elements)
Memory per array = 76.3 MiB (= 0.1 GiB).
Total memory required = 228.9 MiB (= 0.2 GiB).
Each kernel will be executed 10 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Your clock granularity/precision appears to be 1 microseconds.
Each test below will take on the order of 114310 microseconds.
   (= 114310 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:            2030.0     0.079971     0.078817     0.083276
Scale:           2030.5     0.080576     0.078797     0.084133
Add:             1912.1     0.126776     0.125519     0.129104
Triad:           1652.5     0.145481     0.145232     0.145794
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------

x86 PC results

root@SZX:~/working/stream# ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 10000000 (elements), Offset = 0 (elements)
Memory per array = 76.3 MiB (= 0.1 GiB).
Total memory required = 228.9 MiB (= 0.2 GiB).
Each kernel will be executed 10 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Your clock granularity/precision appears to be 1 microseconds.
Each test below will take on the order of 14092 microseconds.
   (= 14092 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:            7528.7     0.024472     0.021252     0.027480
Scale:           7773.3     0.024656     0.020583     0.028275
Add:             7866.3     0.034299     0.030510     0.036829
Triad:           8017.6     0.035185     0.029934     0.038185
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------
root@SZX:~/working/stream#

x86 server results

me@Board:~/stream$ ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 10000000 (elements), Offset = 0 (elements)
Memory per array = 76.3 MiB (= 0.1 GiB).
Total memory required = 228.9 MiB (= 0.2 GiB).
Each kernel will be executed 10 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Your clock granularity/precision appears to be 1 microseconds.
Each test below will take on the order of 26998 microseconds.
   (= 26998 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:            8830.0     0.018140     0.018120     0.018157
Scale:           8800.5     0.018211     0.018181     0.018317
Add:             9812.8     0.024520     0.024458     0.024679
Triad:           9722.5     0.024715     0.024685     0.024746
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------
me@Board:~/stream$ lscpu

Result analysis

1616 memory hardware info:

Array Handle: 0x0007
Error Information Handle: Not Provided
Total Width: 72 bits
Data Width: 64 bits
Size: 32 GB
Form Factor: DIMM
Set: None
Locator: DIMM120 J17
Bank Locator: SOCKET 1 CHANNEL 2 DIMM 0
Type: DDR4
Type Detail: Synchronous Registered (Buffered)
Speed: 2400 MT/s
Manufacturer: Samsung
Serial Number: 0x35125924
Asset Tag: 1709
Part Number: M393A4K40BB1-CRC
Rank: 2
Configured Clock Speed: 2400 MT/s
Minimum Voltage: 1.2 V
Maximum Voltage: 1.2 V
Configured Voltage: 1.2 V

Count: 4

1620 memory hardware info:

Array Handle: 0x0006
Error Information Handle: Not Provided
Total Width: 72 bits
Data Width: 64 bits
Size: 32 GB
Form Factor: DIMM
Set: None
Locator: DIMM170 J31
Bank Locator: SOCKET 1 CHANNEL 7 DIMM 0
Type: DDR4
Type Detail: Synchronous Registered (Buffered)
Speed: 2666 MT/s
Manufacturer: Samsung
Serial Number: 0x40C3BA1D
Asset Tag: 1838
Part Number: M393A4K40BB2-CTD
Rank: 2
Configured Clock Speed: 2666 MT/s
Minimum Voltage: 1.2 V
Maximum Voltage: 2.0 V
Configured Voltage: 1.2 V

Count: 16

Formula:

speed (MT/s) * data width (bits) / 8 * DIMM count / 1024 = bandwidth (GiB/s)

Server  Theoretical bandwidth             STREAM result
1616    2400*64/8*4/1024  = 75 GiB/s      55 GiB/s
1620    2666*64/8*16/1024 = 333 GiB/s     102 GiB/s
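
The same arithmetic can be checked quickly in the shell:

echo $((2400*64/8*4/1024))     # 1616: 75
echo $((2666*64/8*16/1024))    # 1620: 333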

Issue: size limit on static arrays

When the array size is set very large, the link step fails with relocation errors:

[root@localhost stream]# gcc -DSTREAM_ARRAY_SIZE=100000000  stream.c -o option_no_100M_stream
/tmp/ccTzV1dQ.o: In function `main':
stream.c:(.text+0x546): relocation truncated to fit: R_X86_64_32S against `.bss'
stream.c:(.text+0x57a): relocation truncated to fit: R_X86_64_32S against `.bss'
stream.c:(.text+0x5f9): relocation truncated to fit: R_X86_64_32S against `.bss'
stream.c:(.text+0x62e): relocation truncated to fit: R_X86_64_32S against `.bss'
stream.c:(.text+0x65e): relocation truncated to fit: R_X86_64_32S against `.bss'
stream.c:(.text+0x6a0): relocation truncated to fit: R_X86_64_32S against `.bss'
stream.c:(.text+0x6b9): relocation truncated to fit: R_X86_64_32S against `.bss'
stream.c:(.text+0x6c5): relocation truncated to fit: R_X86_64_32S against `.bss'
stream.c:(.text+0x6dd): relocation truncated to fit: R_X86_64_32S against `.bss'
collect2: error: ld returned 1 exit status
[root@localhost stream]#

The fix is to add the compile option

-mcmodel=medium
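
For example, rebuilding the failing case above with the option added (a sketch based on the command shown earlier):

# -mcmodel=medium allows static arrays that no longer fit the default code model
gcc -mcmodel=medium -DSTREAM_ARRAY_SIZE=100000000 stream.c -o option_no_100M_stream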

stream run on arm

  1. According to the GNU compiler documentation, -O and -O1 are the same optimization level, and the stream results are indeed identical.
  2. -O/-O1 give the better results; -O2/-O3 improve Add and Triad but can make the overall result worse. The official stream example uses -O.
  3. Without an optimization option, or with -O0, the results are the worst.
  4. STREAM officially requires each array to be at least 4x the cache size. Once the arrays satisfy this, making them larger increases cache-misses but does not change the measured bandwidth.

Hardware info

CPU
root@ubuntu:~# lscpu
Architecture:        aarch64
Byte Order:          Little Endian
CPU(s):              64
On-line CPU(s) list: 0-63
Thread(s) per core:  1
Core(s) per socket:  32
Socket(s):           2
NUMA node(s):        4
Vendor ID:           ARM
Model:               2
Model name:          Cortex-A72
Stepping:            r0p2
BogoMIPS:            100.00
L1d cache:           32K
L1i cache:           48K
L2 cache:            1024K
L3 cache:            16384K
NUMA node0 CPU(s):   0-15
NUMA node1 CPU(s):   16-31
NUMA node2 CPU(s):   32-47
NUMA node3 CPU(s):   48-63
Flags:               fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
Memory
Handle 0x0009, DMI type 17, 40 bytes
Memory Device
Array Handle: 0x0007
Error Information Handle: Not Provided
Total Width: 72 bits
Data Width: 64 bits
Size: 32 GB
Form Factor: DIMM
Set: None
Locator: DIMM000 J11
Bank Locator: SOCKET 0 CHANNEL 0 DIMM 0
Type: DDR4
Type Detail: Synchronous Registered (Buffered)
Speed: 2400 MT/s
Manufacturer: Samsung
Serial Number: 0x351254BC
Asset Tag: 1709
Part Number: M393A4K40BB1-CRC
Rank: 2
Configured Clock Speed: 2400 MT/s
Minimum Voltage: 1.2 V
Maximum Voltage: 1.2 V
Configured Voltage: 1.2 V

root@ubuntu:~# free -h
              total        used        free      shared  buff/cache   available
Mem:           125G        1.8G        123G         17M        786M        123G
Swap:          2.0G          0B        2.0G

Software info

root@ubuntu:~# cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.1 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.1 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
perf
root@ubuntu:~# perf -v
perf version 4.15.18
root@ubuntu:~# dpkg -s linux-tools-common
Package: linux-tools-common
Status: install ok installed
Priority: optional
Section: kernel
Installed-Size: 330
Maintainer: Ubuntu Kernel Team <kernel-team@lists.ubuntu.com>
Architecture: all
Multi-Arch: foreign
Source: linux
Version: 4.15.0-46.49
Depends: lsb-release
Description: Linux kernel version specific tools for version 4.15.0
 This package provides the architecture independent parts for kernel
 version locked tools (such as perf and x86_energy_perf_policy) for
 version PGKVER.
gcc
root@ubuntu:~# gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/aarch64-linux-gnu/7/lto-wrapper
Target: aarch64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 7.3.0-27ubuntu1~18.04' --with-bugurl=file:///usr/share/doc/gcc-7/README.Bugs --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --prefix=/usr --with-gcc-major-version-only --program-suffix=-7 --program-prefix=aarch64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-libquadmath --disable-libquadmath-support --enable-plugin --enable-default-pie --with-system-zlib --enable-multiarch --enable-fix-cortex-a53-843419 --disable-werror --enable-checking=release --build=aarch64-linux-gnu --host=aarch64-linux-gnu --target=aarch64-linux-gnu
Thread model: posix
gcc version 7.3.0 (Ubuntu/Linaro 7.3.0-27ubuntu1~18.04)

Results

Array size 10000000, no optimization option

root@ubuntu:~/app/stream# gcc stream.c -o stream
root@ubuntu:~/app/stream# perf stat -e cache-misses ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 10000000 (elements), Offset = 0 (elements)
Memory per array = 76.3 MiB (= 0.1 GiB).
Total memory required = 228.9 MiB (= 0.2 GiB).
Each kernel will be executed 10 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Your clock granularity/precision appears to be 2 microseconds.
Each test below will take on the order of 62633 microseconds.
   (= 31316 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:            2549.2     0.062770     0.062765     0.062775
Scale:           3186.0     0.050415     0.050220     0.051743
Add:             4065.9     0.059105     0.059028     0.059161
Triad:           4217.8     0.056916     0.056902     0.056935
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------

 Performance counter stats for './stream':

        60,424,173      cache-misses

       2.718200988 seconds time elapsed

Array size 10000000, option -O1

root@ubuntu:~/app/stream# gcc -O1 stream.c -o stream
perf stat -e cache-misses ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 10000000 (elements), Offset = 0 (elements)
Memory per array = 76.3 MiB (= 0.1 GiB).
Total memory required = 228.9 MiB (= 0.2 GiB).
Each kernel will be executed 10 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Your clock granularity/precision appears to be 2 microseconds.
Each test below will take on the order of 15230 microseconds.
   (= 7615 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:           10603.1     0.015104     0.015090     0.015135
Scale:          11113.3     0.014412     0.014397     0.014426
Add:            11757.3     0.020444     0.020413     0.020470
Triad:          11739.4     0.020467     0.020444     0.020485
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------

 Performance counter stats for './stream':

         8,937,017      cache-misses

       0.935925494 seconds time elapsed

Array size 10000000, option -O2

root@ubuntu:~/app/stream# gcc -O2 stream.c -o stream
root@ubuntu:~/app/stream# perf stat -e cache-misses ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 10000000 (elements), Offset = 0 (elements)
Memory per array = 76.3 MiB (= 0.1 GiB).
Total memory required = 228.9 MiB (= 0.2 GiB).
Each kernel will be executed 10 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Your clock granularity/precision appears to be 1 microseconds.
Each test below will take on the order of 14916 microseconds.
   (= 14916 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:           10847.5     0.014777     0.014750     0.014815
Scale:          11175.5     0.014349     0.014317     0.014374
Add:            11782.7     0.020399     0.020369     0.020430
Triad:          11778.0     0.020391     0.020377     0.020417
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------

 Performance counter stats for './stream':

         8,511,736      cache-misses

       0.916443067 seconds time elapsed

Array size 10000000, option -O3

root@ubuntu:~/app/stream# gcc -O3 stream.c -o stream
perf stat -e cache-misses ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 10000000 (elements), Offset = 0 (elements)
Memory per array = 76.3 MiB (= 0.1 GiB).
Total memory required = 228.9 MiB (= 0.2 GiB).
Each kernel will be executed 10 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Your clock granularity/precision appears to be 1 microseconds.
Each test below will take on the order of 15007 microseconds.
   (= 15007 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:           11190.4     0.014314     0.014298     0.014326
Scale:          11327.3     0.014139     0.014125     0.014156
Add:            11374.4     0.021113     0.021100     0.021124
Triad:          11753.8     0.020434     0.020419     0.020447
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------

 Performance counter stats for './stream':

        14,925,428      cache-misses

       0.911908645 seconds time elapsed

Array size 10000000, option -O0

root@ubuntu:~/app/stream# gcc -O0 stream.c -o stream
root@ubuntu:~/app/stream# perf stat -e cache-misses ./stream

Array size 20000000, no optimization option

root@ubuntu:~/app/stream# gcc -DSTREAM_ARRAY_SIZE=20000000 stream.c -o stream
root@ubuntu:~/app/stream# perf stat -e cache-misses ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 20000000 (elements), Offset = 0 (elements)
Memory per array = 152.6 MiB (= 0.1 GiB).
Total memory required = 457.8 MiB (= 0.4 GiB).
Each kernel will be executed 10 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Your clock granularity/precision appears to be 2 microseconds.
Each test below will take on the order of 125238 microseconds.
   (= 62619 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:            2549.6     0.125998     0.125508     0.128678
Scale:           3345.5     0.097012     0.095650     0.101765
Add:             4172.2     0.117473     0.115047     0.120862
Triad:           4232.1     0.190047     0.113418     0.794803
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------

 Performance counter stats for './stream':

       135,075,891      cache-misses

       6.586831272 seconds time elapsed

Array size 20000000, option -O

root@ubuntu:~/app/stream# gcc -O -DSTREAM_ARRAY_SIZE=20000000 stream.c -o stream
root@ubuntu:~/app/stream# perf stat -e cache-misses ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 20000000 (elements), Offset = 0 (elements)
Memory per array = 152.6 MiB (= 0.1 GiB).
Total memory required = 457.8 MiB (= 0.4 GiB).
Each kernel will be executed 10 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Your clock granularity/precision appears to be 2 microseconds.
Each test below will take on the order of 29887 microseconds.
   (= 14943 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:           11145.6     0.028769     0.028711     0.028829
Scale:          11149.8     0.028731     0.028700     0.028757
Add:            12317.8     0.039278     0.038968     0.039673
Triad:          12387.1     0.038914     0.038750     0.039191
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------

 Performance counter stats for './stream':

        19,804,344      cache-misses

       1.803501622 seconds time elapsed

Array size 20000000, option -O1

root@ubuntu:~/app/stream# gcc -O1 -DSTREAM_ARRAY_SIZE=20000000 stream.c -o stream
root@ubuntu:~/app/stream# perf stat -e cache-misses ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 20000000 (elements), Offset = 0 (elements)
Memory per array = 152.6 MiB (= 0.1 GiB).
Total memory required = 457.8 MiB (= 0.4 GiB).
Each kernel will be executed 10 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Your clock granularity/precision appears to be 2 microseconds.
Each test below will take on the order of 32049 microseconds.
   (= 16024 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:            9760.2     0.032807     0.032786     0.032823
Scale:           9978.5     0.032094     0.032069     0.032113
Add:            11772.8     0.040799     0.040772     0.040848
Triad:          11914.5     0.040312     0.040287     0.040324
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------

 Performance counter stats for './stream':

        21,508,150      cache-misses

       1.925709392 seconds time elapsed

Array size 20000000, option -O2

root@ubuntu:~/app/stream# gcc -O2 -DSTREAM_ARRAY_SIZE=20000000 stream.c -o stream
root@ubuntu:~/app/stream# perf stat -e cache-misses ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 20000000 (elements), Offset = 0 (elements)
Memory per array = 152.6 MiB (= 0.1 GiB).
Total memory required = 457.8 MiB (= 0.4 GiB).
Each kernel will be executed 10 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Your clock granularity/precision appears to be 2 microseconds.
Each test below will take on the order of 31427 microseconds.
   (= 15713 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:            9762.9     0.032804     0.032777     0.032827
Scale:           9688.2     0.033068     0.033030     0.033112
Add:            12236.8     0.039240     0.039226     0.039267
Triad:          12132.6     0.039607     0.039563     0.039621
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------

 Performance counter stats for './stream':

        19,257,528      cache-misses

       1.883242611 seconds time elapsed

Array size 20000000, option -O3

root@ubuntu:~/app/stream# gcc -O3 -DSTREAM_ARRAY_SIZE=20000000 stream.c -o stream
root@ubuntu:~/app/stream# perf stat -e cache-misses ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 20000000 (elements), Offset = 0 (elements)
Memory per array = 152.6 MiB (= 0.1 GiB).
Total memory required = 457.8 MiB (= 0.4 GiB).
Each kernel will be executed 10 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Your clock granularity/precision appears to be 2 microseconds.
Each test below will take on the order of 31440 microseconds.
   (= 15720 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:           10445.6     0.030642     0.030635     0.030653
Scale:          10470.5     0.030577     0.030562     0.030598
Add:            11711.1     0.040998     0.040987     0.041024
Triad:          11779.7     0.040759     0.040748     0.040772
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------

 Performance counter stats for './stream':

        30,113,752      cache-misses

       1.845901002 seconds time elapsed

Array size 20000000, option -O0

root@ubuntu:~/app/stream# gcc -O0 -DSTREAM_ARRAY_SIZE=20000000 stream.c -o stream
root@ubuntu:~/app/stream# perf stat -e cache-misses ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 20000000 (elements), Offset = 0 (elements)
Memory per array = 152.6 MiB (= 0.1 GiB).
Total memory required = 457.8 MiB (= 0.4 GiB).
Each kernel will be executed 10 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Your clock granularity/precision appears to be 2 microseconds.
Each test below will take on the order of 125272 microseconds.
   (= 62636 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:            2549.0     0.126023     0.125538     0.128636
Scale:           3220.9     0.099850     0.099352     0.101575
Add:             4206.3     0.117327     0.114115     0.120934
Triad:           4233.4     0.114978     0.113385     0.118181
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------

 Performance counter stats for './stream':

       124,664,340      cache-misses

       5.506577423 seconds time elapsed

stream run on x86

  1. According to the GNU compiler documentation, -O and -O1 are the same optimization level, and the stream results are indeed identical.
  2. -O/-O1 give the better results; the official stream example uses -O.
  3. Without an optimization option, or with -O0, the results are the worst.
  4. STREAM officially requires each array to be at least 4x the cache size. Once the arrays satisfy this, making them larger increases cache-misses but does not change the measured bandwidth.

Hardware info

CPU: E5-2697A v4 @ 2.60GHz, 64 logical CPUs, L3 cache 40 MB

[root@localhost ~]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                64
On-line CPU(s) list:   0-63
Thread(s) per core:    2
Core(s) per socket:    16
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 79
Model name:            Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
Stepping:              1
CPU MHz:               1199.196
CPU max MHz:           3600.0000
CPU min MHz:           1200.0000
BogoMIPS:              5188.11
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              40960K
NUMA node0 CPU(s):     0-15,32-47
NUMA node1 CPU(s):     16-31,48-63
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts

Memory

32 GB x 16 DIMMs = 512 GB of memory, 2133 MT/s

Handle 0x0017, DMI type 17, 40 bytes
Memory Device
        Array Handle: 0x0000
        Error Information Handle: Not Provided
        Total Width: 72 bits
        Data Width: 64 bits
        Size: 32 GB
        Form Factor: DIMM
        Set: None
        Locator: DIMM130
        Bank Locator: _Node1_Channel3_Dimm0
        Type: DRAM
        Type Detail: Synchronous Registered (Buffered)
        Speed: 2133 MT/s
        Manufacturer: Hynix
        Serial Number: 0x116E4F85
        Asset Tag: NO DIMM
        Part Number: HMA84GR7MFR4N-TF
        Rank: 2
        Configured Clock Speed: 2133 MT/s
        Minimum Voltage: 1.2 V
        Maximum Voltage: 1.2 V
        Configured Voltage: 1.2 V


[root@localhost ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           503G        926M        499G         26M        2.8G        499G
Swap:          4.0G          0B        4.0G

Software info

OS centos 7
[root@localhost ~]# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
stream 5.10
/* Program: STREAM                                                       */
/* Revision: $Id: stream.c,v 5.10 2013/01/17 16:01:06 mccalpin Exp mccalpin $ */
/*
perf 3.10.0
Installed Packages
Name        : perf
Arch        : x86_64
Version     : 3.10.0
Release     : 957.5.1.el7
Size        : 5.4 M
Repo        : installed
From repo   : updates
Summary     : Performance monitoring for the Linux kernel
URL         : http://www.kernel.org/
License     : GPLv2
Description : This package contains the perf tool, which enables performance monitoring
            : of the Linux kernel.
gcc 4.8.5
[root@localhost stream]# gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/lto-wrapper
Target: x86_64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-linker-hash-style=gnu --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,go,lto --enable-plugin --enable-initfini-array --disable-libgcj --with-isl=/builddir/build/BUILD/gcc-4.8.5-20150702/obj-x86_64-redhat-linux/isl-install --with-cloog=/builddir/build/BUILD/gcc-4.8.5-20150702/obj-x86_64-redhat-linux/cloog-install --enable-gnu-indirect-function --with-tune=generic --with-arch_32=x86-64 --build=x86_64-redhat-linux
Thread model: posix
gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC)

Results

Array size 10000000, no optimization option

[root@localhost stream]# gcc stream.c -o stream
[root@localhost stream]# perf stat -e cache-misses ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 10000000 (elements), Offset = 0 (elements)
Memory per array = 76.3 MiB (= 0.1 GiB).
Total memory required = 228.9 MiB (= 0.2 GiB).
Each kernel will be executed 10 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Your clock granularity/precision appears to be 1 microseconds.
Each test below will take on the order of 27683 microseconds.
   (= 27683 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:            6063.6     0.026436     0.026387     0.026485
Scale:           5873.7     0.027301     0.027240     0.027391
Add:             8484.2     0.028379     0.028288     0.028467
Triad:           7965.8     0.030200     0.030129     0.030277
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------

 Performance counter stats for './stream':

       111,963,498      cache-misses

       1.291149781 seconds time elapsed

Array size 20000000, no optimization option

gcc -DSTREAM_ARRAY_SIZE=20000000 stream.c -o stream

To make each array larger than 4x the L3 cache, 20000000 array elements should be used:

20000000*8/1024/1024 = 152.6 MB
[root@localhost stream]# perf stat -e cache-misses ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 20000000 (elements), Offset = 0 (elements)
Memory per array = 152.6 MiB (= 0.1 GiB).
Total memory required = 457.8 MiB (= 0.4 GiB).
Each kernel will be executed 10 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Your clock granularity/precision appears to be 1 microseconds.
Each test below will take on the order of 50360 microseconds.
   (= 50360 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:            5960.6     0.053784     0.053686     0.054260
Scale:           5867.8     0.054635     0.054535     0.055155
Add:             8444.3     0.056898     0.056843     0.056956
Triad:           7965.9     0.060358     0.060257     0.060863
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------

 Performance counter stats for './stream':

       212,489,174      cache-misses

       2.579120788 seconds time elapsed

[root@localhost stream]#

The results are close to the default run, so the default array size already gives valid results on x86.

Array size 20000000, option -O

The guide uses -O which, on checking, is equivalent to -O1.

gcc -O -DSTREAM_ARRAY_SIZE=20000000 stream.c -o stream
[root@localhost stream]# gcc -O -DSTREAM_ARRAY_SIZE=20000000 stream.c -o stream
[root@localhost stream]# perf stat -e cache-misses ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 20000000 (elements), Offset = 0 (elements)
Memory per array = 152.6 MiB (= 0.1 GiB).
Total memory required = 457.8 MiB (= 0.4 GiB).
Each kernel will be executed 10 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Your clock granularity/precision appears to be 1 microseconds.
Each test below will take on the order of 18355 microseconds.
   (= 18355 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:           10046.5     0.031868     0.031852     0.031885
Scale:          10236.7     0.031280     0.031260     0.031298
Add:            10847.5     0.044293     0.044250     0.044328
Triad:          11011.7     0.043612     0.043590     0.043641
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------

 Performance counter stats for './stream':

       163,072,098      cache-misses

       1.749581755 seconds time elapsed

[root@localhost stream]#

Array size 20000000, option -O1

[root@localhost stream]# gcc -O1 -DSTREAM_ARRAY_SIZE=20000000 stream.c -o stream
[root@localhost stream]# perf stat -e cache-misses ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 20000000 (elements), Offset = 0 (elements)
Memory per array = 152.6 MiB (= 0.1 GiB).
Total memory required = 457.8 MiB (= 0.4 GiB).
Each kernel will be executed 10 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Your clock granularity/precision appears to be 1 microseconds.
Each test below will take on the order of 18549 microseconds.
   (= 18549 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:           10058.8     0.031857     0.031813     0.031907
Scale:          10222.4     0.031368     0.031304     0.031422
Add:            10832.0     0.044360     0.044313     0.044405
Triad:          10977.7     0.043773     0.043725     0.043835
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------

 Performance counter stats for './stream':

       162,980,110      cache-misses

       1.757360340 seconds time elapsed

[root@localhost stream]#

Array size 20000000, option -O2

[root@localhost stream]# gcc -O2 -DSTREAM_ARRAY_SIZE=20000000 stream.c -o stream
[root@localhost stream]# perf stat -e cache-misses ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 20000000 (elements), Offset = 0 (elements)
Memory per array = 152.6 MiB (= 0.1 GiB).
Total memory required = 457.8 MiB (= 0.4 GiB).
Each kernel will be executed 10 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Your clock granularity/precision appears to be 1 microseconds.
Each test below will take on the order of 18338 microseconds.
   (= 18338 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:           10048.3     0.031864     0.031846     0.031882
Scale:          10144.9     0.031571     0.031543     0.031592
Add:            10861.4     0.044214     0.044193     0.044234
Triad:          10896.2     0.044092     0.044052     0.044117
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------

 Performance counter stats for './stream':

       163,743,497      cache-misses

       1.761638820 seconds time elapsed

数组20000000,选项-O3

[root@localhost stream]# gcc -O3 -DSTREAM_ARRAY_SIZE=20000000 stream.c -o stream
[root@localhost stream]# perf stat -e cache-misses ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 20000000 (elements), Offset = 0 (elements)
Memory per array = 152.6 MiB (= 0.1 GiB).
Total memory required = 457.8 MiB (= 0.4 GiB).
Each kernel will be executed 10 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Your clock granularity/precision appears to be 1 microseconds.
Each test below will take on the order of 18628 microseconds.
   (= 18628 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:           16874.0     0.018975     0.018964     0.018988
Scale:           9966.6     0.032122     0.032107     0.032143
Add:            10795.3     0.044488     0.044464     0.044501
Triad:          10761.4     0.044620     0.044604     0.044649
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------

 Performance counter stats for './stream':

       155,187,006      cache-misses

       1.653922727 seconds time elapsed

[root@localhost stream]#

数组20000000,选项-O0

[root@localhost stream]# perf stat -e cache-misses ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 20000000 (elements), Offset = 0 (elements)
Memory per array = 152.6 MiB (= 0.1 GiB).
Total memory required = 457.8 MiB (= 0.4 GiB).
Each kernel will be executed 10 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Your clock granularity/precision appears to be 1 microseconds.
Each test below will take on the order of 50331 microseconds.
   (= 50331 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:            5956.9     0.053873     0.053719     0.054393
Scale:           5870.6     0.054687     0.054509     0.055268
Add:             8448.0     0.056944     0.056818     0.057079
Triad:           7960.3     0.060478     0.060299     0.061003
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------

 Performance counter stats for './stream':

       212,044,019      cache-misses

       2.581445722 seconds time elapsed

[root@localhost stream]#

数组30000000,选项-O

[root@localhost stream]# gcc -O -DSTREAM_ARRAY_SIZE=30000000 stream.c -o stream
[root@localhost stream]# perf stat -e cache-misses ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 30000000 (elements), Offset = 0 (elements)
Memory per array = 228.9 MiB (= 0.2 GiB).
Total memory required = 686.6 MiB (= 0.7 GiB).
Each kernel will be executed 10 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Your clock granularity/precision appears to be 1 microseconds.
Each test below will take on the order of 26309 microseconds.
   (= 26309 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:           10593.9     0.045330     0.045309     0.045347
Scale:          10608.0     0.045292     0.045249     0.045328
Add:            11352.4     0.063469     0.063423     0.063617
Triad:          11288.3     0.063819     0.063783     0.063925
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------

 Performance counter stats for './stream':

       249,967,744      cache-misses

       2.531537686 seconds time elapsed

数组40000000,选项-O

[root@localhost stream]# gcc -O -DSTREAM_ARRAY_SIZE=40000000 stream.c -o stream
[root@localhost stream]# perf stat -e cache-misses ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 40000000 (elements), Offset = 0 (elements)
Memory per array = 305.2 MiB (= 0.3 GiB).
Total memory required = 915.5 MiB (= 0.9 GiB).
Each kernel will be executed 10 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Your clock granularity/precision appears to be 1 microseconds.
Each test below will take on the order of 35411 microseconds.
   (= 35411 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:           10221.9     0.062669     0.062611     0.062891
Scale:          10435.7     0.061434     0.061328     0.061619
Add:            10980.6     0.087577     0.087427     0.087796
Triad:          11013.6     0.087207     0.087165     0.087377
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------

 Performance counter stats for './stream':

       334,073,899      cache-misses

       3.469917205 seconds time elapsed

数组50000000,选项-O

[root@localhost stream]# gcc -O -DSTREAM_ARRAY_SIZE=50000000 stream.c -o stream
[root@localhost stream]# perf stat -e cache-misses ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 50000000 (elements), Offset = 0 (elements)
Memory per array = 381.5 MiB (= 0.4 GiB).
Total memory required = 1144.4 MiB (= 1.1 GiB).
Each kernel will be executed 10 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Your clock granularity/precision appears to be 1 microseconds.
Each test below will take on the order of 44100 microseconds.
   (= 44100 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:           10957.6     0.073081     0.073009     0.073334
Scale:          10329.6     0.077563     0.077447     0.077736
Add:            11045.7     0.108870     0.108640     0.109140
Triad:          11196.7     0.107286     0.107175     0.107587
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------

 Performance counter stats for './stream':

       426,955,169      cache-misses

       4.245849035 seconds time elapsed

数组60000000,选项-O

[root@localhost stream]# gcc -O -DSTREAM_ARRAY_SIZE=60000000 stream.c -o stream
[root@localhost stream]# perf stat -e cache-misses ./stream
-------------------------------------------------------------
STREAM version $Revision: 5.10 $
-------------------------------------------------------------
This system uses 8 bytes per array element.
-------------------------------------------------------------
Array size = 60000000 (elements), Offset = 0 (elements)
Memory per array = 457.8 MiB (= 0.4 GiB).
Total memory required = 1373.3 MiB (= 1.3 GiB).
Each kernel will be executed 10 times.
 The *best* time for each kernel (excluding the first iteration)
 will be used to compute the reported bandwidth.
-------------------------------------------------------------
Your clock granularity/precision appears to be 1 microseconds.
Each test below will take on the order of 52796 microseconds.
   (= 52796 clock ticks)
Increase the size of the arrays if this shows that
you are not getting at least 20 clock ticks per test.
-------------------------------------------------------------
WARNING -- The above is only a rough guideline.
For best results, please be sure you know the
precision of your system timer.
-------------------------------------------------------------
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:           10130.0     0.094858     0.094768     0.095173
Scale:          10631.8     0.090408     0.090295     0.090570
Add:            11216.3     0.128531     0.128385     0.128746
Triad:          11289.0     0.127709     0.127558     0.127884
-------------------------------------------------------------
Solution Validates: avg error less than 1.000000e-13 on all three arrays
-------------------------------------------------------------

 Performance counter stats for './stream':

       495,551,653      cache-misses

       5.104867523 seconds time elapsed

[root@localhost stream]#

strip

Discard symbols from object files — strip removes the symbol table and debug information from an object file or executable.

编译目标文件

[me@centos stream]$ gcc -O2 -mcmodel=large -fopenmp -DSTREAM_ARRAY_SIZE=10000000 -DNTIMES=30 -DOFFSET=4096 stream.c -o stream

Check the binary: it is not stripped, and its size is 74144 bytes.

[me@centos stream]$ file stream
stream: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 3.7.0, BuildID[sha1]=cdb301912f8c7d837cefa0bccfd6f8962f8aeae7, not stripped
[me@centos stream]$ ls -la stream
-rwxrwxr-x. 1 me me 74144 Aug 21 10:26 stream

strip目标文件

[me@centos stream]$ strip stream

Check the binary again: it is now stripped, and its size has dropped to 67744 bytes.

[me@centos stream]$ file stream
stream: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 3.7.0, BuildID[sha1]=cdb301912f8c7d837cefa0bccfd6f8962f8aeae7, stripped
[me@centos stream]$ ls -la stream
-rwxrwxr-x. 1 me me 67744 Aug 21 10:26 stream

This is just one example; for large compiled binaries such as MongoDB, stripping can shrink the files from several hundred MB to a few tens of MB.

Before strip: image0    After strip: image1

sysrq

神奇的sysrq机制

https://www.kernel.org/doc/html/latest/admin-guide/sysrq.html

I wrote a script that uses sysrq to observe the nfs_lock bug in which make hangs: [use_sysrq_dump.sh]
The resulting output can be viewed in [dump_file.txt]
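The referenced script is not reproduced here; as a minimal sketch of the mechanism (assuming magic sysrq is built into the kernel), the following commands dump task state into the kernel log, which is essentially what such a dump script does:

echo 1 > /proc/sys/kernel/sysrq      # enable all sysrq functions
echo t > /proc/sysrq-trigger         # dump the state of all tasks to the kernel log
echo w > /proc/sysrq-trigger         # dump only blocked (uninterruptible) tasks -- useful when make is stuck
dmesg | tail -n 200 > dump_file.txt  # collect the result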

systemd

systemd manages services started at boot. Traditionally this was done by dropping init scripts under init.d; mainstream distributions have gradually replaced that approach with systemd.

配置文件路径

A unit is represented by a .service file; one .service file may start several programs, scripts, or commands as part of the unit.
User-created .service units are best placed under /usr/lib/systemd/system/ (Raspberry Pi, RedHat 7.6).
Units shipped by the system live under /lib/systemd/system/.
After the file has been placed in the proper path, run the enable command so that systemd loads the unit and also loads it automatically after a reboot.
systemctl enable xxx.service

The command creates a symbolic link under /etc/systemd/system pointing to the .service file in /lib/systemd/system/, for example:

me@ubuntu:/etc/systemd/system$ tree --charset ascii
.
|-- dbus-org.freedesktop.resolve1.service -> /lib/systemd/system/systemd-resolved.service
|-- default.target.wants
|   `-- ureadahead.service -> /lib/systemd/system/ureadahead.service
|-- emergency.target.wants
|   `-- friendly-recovery.service -> /lib/systemd/system/friendly-recovery.service
|-- final.target.wants
|   `-- snapd.system-shutdown.service -> /lib/systemd/system/snapd.system-shutdown.service
|-- getty.target.wants
|   `-- getty@tty1.service -> /lib/systemd/system/getty@.service
|-- graphical.target.wants
|   `-- accounts-daemon.service -> /lib/systemd/system/accounts-daemon.service
|-- iscsi.service -> /lib/systemd/system/open-iscsi.service
|-- libvirt-bin.service -> /lib/systemd/system/libvirtd.service
|-- multi-user.target.wants
|   |-- atd.service -> /lib/systemd/system/atd.service
|   |-- bind9.service -> /lib/systemd/system/bind9.service
|   |-- console-setup.service -> /lib/systemd/system/console-setup.service
|   |-- cron.service -> /lib/systemd/system/cron.service
|   |-- ebtables.service -> /lib/systemd/system/ebtables.service
|   |-- irqbalance.service -> /lib/systemd/system/irqbalance.service
|   |-- libvirtd.service -> /lib/systemd/system/libvirtd.service
|   |-- libvirt-guests.service -> /lib/systemd/system/libvirt-guests.service
|   |-- lxcfs.service -> /lib/systemd/system/lxcfs.service

Alternatively, the link location is given by the [Install] section of the .service file. For example, if a user-created re.service contains the following section:

[Install]
WantedBy=multi-user.target

则enable之后,会创建如下链接。

/etc/systemd/system/multi-user.target.wants/re.service → /usr/lib/systemd/system/re.service.

参考样例

假设这样一个需求,需要对服务器进行不断重启,以测试服务器500次重启会不会产生异常。
为此我们编写一个脚本,还没有到500次的时候,就重启服务器。并把脚本做成.service文件。

restart_mission.sh如下:

#!/bin/bash
counter_file="/home/cou.txt"
max_reboot_times=500
reboot_times=0

if [ ! -f "$counter_file" ]; then
        touch $counter_file
        echo 0 > $counter_file
else
        reboot_times=$(cat "$counter_file")
        if (($reboot_times < $max_reboot_times)); then
                nowdate=`date "+%Y-%m-%d %H:%M:%S"`
                echo "$nowdate $reboot_times again reboot"
                let reboot_times=$reboot_times+1
                echo $reboot_times > $counter_file
                wait
                /bin/sleep 30
                /sbin/reboot
        fi
fi

The script keeps the reboot count in a file; the sleep is there to give you a window to stop the script before the next reboot.

re.service如下:

[Unit]
Description=restart_mission

[Service]
Type=oneshot
ExecStart=/root/restart_mission.sh
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

ExecStart指定了执行的脚本。

安装执行

cp re.service /usr/lib/systemd/system/re.service
systemctl enable re.service

如果中途修改了文件,

systemctl daemon-reload

这样系统就会不断重启了。
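To stop the loop before it reaches 500 reboots, use the sleep window mentioned above; a sketch using the names from this example:

systemctl disable re.service    # stop loading the unit at boot
rm -f /home/cou.txt             # optionally reset the reboot counter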

tar, zip

压缩与解压缩

c - 创建一个压缩文件,如果只使用这个参数,不使用 z 参数,那么只会打包,不会压缩
x - 解开一个压缩文件
z - 是否使用 gzip 压缩或解压
j - 是否使用 bzip2 压缩或解压
v - 显示详细信息
f - 指定压缩后的文件名,后面要直接跟文件名,所以将 f 参数放到最后
#打包文档   Create an archive from files:
tar -cf {{target.tar}} {{file1 file2 file3}}

#打包、压缩文档    Create a gzipped archive:
tar -czf {{target.tar.gz}} {{file1 file2 file3}}

#解包文档   Extract an archive in a target directory:
tar -xf {{source.tar}} -C {{directory}}

#解包、解压缩文档   Extract a gzipped archive in the current directory:
tar -xzf {{source.tar.gz}}

#解包、解压缩文档   Extract a bzipped archive in the current directory:
tar -xjf {{source.tar.bz2}}

#根据后缀文件名选择压缩文件 Create a compressed archive, using archive suffix to determine the compression program:
tar -caf {{target.tar.xz}} {{file1 file2 file3}}

#List the contents of a tar file:
tar -tvf {{source.tar}}

#Extract files matching a pattern:
tar -xf {{source.tar}} --wildcards {{"*.html"}}

查看tar包的目录结构

depth=1

tar --exclude="*/*" -tf file.tar

depth=2

tar --exclude="*/*/*" -tf file.tar

解压命令

tar -zxvf xx.tar.gz
tar -jxvf xx.tar.bz2

taskset

CPU亲和力特性,可以让我们在某些核上运行程序。

taskset -c 1-16 -p 6298

#方法1:把CPU隔离出来 isolcpus=0-47

cd /sys/kernel/debug/tracing/
echo function > current_tracer
echo 1 > tracing_on

workload

cat trace

#方法2:使用脚本查出所有的PID,绑到指定核上。

The main background service observed is lsf_daemons.service; run systemctl status lsf_daemons.service to obtain its Main PID.

taskset -cp 48-95 9030

cd /sys/kernel/debug/tracing/
echo function > current_tracer
echo 1 > tracing_on

workload

cat trace

#Another example of method 2: bind PID 9030 to cores 0 and 4.

taskset -cp 0,4 9030

查看绑定结果

top -H -p `pgrep test`
isolcpus=1-2,7-8
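isolcpus (for example the isolcpus=1-2,7-8 shown above) is a kernel boot parameter, so it has to be added to the kernel command line and takes effect after a reboot. A sketch for a grub2-based CentOS system; the CPU list is only an example:

grubby --update-kernel=ALL --args="isolcpus=1-2,7-8"    # append the parameter to every kernel entry
reboot
cat /proc/cmdline                                       # verify the parameter after reboot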

tcp6

为什么 netstat 只显示ipv6的监听, 实际上也可以通过ipv4访问? [1]

For example with frps: only an IPv6 listening port is configured, yet the service can also be reached via an IPv4 address.

tcp6       0      0 :::22     :::*     LISTEN      3281/frps
[1]https://www.chengweiyang.cn/2017/03/05/why-netstat-not-showup-tcp4-socket/
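The behaviour comes from dual-stack sockets: when net.ipv6.bindv6only is 0 (the usual default), a socket listening on :: also accepts IPv4 clients, which appear as IPv4-mapped addresses. A quick check:

sysctl net.ipv6.bindv6only          # 0 = an IPv6 listener also serves IPv4
cat /proc/sys/net/ipv6/bindv6only   # same value via procfs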

tcpdump

tcpdump
tcpdump -i en0  #指定网卡
tcpdump host 182.254.38.55 #本机和主机之间的所有包
tcpdump src host hostname #指定来源
tcpdump dst host hostname #指定目的
tcpdump port 3000   #指定端口
tcpdump ip host 210.27.48.1 and 210.27.48.2 #两个主机之间

tcpdump是linux下著名的抓包手段。是定位网络,协议问题的杀手锏。

tcpdump tcp -i eth1 -t -s 0 -c 100 and dst port ! 22 and src net 192.168.1.0/24 -w ./target.cap
tcpdump -v arp 查看arp包
(1)tcp: ip icmp arp rarp 和 tcp、udp、icmp这些选项等都要放到第一个参数的位置,用来过滤数据报的类型
(2)-i eth1 : 只抓经过接口eth1的包
(3)-t : 不显示时间戳
(4)-s 0 : by default tcpdump captures only the first 68 bytes of each packet; adding -s 0 captures the full packets
(5)-c 100 : 只抓取100个数据包
(6)dst port ! 22 : 不抓取目标端口是22的数据包
(7)src net 192.168.1.0/24 : 数据包的源网络地址为192.168.1.0/24
(8)-w ./target.cap : 保存成cap文件,方便用ethereal(即wireshark)分析
(9)sudo tcpdump -i virbr0 -ent arp : capture ARP packets on virbr0 (-e print the link-level header, -n do not resolve names, -t omit timestamps)
[1]An English reference on tcpdump traffic matching: https://danielmiessler.com/study/tcpdump/

time

time命令可以用来统计程序运行的时间

统计top命令的耗时

time top
ctrl c
real    0m6.396s    #top命令一共运行了6秒
user    0m0.010s    #包含所有子程序用户层cpu耗时
sys     0m0.086s    #包含所有子程序系统层cpu耗时

timedatectl tzselect

ubuntu tzselect设置过程

选择系统所在时区。
有时候服务器在全球各地,虽然安装地点不一样,时区已经自动设置好,但是希望显示的时候按照亚洲时区来显示,好知道什么时间发生了什么事。
查看系统时间,发现是西5区
root@ubuntu:~# date -R
Wed, 13 Feb 2019 03:42:38 -0500
root@ubuntu:~# tzselect
Please identify a location so that time zone rules can be set correctly.
Please select a continent, ocean, "coord", or "TZ".
 1) Africa
 2) Americas
 3) Antarctica
 4) Asia
 5) Atlantic Ocean
 6) Australia
 7) Europe
 8) Indian Ocean
 9) Pacific Ocean
10) coord - I want to use geographical coordinates.
11) TZ - I want to specify the time zone using the Posix TZ format.
#? 4
Please select a country whose clocks agree with yours.
 1) Afghanistan           18) Israel                35) Palestine
 2) Armenia               19) Japan                 36) Philippines
 3) Azerbaijan            20) Jordan                37) Qatar
 4) Bahrain               21) Kazakhstan            38) Russia
 5) Bangladesh            22) Korea (North)         39) Saudi Arabia
 6) Bhutan                23) Korea (South)         40) Singapore
 7) Brunei                24) Kuwait                41) Sri Lanka
 8) Cambodia              25) Kyrgyzstan            42) Syria
 9) China                 26) Laos                  43) Taiwan
10) Cyprus                27) Lebanon               44) Tajikistan
11) East Timor            28) Macau                 45) Thailand
12) Georgia               29) Malaysia              46) Turkmenistan
13) Hong Kong             30) Mongolia              47) United Arab Emirates
14) India                 31) Myanmar (Burma)       48) Uzbekistan
15) Indonesia             32) Nepal                 49) Vietnam
16) Iran                  33) Oman                  50) Yemen
17) Iraq                  34) Pakistan
#? 13

The following information has been given:

        Hong Kong

Therefore TZ='Asia/Hong_Kong' will be used.
Selected time is now:   Wed Feb 13 16:47:13 HKT 2019.
Universal Time is now:  Wed Feb 13 08:47:13 UTC 2019.
Is the above information OK?
1) Yes
2) No
#? 1

You can make this change permanent for yourself by appending the line
        TZ='Asia/Hong_Kong'; export TZ
to the file '.profile' in your home directory; then log out and log in again.

Here is that TZ value again, this time on standard output so that you
can use the /usr/bin/tzselect command in shell scripts:
Asia/Hong_Kong

tzselect by itself does not change the system setting. As the prompt says, to make the change permanent, append the line TZ='Asia/Hong_Kong'; export TZ to ~/.profile; after appending, log out and log back in, or run:

source .profile
root@ubuntu:~# date -R
Wed, 13 Feb 2019 16:52:00 +0800
root@ubuntu:~#

发现已经变成了东8区

RedHat timedatectl 设置过程

选择时区: 参考[官方手册]

timedatectl list-timezones
timedatectl set-timezone Asia/Shanghai
[root@localhost linux]# timedatectl
      Local time: Thu 2019-04-11 16:33:46 CST
  Universal time: Thu 2019-04-11 08:33:46 UTC
        RTC time: Thu 2019-04-11 08:33:47
       Time zone: Asia/Shanghai (CST, +0800)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: n/a
[root@localhost linux]#

Time synchronization with chrony. RedHat 8.0 uses chrony as the NTP client; use the following commands to check the synchronization status.

yum install chrony

systemctl status chronyd    #查看服务
systemctl enable chronyd    #开机启动
systemctl start chronyd     #启动服务
chronyc sourcestats     #查看同步状态
[root@centos ~]# chronyc sourcestats
210 Number of sources = 4
Name/IP Address            NP  NR  Span  Frequency  Freq Skew  Offset  Std Dev
==============================================================================
tock.ntp.infomaniak.ch      1   0     0     +0.000   2000.000     +0ns  4000ms
120.25.115.20               2   0     2     +0.000   2000.000  -10012h  4000ms
ntp5.flashdance.cx          1   0     0     +0.000   2000.000     +0ns  4000ms
stratum2-1.ntp.led01.ru.>   1   0     0     +0.000   2000.000     +0ns  4000ms

Write the local time into the RTC.

After the RTC has been written, the timestamps in /var/log/messages and /var/log/dmesg will be correct after every reboot.

timedatectl set-local-rtc 1

参考教程 https://www.maketecheasier.com/timedatectl-control-system-time-date-linux/

输出时间date

[root@root ~]# date "+%Y-%m-%d"
2013-02-19
[root@root ~]# date "+%H:%M:%S"
13:13:59
[root@root ~]# date "+%Y-%m-%d %H:%M:%S"
2013-02-19 13:14:19
[root@root ~]# date "+%Y_%m_%d %H:%M:%S"
2013_02_19 13:14:58
[root@root ~]# date -d today
Tue Feb 19 13:10:38 CST 2013
[root@root ~]# date -d now
Tue Feb 19 13:10:43 CST 2013
[root@root ~]# date -d tomorrow
Wed Feb 20 13:11:06 CST 2013
[root@root ~]# date -d yesterday
Mon Feb 18 13:11:58 CST 2013

tldr

A reasonable expansion is: too long; didn't read.

查看man文档实在是太长了, 大家都很忙, 哪里有那么多时间看,即使看了,一个命令参数那么多,谁记得住。所以tldr是一个更加简单的、由社区驱动的 man pages。 用它可以直接查看命令的常用例子

me@ubuntu:~$ tldr scp
scp
Secure copy.
Copy files between hosts using Secure Copy Protocol over SSH.

 - Copy a local file to a remote host:
   scp {{path/to/local_file}} {{remote_host}}:{{path/to/remote_file}}

 - Copy a file from a remote host to a local folder:
   scp {{remote_host}}:{{path/to/remote_file}} {{path/to/local_dir}}

 - Recursively copy the contents of a directory from a remote host to a local directory:
   scp -r {{remote_host}}:{{path/to/remote_dir}} {{path/to/local_dir}}

 - Copy a file between two remote hosts transferring through the local host:
   scp -3 {{host1}}:{{path/to/remote_file}} {{host2}}:{{path/to/remote_dir}}

 - Use a specific username when connecting to the remote host:
   scp {{path/to/local_file}} {{remote_username}}@{{remote_host}}:{{path/to/remote_dir}}

 - Use a specific ssh private key for authentication with the remote host:
   scp -i {{~/.ssh/private_key}} {{local_file}} {{remote_host}}:{{/path/remote_file}}

tmux

tmux is a terminal multiplexer and an indispensable tool in the terminal. With tmux one terminal becomes many, so you no longer have to switch between multiple graphical terminals, nor worry about foreground programs dying when a terminal is closed. A session created with tmux is kept until you decide to close it; next time, simply run tmux a to reattach the sessions that are still open.

配置文件

/etc/tmux.conf                  global config file; create it if it does not exist
~/.tmux.conf                    per-user config file; create it if it does not exist
tmux show -g > a.txt            dump the default configuration to a file
tmux source-file ~/.tmux.conf   reload the config file

ctrl + b                        tmux 界面重新加载配置文件
:source-file ~/.tmux.conf

我的配置文件:

set -g default-terminal "screen-256color"
set-option -g allow-rename off
set -g status-right "#{=21:pane_title} #(date \"+%Y-%m-%d %H:%M:%S\")"
setw -g mode-keys vi

禁止tmux重命名标签页

set allow-rename off
set-option -g allow-rename off
set -g status-keys vi
set -g history-limit 10000

交换窗口顺序

3号窗口交换到1号窗口

ctrl + b
:swap-window -s 3 -t 1

当前窗口交换到0号窗口

ctrl + b
:swap-window -t 0

分离session

ctrl b
d

例如有一个session 0包含5个windows:

tmux ls

0: 5 windows (created Tue Mar 26 14:42:20 2019) [171x47]

挂接session

tmux a
tmux attach
tmux a -t 0
tmux attach-session -t 0 #这几个命令等效,挂接session 0,包含5个windows

结束session

ctrl b; :kill-session          #在tmux界面,进入tmux交互,输入kill-session
tmux kill-session -t 会话名    #在shell界面,指定要kill的session

重命名session

# 在tmux界面
ctrl b
$
#在shell界面
tmux rename-session [-t current-name] [new-name]

切换到另一个session

ctrl b
:choose-session
use the Up/Down arrow keys to highlight the target session
enter

新建session:

ctrl b
:new -s sname       #inside tmux (ctrl+b then :), create a new session
tmux new -s sname   #在shell界面新建session

pannel

ctrl+b, "   #水平分割当前panel
ctrl+b, %   #垂直分割当前panel
ctrl+b, ←↑→ #在panel之间切换
ctrl+b, o   #在panel之间切换
ctrl+b, z   #当前panel最大化,或者是恢复panel

复制

使用如下配置文件之后, 按住shift可以实现选中终端内容复制到系统剪贴板。

配置文件

# Make mouse useful in copy mode
setw -g mode-mouse on

# Allow mouse to select which pane to use
set -g mouse-select-pane on

# Allow mouse dragging to resize panes
set -g mouse-resize-pane on

# Allow mouse to select windows
set -g mouse-select-window on

# Allow xterm titles in terminal window, terminal scrolling with scrollbar, and setting overrides of C-Up, C-Down, C-Left, C-Right
# (commented out because it disables cursor navigation in vim)
#set -g terminal-overrides "xterm*:XT:smcup@:rmcup@:kUP5=\eOA:kDN5=\eOB:kLFT5=\eOD:kRIT5=\eOC"

# Scroll History
set -g history-limit 30000

# Set ability to capture on start and restore on exit window data when running an application
setw -g alternate-screen on

# Lower escape timing from 500ms to 50ms for quicker response to scroll-buffer access.
set -s escape-time 50

问题

不要听信网上的谣言使用

set -g mouse on

我用的时候mobaxterm就无法用鼠标复制了

tool

常用windows工具:

fastipscan 扫描在线IP

top , htop

查看当前系统进程

查看进程在哪个核上运行

top
f
P
space
q
htop
F2

可以定制输出列内容
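Besides adding the P (last used CPU) column interactively, the same information can be read non-interactively with ps; a sketch where the process name and PID are only examples:

ps -eo pid,psr,comm | grep stream   # psr = the CPU the process last ran on
ps -o pid,psr,comm -p 9030          # or query a single PID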

tun/tap

创建tun设备

代码在src目录下。

实验: 创建两个net namespace并使他们互通

# 查看已有的net namespace
ip netns list

# 创建两个namespace net0 net1
sudo ip netns add net0
sudo ip netns add net1


#创建veth pair
sudo ip link add type veth

#分别加入到两个namespace当中
sudo ip link set veth0 netns net0
sudo ip link set veth1 netns net1

#分别为两个namespac中的veth添加ip并且up
sudo ip netns exec net0 ip addr add 10.0.1.1/24 dev veth0
sudo ip netns exec net1 ip addr add 10.0.1.2/24 dev veth1
sudo ip netns exec net0 ip link set veth0 up
sudo ip netns exec net1 ip link set veth1 up

测试:

sudo ip netns exec net0 ping -c 3 10.0.1.2

实验: 创建交换机使两个net namespace互通

#连接创建三个namespace
ip netns add net0
ip netns add net1
ip netns add bridge

#net0 连接到bridge
ip link add type veth
ip link set dev veth0 name net0-bridge netns net0
ip link set dev veth1 name bridge-net0 netns bridge

#net1 connects to the bridge
ip link add type veth
ip link set dev veth0 name net1-bridge netns net1
ip link set dev veth1 name bridge-net1 netns bridge

#创建交换机,添加net0和net1的veth接口
ip netns exec bridge brctl addbr br
ip netns exec bridge ip link set dev br up
ip netns exec bridge ip link set dev bridge-net0 up
ip netns exec bridge ip link set dev bridge-net1 up
ip netns exec bridge brctl addif br bridge-net0
ip netns exec bridge brctl addif br bridge-net1

#虚拟网卡配置IP
ip netns exec net0 ip link set dev net0-bridge up
ip netns exec net0 ip address add 10.0.1.1/24 dev net0-bridge
ip netns exec net1 ip link set dev net1-bridge up
ip netns exec net1 ip address add 10.0.1.2/24 dev net1-bridge

测试: 在net0中ping net1

ip netns exec net0 ping -c 3 10.0.1.2

实验: 创建tuntap设备 [2]

ip link add br0 type bridge
ip tuntap add dev tap0 mode tap # create a tap device
ip tuntap add dev tun0 mode tun # create a tun device
ip tuntap del dev tap0 mode tap # delete the tap device
ip tuntap del dev tun0 mode tun # delete the tun device

ip link add br0 type bridge
ip netns add netns1
ip link add type veth

ip link set eth0 master br0
ip link set veth1 master br0
ip link set veth0 netns netns1
<interface type='bridge'>
  <mac address='52:54:00:38:06:f9'/>
  <source bridge='br0'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>

veth

veth是虚拟网卡, 成对出现,从其中一个网卡发出的数据包, 会直接出现在另一张网卡上, 即使这两张网卡在不同的 Network namespace当中。

使用veth的常用技术有: kvm, docker

现在按照下图进行验证

+----------------------------------------------------------+
| +---------------------+     +--------------------+       |
| |  ubuntu1 namespace  |     | ubuntu2 namespace  |       |
| |                     |     |                    |       |
| |                     |     |                    |       |
| |     172.17.0.2/16   |     |   172.17.0.3/16    |       |
| |      +---------+    |     |   +---------+      |       |
| |      |  eth0   |    |     |   |  eth0   |      |       |
| +------+----+----+----+     +---+----+----+------+       |
|             |                        |                   |
|    +-+------+-------+---------+------+-------+-+         |
|    | | .vethd52f2b1 |         | .vethbb7c5c5 | |         |
|    | +--------------+         +--------------+ |         |
|    |           docker0 172.17.0.1/16           |         |
|    +-------------------+-----------------------+         |
|                        | NAT                             |
|                   +----+------+        host namespace    |
|    192.168.1.231  |   eth0    |                          |
+-------------------+-----------+--------------------------+

在host上运行一个ubuntu容器。

运行中的ubuntu容器
[user1@centos86 ~]$ docker run -it ubuntu /bin/bash
[user1@centos86 ~]$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
54f923fd141c        ubuntu              "/bin/bash"         17 hours ago        Up 25 minutes                           vigilant_banach
docker-host-veth
[user1@centos86 ~]$ brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.02428a77c48c       no              vethd52f2b1
The veth endpoint inside the ubuntu container: eth0@if20
root@54f923fd141c:/# ip a
19: eth0@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
    valid_lft forever preferred_lft forever

Run a second ubuntu container, and another veth interface, vethbb7c5c5, appears under the docker0 bridge.

docker网桥下的两个veth接口
[user1@centos86 ~]$ brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.02428a77c48c       no              vethbb7c5c5
                                                        vethd52f2b1

One remaining question: how do the docker0 bridge and the host's physical NIC communicate? The answer is NAT. Check the host's iptables with iptables -t nat -L: every packet leaving 172.17.0.0/16 is masqueraded, so outgoing traffic carries the host address 192.168.1.231.

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  172.17.0.0/16        anywhere

ovs

参考资料

tun tap veth 解释出处
https://www.fir3net.com/Networking/Terms-and-Concepts/virtual-networking-devices-tun-tap-and-veth-pairs-explained.html
The roles of tun, tap, macvlan and macvtap
  • tun是一个三层设备, 通过/dev/tunX 收发IP数据包
  • tap是一个二层设备, 通过/dev/tap 收发二层数据包,可以与物理网卡bridge
  • macvlan 实现一个网卡绑定多个mac地址,进而对应多个IP
  • macvtap is an improvement on macvlan: macvlan hands the data to the kernel network stack, while macvtap hands it to a tapX character device

https://blog.kghost.info/2013/03/27/linux-network-tun/ https://blog.kghost.info/2013/03/01/linux-network-emulator/

创建tun设备的示例程序

https://blog.csdn.net/sld880311/article/details/77854651

https://www.lijiaocn.com/%E6%8A%80%E5%B7%A7/2017/03/31/linux-net-devices.html#tun%E8%AE%BE%E5%A4%87%E5%88%9B%E5%BB%BA

[1]创建一个tun设备的代码 https://github.com/LyleLee/GoodCommand/tree/master/source/src/virtual_net

tun tap 和交换机的配置

[2]https://developers.redhat.com/blog/2018/10/22/introduction-to-linux-interfaces-for-virtual-networking/

问题记录

centos没有tunctl rpm包

Solution: install it from the Fedora repository. In practice the ip tuntap command can be used instead.

sudo dnf install https://rpmfind.net/linux/fedora/linux/releases/30/Everything/aarch64/os/Packages/t/tunctl-1.5-20.fc30.aarch64.rpm

ufw

ubuntu的默认防火墙 参考ubuntu官网
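A few commonly used ufw commands as a sketch; the port is only an example:

ufw status verbose   # show current state and rules
ufw allow 22/tcp     # allow SSH before enabling, so you do not lock yourself out
ufw enable           # turn the firewall on
ufw disable          # turn it off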

update-alternatives

Switch between multiple installed versions of a program on Linux. We often need several versions of gcc or python, plus test tools such as fio, on one system; some problems only show up with a specific version, and switching versions by hand is inconvenient. update-alternatives handles this, e.g.:

update-alternatives --config python
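A sketch of registering two versions and switching between them; the python paths and priorities are examples and should match what is actually installed:

update-alternatives --install /usr/bin/python python /usr/bin/python2.7 1
update-alternatives --install /usr/bin/python python /usr/bin/python3.6 2
update-alternatives --config python     # interactively pick the active version
update-alternatives --display python    # list the registered alternatives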

UPnP

路由器界面上的介绍:开启 UPnP (Universal Plug and Play,通用即插即用)功能后,局域网中的计算机可以请求路由器自动进行端口转换。这样,互联网上的计算机就能在需要时访问局域网计算机上的资源(如 MSN Messenger 或迅雷、BT、PPTV 等支持 UPnP 协议的应用程序),让您在观看在线视频或使用多点下载等方面的软件时,享受更加稳定的网络。
我使用UPnP来请求路由器为程序映射指定端口。

The tool used is miniupnpc

yum install miniupnpc

使用时注意关闭防火墙,否则可能会出现upnpc被防火墙拦截。例如:

Jul 29 16:11:18 hadoop00 kernel: FINAL_REJECT: IN=enp125s0f0 OUT= MAC=f4:79:60:92:7c:82:c8:c2:fa:40:c1:e6:08:00 SRC=192.168.100.1 DST=192.168.100.12 LEN=420 TOS=0x00
 PREC=0x00 TTL=64 ID=56876 DF PROTO=UDP SPT=1900 DPT=41797 LEN=400
解决办法:
方法一: 禁用防火墙
systemctl stop firewalld

方法二:添加一条规则,主机192.168.100.12接收网关192.168.100.1发送UDP数据。

firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='192.168.100.1' destination address='192.168.100.12' protocol value='udp' log prefix='upnpc' level='warning' accept"

添加一条转发规则:

upnpc -a 192.168.1.2 22 3333 TCP    #来自互联网的TCP链接会被转发到192.168.1.2的22号端口上。
upnpc -d 3333 TCP                   #删除这条规则
upnpc -l                            #显示网关路由器
upnpc -u http://192.168.100.1:37215/aceb1e42-5f94-3a9d-c107-53e4485f6b1a/upnpdev.xml -l #query the specified gateway and list its port mappings

被防火墙拦截的现象

[root@hadoop00 ~]# upnpc -l
upnpc : miniupnpc library test client, version 2.0.
 (c) 2005-2016 Thomas Bernard.
Go to http://miniupnp.free.fr/ or http://miniupnp.tuxfamily.org/
for more information.
No IGD UPnP Device found on the network !
[root@hadoop00 ~]#
[root@hadoop00 ~]#
[root@hadoop00 ~]#
[root@hadoop00 ~]# systemctl stop firewalld
[root@hadoop00 ~]# upnpc -l
upnpc : miniupnpc library test client, version 2.0.
 (c) 2005-2016 Thomas Bernard.
Go to http://miniupnp.free.fr/ or http://miniupnp.tuxfamily.org/
for more information.
List of UPNP devices found on the network :
 desc: http://192.168.100.1:37215/aceb1e42-5f94-3a9d-c107-53e4485f6b1a/upnpdev.xml
 st: urn:schemas-upnp-org:device:InternetGatewayDevice:1

Found valid IGD : http://192.168.100.1:37215/ctrlu/1baf2dc8-0d18-6d80-6050-bb858de4d14c/WANIPConn_1
Local LAN ip address : 192.168.100.12
Connection Type : IP_Routed
Status : Connected, uptime=974546s, LastConnectionError :
  Time started : Thu Jul 18 03:13:37 2019
MaxBitRateDown : 100000000 bps (100.0 Mbps)   MaxBitRateUp 100000000 bps (100.0 Mbps)
ExternalIPAddress = 124.127.117.242
 i protocol exPort->inAddr:inPort description remoteHost leaseTime
GetGenericPortMappingEntry() returned 713 (SpecifiedArrayIndexInvalid)

valgrind

I was unable to install this debugging tool in my environment.

vdbench

Oracle维护的一个磁盘IO性能工具,用于产生磁盘IO 负载测试磁盘性能和数据完整性。

使用前准备

下载解压即可使用。一般不需要编译,如果运行环境存在,可以直接运行。当在ARM服务器上执行时会遇到一些问题,这里介绍如何解决。

下载地址: https://www.oracle.com/downloads/server-storage/vdbench-source-downloads.html

测试运行环境:

#给脚本赋予运行权限
chmod +x vdbench
#执行测试
./vdbench -t
me@ubuntu:~/vdbench50407$ ./vdbench -t
-bash: ./vdbench: /bin/csh: bad interpreter: No such file or directory

The csh-not-found error appears because the vdbench launcher script in vdbench 5.04.05 is a C shell script. There are two solutions:

方法1: 安装csh

sudo apt install csh
方法2: 使用最新版本 5.04.07 【建议】
可以看到最新的5.04.07使用的是bash script。
me@ubuntu:~/vdbench50407$ file vdbench
vdbench: Bourne-Again shell script, ASCII text executable

使用vdbench 5.04.05

出现java版本检测不合法的问题。

me@ubuntu:~/vdbench504$ ./vdbench -t


Vdbench distribution: vdbench504
For documentation, see 'vdbench.pdf'.

*
*
*
* Minimum required Java version for Vdbench is 1.5.0;
* You are currently running 10.0.2
* Vdbench terminated.
*
*
*

CTRL-C requested. vdbench terminating

使用vdbench 5.04.07

在vdbench 5.04.07上没有出现java版本报错的问题。查看源码,已经移除java版本检测checkJavaVersion();。移除原因作者未说明,详细请参考版本发布说明。

  // Removed as per 50407 because of java 1.10.x
  //checkJavaVersion();

  //....

   private static void checkJavaVersion()
{
  if (common.get_debug(common.USE_ANY_JAVA))
    return;
  if (!JVMCheck.isJREValid(System.getProperty("java.version"), 1, 7, 0))
  {
    System.out.print("*\n*\n*\n");
    System.out.println("* Minimum required Java version for Vdbench is 1.7.0; \n" +
                       "* You are currently running " + System.getProperty("java.version") +
                       "\n* Vdbench terminated.");
    System.out.println("*\n*\n*\n");

    System.exit(-99);
  }
}

版本发布说明oracle vdbench 50407rc29

50407rc29

The check to make sure you are running java 1.7 or higher has been removed.

vdbench在ARM服务器上出现共享库aarch64.so问题

在ARM服务器上,会出现共享库不匹配的问题。

me@ubuntu:~$ ./vdbench -t


Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Vdbench distribution: vdbench50407 Tue June 05  9:49:29 MDT 2018
For documentation, see 'vdbench.pdf'.

15:11:24.571 Created output directory '/home/me/output'
15:11:24.720 input argument scanned: '-f/tmp/parmfile'
15:11:24.870 Starting slave: /home/me/vdbench SlaveJvm -m localhost -n localhost-10-190124-15.11.24.528 -l localhost-0 -p 5570
15:11:24.892
15:11:24.893 File /home/me/linux/aarch64.so does not exist.
15:11:24.893 This may be an OS that a shared library currently
15:11:24.893 is not available for. You may have to do your own compile.
15:11:24.893 t: java.lang.UnsatisfiedLinkError: Can't load library: /home/me/linux/aarch64.so
15:11:24.893
15:11:24.894 Loading of shared library /home/me/linux/aarch64.so failed.
15:11:24.894 There may be issues related to a cpu type not being
15:11:24.894 acceptable to Vdbench, e.g. MAC PPC vs. X86
15:11:24.894 Contact me at the Oracle Vdbench Forum for support.
15:11:24.894
15:11:25.397
15:11:25.397 Failure loading shared library
15:11:25.398
java.lang.RuntimeException: Failure loading shared library
        at Vdb.common.failure(common.java:350)
        at Vdb.common.get_shared_lib(common.java:1103)
        at Vdb.Native.<clinit>(Native.java:31)
        at Vdb.common.signal_caller(common.java:737)
        at Vdb.ConnectSlaves.connectToSlaves(ConnectSlaves.java:98)
        at Vdb.Vdbmain.masterRun(Vdbmain.java:814)
        at Vdb.Vdbmain.main(Vdbmain.java:628)

The reason is that the shared library shipped under linux/ in the vdbench directory (linux64.so) is built for x86; an aarch64 shared library has to be compiled.

me@ubuntu:~$ file linux/linux64.so
linux/linux64.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=34a31f32956f21153c372a95e73c02e84ddd29f8, not stripped

ARM版本的aarch64.so编译

下载,解压源码包: 下载地址 需要同意license

unzip vdbench50407.src.zip

进入src创建linux目录

cd src/
mkdir linux

Enter the Jni directory and edit make.linux. The main changes are:
  • set vdb to the path of the extracted source (src) directory
  • set java to the JDK path, usually found under /usr/lib/jvm/
  • remove the -m32 and -m64 options

cd Jni/
vim make.linux

参考如下修改方法

diff --git a/Jni/make.linux b/Jni/make.linux
index 45ed232..024a153 100755
--- a/Jni/make.linux
+++ b/Jni/make.linux
@@ -34,16 +34,16 @@



-vdb=$mine/vdbench504
-java=/net/sbm-240a.us.oracle.com/export/swat/swat_java/linux/jdk1.5.0_22/
+vdb=/home/user1/open_software/vdbench/src
+java=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.232.b09-0.el7_7.aarch64/
 jni=$vdb/Jni

 echo target directory: $vdb



-INCLUDES32="-w -m32 -DLINUX -I$java/include -I/$java/include/linux -I/usr/include/ -fPIC"
-INCLUDES64="-w -m64 -DLINUX -I$java/include -I/$java/include/linux -I/usr/include/ -fPIC"
+INCLUDES32="-w -DLINUX -I$java/include -I/$java/include/linux -I/usr/include/ -fPIC"
+INCLUDES64="-w -DLINUX -I$java/include -I/$java/include/linux -I/usr/include/ -fPIC"


 cd /tmp
@@ -62,7 +62,7 @@ gcc ${INCLUDES32} -c $jni/chmod.c
 echo Linking 32 bit
 echo

-gcc  -o   $vdb/linux/linux32.so vdbjni.o vdblinux.o vdb_dv.o vdb.o chmod.o -lm -shared  -m32 -lrt
+gcc  -o   $vdb/linux/linux32.so vdbjni.o vdblinux.o vdb_dv.o vdb.o chmod.o -lm -shared -lrt

 chmod 777 $vdb/linux/linux32.so

@@ -82,7 +82,7 @@ gcc ${INCLUDES64} -c $jni/chmod.c
 echo Linking 64 bit
 echo

-gcc  -o   $vdb/linux/linux64.so vdbjni.o vdblinux.o vdb_dv.o vdb.o chmod.o -lm -shared -m64 -lrt
+gcc  -o   $vdb/linux/linux64.so vdbjni.o vdblinux.o vdb_dv.o vdb.o chmod.o -lm -shared -lrt

 chmod 777 $vdb/linux/linux64.so 2>/dev/null

执行make.linux,会在src/linux/下生成linux32.so和linux64.so文件,这里我们只需要使用到64位的文件。重命名linux64.so并复制到二进制包(注意不是源码包)的linux/目录下即可。

me@ubuntu:~/vdbench50407src/src/Jni$ ./make.linux
target directory: /home/me/vdbench50407src/src/
Compiling 32 bit
Linking 32 bit

Compiling 64 bit
Linking 64 bit

cp linux64.so aarch64.so
cp aarch64.so ~/vdbench50407/linux/

执行测试

me@ubuntufio:~/vdbench50407$ ./vdbench -t


Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Vdbench distribution: vdbench50407 Tue June 05  9:49:29 MDT 2018
For documentation, see 'vdbench.pdf'.

16:46:11.641 input argument scanned: '-f/tmp/parmfile'
16:46:11.922 Starting slave: /home/me/vdbench50407/vdbench SlaveJvm -m localhost -n localhost-10-190218-16.46.11.421 -l localhost-0 -p 5570
16:46:12.662 All slaves are now connected
16:46:14.003 Starting RD=rd1; I/O rate: 100; elapsed=5; For loops: None

Feb 18, 2019    interval        i/o   MB/sec   bytes   read     resp     read    write     read    write     resp  queue  cpu%  cpu%
                               rate  1024**2     i/o    pct     time     resp     resp      max      max   stddev  depth sys+u   sys
16:46:15.102           1       76.0     0.07    1024  52.63    0.011    0.008    0.014     0.02     0.04    0.006    0.0  23.4   5.6
16:46:16.021           2      109.0     0.11    1024  53.21    0.011    0.010    0.013     0.07     0.03    0.007    0.0  10.2   2.0
16:46:17.012           3      112.0     0.11    1024  50.00    0.036    0.010    0.063     0.02     2.57    0.242    0.0   6.5   1.0
16:46:18.013           4      105.0     0.10    1024  50.48    0.012    0.009    0.015     0.02     0.04    0.006    0.0   4.0   1.0
16:46:19.027           5      126.0     0.12    1024  50.00    0.013    0.010    0.016     0.03     0.04    0.006    0.0   5.0   0.0
16:46:19.060     avg_2-5      113.0     0.11    1024  50.88    0.018    0.010    0.027     0.07     2.57    0.120    0.0   6.4   1.0
16:46:20.050 Vdbench execution completed successfully. Output directory: /home/me/vdbench50407/output
详细测试

The sections of the parameter (configuration) file:

  • General
  • Host Definition(HD)
  • Replay Group(RG)
  • Storage Definition(SD)
  • Workload Definition(WD)
  • Run Definition(RD)

These must appear in the order above. A "run" is one Workload Definition (WD) executed by a Run Definition (RD).

Master和Slave, Vdbench以一个或者多个JVM运行。由用户运行的JVM是master,负责解析参数和报告。Slave可以运行在本机,也可以在远程主机执行。
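A minimal parameter file and run, as a sketch only — /dev/sdb and the workload numbers are placeholders, and writing to a raw device destroys its data:

cat > /tmp/parmfile <<'EOF'
sd=sd1,lun=/dev/sdb,openflags=o_direct
wd=wd1,sd=sd1,xfersize=4k,rdpct=70
rd=rd1,wd=wd1,iorate=max,elapsed=60,interval=1
EOF
./vdbench -f /tmp/parmfile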

裸机单盘性能

vim

编辑工具常用功能

:f      #显示当前文件路径
:set number     #显示行号
:set ff=unix    #更改文件为unix格式
:set invlist    #显示所有不可见字符,set invlist可以关闭 另外cat -A file也可以看到
:wq
:s/vivian/sky/  #替换当前行第一个 vivian 为 sky
:s/vivian/sky/g #替换当前行所有 vivian 为 sky
:noh
5yy             #copy 5 lines starting at the cursor line
:y10            #复制以下十行
:10y            #复制第10行
:p              #黏贴复制内容
10dd            #cut 10 lines starting at the cursor (normal mode)

:[range]s/源字符串/目标字符串/[option]   #替换命令
:%s/ListNode/ConstructNode/gc            #ListNode→ConstructNode
:%s#/home/sjt/ch/arm#"${od}"#gc          #替换包含路径的字符串,使用#符号隔开参数和字符串,例子把路径替换成了变量
:s/line/lines/g                          #表示将光标所在当前行的line全局替换为lines
:2,3s/line/lines/g                       #表示将2~3行的line全局替换为lines
:%s= *$==                                #表示全局替换行尾的一个或多个空格

shift+*         #搜索当前光标所在单词

列操作

删除列

1.光标定位到要操作的地方。
2.CTRL+v 进入“可视 块”模式,选取这一列操作多少行。选中的字符就是要删除的字符
3.d 删除。

插入列

Inserting a column works almost the same way, with one small difference. For example, to insert "() " at the start of every line:
1.光标定位到要操作的地方。
2.CTRL+v 进入“可视 块”模式,选取这一列操作多少行。
3.SHIFT+i(I) 输入要插入的内容。
4.ESC 按两次,会在每行的选定的区域出现插入的内容。

vmtouch

Preload files into the page cache (memory buffer).

vmtouch -tv files

缓存cephfs文件

for i in {1..25}; do
    ./vmtouch -tv /mnt/cephfs/vdb.1_$i.dir
done



for i in {26..50}; do
    ./vmtouch -tv /mnt/cephfs/vdb.1_$i.dir
done




for i in {51..75}; do
    ./vmtouch -tv /mnt/cephfs/vdb.1_$i.dir
done



for i in {76..100}; do
    ./vmtouch -tv /mnt/cephfs/vdb.1_$i.dir
done

VNC

方案一: tigervnc

安装

yum install tigervnc

启动

vncserver   #默认会启动一个进程运行在5901端口,服务一个窗口.客户端使用IP:1 或者ip 5901登录

登录

vncviewer  192.168.100.12:1
# or
mobaxterm 192.168.100.12 5901

VNC can also be started on a specific display number:

vncserver :8    #启用一个进程运行在5908端口

关闭vnc进程

vncserver -kill :8

查看vnc

vncserver -list

设置VNC密码

vncpasswd

wget

wget是一个linux系统普遍提供的一个下载工具。常用于在命令行下载文件。

使用方法:

wget url
wget https://www.cs.virginia.edu/stream/FTP/Code/stream.c

使用代理: 处于内网环境,需要使用代理:

wget https://www.cs.virginia.edu/stream/FTP/Code/stream.c -e "https_proxy=https://用户名:密码@代理服务器:端口" --no-check-certificate

例如:用户名是sam,密码是pc_123,代理服务器是10.10.98.1,端口是8080

wget https://www.cs.virginia.edu/stream/FTP/Code/stream.c -e "https_proxy=https://sam:pc_123@10.10.98.1:8080" --no-check-certificate
When the user's password contains special characters, they must be percent-encoded: for example the password tom@7642 should be written as tom%407642; see Wikipedia for the full encoding table.
The proxy server can also be a domain name, e.g. proxy.tunnel.com.

小技巧

wget不支持socks5代理, 考虑使用tsocks
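A sketch of wrapping wget with tsocks; the SOCKS5 server address and port are assumptions and go into /etc/tsocks.conf:

# /etc/tsocks.conf (example values)
#   server = 10.10.98.1
#   server_port = 1080
#   server_type = 5
tsocks wget https://www.cs.virginia.edu/stream/FTP/Code/stream.c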

wipefs

Erase filesystem, RAID, or partition-table signatures from a device.
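Typical usage, as a sketch; the device name is an example and -a is destructive:

wipefs /dev/sdb      # list the filesystem/RAID/partition-table signatures found
wipefs -a /dev/sdb   # erase all signatures so the disk appears blank again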

wkhtmltopdf

转化网页为pdf,支持书签,运行脚本等。

wkhtmltopdf.exe --debug-javascript --javascript-delay 2000 --run-script "document.getElementsByClassName('rst-footer-buttons')[0].innerHTML = ''" https://compare-intel-kunpeng.readthedocs.io/zh_CN/latest/ compare.pdf

wlan

扫描网络

iwlist scan

指定wlan接口扫描

iwlist wlan0 scanning

过滤显示SSID

iwlist scan | grep ESSID

注意扫描太频繁可能不成功,原因不明,有可能是wlan标准规定。

pi@raspberrypi:~ $ iwlist wlan0 scan | grep ESSID
                    ESSID:"hzh_zfj"
                    ESSID:"ChinaNet-pFRh"
                    ESSID:"303"
                    ESSID:"TP-LINK_9A09"
                    ESSID:"Xiaomi_DEA2"
                    ESSID:""
                    ESSID:"ChinaNet-4r5y"
                    ESSID:"Tamgm"
                    ESSID:"TP-LINK_FA7F"
                    ESSID:"809"
                    ESSID:"Xiaoxiaobai02"
pi@raspberrypi:~ $

Connect to a wireless network

wpa_passphrase <SSID> [密码]
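wpa_passphrase only prints a network block containing the hashed PSK; it still has to be written into the wpa_supplicant configuration. A sketch for Raspberry Pi OS, where the SSID comes from the scan above and the password and config path are assumptions:

wpa_passphrase "hzh_zfj" "my_wifi_password" >> /etc/wpa_supplicant/wpa_supplicant.conf
wpa_cli -i wlan0 reconfigure    # ask wpa_supplicant to reload its configuration
iwconfig wlan0                  # confirm that the interface has associated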

xmrig

[root@ceph-test ~]# top
top - 09:51:07 up 96 days, 18:41,  5 users,  load average: 33.12, 33.05, 32.70
Tasks: 502 total,   1 running, 501 sleeping,   0 stopped,   0 zombie
%Cpu(s): 80.6 us,  0.2 sy,  0.0 ni, 19.2 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 13145172+total, 12133246+free,  6652620 used,  3466644 buff/cache
KiB Swap:  4194300 total,  4194300 free,        0 used. 12382584+avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
19448 root      20   0 7414624  19448   4052 S  3202  0.0  46611,49 xmrig
12776 polkitd   20   0  619916  16748   5368 S  11.3  0.0   9696:38 polkitd
12805 dbus      20   0   71044   4492   1940 S   7.9  0.0   7093:01 dbus-daemon
12771 root      20   0  396456   4432   3292 S   7.6  0.0   6953:54 accounts-daemon
16853 root      20   0  456840   3812   2876 S   3.0  0.0   2515:04 gsd-account
16832 root      20   0  648660  32244   9184 S   0.7  0.0 207:15.28 gsd-color
   95 root      20   0       0      0      0 S   0.3  0.0   0:52.62 ksoftirqd/17
30552 root      20   0  162408   2768   1632 R   0.3  0.0   0:00.73 top
    1 root      20   0  196624   9832   4208 S   0.0  0.0   9:27.29 systemd
    2 root      20   0       0      0      0 S   0.0  0.0   0:02.33 kthreadd
    3 root      20   0       0      0      0 S   0.0  0.0   0:01.32 ksoftirqd/0
    5 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/0:0H
    8 root      rt   0       0      0      0 S   0.0  0.0   0:00.62 migration/0
    9 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcu_bh
   10 root      20   0       0      0      0 S   0.0  0.0  77:39.05 rcu_sched
   11 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 lru-add-drain
   12 root      rt   0       0      0      0 S   0.0  0.0   0:14.34 watchdog/0
   13 root      rt   0       0      0      0 S   0.0  0.0   0:13.84 watchdog/1

The xmrig process consuming 3202% CPU is suspected to be a cryptocurrency-mining program.
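To confirm the suspicion and clean the machine up, a sketch using the PID from the top output above:

ls -l /proc/19448/exe    # locate the binary behind the process
ls -l /proc/19448/cwd    # and its working directory
crontab -l               # check for persistence via cron
systemctl list-timers    # or via systemd timers
kill -9 19448            # stop the process once identified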

knowledge

ARM 汇编

简单aarch64汇编编程介绍 [1]

data段

static char  c = 'a';
static short s = 12;
static int  i = 345;
static long l = 6789;
.section instruction (to announce DATA section)
label definition (marks a spot in RAM)
.byte instruction (1 byte)
.short instruction (2 bytes)
.word instruction (4 bytes)
.quad instruction (8 bytes)

adr指令

C语言
static int length = 1;
static int width = 2;
static int perim = 0;
int main()
{
    perim =(length + width) * 2;
    return 0;
}
汇编
.section .data
length: .word 1
width: .word 2
perim: .word 0
.section .text
.global main
main:
adr x0, length
ldr w1, [x0]
adr x0, width
ldr w2, [x0]
add w1, w1, w2
lsl w1, w1, 1
adr x0, perim
str w1, [x0]
mov w0, 0
ret
.data: read-write
.rodata: read-only
.bss: read-write, initialized to zero
.text: read-only, program code
Stack and heap work differently!
[1]https://www.cs.princeton.edu/courses/archive/spring19/cos217/lectures/13_Assembly1.pdf

ARM 和 x86中的编码区别

  1. char在X86上默认是有符号数,在ARM上默认是无符号数

A plain char is signed char on x86 but unsigned char on ARM64. When porting x86 code to ARM, char may need to be forced to signed with the compile option "-fsigned-char".

Result on ARM: the output treats -10 as an unsigned value

[me@centos86 ~]$ gcc defaut_char_type.c
[me@centos transplant]$ ./a.out
246:f6

在X86上的运行结果是,输出把-10当成了有符号数

[me@centos86 ~]$ ./a.out
-10:fffffff6

在ARM上使用-fsigned-char把char当成有符号数处理,两者结果一致。

[me@centos transplant]$ gcc default_char_type.c -fsigned-char
[me@centos transplant]$ ./a.out
-10:fffffff6

-10的原码是: 1000 1010

-10的反码是: 1111 0101 求负数的反码,符号位不变,其他位取反

-10的补码是: 1111 0110 (f6) 补码等于反码加1

ARM 资料

在线资料汇总,【请查看】

The ARM Architecture Reference Manual (ARMv8, for the ARMv8-A architecture profile) is the most complete ARM instruction reference and learning resource; its 5242 pages cover everything. The trick when studying it is that you do not need to read it end to end — knowing how to look things up is enough. The manual's parts are ordered alphabetically:

  • Part A是架构的概述,让读者对架构以及文档的内容有所基本的了解,
  • Part B讲应用层的程序员模型与内存模型,在这里你可以了解到ARM的寄存器,字节序,缓存以及内存对齐的重要知识,
  • Part C是AArch64的指令集了,讲了AArch64的指令集格式与分类,
  • Part D是AArch64的系统级架构,对于应用层的开发人员与研究人员来说,这里的内容只需要有所了解即可,
  • Part E是32位ARM的应用层架构,对于一般的开发人员来说,这一章是蛮重要的,
  • Part F是32位的ARM指令集格式讲解与分类,同样是相当重要的,
  • Part G是32位ARM的系统级架构,对于应用层的开发人员来说,只需要了解即可。

然后后面的部分可以根据自己的需求看或是不看。

如果32位与64位都想学的话,Part A,Part B,Part C,Part E,PartF都是必须要看的,如果重点关注指令集,则是Part C与Part F。

作者:知乎用户 链接:https://www.zhihu.com/question/23893796/answer/164481040 来源:知乎 著作权归作者所有。商业转载请联系作者获得授权,非商业转载请注明出处。

ARM 虚拟化优化思路 https://www.cs.columbia.edu/~nieh/pubs/isca2016_armvirt.pdf

ARM float point

一篇简介ARM浮点运算的文章 https://embeddedartistry.com/blog/2017/10/9/r1q7pksku2q3gww9rpqef0dnskphtc

arm_neon.h

arm neon 寄存器介绍

On an aarch64 device each CPU core has 32 NEON (SIMD&FP) registers. Depending on the access width they are named Bn, Hn, Sn, Dn, Qn, where n = 0..31.

 127                                 64 63             32 31         16 15  8 7    0
+--------------------------------------+-----------------+-------------+-----+-----+
|                                      |                 |             |     |     |
+----------------------------------------------------------------------------------+
|                                      |                 |             |     |     |
|                                      |                 |             |     |     |
|                                      |                 |             |     +--Bn-+
|                                      |                 |             |           |
|                                      |                 |             +----Hn-----+
|                                      |                 |                         |
|                                      |                 +----------Sn-------------+
|                                      |                                           |
|                                      +-----------------+Dn-----------------------+
|                                                                                  |
+-----------------------------------Qn---------------------------------------------+

在一些资料中提到128位的neon寄存器是16个,根据最新的Arm® Architecture Reference Manual [1] C1-175页,实际上在ARMv8中是32个。

Table C1-3 shows the qualified names for accessing scalar SIMD and floating-point registers. The letter n denotes a register number between 0 and 31.

Table C1-3 SIMD and floating-point scalar register names 浮点neon寄存器

Size Name
8 bits Bn
16 bits Hn
32 bits Sn
64 bits Dn
128 bits Qn

Table C1-4 SIMD vector register names 向量neon寄存器

Shape Name
8 bits × 8 lanes Vn.8B
8 bits × 16 lanes Vn.16B
16 bits × 4 lanes Vn.4H
16 bits × 8 lanes Vn.8H
32 bits × 2 lanes Vn.2S

Their roles are shown in the table below: D0-D7 are argument and result registers, D8-D15 are callee-saved registers, and D16-D31 are caller-saved registers.

NEON Programmers Guide [2]

--D0-D7 Argument registers and return register. If the subroutine does not have arguments or return values, then the value in these registers might be uninitialized.
--D8-D15 callee-saved registers.
--D16-D31 caller-saved registers

ARM registers compare

简单对比ARM寄存器和Neon指令

armv7-a neon指令
V{<mod>}<op>{<shape>}{<cond>}{.<dt>}{<dest>}, src1, src2
armv8-a AArch64 NEON instruction syntax
{<prefix>}<op>{<suffix>} Vd.<T>, Vn.<T>, Vm.<T>

这里通过一些代码来了解neon寄存器的使用方法,主要是调用GCC的内置实现。

立即数复制到neon寄存器 vmovq_n_u8

This intrinsic broadcasts an 8-bit value (the low 8 bits of a general-purpose register) into every lane of a 128-bit NEON register, so the register ends up holding 16 copies of the uint8 value.

#include <stdio.h>
#include "arm_neon.h"

void print_uint8x16(uint8x16_t *a, int n)
{
	uint8_t *p = (uint8_t *)a;
	int i;
	for(i = 0; i < n; i++)	{
		printf("%02d ", *(p+i));
	} 
	printf("\n");
}

int main()
{
	uint8x16_t three = vmovq_n_u8(3);
	print_uint8x16(&three, 16);

	return 0;
}

执行结果:

[user1@centos build]$ ./vmovq_n_u8.out
03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03

对应的反汇编是:

 1  0000000000400850 <main>:
 2  400850:       a9be7bfd        stp     x29, x30, [sp,#-32]!
 3  400854:       910003fd        mov     x29, sp
 4  400858:       910043a2        add     x2, x29, #0x10
 5  40085c:       aa0203e0        mov     x0, x2
 6  400860:       52800063        mov     w3, #0x3                        // #3
 7  400864:       52800201        mov     w1, #0x10                       // #16
 8  400868:       4e010c60        dup     v0.16b, w3
 9  40086c:       4c007040        st1     {v0.16b}, [x2]
10  400870:       9400005c        bl      4009e0 <print_uint8x16>
11  400874:       a8c27bfd        ldp     x29, x30, [sp],#32
12  400878:       d65f03c0        ret
13  40087c:       00000000        .inst   0x00000000 ; undefined
  • Line 6: mov loads the immediate 3 into the 32-bit register w3.
  • Line 8: dup broadcasts the value of w3 into NEON register v0 as 16 8-bit lanes.
  • Line 9: st1 stores the vector register's contents to memory.

注解

ST1指令可以查看 Arm® Architecture Reference Manual [1] C7 2084页

ST1 (single structure) Store a single-element structure from one lane of one register. This instruction stores the specified element of a SIMD&FP register to memory.

Disassembly on armv7 — note that the instructions used start with a v prefix

00010608 <main>:
10608:       e52de004        push    {lr}            ; (str lr, [sp, #-4]!)
1060c:       f2c00e53        vmov.i8 q8, #3  ; 0x03
10610:       e24dd014        sub     sp, sp, #20
10614:       e3a01010        mov     r1, #16
10618:       e28d0010        add     r0, sp, #16
1061c:       ed600b04        vstmdb  r0!, {d16-d17}
10620:       eb00004c        bl      10758 <print_uint8x16>
10624:       e3a00000        mov     r0, #0
10628:       e28dd014        add     sp, sp, #20
1062c:       e49df004        pop     {pc}            ; (ldr pc, [sp], #4)

内存数据加载到neon寄存器vld1q_u8

ARM: Neon Intrinsics Reference [3] 中的定义

uint8x16_t vld1q_u8 (uint8_t const * ptr)
    Load multiple single-element structures to one, two, three, or four registers
A64 Instruction: LD1 {Vt.16B},[Xn]    Argument Preparation: ptr → Xn    Results: Vt.16B → result

GCC-4.4.1:ARM NEON Intrinsics [4] 中的定的

uint8x16_t vld1q_u8 (const uint8_t *)
Form of expected instruction(s): vld1.8 {d0, d1}, [r0]

注解

The two definitions clearly differ. Note that in newer GCC versions the GCC manual simply points the NEON intrinsics documentation at ARM's own reference, so you can consult the ARM Neon Intrinsics Reference [3] directly.

Consider the following code:

#include <stdio.h>
#include "arm_neon.h"

void print_uint8x16(uint8x16_t *a, int n)
{
	uint8_t *p = (uint8_t *)a;
	int i;
	for(i = 0; i < n; i++)
	{
		printf("%02d ", *(p+i));
	}
	printf("\n");
}

int main()
{
	uint8_t data[16] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16};
	uint8x16_t A = vld1q_u8(data);   //copy data to matrix A
	print_uint8x16(&A, 16);
	return 0;
}

The disassembly is:

0000000000400850 <main>:
  400850:       a9bd7bfd        stp     x29, x30, [sp,#-48]!
  400854:       910003fd        mov     x29, sp
  400858:       100001c0        adr     x0, 400890 <main+0x40>
  40085c:       910083a2        add     x2, x29, #0x20
  400860:       4c407000        ld1     {v0.16b}, [x0]
  400864:       910043a3        add     x3, x29, #0x10
  400868:       aa0203e0        mov     x0, x2
  40086c:       52800201        mov     w1, #0x10                       // #16
  400870:       4c007060        st1     {v0.16b}, [x3]
  400874:       4c007040        st1     {v0.16b}, [x2]
  400878:       94000062        bl      400a00 <print_uint8x16>
  40087c:       a8c37bfd        ldp     x29, x30, [sp],#48
  400880:       d65f03c0        ret
  400884:       d503201f        nop
  400888:       d503201f        nop
  40088c:       d503201f        nop
  400890:       04030201        .word   0x04030201
  400894:       08070605        .word   0x08070605
  400898:       0c0b0a09        .word   0x0c0b0a09
  40089c:       100f0e0d        .word   0x100f0e0d
  • Data is loaded from memory into NEON register v0 in a single instruction: ld1     {v0.16b}, [x0]

Without the -O3 option the result looks quite different; only the first 20 lines are shown here, see vld1q_u8汇编 for the complete listing.

0000000000400a18 <main>:
  400a18:       a9bc7bfd        stp     x29, x30, [sp,#-64]!
  400a1c:       910003fd        mov     x29, sp
  400a20:       9100a3a0        add     x0, x29, #0x28
  400a24:       52800021        mov     w1, #0x1                        // #1
  400a28:       39000001        strb    w1, [x0]
  400a2c:       9100a3a0        add     x0, x29, #0x28
  400a30:       52800041        mov     w1, #0x2                        // #2
  400a34:       39000401        strb    w1, [x0,#1]
  400a38:       9100a3a0        add     x0, x29, #0x28
  400a3c:       52800061        mov     w1, #0x3                        // #3
  400a40:       39000801        strb    w1, [x0,#2]
  400a44:       9100a3a0        add     x0, x29, #0x28
  400a48:       52800081        mov     w1, #0x4                        // #4
  400a4c:       39000c01        strb    w1, [x0,#3]
  400a50:       9100a3a0        add     x0, x29, #0x28
  400a54:       528000a1        mov     w1, #0x5                        // #5
  400a58:       39001001        strb    w1, [x0,#4]
  400a5c:       9100a3a0        add     x0, x29, #0x28
  400a60:       528000c1        mov     w1, #0x6                        // #6

The difference: ld1     {v0.16b}, [x0] loads all the data with a single instruction, whereas the unoptimized code needs 16 separate operations, copying one uint8 at a time.

On armv7 the disassembly loads the data with vld1.8  {d16-d17}, [ip :64], while on armv8 it is ld1     {v0.16b}, [x0]:

00010608 <main>:
10608:       e3003818        movw    r3, #2072       ; 0x818
1060c:       e3403001        movt    r3, #1
10610:       e52de004        push    {lr}            ; (str lr, [sp, #-4]!)
10614:       e24dd024        sub     sp, sp, #36     ; 0x24
10618:       e893000f        ldm     r3, {r0, r1, r2, r3}
1061c:       e28dc010        add     ip, sp, #16
10620:       e88c000f        stm     ip, {r0, r1, r2, r3}
10624:       e1a0000d        mov     r0, sp
10628:       f46c0a1f        vld1.8  {d16-d17}, [ip :64]
1062c:       e3a01010        mov     r1, #16
10630:       f44d0adf        vst1.64 {d16-d17}, [sp :64]
10634:       eb00004c        bl      1076c <print_uint8x16>
10638:       e3a00000        mov     r0, #0
1063c:       e28dd024        add     sp, sp, #36     ; 0x24
10640:       e49df004        pop     {pc}            ; (ldr pc, [sp], #4)

Adding two matrices: vaddq_u8

Definition of vaddq_u8 in ARM: Neon Intrinsics Reference [3]:

uint8x16_t vaddq_u8 (uint8x16_t a, uint8x16_t b)
A64 Instruction Argument Preparation Results
ADD Vd.16B,Vn.16B,Vm.16B a → Vn.16B b → Vm.16B Vd.16B → result

The following code, adapted from NEON Hello world [5], adds matrices A and B to produce C:

#include <stdio.h>
#include "arm_neon.h"

void print_uint8x16(uint8x16_t *a, int n)
{
	uint8_t *p = (uint8_t *)a;
	int i;
	for(i = 0; i < n; i++)
	{
		printf("%02d ", *(p+i));
	} 
	printf("\n");
}

int main()
{
	uint8_t data[16] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16};

	uint8x16_t A = vld1q_u8(data);   //copy data to matrix A
	uint8x16_t B = vmovq_n_u8(3);    //prepare matrix B, every element with 3 
	uint8x16_t C = vaddq_u8(A, B);   //C = A ⊕ B

	print_uint8x16(&A, 16);
	print_uint8x16(&B, 16);
	print_uint8x16(&C, 16);
	return 0;
}

Execution result; the element-wise addition works as expected:

[user1@centos build]$ ./matrix_add_number.out
01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16
03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03
04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19
[user1@centos build]$

Disassembly of the executable:

Disassembly of section .text:

0000000000400850 <main>:
400850:       a9ba7bfd        stp     x29, x30, [sp,#-96]!
400854:       910003fd        mov     x29, sp
400858:       10000340        adr     x0, 4008c0 <main+0x70>
40085c:       52800063        mov     w3, #0x3                        // #3
400860:       4c407000        ld1     {v0.16b}, [x0]
400864:       a90153f3        stp     x19, x20, [sp,#16]
400868:       9100c3a2        add     x2, x29, #0x30
40086c:       4e010c61        dup     v1.16b, w3
400870:       910083a3        add     x3, x29, #0x20
400874:       aa0203e0        mov     x0, x2
400878:       4e218402        add     v2.16b, v0.16b, v1.16b
40087c:       910103b4        add     x20, x29, #0x40
400880:       910143b3        add     x19, x29, #0x50
400884:       4c007060        st1     {v0.16b}, [x3]
400888:       52800201        mov     w1, #0x10                       // #16
40088c:       4c007281        st1     {v1.16b}, [x20]
400890:       4c007262        st1     {v2.16b}, [x19]
400894:       4c007040        st1     {v0.16b}, [x2]
400898:       94000066        bl      400a30 <print_uint8x16>
40089c:       aa1403e0        mov     x0, x20
4008a0:       52800201        mov     w1, #0x10                       // #16
4008a4:       94000063        bl      400a30 <print_uint8x16>
4008a8:       aa1303e0        mov     x0, x19
4008ac:       52800201        mov     w1, #0x10                       // #16
4008b0:       94000060        bl      400a30 <print_uint8x16>
4008b4:       a94153f3        ldp     x19, x20, [sp,#16]
4008b8:       a8c67bfd        ldp     x29, x30, [sp],#96
4008bc:       d65f03c0        ret
4008c0:       04030201        .word   0x04030201
4008c4:       08070605        .word   0x08070605
4008c8:       0c0b0a09        .word   0x0c0b0a09
4008cc:       100f0e0d        .word   0x100f0e0d
  • Matrix A is in NEON register v0: ld1 {v0.16b}, [x0]
  • Matrix B is in NEON register v1: dup v1.16b, w3
  • Matrix C is in NEON register v2: add v2.16b, v0.16b, v1.16b

Note

The NEON add instruction is documented in the Arm® Architecture Reference Manual [1], section C7.2.2, page 1377.

The armv7 disassembly uses the armv7 forms of the instructions: vmov.i8, vld1.8, vadd.i8, vst1.64, and so on.

00010608 <main>:
10608:       e3003848        movw    r3, #2120       ; 0x848
1060c:       e3403001        movt    r3, #1
10610:       e52de004        push    {lr}            ; (str lr, [sp, #-4]!)
10614:       e24dd044        sub     sp, sp, #68     ; 0x44
10618:       e893000f        ldm     r3, {r0, r1, r2, r3}
1061c:       e28dc030        add     ip, sp, #48     ; 0x30
10620:       f2c00e53        vmov.i8 q8, #3  ; 0x03
10624:       e88c000f        stm     ip, {r0, r1, r2, r3}
10628:       e1a0000d        mov     r0, sp
1062c:       f46c2a1f        vld1.8  {d18-d19}, [ip :64]
10630:       e3a01010        mov     r1, #16
10634:       edcd0b04        vstr    d16, [sp, #16]
10638:       edcd1b06        vstr    d17, [sp, #24]
1063c:       f24208e0        vadd.i8 q8, q9, q8
10640:       f44d2adf        vst1.64 {d18-d19}, [sp :64]
10644:       edcd0b08        vstr    d16, [sp, #32]
10648:       edcd1b0a        vstr    d17, [sp, #40]  ; 0x28
1064c:       eb000052        bl      1079c <print_uint8x16>
10650:       e3a01010        mov     r1, #16
10654:       e08d0001        add     r0, sp, r1
10658:       eb00004f        bl      1079c <print_uint8x16>
1065c:       e28d0020        add     r0, sp, #32
10660:       e3a01010        mov     r1, #16
10664:       eb00004c        bl      1079c <print_uint8x16>
10668:       e3a00000        mov     r0, #0
1066c:       e28dd044        add     sp, sp, #68     ; 0x44
10670:       e49df004        pop     {pc}            ; (ldr pc, [sp], #4)

ARM opensource software

A record of open-source software usable on ARM servers.

Software   Version  Status      How obtained        Notes
PF_RING    6.0.2    Unverified                      Depends on Intel NICs; verified on x86, where PF_RING can send packets to an ARM server. Not yet verified on ARM.
TCPreplay  4.2.6-1  Supported   distro repository
spaCy      2.1.8    Supported   pip                 Python-based natural language processing

https://github.com/ntop/PF_RING/releases


Storage-related software in the ARM ecosystem

No.  Software   Status       Version   How obtained
1    Ceph       Enabled      12.2.8    [github] [distro repository]
2    NFS        Enabled      1.3.4     [distro repository]
3    HDFS       Enabled      java      [official site]
4    fio        Enabled      3.11      [distro repository]
5    vdbench    Enabled      5.04.07   [official site]
6    GridFS     Enabled      MongoDB   [official site]
7    MooseFS    Not enabled  3.0.103   [github] [official repository]
8    LizardFS   Not enabled  3.13.0    [official site] [github] [official repository]
9    fastDFS    Enabled      5.11      [github]
10   lustre     Not enabled  2.12.0    [official site]
11   TFS        Not enabled  2.2.13    [official site] [github]
12   MogileFS   Not enabled  2.73      [github]
13   GFS        Not enabled  (closed source)
14   glusterfs  Enabled                [repository]

Amazon Dynamo: a key/value distributed storage system.

Official Ubuntu does not support ceph-fuse:

me@ubuntufio:~$ rmadison -S ceph-fuse
 ceph-fuse | 0.79-0ubuntu1                 | trusty                  | amd64
 ceph-fuse | 0.80.11-0ubuntu1.14.04.3      | trusty-security         | amd64
 ceph-fuse | 0.80.11-0ubuntu1.14.04.4      | trusty-updates          | amd64
 ceph-fuse | 10.1.2-0ubuntu1               | xenial                  | amd64
 ceph-fuse | 10.2.11-0ubuntu0.16.04.1      | xenial-updates          | amd64
 ceph-fuse | 12.2.4-0ubuntu1               | bionic/universe         | amd64
 ceph-fuse | 12.2.4-0ubuntu1.1build1       | cosmic/universe         | amd64
 ceph-fuse | 12.2.8-0ubuntu0.18.04.1       | bionic-updates/universe | amd64
 ceph-fuse | 13.2.1+dfsg1-0ubuntu2.18.10.1 | cosmic-updates/universe | amd64
 ceph-fuse | 13.2.4+dfsg1-0ubuntu1         | disco/universe          | amd64
me@ubuntufio:~$
The other Ceph components are supported by official Ubuntu and are in the archive.
ceph-fuse has to be downloaded from download.ceph.com.

The official vdbench release ships its Java JNI libraries without an aarch64 build, so they must be compiled by hand.

How to build and use vdbench

# variables and link commands adjusted inside vdbench's make.linux script
vdb=/home/me/vdbench50407src/src/
java=/usr/lib/jvm/java-11-openjdk-arm64/
jni=$vdb/Jni

INCLUDES32="-w -DLINUX -I$java/include -I$java/include/linux -I/usr/include/ -fPIC"
INCLUDES64="-w -DLINUX -I$java/include -I$java/include/linux -I/usr/include/ -fPIC"

# link the compiled objects into the shared JNI libraries
gcc  -o   $vdb/linux/linux32.so vdbjni.o vdblinux.o vdb_dv.o vdb.o chmod.o -lm -shared -lrt
gcc  -o   $vdb/linux/linux64.so vdbjni.o vdblinux.o vdb_dv.o vdb.o chmod.o -lm -shared -lrt

Run make.linux; it produces linux32.so and linux64.so under src/linux/. Only the 64-bit library is needed here: rename linux64.so and copy it into the linux/ directory of the binary package (note: the binary package, not the source package).

MooseFS

MooseFS is hosted at https://github.com/moosefs/moosefs
The vendor also offers a paid Pro edition at https://moosefs.com/

No ARM build appears to be provided:

Name            Last modified       Size
binary-amd64/   24-Nov-2018 03:41   -
binary-i386/    24-Nov-2018 03:41   -

LizardFS

LizardFS started as a fork of MooseFS. It officially supports CentOS, Debian and Ubuntu; the official site offers x86 rpm and deb packages, and there is an official repository at http://packages.lizardfs.com/yum/el7/

fastDFS

Builds successfully.

GridFS

Built on MongoDB: deploy MongoDB, install a MongoDB driver, then call its API for storage.

MogileFS

Perl-based; development on GitHub has essentially stopped.

TFS

The GitHub repository is archived and no longer updated. Only x86 installation guides exist.

Other notes

GridFS                          stores and retrieves files larger than 16 MB (the BSON document limit), e.g. images, audio, video
MogileFS                        suited to huge numbers of small files
Ceph                            a PB-scale distributed file system for Linux
MooseFS                         general-purpose and simple; suits teams with limited development capacity
Taobao Filesystem               suited to huge numbers of small files
GlusterFS                       suited to single large files
Google Filesystem               GFS + MapReduce, good at single large files
Hadoop Distributed Filesystem   an open-source re-implementation of GFS plus MapReduce, good at single large files

atomic add

tars2node implements its atomic operations with x86 inline assembly; on aarch64 the fix is to apply a patch that replaces the x86 asm with the GCC __sync_ builtins:

cd tars2node
git am 0001-replace-atomic-x86-asm-code-with-gcc-builtin-__sync_.patch
cd build
cmake ../
make

https://github.com/tars-node/tars2node.git

Reference: https://zhuanlan.zhihu.com/p/32303037

Related issues: https://stackoverflow.com/questions/32470969/segfault-in-libc-upon-running-statically-linked-application https://www.cnblogs.com/silentNight/p/5685629.html
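
As a rough illustration of the idea (not the actual tars2node patch; the counter and thread count below are made up for the demo), __sync_fetch_and_add gives an atomic add that GCC lowers to the proper LDXR/STXR (or LSE) sequence on aarch64, so no x86 assembly is needed:

#include <stdio.h>
#include <pthread.h>

static long counter = 0;

static void *worker(void *arg)
{
	(void)arg;
	for (int i = 0; i < 100000; i++)
		__sync_fetch_and_add(&counter, 1);   /* portable replacement for "lock xadd"-style x86 asm */
	return NULL;
}

int main(void)
{
	pthread_t t[4];

	for (int i = 0; i < 4; i++)
		pthread_create(&t[i], NULL, worker, NULL);
	for (int i = 0; i < 4; i++)
		pthread_join(t[i], NULL);

	printf("counter = %ld\n", counter);          /* prints 400000: every increment was atomic */
	return 0;
}

Build with gcc -pthread; the same source compiles unchanged on x86 and aarch64.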

C skill certification

Refactoring analysis

  1. Coding standards

    1. General coding standards
    2. Secure coding standards
  2. Code smells inside a module:

    1. Data smells
    2. Function smells
    3. Comment smells
  3. Code smells in components/services:

    1. Data smells
    2. Function smells
    3. Comment smells
    4. Class smells
    5. Structural-relationship smells
  4. System-level smells

    1. System structural-relationship smells
    2. Over-engineering smells
  5. Refactoring principles: SOLID

    1. Single responsibility principle
    2. Liskov substitution principle
    3. Law of Demeter
    4. Open/closed principle
    5. Dependency inversion principle
    6. Interface segregation principle
  6. Refactoring techniques inside a module:

    1. Reorganize data
    2. Reorganize functions
    3. Simplify conditional expressions
    4. Simplify function calls
  7. Refactoring techniques for component/service code

    1. Reorganize data
    2. Reorganize functions
    3. Simplify conditional expressions
    4. Simplify function calls

  8. System refactoring methods

    1. Large-scale refactoring
    2. Model-driven refactoring
  9. Huawei product architecture design principles: the ACT15 cube

  10. Designing the best solution under constraints such as limited hardware resources, differing hardware forms and differing operating systems

  11. Evaluating the results of a system refactoring

  12. White-box testing: design principles for the refactoring safety net

cgroup

cgroup (control group) [1] provides a mechanism to limit the resources an application can use.

Limiting which CPUs a process uses with cgroup

The script cgroup_test.sh is used for the test.

Before any restriction is applied, the program runs as shown below. The first column is the PID and the second column is the CPU number (counted from 1 in this view). The script's main process 64793 runs on CPU 24, and the sleep spawned by the script runs on CPU 62. Every invocation of sleep creates a new process, so watching for a while shows processes migrating between CPUs quite frequently.

64793  24 user1      20   0  110M  1404  1184 S  0.0  0.0  0:00.30 │  │        └─ bash GoodCommand/source/script/cgroup_test.sh
69648  62 user1      20   0  105M   352   280 S  0.0  0.0  0:00.00 │  │           └─ sleep 1

Steps to apply the restriction; note that cpuset CPU numbers start from 0:

cd /sys/fs/cgroup/cpuset
mkdir Charlie && cd Charlie
echo 2-3 > cpuset.cpus
echo 1 > cpuset.mems
echo $$ > tasks       # add the current shell's PID to the Charlie cgroup
cat tasks
bash /home/user1/GoodCommand/source/script/cgroup_test.sh

The program is now pinned to the CPUs shown as 3-4 in this view, i.e. the CPUs 2-3 configured above (this tool numbers CPUs from 1).

72077   4 root       20   0  114M  4120  1804 S  0.0  0.0  0:00.31 │              └─ bash
73899   3 root       20   0  110M  1380  1184 S  0.0  0.0  0:00.03 │                 └─ bash /home/user1/GoodCommand/source/script/cgroup_test.sh
74364   3 root       20   0  105M   356   280 S  0.0  0.0  0:00.00 │                    └─ sleep 1
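
A quick way to double-check the restriction from inside the confined shell is to read the CPU affinity mask, which the cpuset controller narrows down. This is a minimal sketch (not part of cgroup_test.sh) that prints the CPUs the calling process may run on; run inside the Charlie cgroup above it should print "allowed CPUs: 2 3":

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	if (sched_getaffinity(0, sizeof(set), &set) != 0) {   /* pid 0 = the calling process */
		perror("sched_getaffinity");
		return 1;
	}

	printf("allowed CPUs:");
	for (int cpu = 0; cpu < CPU_SETSIZE; cpu++)
		if (CPU_ISSET(cpu, &set))
			printf(" %d", cpu);
	printf("\n");
	return 0;
}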

Limiting a program's memory usage with cgroup

The test program Mem-limits.c is used here:

/*
 * Original program from https://sysadmincasts.com/episodes/14-introduction-to-linux-control-groups-cgroups
 * modified here to add an endless loop so the process does not exit
 */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>     /* for sleep() */

int main(void) {

    int i;
    char *p;

    // intro message
    printf("Starting ...\n");

    // loop 50 times, try and consume 50 MB of memory
    for (i = 0; i < 50; ++i) {

        // failure to allocate memory?
        if ((p = malloc(1<<20)) == NULL) {
            printf("Malloc failed at %d MB\n", i);
            return 0;
        }

        // take memory and tell user where we are at
        memset(p, 0, (1<<20));
        printf("Allocated %d to %d MB\n", i, i+1);

    }

    // exit message and return
    printf("Done!\n");

    while (1) {
        sleep(1);
    }
    return 0;

}

Before any limit is applied, it occupies about 50 MB of memory:

PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
9455 user1     20   0   55616  51824    392 S   0.0  0.0   0:00.04 mem-limit.out

Create cgroup test2 and limit its memory usage to 5 MB:

mkdir /sys/fs/cgroup/memory/test2
lscgroup | grep test2
echo 5242880 > /sys/fs/cgroup/memory/test2/memory.limit_in_bytes
echo 5242880 > /sys/fs/cgroup/memory/test2/memory.memsw.limit_in_bytes

Because of the memory limit the program can no longer obtain memory and is killed:

[root@intel6248 src]# cgexec -g memory:/test2 ./mem-limit.out
Starting ...
Allocated 0 to 1 MB
Allocated 1 to 2 MB
Allocated 2 to 3 MB
Allocated 3 to 4 MB
Killed

Warning

memory.memsw.limit_in_bytes caps memory plus swap. If it is not set, the program simply starts using swap once it reaches the memory limit.

Limiting a program's I/O rate with cgroup

The example here comes from sysadmincasts [2].

Prepare a test file; this creates a 1 MB × 3000 = 3 GB file in the current directory:

dd if=/dev/zero of=file-abc bs=1M count=3000

Measure the speed with no limit:

echo 3 > /proc/sys/vm/drop_caches   # drop the page cache before testing
dd if=file-abc of=/dev/null

Reading the 3 GB file runs at 105 MB/s:

[root@intel6248 user1]# dd if=file-abc of=/dev/null
6144000+0 records in
6144000+0 records out
3145728000 bytes (3.1 GB) copied, 29.891 s, 105 MB/s

Create a cgroup and limit the read rate to 5 MiB/s = 5 × 1024 × 1024 = 5242880 B/s:

mkdir /sys/fs/cgroup/blkio/test1
lscgroup | grep test1               # check that the cgroup was created
lsblk                               # find the disk's major:minor device numbers, here 8:0
echo "8:0 5242880" > /sys/fs/cgroup/blkio/test1/blkio.throttle.read_bps_device

Reading the 3 GB file now runs at roughly 5 MB/s and takes about 10 minutes:

[root@intel6248 user1]# cgexec -g blkio:/test1 dd if=file-abc of=/dev/null
6144000+0 records in
6144000+0 records out
3145728000 bytes (3.1 GB) copied, 600.566 s, 5.2 MB/s

iotop confirms that the read rate really does stay around 5 MB/s:

Total DISK READ :       5.20 M/s | Total DISK WRITE :      79.79 K/s
Actual DISK READ:       5.20 M/s | Actual DISK WRITE:      81.52 K/s
TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
4961 be/4 root        5.20 M/s    0.00 B/s  0.00 % 99.99 % dd if=file-abc of=/dev/null

cgroups in docker

docker also uses cgroups to confine containers.

Pass CPU and memory parameters when creating the container, for example restricting it to CPUs 4 and 5 (numbered from 0) and to 10 MB of memory:

docker run -itd --name docker_cgroup_restrict --rm --cpuset-cpus 4,5 -m 10m ubuntu

Query the container's cgroup settings; note cpuset.cpus: 4-5 and memory.limit_in_bytes: 10485760:

lscgroup | grep docker | grep 5a1e18586f7e

[user1@intel6248 ~]$ cgget -r cpuset.cpus -r  memory.limit_in_bytes  /docker/5a1e18586f7e995c3c02d644eda75e7682118bf16339e0405ba4451fc02d8691
/docker/5a1e18586f7e995c3c02d644eda75e7682118bf16339e0405ba4451fc02d8691:
cpuset.cpus: 4-5
memory.limit_in_bytes: 10485760

Or display all parameters of a controller:

cgget -g cpuset:/docker/5a1e18586f7e995c3c02d644eda75e7682118bf16339e0405ba4451fc02d8691
cgget -g memory:/docker/5a1e18586f7e995c3c02d644eda75e7682118bf16339e0405ba4451fc02d8691

A process started inside the container runs on the specified cores:

22702  23 root       20   0  105M  9188  2772 S  0.0  0.0  0:02.17 │  ├─ containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/5a1e18586f7e995c3c02d644eda75e7682118
25094   5 root       20   0 18496  1428  1200 S  0.0  0.0  0:00.07 │  │  ├─ /bin/bash
25967   5 root       20   0 18364  1604  1320 S  0.0  0.0  0:01.48 │  │  │  └+ bash nothing.sh

Memory allocations cannot keep growing past the limit; the process is killed (it gets slightly past 10 MB here because swap was not limited, see the warning above):

root@5a1e18586f7e:~/user1# ./mem-limit.out
Starting ...
Allocated 0 to 1 MB
Allocated 1 to 2 MB
Allocated 2 to 3 MB
Allocated 3 to 4 MB
Allocated 4 to 5 MB
Allocated 5 to 6 MB
Allocated 6 to 7 MB
Allocated 7 to 8 MB
Allocated 8 to 9 MB
Allocated 9 to 10 MB
Allocated 10 to 11 MB
Allocated 11 to 12 MB
Allocated 12 to 13 MB
Killed

Chip comparison: Hi1616, Hi1620, Intel

The Hi1616 and Hi1620 are ARM processors.

Item        Hi1616                   Hi1620
Cores       32 cores @ 2.4 GHz       64 cores @ 2.6 GHz (variants 3226 4826 6426 6430)
Memory      4× DDR4-2400 channels    8× DDR4-3200 channels
Memory BW   up to 96 Gb/s            up to 240 Gb/s
I/O         PCIe 3.0                 PCIe 4.0
Package     57.5mm × 57.5mm BGA      60mm × 75mm BGA
Process     16 nm                    7 nm
Power       75 W                     100-200 W
                  Hi1620                    Hi1616                  Hi1612            Hi1610
Announced         2018                      2017                    2016              2015
Cores             24 to 64                  32                      32                16
Architecture      Ares                      Cortex-A72              Cortex-A57        Cortex-A57
Frequency (GHz)   2.4 / 2.6 / 3.0           2.4                     2.1               2.1
L1                64KB I + 64KB D           48KB I + 32KB D         48KB I + 32KB D   48KB I + 32KB D
L2                512KB private             1MB / 4 cores           1MB / 4 cores     1MB / 4 cores
L3                1MB/core shared           32MB CCN                32 CCN            16MB CCN
Memory            8× DDR4 3200MHz           4× DDR4 2400MHz         4× DDR4 2133MHz   4× DDR4 1866MHz
Interconnect      up to 4S, 240Gbps/port    up to 2S, 96Gbps/port   ?                 ?
IO                40× PCIe4.0, 2×100GE      46× PCIe3.0, 8×10GE     16× PCIe3.0       16× PCIe3.0
Process           TSMC 7nm                  TSMC 16nm               TSMC 16nm         TSMC 16nm
Power             100 to 200W               85W                     ?                 ?
Comparable Intel  6148                      2650 v4                 ?                 ?

Intel cpu

Intel CPU model   ARM equivalent
8800  
4800  
4600  
6148 1620
2690 6146  
2680  
2660  
2650 1616
2640  
2630  
2620  
2603  

Compiling the Taishan onboard NIC and SAS drivers

How do you build just the NIC driver? Here the onboard NIC driver hns3 and the SAS driver of a Taishan 200 (Kunpeng 920) serve as examples.

Get the kernel source

git clone --depth=1 https://github.com/torvalds/linux.git

--depth=1 speeds up the clone: it fetches only the most recent commit with all files, not the full history.

Build inside the source tree

Inside the kernel source tree (mine is at /home/user1/linux), cd in and run:

make olddefconfig && make prepare                       # generate the config
make -C . M=drivers/net/ethernet/hisilicon/hns3 modules # build the .ko modules
make -C . M=drivers/net/ethernet/hisilicon/hns3 clean   # remove the built modules

-C . points at the kernel source directory; M= points at the module directory.

Build output

Building modules, stage 2.
MODPOST 4 modules
CC [M]  drivers/net/ethernet/hisilicon/hns3/hnae3.mod.o
LD [M]  drivers/net/ethernet/hisilicon/hns3/hnae3.ko
CC [M]  drivers/net/ethernet/hisilicon/hns3/hns3.mod.o
LD [M]  drivers/net/ethernet/hisilicon/hns3/hns3.ko
CC [M]  drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge.mod.o
LD [M]  drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge.ko
CC [M]  drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf.mod.o
LD [M]  drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf.ko

Build from outside the source tree

make -C ../../linux/ M=$(pwd) modules
make -C ../../linux/ M=$(pwd) clean
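
The same two commands also build a module that lives completely outside the kernel tree. A minimal sketch (a hypothetical hello module, not the hns3 driver): put the following hello.c in a directory together with a one-line Kbuild/Makefile containing "obj-m := hello.o", then run make -C /path/to/linux M=$(pwd) modules to get hello.ko.

// hello.c -- minimal out-of-tree module used only as an illustration
#include <linux/module.h>
#include <linux/init.h>

static int __init hello_init(void)
{
	pr_info("hello: module loaded\n");
	return 0;
}

static void __exit hello_exit(void)
{
	pr_info("hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("minimal out-of-tree module example");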

Troubleshooting

Problem: asm/errno.h: No such file or directory
[user1@centos linux]$ make -C . M=drivers/net/ethernet/hisilicon/hns3 modules
make: Entering directory `/home/user1/linux'
  CC [M]  drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.o
In file included from ./include/linux/errno.h:5:0,
                 from ./include/linux/acpi.h:11,
                 from drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c:4:
./include/uapi/linux/errno.h:1:23: fatal error: asm/errno.h: No such file or directory
 #include <asm/errno.h>
                       ^
compilation terminated.
make[3]: *** [drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.o] Error 1
make[2]: *** [drivers/net/ethernet/hisilicon/hns3/hns3pf] Error 2
make[1]: *** [drivers/net/ethernet/hisilicon/hns3] Error 2
make: *** [sub-make] Error 2
make: Leaving directory `/home/user1/linux'

Fix:

make olddefconfig && make prepare

Problem: ERROR: Kernel configuration is invalid

[user1@centos linux]$ make -C . M=drivers/scsi/hisi_sas modules
make: Entering directory '/home/user1/linux'

  ERROR: Kernel configuration is invalid.
         include/generated/autoconf.h or include/config/auto.conf are missing.
         Run 'make oldconfig && make prepare' on kernel src to fix it.

Makefile:613: include/config/auto.conf: No such file or directory
make: *** [Makefile:685: include/config/auto.conf] Error 1
make: Leaving directory '/home/user1/linux'

Fix:

make olddefconfig && make prepare

Problem: scripts/genksyms/genksyms: No such file or directory
[user1@centos linux-4.18.0-80.7.2.el8_0]$ make -C . M=drivers/scsi/hisi_sas modules
make: Entering directory '/home/user1/open_software/kernel-src-4.18/linux-4.18.0-80.7.2.el8_0'
  CC [M]  drivers/scsi/hisi_sas/hisi_sas_main.o
/bin/sh: scripts/genksyms/genksyms: No such file or directory
make[1]: *** [scripts/Makefile.build:322: drivers/scsi/hisi_sas/hisi_sas_main.o] Error 1
make: *** [Makefile:1528: _module_drivers/scsi/hisi_sas] Error 2
make: Leaving directory '/home/user1/open_software/kernel-src-4.18/linux-4.18.0-80.7.2.el8_0'

Fix:

make olddefconfig && make prepare scripts

Compiling kernel modules

The steps are identical to "Compiling the Taishan onboard NIC and SAS drivers" above.

container_of

Obtain a pointer to the enclosing structure from a pointer to one of its members.

Defined in include/linux/kernel.h:

#define container_of(ptr, type, member) ({          \
    const typeof( ((type *)0)->member ) *__mptr = (ptr);    \
    (type *)( (char *)__mptr - offsetof(type,member) );})

Breakdown:

offsetof(type,member)                       # offset of member within the structure
typeof( ((type *)0)->member )               # type of member, used to declare a const pointer
(char *)__mptr                              # pointer to the member (it equals ptr, held via a temporary const pointer)
(char *)__mptr - offsetof(type,member)      # start address of the enclosing structure

For a breakdown of offsetof see offsetof.md.

Example code: the start address of sample1 can be recovered from its mem2 member:

#include  <stdio.h>

#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)

#define container_of(ptr, type, member) ({         \
    const typeof( ((type *)0)->member ) *__mptr = (ptr); \
    (type *)( (char *)__mptr - offsetof(type,member) );})

int main(void)
{
    struct sample {
        int mem1;
        char mem2;
    };

    struct sample sample1;

    printf("Address of Structure sample1 (Normal Method) = %p\n", &sample1);

    printf("Address of Structure sample1 (container_of Method) = %p\n",
                            container_of(&sample1.mem2, struct sample, mem2));

    return 0;
}

Why define a temporary variable at all? The simplified version below produces the same result. The difference is type checking: with the typeof-based temporary the compiler warns when ptr is not a pointer to the member's type, whereas the simplified macro silently accepts any pointer (see the sketch after the simplified version).

#include  <stdio.h>

#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)

#define container_of(ptr, type, member) ({      \
        (type *)( (char *)ptr - offsetof(type, member) ); \
})

int main(void)
{
    struct sample {
        int mem1;
        char mem2;
    };

    struct sample sample1;

    printf("Address of Structure sample1 (Normal Method) = %p\n", &sample1);

    printf("Address of Structure sample1 (container_of Method) = %p\n",
                            container_of(&sample1.mem2, struct sample, mem2));

    return 0;
}
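
A contrived sketch of the difference (the name container_of_unchecked below is made up for the comparison): with the typeof-based temporary, passing a pointer of the wrong type makes gcc warn about an incompatible pointer type, while the simplified macro compiles silently and hides the mistake.

#include <stdio.h>
#include <stddef.h>

#define container_of(ptr, type, member) ({         \
    const typeof( ((type *)0)->member ) *__mptr = (ptr); \
    (type *)( (char *)__mptr - offsetof(type, member) );})

#define container_of_unchecked(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct sample {
    int mem1;
    char mem2;
};

int main(void)
{
    struct sample sample1;
    int bogus = 0;

    /* correct use: both variants recover &sample1 */
    printf("%p %p %p\n", (void *)&sample1,
           (void *)container_of(&sample1.mem2, struct sample, mem2),
           (void *)container_of_unchecked(&sample1.mem2, struct sample, mem2));

    /* wrong pointer type: the typeof version makes gcc warn here,
     * the unchecked version accepts it without a word */
    (void)container_of(&bogus, struct sample, mem2);
    (void)container_of_unchecked(&bogus, struct sample, mem2);

    return 0;
}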

Interpreting CPU information

To understand a machine's CPU layout in detail, read /proc/cpuinfo or use the lscpu or dmidecode tools. Several kinds of machines are compared below.

Total logical cores = number of physical CPUs × cores per physical CPU × threads per core (hyperthreading)
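
For a quick programmatic cross-check (a minimal sketch using the POSIX sysconf interface, independent of the files and tools discussed below), the logical CPU count can also be read directly:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	long online     = sysconf(_SC_NPROCESSORS_ONLN);  /* logical CPUs currently online */
	long configured = sysconf(_SC_NPROCESSORS_CONF);  /* logical CPUs configured in the system */

	printf("online logical CPUs: %ld (configured: %ld)\n", online, configured);
	return 0;
}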

An ordinary PC

lscpu
root@ubuntu:~# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 62
Model name:            Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz
Stepping:              4
CPU MHz:               3000.112
BogoMIPS:              6000.22
Hypervisor vendor:     Xen
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              25600K
NUMA node0 CPU(s):     0-3
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good n
opl eagerfpu pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm fsgsbase smep erms xsaveopt
CPU(s):                4 total logical cores: 4; matches the processor entries in cpuinfo
Socket(s):             1 one physical CPU package; matches physical id in cpuinfo
Core(s) per socket:    4 each physical package holds 4 cores; matches cpu cores in cpuinfo
Thread(s) per core:    1 one thread per core, i.e. no hyperthreading; matches siblings in cpuinfo (= cpu cores × threads per core)

Total logical cores 4 = 1 physical CPU × 4 cores per package × 1 thread

/proc/cpuinfo

Number of physical CPUs: 1

root@ubuntu:~# cat /proc/cpuinfo | grep "physical id" | sort | uniq | wc -l
1

Cores per physical CPU (core count): 4

root@ubuntu:~# cat /proc/cpuinfo | grep "cpu cores"|uniq
cpu cores: 4

Number of logical CPUs: 4

root@ubuntu:~# cat /proc/cpuinfo | grep "processor" | wc -l
4

One block of cat /proc/cpuinfo output:

processor       : 2 # logical CPU number 2
vendor_id       : GenuineIntel
cpu family      : 6
model           : 62
model name      : Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz
stepping        : 4
microcode       : 0x428
cpu MHz         : 3000.112
cache size      : 25600 KB
physical id     : 0 # number of the physical package; there is only one, so all 4 logical CPUs show 0
siblings        : 4 # number of logical CPUs in this physical package; if siblings is greater than cpu cores, hyperthreading is enabled
core id         : 2 # core number within the physical package; the numbers need not be consecutive, and their count equals cpu cores (query: cat /proc/cpuinfo | grep "core id" |sort|uniq|wc -l)
cpu cores       : 4 # each physical CPU has 4 cores
apicid          : 4
initial apicid  : 4
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl e
agerfpu pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm fsgsbase smep erms xsaveopt
bugs            :
bogomips        : 6000.22
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:
dmidecode
root@ubuntu:~# dmidecode -t processor
# dmidecode 3.0
Scanning /dev/mem for entry point.
SMBIOS 2.4 present.

Handle 0x0401, DMI type 4, 26 bytes
Processor Information
Socket Designation: CPU 1
Type              : Central Processor
Family            : Other
Manufacturer      : Intel
ID                : E4 06 03 00 FF FB 8B 17
Version           : Not Specified
Voltage           : Unknown
External Clock    : Unknown
Max Speed         : 3000 MHz
Current Speed     : 3000 MHz
Status            : Populated, Enabled
Upgrade           : Other

Handle 0x0402, DMI type 4, 26 bytes
Processor Information
Socket Designation: CPU 2
Type              : Central Processor
Family            : Other
Manufacturer      : Intel
ID                : E4 06 03 00 FF FB 8B 17
Version           : Not Specified
Voltage           : Unknown
External Clock    : Unknown
Max Speed         : 3000 MHz
Current Speed     : 3000 MHz
Status            : Populated, Enabled
Upgrade           : Other

Handle 0x0403, DMI type 4, 26 bytes
Processor Information
Socket Designation: CPU 3
Type              : Central Processor
Family            : Other
Manufacturer      : Intel
ID                : E4 06 03 00 FF FB 8B 17
Version           : Not Specified
Voltage           : Unknown
External Clock    : Unknown
Max Speed         : 3000 MHz
Current Speed     : 3000 MHz
Status            : Populated, Enabled
Upgrade           : Other

Handle 0x0404, DMI type 4, 26 bytes
Processor Information
Socket Designation: CPU 4
Type              : Central Processor
Family            : Other
Manufacturer      : Intel
ID                : E4 06 03 00 FF FB 8B 17
Version           : Not Specified
Voltage           : Unknown
External Clock    : Unknown
Max Speed         : 3000 MHz
Current Speed     : 3000 MHz
Status            : Populated, Enabled
Upgrade           : Other

An x86 server

lscpu
[root@localhost ~]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                48
On-line CPU(s) list:   0-47
Thread(s) per core:    2
Core(s) per socket:    12
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 85
Model name:            Intel(R) Xeon(R) Gold 6126T CPU @ 2.60GHz
Stepping:              4
CPU MHz:               2601.000
CPU max MHz:           2601.0000
CPU min MHz:           1000.0000
BogoMIPS:              5200.00
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              1024K
L3 cache:              19712K
NUMA node0 CPU(s):     0-11,24-35
NUMA node1 CPU(s):     12-23,36-47
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts
CPU(s):                48   total logical cores: 48
Socket(s):             2    two physical CPU packages
Core(s) per socket:    12   each physical package holds 12 cores
Thread(s) per core:    2    each core runs 2 hyperthreads

Total logical cores 48 = 2 physical CPUs × 12 cores per package × 2 threads

cat /proc/cpuinfo

Number of physical CPUs: 2

[root@localhost ~]# cat /proc/cpuinfo | grep "physical id" | sort | uniq | wc -l
2

Cores per physical CPU (core count): 12

[root@localhost ~]# cat /proc/cpuinfo | grep "cpu core" | sort | uniq
cpu cores       : 12

Number of logical CPUs: 48

[root@localhost ~]# cat /proc/cpuinfo | grep "processor" | wc -l
48

One block of cat /proc/cpuinfo output:

processor       : 41 # logical CPU number 41
vendor_id       : GenuineIntel
cpu family      : 6
model           : 85
model name      : Intel(R) Xeon(R) Gold 6126T CPU @ 2.60GHz
stepping        : 4
microcode       : 0x2000043
cpu MHz         : 2601.000
cache size      : 19712 KB
physical id     : 1  # this logical CPU sits on physical package 1, i.e. the second package
siblings        : 24 # 24 logical CPUs per physical package; combined with cpu cores this gives a hyperthreading factor of 2
core id         : 8  # core number within the physical package; the numbers need not be consecutive, and their count equals cpu cores
cpu cores       : 12 # each physical CPU has 12 cores
apicid          : 49
initial apicid  : 49
fpu             : yes
fpu_exception   : yes
cpuid level     : 22
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts
bogomips        : 5205.75
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:
dmidecode

Check the CPU count with dmidecode:

[root@localhost ~]# dmidecode -t processor | grep -E "Socket Designation:|(Core|Thread) Count"
        Socket Designation: CPU01
        Core Count: 12
        Thread Count: 24
        Socket Designation: CPU02
        Core Count: 12
        Thread Count: 24

There are 2 CPUs here, CPU01 and CPU02; each physical CPU has 12 cores but runs 24 threads, so the hyperthreading factor is 2 and the total logical CPU count is 48.

An ARM server

lscpu
root@ubuntu:~# lscpu
Architecture:        aarch64
Byte Order:          Little Endian
CPU(s):              64
On-line CPU(s) list: 0-63
Thread(s) per core:  1
Core(s) per socket:  32
Socket(s):           2
NUMA node(s):        4
Vendor ID:           ARM
Model:               2
Model name:          Cortex-A72
Stepping:            r0p2
BogoMIPS:            100.00
L1d cache:           32K
L1i cache:           48K
L2 cache:            1024K
L3 cache:            16384K
NUMA node0 CPU(s):   0-15
NUMA node1 CPU(s):   16-31
NUMA node2 CPU(s):   32-47
NUMA node3 CPU(s):   48-63
Flags:               fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
root@ubuntu:~#
From this lscpu output, 64 logical cores = 2 sockets × 32 cores per socket × 1 thread (no SMT).
Physically the machine has 2 processor packages; each package contains 2 dies of 16 cores, which is why 4 NUMA nodes of 16 CPUs each are reported.

If lscpu output on an ARM server looks wrong, consider upgrading the firmware; see "Updating ARM server firmware" below.

/proc/cpuinfo

The cpuinfo of this ARM server has no physical id or cpu cores fields, so the Intel-style queries above cannot be used here.

One block of cpuinfo output:

processor       : 62
BogoMIPS        : 100.00
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x0
CPU part        : 0xd08
CPU revision    : 2
dmidecode

Check the CPU count with dmidecode:

[root@CN-1 ~]# dmidecode -t processor | grep -E "Socket Designation:|(Core|Thread) Count"
        Socket Designation: CPU01
        Core Count: 32
        Thread Count: 32
        Socket Designation: CPU02
        Core Count: 32
        Thread Count: 32
[root@CN-1 ~]#

There are 2 CPUs here, CPU01 and CPU02; each physical CPU has 32 cores and 32 threads, so hyperthreading is not enabled and the total logical CPU count is 64.

A Raspberry Pi 3B

Official specifications (from the official site):

Raspberry Pi 3 Specifications
SoC: Broadcom BCM2837
CPU: 4× ARM Cortex-A53, 1.2GHz
GPU: Broadcom VideoCore IV
RAM: 1GB LPDDR2 (900 MHz)
Networking: 10/100 Ethernet, 2.4GHz 802.11n wireless
Bluetooth: Bluetooth 4.1 Classic, Bluetooth Low Energy
Storage: microSD
GPIO: 40-pin header, populated
Ports: HDMI, 3.5mm analogue audio-video jack, 4× USB 2.0, Ethernet, Camera Serial Interface (CSI), Display Serial Interface (DSI)
lscpu
pi@raspberrypi:~ $ lscpu
Architecture:          armv7l
Byte Order:            Little Endian
CPU(s):                4            # official spec: 4× ARM Cortex-A53, 1.2GHz
On-line CPU(s) list:   0-3
Thread(s) per core:    1            # no hyperthreading
Core(s) per socket:    4
Socket(s):             1
Model:                 4
Model name:            ARMv7 Processor rev 4 (v7l)
CPU max MHz:           1200.0000    # matches the official 1.2 GHz rating, so not a counterfeit
CPU min MHz:           600.0000     # matches the official spec
BogoMIPS:              38.40
Flags:                 half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32
/proc/cpuinfo
pi@raspberrypi:~ $ cat /proc/cpuinfo
processor       : 0
model name      : ARMv7 Processor rev 4 (v7l)
BogoMIPS        : 38.40
Features        : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xd03
CPU revision    : 4

processor       : 1
model name      : ARMv7 Processor rev 4 (v7l)
BogoMIPS        : 38.40
Features        : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xd03
CPU revision    : 4

processor       : 2
model name      : ARMv7 Processor rev 4 (v7l)
BogoMIPS        : 38.40
Features        : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xd03
CPU revision    : 4

processor       : 3
model name      : ARMv7 Processor rev 4 (v7l)
BogoMIPS        : 38.40
Features        : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xd03
CPU revision    : 4

Hardware        : BCM2835 # reported as BCM2835 even though the official SoC is a BCM2837, which is a bit misleading
Revision        : a32082
Serial          : 0000000076e8446e
dmidecode
pi@raspberrypi:~ $ sudo dmidecode
# dmidecode 3.0
Scanning /dev/mem for entry point.
# No SMBIOS nor DMI entry point found, sorry.

dmidecode is not supported; reportedly this is because the firmware provides no DMI/SMBIOS data.

If you have questions, feel free to leave me a message on GitHub.

c skill

The following errors often come up when submitting code on leetcode:

  • Heap-buffer-overflow: out-of-bounds access to heap memory
  • Heap-use-after-free: use of heap memory that has already been freed
  • Stack-buffer-overflow: illegal access to stack memory, usually an out-of-bounds array access
  • Global-buffer-overflow: illegal access to global memory, usually an out-of-bounds access to a global array

Reproduce locally, method 1:

gcc -O -g -fsanitize=address  test.c
./a.out

Reproduce locally, method 2:

Add to CMakeLists.txt:

set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -g -Wall -fsanitize=address")
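
For example, a deliberately out-of-bounds read like the one below (a made-up snippet, not an actual leetcode submission) triggers the Stack-buffer-overflow report when built with the flags above:

#include <stdio.h>

int main(void)
{
	int a[4] = {1, 2, 3, 4};
	volatile int i = 4;              /* index one element past the end of a[] */

	printf("%d\n", a[i]);            /* AddressSanitizer reports stack-buffer-overflow here */
	return 0;
}

Compiled with gcc -O -g -fsanitize=address and run, the program aborts with a stack-buffer-overflow report pointing at the offending read.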

Character arrays and strings

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main()
{
        char s1[10]     = "This";
        char *s2        = "This";
        char s3[]       = {'T','h','i','s','\0'};
        char s4[5]      ={0}; s4[0] = 'T'; s4[1]='h'; s4[2]='i'; s4[3]='s';
                        s4[4]='\0';s4[5]='f';s4[6]='g';   /* deliberately writes past the end of s4[5] */

        printf("s1 vs s2: %d\n",strcmp(s1,s2));
        printf("s1 vs s3: %d\n",strcmp(s1,s3));
        printf("s2 vs s3: %d\n",strcmp(s2,s3));
        printf("s1 vs s4: %d\n",strcmp(s1,s4));

        for(int i=0; i<6; i++)printf("%d:%c:%x ",i,s1[i],s1[i]);printf("\n");
        for(int i=0; i<6; i++)printf("%d:%c:%x ",i,s2[i],s2[i]);printf("\n");
        for(int i=0; i<6; i++)printf("%d:%c:%x ",i,s3[i],s3[i]);printf("\n");
        for(int i=0; i<6; i++)printf("%d:%c:%x ",i,s4[i],s4[i]);printf("\n");
        printf("\n");

        return 0;
}
   0   1    2    3    4    5   6
+---------------------------------
|'T' |'h' |'i' |'s' |'\0'|   |   |
+---------------------------------

These four string definitions are equivalent; the string occupies 5 storage cells and strlen() = 4, since strlen does not count the terminating '\0'.

Without address checking, array accesses can go out of bounds freely; nothing stops them until they hit something the operating system protects. Enable the check at compile time with the build flag:

set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -g -Wall -fsanitize=address")
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main()
{
        char s1[10]     = "This";
        char *s2        = "This";       // out-of-bounds reads may go unnoticed; whether they do depends on what is defined after it
        char s3[]       = {'T','h','i','s','\0'};
        char s4[5]      ={0}; s4[0] = 'T'; s4[1]='h'; s4[2]='i'; s4[3]='s';
                        s4[4]='\0';

        printf("s1 vs s2: %d\n",strcmp(s1,s2));
        printf("s1 vs s3: %d\n",strcmp(s1,s3));
        printf("s2 vs s3: %d\n",strcmp(s2,s3));
        printf("s1 vs s4: %d\n",strcmp(s1,s4));

        for(int i=0; i<10; i++)printf("%d:%c:%x ",i,s1[i],s1[i]);printf("\n");
}

https://beginnersbook.com/2014/01/c-strings-string-functions/

C variable types and their limits

glibc's limits.h [1] defines the maximum and minimum values of the C integer types.

#include <stdio.h>
#include <limits.h>

int main()
{
	printf("minmun char:%d maximun char:%d\n",CHAR_MIN, CHAR_MAX);
}
minimum char:0 maximum char:255
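
The output shows CHAR_MIN as 0 because plain char is unsigned by default on AArch64 Linux. A slightly fuller sketch (the exact values depend on the ABI; LP64 Linux is assumed here) prints a few more of the <limits.h> bounds:

#include <stdio.h>
#include <limits.h>

int main(void)
{
	printf("char:  %d .. %d (plain char is %s here)\n",
	       CHAR_MIN, CHAR_MAX, CHAR_MIN == 0 ? "unsigned" : "signed");
	printf("short: %d .. %d\n", SHRT_MIN, SHRT_MAX);
	printf("int:   %d .. %d\n", INT_MIN, INT_MAX);
	printf("long:  %ld .. %ld\n", LONG_MIN, LONG_MAX);
	printf("unsigned long max: %lu\n", ULONG_MAX);
	return 0;
}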

TODO:

eval function library

postfix expressions https://juejin.im/post/5d3e55ade51d457761476238

Troubleshooting

[  7%] Building C object CMakeFiles/151.out.dir/151-reverse-words-in-a-string.c.o
/opt/rh/devtoolset-8/root/usr/bin/cc   -g -Wall -fsanitize=address   -o CMakeFiles/151.out.dir/151-reverse-words-in-a-string.c.o   -c /home/user1/GoodCommand/source/src/leetcode/151-reverse-words-in-a-string.c
Linking C executable 151.out
/usr/bin/cmake -E cmake_link_script CMakeFiles/151.out.dir/link.txt --verbose=1
/opt/rh/devtoolset-8/root/usr/bin/cc   -g -Wall -fsanitize=address    CMakeFiles/151.out.dir/151-reverse-words-in-a-string.c.o  -o 151.out  -L/opt/rh/devtoolset-7/root/usr/lib/gcc/aarch64-redhat-linux/7 -rdynamic
/opt/rh/devtoolset-8/root/usr/libexec/gcc/aarch64-redhat-linux/8/ld: cannot find libasan_preinit.o: No such file or directory
/opt/rh/devtoolset-8/root/usr/libexec/gcc/aarch64-redhat-linux/8/ld: cannot find -lasan
collect2: error: ld returned 1 exit status
make[2]: *** [CMakeFiles/151.out.dir/build.make:92: 151.out] Error 1
make[2]: Leaving directory '/home/user1/GoodCommand/source/src/leetcode/build'
make[1]: *** [CMakeFiles/Makefile2:67: CMakeFiles/151.out.dir/all] Error 2
make[1]: Leaving directory '/home/user1/GoodCommand/source/src/leetcode/build'
make: *** [Makefile:79: all] Error 2
[user1@centos build]$
[1]limits.h https://sourceware.org/git/?p=glibc.git;a=blob;f=include/limits.h;h=8195da78a4a6074d737ec45ba27b8fec6005543e;hb=HEAD

dns over https

DNS resolution carried over HTTPS. Normally, visiting a website means sending the hostname to a DNS server, which returns the corresponding IP address, and the browser then fetches the site from that IP. This DNS lookup is usually sent in plain text, so the ISP can see exactly whether you visited www.12306.com or www.123xxx.com.

With DNS over HTTPS the lookup is encrypted, so nobody can tell which site name you resolved, although the IP addresses you connect to are still visible.

How to enable it:

https://www.zdnet.com/article/how-to-enable-dns-over-https-doh-in-google-chrome/

Updating ARM server firmware

Updating the firmware can fix incorrect lscpu output, such as wrong socket counts and missing cache sizes.

Before the update:

root@ubuntu:~# lscpu
Architecture:        aarch64
Byte Order:          Little Endian
CPU(s):              64
On-line CPU(s) list: 0-63
Thread(s) per core:  1
Core(s) per socket:  4
Socket(s):           16
NUMA node(s):        4
Vendor ID:           ARM
Model:               2
Model name:          Cortex-A72
Stepping:            r0p2
BogoMIPS:            100.00
NUMA node0 CPU(s):   0-15
NUMA node1 CPU(s):   16-31
NUMA node2 CPU(s):   32-47
NUMA node3 CPU(s):   48-63
Flags:               fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
root@ubuntu:~#

After the update:

root@ubuntu:~# lscpu
Architecture:        aarch64
Byte Order:          Little Endian
CPU(s):              64
On-line CPU(s) list: 0-63
Thread(s) per core:  1
Core(s) per socket:  32
Socket(s):           2
NUMA node(s):        4
Vendor ID:           ARM
Model:               2
Model name:          Cortex-A72
Stepping:            r0p2
BogoMIPS:            100.00
L1d cache:           32K
L1i cache:           48K
L2 cache:            1024K
L3 cache:            16384K
NUMA node0 CPU(s):   0-15
NUMA node1 CPU(s):   16-31
NUMA node2 CPU(s):   32-47
NUMA node3 CPU(s):   48-63
Flags:               fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
root@ubuntu:~#

How to update

Check the current firmware versions:
iBMC web UI → System Management → Firmware Update
Primary Partition Image Version :   2.45
Backup  Partition Image Version :   2.40
BIOS Version                    :   1.34
CPLD Version                    :   1.05
Download the new firmware from the server vendor's website, for example: link
Three archives are needed: iBMC, BIOS and CPLD. After unpacking, go back to
iBMC web UI → System Management → Firmware Update
and upload them one by one, clicking upgrade for each; every firmware upgrade takes a few minutes and reports success when done.

After a successful upgrade:

Primary Partition Image Version :   3.22
Backup  Partition Image Version :   2.45
BIOS Version                    :   1.58
CPLD Version                    :   1.05

The CPLD upgrade did not succeed because the server hardware revision is too old.

golang: listing network interfaces

How to list all of a host's network interfaces in golang: one way is to use the net package.

package main

import (
        "fmt"
        "net"
)

func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
                fmt.Print(fmt.Errorf("localAddresses: %+v\n", err.Error()))
                return
        }
        for _, iface := range ifaces {
                fmt.Printf("interfaces is : %v\n", iface)
        }
}
interfaces is : {1 65536 lo  up|loopback}
interfaces is : {2 1500 eno1 cc:64:a6:5c:d0:d3 up|broadcast|multicast}
interfaces is : {3 1500 eno2 cc:64:a6:5c:d0:d4 up|broadcast|multicast}
interfaces is : {4 1500 eno3 cc:64:a6:5c:d0:d5 up|broadcast|multicast}
interfaces is : {5 1500 eno4 cc:64:a6:5c:d0:d6 up|broadcast|multicast}
interfaces is : {6 1500 ens3f0 28:41:c6:aa:53:34 up|broadcast|multicast}
interfaces is : {7 1500 ens3f1 28:41:c6:aa:53:35 up|broadcast|multicast}
interfaces is : {8 1500 docker0 02:42:34:eb:e9:e1 up|broadcast|multicast}
interfaces is : {14 1500 vethf0cc960 d2:24:7c:67:95:d2 up|broadcast|multicast}
interfaces is : {20 1500 veth17de1fd 0a:8f:dc:9e:fc:3b up|broadcast|multicast}
interfaces is : {44 1500 vetha43a667 72:ce:b7:51:25:50 up|broadcast|multicast}

Or use the netlink package:

package main

import (
        "fmt"
        "github.com/vishvananda/netlink"
)

func main() {
        enos, err := netlink.LinkList()
        if err != nil {
                fmt.Println("get link list error")
                fmt.Println(err)
                return
        }
        for _, eno := range enos {
                attr := eno.Attrs()
                fmt.Println("interface is ", eno.Type(), attr.Index, attr.MTU, attr.Name, attr.HardwareAddr, attr.HardwareAddr)
        }
}

hns dependencies

            ib_core
               ^
               |
               +----- hns_roce <-----  hns_roce_hw_v2
                                           +
                                           |
                      hnae3    <-----------+
                               <----+
                                    +- hns3
                                    |
                                    +- hclge
                                    |
                                    +- hclgevf
scsi_transport_sas
  ^
  |
  +--+ libsas <---+ hisi_sas_main
        ^ ^ ^        ^ ^ ^
        | | |        | | |
        | | +------------+------------ hisi_sas_v2_hw
        | |          | |
        | +------------+-------------- hisi_sas_v2_hw
        |            |
        +------------+---------------+ hisi_sas_v3_hw

hnae3    Hisilicon Network Acceleration Engine 3 framework
hns3     Hisilicon Ethernet driver
hclge    HCLGE driver
hclgevf  HCLGEVF driver

html editable

Make a web page directly editable:

shift + ctrl + i
document.designMode = 'on'

Ways to install an operating system

Reportedly: link

Several ways to install Linux:

  • Kickstart + HTTP + DHCP + TFTP
  • PXE + Kickstart
  • Cobbler + PXE (HTTP + FTP + DHCP)
  • Windows Deployment Services, to install both Windows and Linux
  • Disk-to-disk cloning via RAID mirroring (requires identical hardware)
  • Image format conversion with virtualization software such as VMware or QEMU
  • Importing/exporting OS images through a cloud provider's control panel
  • Cloning an installed Linux system into an ISO image (similar to Windows Ghost)

Intel CPU performance ladder

[There is a ladder ranking]

6   Intel Xeon E5-2679 v4 @ 2.50GHz     25236
8   Intel Xeon E5-2699 v4 @ 2.20GHz     23362
11  Intel Xeon E5-2696 v3 @ 2.30GHz     22526
12  Intel Xeon E5-2699 v3 @ 2.30GHz     22358
15  Intel Xeon E5-2698 v4 @ 2.20GHz     21789
16  Intel Xeon E5-2673 v4 @ 2.30GHz     21625
18  Intel Xeon E5-2697 v4 @ 2.30GHz     21525
19  Intel Xeon E5-2697 v3 @ 2.60GHz     21488
20  Intel Xeon E5-2696 v4 @ 2.20GHz     21331
21  Intel Xeon E5-2690 v4 @ 2.60GHz     21323
22  Intel Xeon E5-2698 v3 @ 2.30GHz     21149
24  Intel Xeon E5-2695 v3 @ 2.30GHz     20346
25  Intel Xeon E5-2695 v4 @ 2.10GHz     20258
27  Intel Xeon E5-2687W v4 @ 3.00GHz        19979
28  Intel Xeon E5-2680 v4 @ 2.40GHz     19953
31  Intel Xeon E5-2689 v4 @ 3.10GHz     19521
32  Intel Xeon E5-2686 v3 @ 2.00GHz     19255
33  Intel Xeon E5-2690 v3 @ 2.60GHz     19240
38  Intel Xeon E5-2680 v3 @ 2.50GHz     18626
41  Intel Xeon E5-2683 v4 @ 2.10GHz     18274
42  Intel Xeon E5-1681 v3 @ 2.90GHz     18238
45  Intel Xeon E5-4660 v3 @ 2.10GHz     18007
47  Intel Xeon E5-2660 v4 @ 2.00GHz     17786
48  Intel Xeon E5-2687W v3 @ 3.10GHz        17719
50  Intel Xeon E5-2676 v3 @ 2.40GHz     17652
51  Intel Xeon E5-2683 v3 @ 2.00GHz     17429
52  Intel Xeon E5-2697 v2 @ 2.70GHz     17405
53  Intel Xeon E5-1680 v4 @ 3.40GHz     17089
54  Intel Xeon E5-2673 v3 @ 2.40GHz     16982
55  Intel Xeon E5-2678 v3 @ 2.50GHz     16905
57  Intel Xeon E5-1680 v3 @ 3.20GHz     16707
58  Intel Xeon E5-2696 v2 @ 2.50GHz     16681
59  Intel Xeon E5-2667 v4 @ 3.20GHz     16654
60  Intel Xeon E5-2670 v3 @ 2.30GHz     16619
61  Intel Xeon E5-1680 v2 @ 3.00GHz     16543
62  Intel Xeon E5-2687W v2 @ 3.40GHz        16507
63  Intel Xeon E5-2690 v2 @ 3.00GHz     16503
65  Intel Xeon E5-2673 v2 @ 3.30GHz     16320
66  Intel Xeon E5-2658 v3 @ 2.20GHz     16298
67  Intel Xeon E5-2658 v4 @ 2.30GHz     16290
68  Intel Xeon E5-1660 v4 @ 3.20GHz     16240
69  Intel Xeon E5-2660 v3 @ 2.60GHz     16151
70  Intel Xeon E5-2667 v2 @ 3.30GHz     16128
72  Intel Xeon E5-2667 v3 @ 3.20GHz     16101
73  Intel Xeon E5-2692 v2 @ 2.20GHz     16018
75  Intel Xeon E5-2650 v4 @ 2.20GHz     15975
76  Intel Xeon E5-2680 v2 @ 2.80GHz     15927
78  Intel Xeon E5-2695 v2 @ 2.40GHz     15847
79  Intel Xeon E5-4627 v4 @ 2.60GHz     15516
81  Intel Xeon E5-2682 v4 @ 2.50GHz     15333
82  Intel Xeon E5-2640 v4 @ 2.40GHz     15331
85  Intel Xeon E5-2675 v3 @ 1.80GHz     15156
86  Intel Xeon E5-2670 v2 @ 2.50GHz     15031
88  Intel Xeon E5-2650 v3 @ 2.30GHz     14941
91  Intel Xeon E5-2649 v3 @ 2.30GHz     14673
94  Intel Xeon E5-2618L v4 @ 2.20GHz        14592
97  Intel Xeon E5-1660 v3 @ 3.00GHz     14423
98  Intel Xeon E5-2687W @ 3.10GHz       14404
100 Intel Xeon E5-1650 v4 @ 3.60GHz     14239
101 Intel Xeon E5-4627 v3 @ 2.60GHz     14219
102 Intel Xeon E5-2685 v3 @ 2.60GHz     14154
103 Intel Xeon E5-2658 v2 @ 2.40GHz     14128
104 Intel Xeon E5-2650L v4 @ 1.70GHz        14093
105 Intel Xeon E5-2690 @ 2.90GHz        14031
106 Intel Xeon E5-2663 v3 @ 2.80GHz     13994
107 Intel Xeon E5-2640 v3 @ 2.60GHz     13976
109 Intel Xeon E5-2630 v4 @ 2.20GHz     13923
111 Intel Xeon E5-2643 v3 @ 3.40GHz     13801
112 Intel Xeon E5-1660 v2 @ 3.70GHz     13777
114 Intel Xeon E5-2689 @ 2.60GHz        13747
116 Intel Xeon E5-4669 v4 @ 2.20GHz     13626
118 Intel Xeon E5-2643 v4 @ 3.40GHz     13600
119 Intel Xeon E5-1650 v3 @ 3.50GHz     13597
120 Intel Xeon E5-2660 v2 @ 2.20GHz     13564
121 Intel Xeon E5-2650L v3 @ 1.80GHz        13390
124 Intel Xeon E5-2628L v4 @ 1.90GHz        13041
125 Intel Xeon E5-2650 v2 @ 2.60GHz     13019
127 Intel Xeon E5-2648L v4 @ 1.80GHz        12883
128 Intel Xeon E5-2630L v4 @ 1.80GHz        12847
129 Intel Xeon E5-2630 v3 @ 2.40GHz     12831
130 Intel Xeon E5-2680 @ 2.70GHz        12803
132 Intel Xeon E5-4640 v3 @ 1.90GHz     12703
134 Intel Xeon E5-1650 v2 @ 3.50GHz     12656
137 Intel Xeon E5-2618L v3 @ 2.30GHz        12508
138 Intel Xeon E5-1660 @ 3.30GHz        12377
139 Intel Xeon E5-2648L v3 @ 1.80GHz        12332
140 Intel Xeon E5-2643 v2 @ 3.50GHz     12324
142 Intel Xeon E5-2670 @ 2.60GHz        12233
145 Intel Xeon E5-4650 @ 2.70GHz        12085
148 Intel Xeon E5-2628L v3 @ 2.00GHz        11965
149 Intel Xeon E5-4650L @ 2.60GHz       11821
151 Intel Xeon E5-2665 @ 2.40GHz        11776
152 Intel Xeon E5-1650 @ 3.20GHz        11763
153 Intel Xeon E3-1285 v6 @ 4.10GHz     11689
155 Intel Xeon E5-2620 v4 @ 2.10GHz     11339
156 Intel Xeon E5-2651 v2 @ 1.80GHz     11275
157 Intel Xeon E3-1280 v6 @ 3.90GHz     11262
158 Intel Xeon E3-1285L v4 @ 3.40GHz        11224
162 Intel Xeon E5-2660 @ 2.20GHz        11100
163 Intel Xeon E5-4648 v3 @ 1.70GHz     11097
164 Intel Xeon E3-1270 v6 @ 3.80GHz     11083
166 Intel Xeon E5-2629 v3 @ 2.40GHz     11022
167 Intel Xeon E3-1275 v6 @ 3.80GHz     11017
175 Intel Xeon E3-1575M v5 @ 3.00GHz        10740
176 Intel Xeon E3-1535M v6 @ 3.10GHz        10724
177 Intel Xeon E3-1280 v5 @ 3.70GHz     10666
179 Intel Xeon E5-4640 @ 2.40GHz        10523
180 Intel Xeon E3-1240 v6 @ 3.70GHz     10512
181 Intel Xeon E3-1515M v5 @ 2.80GHz        10509
182 Intel Xeon E3-1585 v5 @ 3.50GHz     10483
183 Intel Xeon E3-1545M v5 @ 2.90GHz        10474
184 Intel Xeon E5-2637 v4 @ 3.50GHz     10469
185 Intel Xeon E5-2630 v2 @ 2.60GHz     10419
186 Intel Xeon E3-1275 v5 @ 3.60GHz     10390
187 Intel Xeon E3-1240 v5 @ 3.50GHz     10386
189 Intel Xeon E3-1245 v6 @ 3.70GHz     10339
190 Intel Xeon E5-1630 v4 @ 3.70GHz     10315
191 Intel Xeon E3-1270 v5 @ 3.60GHz     10309
192 Intel Xeon E3-1281 v3 @ 3.70GHz     10295
193 Intel Xeon E3-1245 v5 @ 3.50GHz     10260
194 Intel Xeon E5-2667 @ 2.90GHz        10256
195 Intel Xeon E5-1630 v3 @ 3.70GHz     10251
196 Intel Xeon E3-1276 v3 @ 3.60GHz     10218
198 Intel Xeon E5-2450 @ 2.10GHz        10186
200 Intel Xeon E5-2650 @ 2.00GHz        10159
203 Intel Xeon E3-1286L v3 @ 3.20GHz        10129
204 Intel Xeon E5-2637 v3 @ 3.50GHz     10128
205 Intel Xeon E3-1285 v3 @ 3.60GHz     10115
207 Intel Xeon E3-1271 v3 @ 3.60GHz     10086
209 Intel Xeon E3-1260L v5 @ 2.90GHz        10067
210 Intel Xeon E5-2470 @ 2.30GHz        10061
211 Intel Xeon E3-1241 v3 @ 3.50GHz     10040
213 Intel Xeon E3-1285L v3 @ 3.10GHz        10010
214 Intel Xeon E5-2620 v3 @ 2.40GHz     10009
216 Intel Xeon E3-1505M v6 @ 3.00GHz        9987
217 Intel Xeon E5-1620 v4 @ 3.50GHz     9985
218 Intel Xeon E3-1246 v3 @ 3.50GHz     9908
219 Intel Xeon E5-2640 v2 @ 2.00GHz     9904
221 Intel Xeon E3-1286 v3 @ 3.70GHz     9899
223 Intel Xeon E3-1270 v3 @ 3.50GHz     9871
224 Intel Xeon E3-1230 v6 @ 3.50GHz     9856
225 Intel Xeon E3-1275 v3 @ 3.50GHz     9844
231 Intel Xeon E5-1620 v3 @ 3.50GHz     9767
232 Intel Xeon E3-1230 v5 @ 3.40GHz     9754
233 Intel Xeon E3-1290 V2 @ 3.70GHz     9749
236 Intel Xeon E3-1280 v3 @ 3.60GHz     9720
238 Intel Xeon E3-1240 v3 @ 3.40GHz     9697
239 Intel Xeon E5-2630L v2 @ 2.40GHz        9655
240 Intel Xeon E3-1231 v3 @ 3.40GHz     9640
242 Intel Xeon E3-1245 v3 @ 3.40GHz     9579
244 Intel Xeon E3-1280 V2 @ 3.60GHz     9561
246 Intel Xeon E5-1620 v2 @ 3.70GHz     9533
249 Intel Xeon E5-2637 v2 @ 3.50GHz     9504
250 Intel Xeon E5-2640 @ 2.50GHz        9500
251 Intel Xeon E5-2658 @ 2.10GHz        9484
252 Intel Xeon E3-1270 V2 @ 3.50GHz     9481
256 Intel Xeon E5-2440 v2 @ 1.90GHz     9425
258 Intel Xeon E5-2628L v2 @ 1.90GHz        9405
260 Intel Xeon E3-1275 V2 @ 3.50GHz     9344
264 Intel Xeon E3-1230 v3 @ 3.30GHz     9328
265 Intel Xeon E5-2440 @ 2.40GHz        9319
268 Intel Xeon E3-1535M v5 @ 2.90GHz        9277
272 Intel Xeon E5-2630L v3 @ 1.80GHz        9216
275 Intel Xeon E3-1240 V2 @ 3.40GHz     9177
276 Intel Xeon E3-1268L v5 @ 2.40GHz        9175
284 Intel Xeon E3-1245 V2 @ 3.40GHz     9110
285 Intel Xeon E5-1620 @ 3.60GHz        9073
297 Intel Xeon E5-2448L v2 @ 1.80GHz        8954
299 Intel Xeon E5-2623 v3 @ 3.00GHz     8937
300 Intel Xeon E3-1505M v5 @ 2.80GHz        8927
301 Intel Xeon E5-2630 @ 2.30GHz        8887
305 Intel Xeon E5-4617 @ 2.90GHz        8855
306 Intel Xeon E3-1230 V2 @ 3.30GHz     8852
308 Intel Xeon E3-1275L v3 @ 2.70GHz        8798
314 Intel Xeon E3-1265L v3 @ 2.50GHz        8713
315 Intel Xeon E3-1290 @ 3.60GHz        8699
316 Intel Xeon E5-2620 v2 @ 2.10GHz     8686
318 Intel Xeon E5-2650L @ 1.80GHz       8676
322 Intel Xeon E5-2420 v2 @ 2.20GHz     8647
324 Intel Xeon E5-2430 v2 @ 2.50GHz     8608
327 Intel Xeon E5-2643 @ 3.30GHz        8490
329 Intel Xeon E3-1280 @ 3.50GHz        8473
335 Intel Xeon E3-1275 @ 3.40GHz        8348
337 Intel Xeon E3-1225 v6 @ 3.30GHz     8337
341 Intel Xeon E3-1270 @ 3.40GHz        8233
351 Intel Xeon E5-4620 @ 2.20GHz        8127
354 Intel Xeon E5-2623 v4 @ 2.60GHz     8090
359 Intel Xeon E3-1245 @ 3.30GHz        8048
364 Intel Xeon E3-1220 v6 @ 3.00GHz     8010
365 Intel Xeon E3-1240 @ 3.30GHz        8002
369 Intel Xeon E5-2620 @ 2.00GHz        7935
371 Intel Xeon E3-1230 @ 3.20GHz        7907
372 Intel Xeon E5-2630L @ 2.00GHz       7900
373 Intel Xeon E3-1268L v3 @ 2.30GHz        7850
374 Intel Xeon E3-1225 v5 @ 3.30GHz     7848
378 Intel Xeon E3-1240L v5 @ 2.10GHz        7793
380 Intel Xeon E3-1265L V2 @ 2.50GHz        7779
381 Intel Xeon E5-2608L v3 @ 2.00GHz        7768
388 Intel Xeon E3-1235 @ 3.20GHz        7680
389 Intel Xeon E3-1220 v5 @ 3.00GHz     7670
397 Intel Xeon E3-1226 v3 @ 3.30GHz     7573
403 Intel Xeon E3-1240L v3 @ 2.00GHz        7487
408 Intel Xeon E5-1410 v2 @ 2.80GHz     7420
414 Intel Xeon E5-2450L @ 1.80GHz       7389
416 Intel Xeon E5-1607 v4 @ 3.10GHz     7352
419 Intel Xeon E5-1410 @ 2.80GHz        7312
424 Intel Xeon E3-1230L v3 @ 1.80GHz        7231
428 Intel Xeon E3-1225 v3 @ 3.20GHz     7184
432 Intel Xeon E5-2420 @ 1.90GHz        7139
437 Intel Xeon E3-1505L v5 @ 2.00GHz        7082
441 Intel Xeon E3-1220 v3 @ 3.10GHz     7023
448 Intel Xeon E5-1607 v3 @ 3.10GHz     6950
449 Intel Xeon E5649 @ 2.53GHz      6936
451 Intel Xeon E5-2609 v4 @ 1.70GHz     6920
453 Intel Xeon E5-2430 @ 2.20GHz        6878
457 Intel Xeon E3-1225 V2 @ 3.20GHz     6841
470 Intel Xeon E3-1220 V2 @ 3.10GHz     6661
476 Intel Xeon E5-2430L v2 @ 2.40GHz        6627
483 Intel Xeon E3-1260L @ 2.40GHz       6534
484 Intel Xeon E5645 @ 2.40GHz      6528
499 Intel Xeon E5-1603 v4 @ 2.80GHz     6420
516 Intel Xeon E5-1607 v2 @ 3.00GHz     6148
521 Intel Xeon E3-1235L v5 @ 2.00GHz        6122
525 Intel Xeon E3-1220 @ 3.10GHz        6103
526 Intel Xeon E5-1603 v3 @ 2.80GHz     6102
530 Intel Xeon E3-1265L @ 2.40GHz       6038
536 Intel Xeon E5-2609 v3 @ 1.90GHz     5940
539 Intel Xeon E3-1225 @ 3.10GHz        5918
550 Intel Xeon E5-1607 @ 3.00GHz        5838
588 Intel Xeon E5-1603 @ 2.80GHz        5528
596 Intel Xeon E5-2603 v4 @ 1.70GHz     5473
623 Intel Xeon E5640 @ 2.67GHz      5309
630 Intel Xeon E5-2418L @ 2.00GHz       5202
637 Intel Xeon E5-2603 v3 @ 1.60GHz     5140
642 Intel Xeon E5630 @ 2.53GHz      5109
644 Intel Xeon E5-2609 v2 @ 2.50GHz     5091
661 Intel Xeon E5-4603 @ 2.00GHz        5014
685 Intel Xeon E5620 @ 2.40GHz      4860
702 Intel Xeon E5540 @ 2.53GHz      4778
728 Intel Xeon E5-2609 @ 2.40GHz        4625
731 Intel Xeon E5530 @ 2.40GHz      4605
739 Intel Xeon E5-2407 v2 @ 2.40GHz     4575
758 Intel Xeon E3-1220L V2 @ 2.30GHz        4478
760 Intel Xeon E5520 @ 2.27GHz      4447
804 Intel Xeon E5450 @ 3.00GHz      4224
822 Intel Xeon E5472 @ 3.00GHz      4139
852 Intel Xeon E5440 @ 2.83GHz      3985
870 Intel Xeon E5462 @ 2.80GHz      3899
883 Intel Xeon E5-2407 @ 2.20GHz        3832
895 Intel Xeon E5430 @ 2.66GHz      3787
905 Intel Xeon E5-2603 v2 @ 1.80GHz     3766
949 Intel Xeon E5-2603 @ 1.80GHz        3570
952 Intel Xeon E3-1220L @ 2.20GHz       3563
957 Intel Xeon E5420 @ 2.50GHz      3534
958 Intel Xeon E5-2403 @ 1.80GHz        3532
965 Intel Xeon E5-2403 v2 @ 1.80GHz     3492
993 Intel Xeon E5607 @ 2.27GHz      3398
1024    Intel Xeon E5410 @ 2.33GHz      3266
1067    Intel Xeon E5507 @ 2.27GHz      3144
1071    Intel Xeon E7- 2830 @ 2.13GHz       3118
1096    Intel Xeon E5606 @ 2.13GHz      3040
1111    Intel Xeon E5506 @ 2.13GHz      2987
1127    Intel Xeon E5345 @ 2.33GHz      2934
1149    Intel Xeon E5405 @ 2.00GHz      2874
1197    Intel Xeon E5504 @ 2.00GHz      2724
1264    Intel Xeon E5335 @ 2.00GHz      2513
1289    Intel Xeon E3113 @ 3.00GHz      2427
1291    Intel Xeon E5240 @ 3.00GHz      2424
1296    Intel Xeon E7320 @ 2.13GHz      2413
1312    Intel Xeon E5603 @ 1.60GHz      2366
1349    Intel Xeon E5320 @ 1.86GHz      2279
1357    Intel Xeon E5310 @ 1.60GHz      2264
1367    Intel Xeon E3120 @ 3.16GHz      2241
1391    Intel Xeon E3110 @ 3.00GHz      2169
1415    Intel Xeon E3-1220L v3 @ 1.10GHz        2110
1739    Intel Xeon E5205 @ 1.86GHz      1401
1743    Intel Xeon E5503 @ 2.00GHz      1396
1753    Intel Xeon E5502 @ 1.87GHz      1375

互联网不安全

As soon as a port is opened to the public internet it gets scanned and discovered almost immediately, and is then abused, for example as an open proxy or for DDoS traffic. The log below is from a proxy listening on port 9000:

2020/03/29 15:43:16 http.go:161: [http] 87.19.135.195:57592 -> auto://:9000 -> line.omnia3.com:25242
2020/03/29 15:43:16 http.go:251: [route] 87.19.135.195:57592 -> auto://:9000 -> line.omnia3.com:25242
2020/03/29 15:43:16 http.go:304: [http] 87.19.135.195:57592 <-> line.omnia3.com:25242
2020/03/29 15:43:16 http.go:306: [http] 87.19.135.195:57592 >-< line.omnia3.com:25242
2020/03/29 15:43:16 http.go:161: [http] 87.19.135.195:57593 -> auto://:9000 -> line.omnia3.com:25242
2020/03/29 15:43:16 http.go:251: [route] 87.19.135.195:57593 -> auto://:9000 -> line.omnia3.com:25242
2020/03/29 15:43:16 http.go:304: [http] 87.19.135.195:57593 <-> line.omnia3.com:25242
2020/03/29 15:43:16 http.go:306: [http] 87.19.135.195:57593 >-< line.omnia3.com:25242
2020/03/29 15:43:16 http.go:161: [http] 95.216.10.237:12244 -> auto://:9000 -> mbasic.facebook.com:443
2020/03/29 15:43:16 http.go:251: [route] 95.216.10.237:12244 -> auto://:9000 -> mbasic.facebook.com:443
2020/03/29 15:43:16 http.go:304: [http] 95.216.10.237:12244 <-> mbasic.facebook.com:443
2020/03/29 15:43:16 http.go:161: [http] 87.19.135.195:57596 -> auto://:9000 -> line.omnia3.com:25242
2020/03/29 15:43:16 http.go:251: [route] 87.19.135.195:57596 -> auto://:9000 -> line.omnia3.com:25242
2020/03/29 15:43:16 http.go:304: [http] 87.19.135.195:57596 <-> line.omnia3.com:25242
2020/03/29 15:43:17 socks.go:888: [socks5] 37.248.153.8:25111 -> auto://:9000 -> www.google.com:80
2020/03/29 15:43:17 socks.go:940: [route] 37.248.153.8:25111 -> auto://:9000 -> www.google.com:80
2020/03/29 15:43:17 socks.go:975: [socks5] 37.248.153.8:25111 <-> www.google.com:80
2020/03/29 15:43:17 http.go:306: [http] 87.19.135.195:57596 >-< line.omnia3.com:25242
2020/03/29 15:43:17 http.go:161: [http] 87.19.135.195:57598 -> auto://:9000 -> line.omnia3.com:25242
2020/03/29 15:43:17 http.go:251: [route] 87.19.135.195:57598 -> auto://:9000 -> line.omnia3.com:25242
2020/03/29 15:43:17 http.go:304: [http] 87.19.135.195:57598 <-> line.omnia3.com:25242
2020/03/29 15:43:17 http.go:306: [http] 87.19.135.195:57598 >-< line.omnia3.com:25242
2020/03/29 15:43:17 http.go:161: [http] 87.19.135.195:57600 -> auto://:9000 -> line.omnia3.com:25242
2020/03/29 15:43:17 http.go:251: [route] 87.19.135.195:57600 -> auto://:9000 -> line.omnia3.com:25242
2020/03/29 15:43:17 http.go:304: [http] 87.19.135.195:57600 <-> line.omnia3.com:25242
2020/03/29 15:43:17 http.go:161: [http] 176.74.16.185:61936 -> auto://:9000 -> fulltime-league.thefa.com:80
2020/03/29 15:43:17 http.go:251: [route] 176.74.16.185:61936 -> auto://:9000 -> fulltime-league.thefa.com:80
2020/03/29 15:43:17 http.go:304: [http] 176.74.16.185:61936 <-> fulltime-league.thefa.com:80

put a program in jail

思路是:

  1. 创建namespace
  2. 创建veth pair, 一个放到namespace里面, 一个在默认的root namespace
  3. 两个veth都配上ip
  4. root namespace的iptables上添加规则。
  5. 从veth捕捉来自namespace中的流量(tcpdump示例见下面的命令之后)
sudo ip netns add test
sudo ip link add vetha type veth peer name vethb
sudo ip link set vethb netns test
sudo ip netns exec test ip addr add 10.8.8.2/24 dev vethb
sudo ip netns exec test ip link set vethb up
sudo ip netns exec test ip link set lo up
sudo ip netns exec test ip route add default via 10.8.8.1 dev vethb
sudo ip addr add 10.8.8.1/24 dev vetha
sudo ip link set vetha up

sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -s 10.8.8.0/24 ! -o vetha -j MASQUERADE
sudo iptables -A FORWARD -i vetha -j ACCEPT
sudo iptables -A FORWARD -o vetha -j ACCEPT
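
Step 5, capturing the namespace's traffic, can be done on the root-namespace end of the veth pair with tcpdump; a minimal sketch (the pcap file name is arbitrary):

sudo tcpdump -i vetha -n                        # watch the namespace's traffic live
sudo tcpdump -i vetha -n -w ns-traffic.pcap     # or save it to a pcap file for later analysis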

reexec [7] is part of the Docker codebase and provides a convenient way for an executable to “re-exec” itself.

cmd := exec.Command("/bin/echo", "Process already running")
cmd.SysProcAttr = &syscall.SysProcAttr{
    Cloneflags: syscall.CLONE_NEWUTS,
}
cmd.Run()

This is the usual way of launching a process, and the problem is that once cmd.Run() is called the new namespaces are cloned and /bin/echo starts executing in them straight away. There is no hook that would let us run our own code after the namespaces have been created but before the process starts (for example, to set a new hostname). This is where reexec comes in.

[1]capture all packets inside namespace https://blogs.igalia.com/dpino/2016/04/10/network-namespaces/
[2]create veth pair with go https://medium.com/@teddyking/namespaces-in-go-network-fdcf63e76100
[3]https://pkg.go.dev/github.com/vishvananda/netns?tab=overview
[4]https://github.com/teddyking/ns-process
[5]https://golangnews.org/2020/03/network-namespaces-from-go/
[6]https://www.devdungeon.com/content/packet-capture-injection-and-analysis-gopacket
[7]https://medium.com/@teddyking/namespaces-in-go-reexec-3d1295b91af8

JavaScript

动态插入外部JavaScript代码

var script=document.createElement("script");
script.type="text/javascript";
script.src="http://www.microsoftTranslator.com/ajax/v3/WidgetV3.ashx?siteData=ueOIGRSKkd965FeEGM5JtQ**";
document.getElementsByTagName('head')[0].appendChild(script);

Source: GitLqr, https://juejin.cn/post/6844903496274198542 (juejin)

Learn some jquery

$('li')
$('li').first()
$('li').first().show()
$('ul:first').children()
$('li:first').siblings()
$('li:first').parent()
$(this).siblings().remove()
$(this).siblings().addClass('special')
$('li').closest('.list').addClass('special')
$('.list').find('li').filter('.special').remove()
$('.list').find('.special').remove()

if( $(this).is('.special') ) {

}
if( !$(this).is('.special') ){  // .not() returns a jQuery object, which is always truthy; test with !is() instead

}

$('.sublist li').on('click', function(){
  $(this).hide();
});

kern_levels.h 内核打印级别

定义了内核打印级别

include/linux/kern_levels.h

内核源码树的这个文件定义了内核打印级别 或者参考【github】

#define KERN_EMERG  KERN_SOH "0"    /* system is unusable */
#define KERN_ALERT  KERN_SOH "1"    /* action must be taken immediately */
#define KERN_CRIT   KERN_SOH "2"    /* critical conditions */
#define KERN_ERR    KERN_SOH "3"    /* error conditions */
#define KERN_WARNING    KERN_SOH "4"    /* warning conditions */
#define KERN_NOTICE KERN_SOH "5"    /* normal but significant condition */
#define KERN_INFO   KERN_SOH "6"    /* informational */
#define KERN_DEBUG  KERN_SOH "7"    /* debug-level messages */

#define KERN_DEFAULT    ""      /* the default kernel loglevel */

在内核模块代码使用printk打印信息:

printk(KERN_INFO "EBB: Hello %s from the BBB LKM!\n", name);
查看当前系统的打印级别
cat /proc/sys/kernel/printk

使用sysctl可以达到同样效果

sysctl kernel/printk
4       4       1       7

The first value is console_loglevel, 4 here: only messages with a level numerically lower than this value (that is, of higher priority) are printed to the console.

The second value is default_message_loglevel, 4 here: messages printed without an explicit level are assigned this one.

The third value is minimum_console_loglevel, 1 here: the minimum value that console_loglevel can be set to.

The fourth value is default_console_loglevel, 7 here: the default used for console_loglevel at boot time, which is what determines the messages you see during boot.

修改内核的打印级别
echo "7"  > /proc/sys/kernel/printk

使用sysctl可以达到同样效果

sudo sysctl -w kernel.printk=7
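
kernel.printk holds all four values; writing a single number only changes the first one (console_loglevel). To set the whole tuple at once, something like:

sudo sysctl -w kernel.printk="7 4 1 7"
# equivalently
echo "7 4 1 7" | sudo tee /proc/sys/kernel/printk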

编译内核模块

根据《linux内核设计与实现第3版》如何编译内核模块呢?

模块代码

/*
 * fishing.c - The Hello, World: 我们的第一个内核模块
 */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>
/*
 * hello_init - 初始化函数,当模块装载时被调用,
 * 如果成功装载返回零,否则返回非零值
 */
static int hello_init(void)
{
        printk(KERN_ALERT "I bear a charmed life.\n");
        return 0;
}
/*
 * hello_exit - 退出函数,当模块卸载时被调用
 */
static void hello_exit(void)
{
        printk(KERN_ALERT "Out, out, brief candle!\n");
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Shakespeare");
MODULE_DESCRIPTION("A Hello, World Module");

在内核代码树内构建模块【不使用内核配置选项】

模块代码fishing.c放置在linux内核源码的drivers/char/fishing/fishing.c路径下。
模块的Makefile仅有一句,文件放置在linux内核源码的drivers/char/fishing/Makefile路径下
obj-m += fishing.o

修改drivers/char/Makefile,文章末尾添加

#add by fishing module according to book
obj-m                           += fishing/

这个时候就可以执行make了。make会自动进入fishing目录,编译出fishing.ko.

[root@localhost linux]# ls -al drivers/char/fishing/fishing.ko
-rw-r--r--. 1 root root 58336 Apr  8 23:26 drivers/char/fishing/fishing.ko

其实可以单独编译fishing模块,也可以单独clean这个模块

make drivers/char/fishing/fishing.ko
make drivers/char/fishing/fishing.ko clean

安装模块,卸载模块

insmod drivers/char/fishing/fishing.ko
rmmod drivers/char/fishing/fishing.ko

可以在dmesg中看到打印

[536435.653281] fishing: module verification failed: signature and/or required key missing - tainting kernel
[536435.655464] I bear a charmed life.
[536808.157762] Out, out, brief candle!

在内核代码树内构建模块【使用编译选项】

The module source fishing.c goes under drivers/char/fishing/fishing.c in the kernel tree, and the module's Makefile, which contains a single line, goes at drivers/char/fishing/Makefile.

obj-$(CONFIG_FISHING_POLE) += fishing.o

修改上一级模块的Makefile,即drivers/char/Makefile

obj-$(CONFIG_FISHING_POLE)      += fishing/

修改上一级模块的Kconfig,即drivers/char/Kconfig,添加:

source "drivers/char/fishing/Kconfig"

详细请查看

diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig
index e2e66a40c7f2..73f53caa2dd8 100644
--- a/drivers/char/Kconfig
+++ b/drivers/char/Kconfig
@@ -591,6 +591,7 @@ config TILE_SROM
      how to partition a single ROM for multiple purposes.

 source "drivers/char/xillybus/Kconfig"
+source "drivers/char/fishing/Kconfig"

 endmenu

diff --git a/drivers/char/Makefile b/drivers/char/Makefile
index bfb988a68c7a..169455628796 100644
--- a/drivers/char/Makefile
+++ b/drivers/char/Makefile
@@ -62,3 +62,7 @@ obj-$(CONFIG_XILLYBUS)        += xillybus/
 obj-$(CONFIG_POWERNV_OP_PANEL) += powernv-op-panel.o

 obj-$(CONFIG_CRASH)            += crash.o
+
+#add by fishing module accroding to book
+#obj-m             += fishing/
+obj-$(CONFIG_FISHING_POLE) += fishing/
diff --git a/drivers/char/fishing/Kconfig b/drivers/char/fishing/Kconfig
new file mode 100644
index 000000000000..68560cda570d
--- /dev/null
+++ b/drivers/char/fishing/Kconfig
@@ -0,0 +1,10 @@
+config FISHING_POLE
+       tristate "Fish Master 3000 support"
+       default m
+       help
+          If you say Y here, support for the Firsh Master 3000 with computer
+               interface will be compiled into the kernel and accessible via a device
+               node. You can also say M here and the driver will be built as a module named fishing.ko
+
+               if unsure, say N
+
diff --git a/drivers/char/fishing/Makefile b/drivers/char/fishing/Makefile
new file mode 100644
index 000000000000..35e53bd6a136
--- /dev/null
+++ b/drivers/char/fishing/Makefile
@@ -0,0 +1,2 @@
+#obj-m += fishing.o
+obj-$(CONFIG_FISHING_POLE) += fishing.o

在内核代码树外构建模块

模块代码放置在/home/201-code/fishing/fishing.c路径下。
模块的Makefile仅有一句,放置在/home/201-code/fishing/Makefile路径下。
obj-m += fishing.o

编译模块

cd /home/201-code/fishing
make -C ../linux SUBDIRS=$PWD modules

../linux是源码树的路径
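
Newer kernels have removed the SUBDIRS= variable; the equivalent out-of-tree build uses M= instead (a sketch, shown here against the running kernel's build directory rather than ../linux):

make -C /lib/modules/$(uname -r)/build M=$PWD modules
make -C /lib/modules/$(uname -r)/build M=$PWD clean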

设置模块参数

module_param(name, charp, S_IRUGO); ///< Param desc. charp = char ptr, S_IRUGO can be read/not changed

The first argument is the variable name.

The second argument is the parameter type, a char pointer here; the available types are byte, int, uint, long, ulong, short, ushort, bool, invbool and charp.

The third argument is the permission of the parameter's file under /sys/module/<module>/parameters/. S_IRUGO is 0444: readable by user, group and others, not writable.
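
A minimal sketch of declaring such a parameter in a module (matching the insmod example below; the default value is made up):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/stat.h>

static char *name = "world";                 /* shows up as /sys/module/<module>/parameters/name */
module_param(name, charp, S_IRUGO);          /* S_IRUGO = 0444: world-readable, not writable */
MODULE_PARM_DESC(name, "Name to print from the module");

static int __init param_demo_init(void)
{
        printk(KERN_INFO "param demo: name=%s\n", name);
        return 0;
}

static void __exit param_demo_exit(void)
{
        printk(KERN_INFO "param demo: exit\n");
}

module_init(param_demo_init);
module_exit(param_demo_exit);
MODULE_LICENSE("GPL");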

插入模块时指定参数

sudo insmod hello.ko name=3232323232323

查看当前运行的模块的办法

lsmod | grep hello
cat /proc/modules
ls -l /sys/module/ | grep hello

问题 modprobe ko not found

modprobe drivers/char/fishing/fishing.ko
modprobe: FATAL: Module drivers/char/fishing/fishing.ko not found.

The reason is that modprobe only looks for .ko files under /lib/modules/$(uname -r)/ and relies on the modules.dep index generated by depmod there. Copying the .ko into that directory is not enough by itself; run depmod (or make modules_install, which installs the module and refreshes the index), after which the module can be loaded by name with modprobe.

问题:version magic

自行insmod时出现version magic的问题

sudo insmod drivers/char/fishing/fishing.ko
[Sun Jun 23 10:26:52 2019] fishing: version magic '4.15.18-dirty SMP preempt mod_unload aarch64' should be '4.15.0-29-generic SMP mod_unload aarch64'
me@ubuntu:~/code/linux$ modinfo drivers/char/fishing/fishing.ko
filename:       /home/me/code/linux/drivers/char/fishing/fishing.ko
description:    A Hello, World Module
author:         Shakespeare
license:        GPL
depends:
intree:         Y
name:           fishing
vermagic:       4.15.18-dirty SMP preempt mod_unload aarch64

模块在代码树外的解决办法:

ubuntu 18.04

make -C /lib/modules/4.15.0-29-generic/build SUBDIRS=$PWD modules

设备驱动模块编写,可以参考 编写linux设备驱动

内核与用户空间通信的方式


  • Procfs
  • Sysfs
  • Debugfs
  • Sysctl
  • Socket
    • UDP Sockets
    • Netlink Sockets
[1]http://wiki.tldp.org/kernel_user_space_howto

LDD3 linux设备驱动

主设备号,次设备号

主设备号一般用来标识设备的驱动。 次设备号内核用来区分实际设备。

printk 不支持浮点数

The printk function used in hello.c earlier, for example, is the version of printf
defined within the kernel and exported to modules. It behaves similarly to the original
function, with a few minor differences, the main one being lack of floating-point support.

内核中和驱动相关的重要数据结构

  1. file_operations
struct file_operations

file_operations是文件操作的抽象,注册了用户态对文件的读写,在内核是由哪些函数具体实现的。

struct file_operations {
	struct module *owner;
	loff_t (*llseek) (struct file *, loff_t, int);
	ssize_t (*read) (struct file *, char __user *, size_t, loff_t *);
	ssize_t (*write) (struct file *, const char __user *, size_t, loff_t *);
	ssize_t (*read_iter) (struct kiocb *, struct iov_iter *);
	ssize_t (*write_iter) (struct kiocb *, struct iov_iter *);
	int (*iopoll)(struct kiocb *kiocb, bool spin);

这里仅节选了一部分,完整请查看文件 本站fs.h github fs.h

用户态的read调用 [1]

#include <unistd.h>

/** 尝试从文件fd读取count字节到buf
*/
ssize_t read(int fd, void *buf, size_t count);

内核态的read 实现

/** @brief 文件的读函数
 *  @file 第一个参数,内核态代表打开的文件,不会在用户态出现,也是重要的数据结构的第三个。
 *  @_user 第二个参数,用户态中程序的地址或者是libc中的地址
 *  @size_t 第三个参数,用户态中的地址的大小
 *  @loff_t 第四个参数,内核态打开文件的文件的偏移量
 */
ssize_t (*read) (struct file *, char __user *, size_t, loff_t *);
  2. inode
struct inode

代表内核内的任意一个文件

struct inode {
	umode_t			i_mode;
	unsigned short		i_opflags;
	kuid_t			i_uid;
	kgid_t			i_gid;
	unsigned int		i_flags;
	dev_t			i_rdev;
	loff_t			i_size;
	struct timespec64	i_atime;
	struct timespec64	i_mtime;
	struct timespec64	i_ctime;
	spinlock_t		i_lock;	/* i_blocks, i_bytes, maybe i_size */
	unsigned short          i_bytes;
	u8			i_blkbits;
	u8			i_write_hint;
	blkcnt_t		i_blocks;
	union {
		struct pipe_inode_info	*i_pipe;
		struct block_device	*i_bdev;
		struct cdev		*i_cdev;
		char			*i_link;
		unsigned		i_dir_seq;
	};

  3. file
struct file

代表内核中每一个被打开的文件, 而inode节点代表每一个存在的文件, 很多打开的文件可以指向同一个inode

struct file {
	union {
		struct llist_node	fu_llist;
		struct rcu_head 	fu_rcuhead;
	} f_u;
	struct path		f_path;
	struct inode		*f_inode;	/* cached value */
	const struct file_operations	*f_op;

	/*
	 * Protects f_ep_links, f_flags.
	 * Must not be taken from IRQ context.
	 */
	spinlock_t		f_lock;
	enum rw_hint		f_write_hint;
	atomic_long_t		f_count;
	unsigned int 		f_flags;
	fmode_t			f_mode;
	struct mutex		f_pos_lock;
	loff_t			f_pos;
	struct fown_struct	f_owner;
	const struct cred	*f_cred;
	struct file_ra_state	f_ra;

创建字符设备文件有两种方式

  1. 使用 mknod,然后可以直接用rm删除
mknod  filename type  major  minor

    filename :  设备文件名
    type        :  设备文件类型
    major      :   主设备号
    minor      :   次设备号
  2. 在代码中手动创建
struct class *cdev_class;
cdev_class = class_create(owner,name)
device_create(_cls,_parent,_devt,_device,_fmt)

device_destroy(_cls,_device)
class_destroy(struct class * cls)

Dereferencing a NULL pointer usually results in an oops.

使用/proc文件系统

The functions used in LDD3 are fairly old, so the example here follows the Linux Kernel Workbook instead [2]

  1. 首先在驱动中实现接口

LDD3 uses read_proc, but read_proc can no longer be found in proc_fs.h; the header now defines struct proc_ops, whose read callback is proc_read [3]

//已经弃用
int (*read_proc)(char *page, char **start, off_t offset, int count, int *eof, void *data);

//现使用
ssize_t     (*proc_read)(struct file *, char __user *, size_t, loff_t *);
  2. 仍然需要创建file_operations
struct file_operations proc_fops = {
    .read = my_proc_read,
    .write = my_proc_write,
};
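
Note that on kernels 5.6 and later proc_create_data() takes a struct proc_ops rather than a struct file_operations; a sketch using the same handlers would be:

static const struct proc_ops my_proc_ops = {
    .proc_read  = my_proc_read,
    .proc_write = my_proc_write,
};
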
  3. 创建/proc下的文件。
//已经弃用
struct proc_dir_entry *create_proc_read_entry(const char *name,mode_t mode,
                                            struct proc_dir_entry *base,
                                            read_proc_t *read_proc,
                                            void *data);

//现在使用接口
extern struct proc_dir_entry *proc_create_data(const char *, umode_t,
                    struct proc_dir_entry *,
                    const struct proc_ops *,
                    void *);
//删除文件
remove_proc_entry("scullmem", NULL /* parent dir */);
  4. 一个完整的例子 [4]
static int __init initialization_function(void){
    struct proc_dir_entry *ret = NULL;
    printk("%s: installing module\n", modname);
    ret = proc_create_data(modname, 0666, NULL, &proc_fops, NULL);
    if(!ret) printk("useproc error\n");
    return 0;
}
static void __exit deinitialization_function(void){
    remove_proc_entry(modname, NULL);
    printk("%s, removing..\n",modname);
}

插入模块窗口

user1@ubuntu:~/fish_kernel_module/proc_module$ sudo insmod useproc.ko
user1@ubuntu:~/fish_kernel_module/proc_module$
user1@ubuntu:~/fish_kernel_module/proc_module$ sudo lsmod | grep useproc
useproc               114688  0
user1@ubuntu:~/fish_kernel_module/proc_module$ sudo dmesg | tail
[18467.056449] useproc: installing module
[18492.543355] msg has been save: kkkkkkkkkkkkkk
[18494.560928] read argument: 00000000d6613f3c, 00000000df21919d, 256, 0
[18494.560930] read data:kkkkkkkkkkkkkk

用户态程序测试窗口:

user1@ubuntu:~/fish_kernel_module/proc_module$ sudo ./test_proc_module.out
[sudo] password for user1:
Starting device test code example...
Type in a short string to send to the kernel module:
kkkkkkkkkkkkkkk
Writing message to the device [kkkkkkkkkkkkkkk].
Press ENTER to read back from the device...

Reading from the device...
The received message is: [kkkkkkkkkkkkkk]
End of the program
[1]http://man7.org/linux/man-pages/man2/read.2.html
[2]https://lkw.readthedocs.io/en/latest/doc/05_proc_interface.html
[3]https://github.com/torvalds/linux/blob/master/include/linux/proc_fs.h
[4]https://github.com/LyleLee/fish_kernel_module/tree/master/proc_module

信号量实现

信号量实现临界区互斥

include/linux/semaphore.h
/* Please don't access any members of this structure directly */
struct semaphore {
    raw_spinlock_t          lock;
    unsigned int            count;
    struct list_head        wait_list;
};
include/linux/rwsem.h
struct rw_semaphore {
    atomic_long_t count;
    atomic_long_t owner;
    struct optimistic_spin_queue osq; /* spinner MCS lock */
    raw_spinlock_t wait_lock;
    struct list_head wait_list;
};

extern void down_read(struct rw_semaphore *sem); //申请锁
extern int down_read_trylock(struct rw_semaphore *sem);
extern void up_read(struct rw_semaphore *sem);

completion

A completion provides a mechanism for waiting until a condition becomes true: one task sleeps in wait_for_completion() and another task signals it with complete().
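
A minimal kernel-code sketch of the completion API from <linux/completion.h> (the function names are made up):

#include <linux/completion.h>
#include <linux/printk.h>

static DECLARE_COMPLETION(data_ready);

/* waiter: sleeps until some other context calls complete() */
static int wait_for_data(void *arg)
{
        wait_for_completion(&data_ready);
        pr_info("data is ready\n");
        return 0;
}

/* producer: wakes up one task sleeping on the completion */
static void notify_data_ready(void)
{
        complete(&data_ready);
}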

leetcode 题目

Week  Topics                                          Difficulty  LeetCode            Dates
1     Bit manipulation, arrays                        Medium      27/31/48/56         9/9~9/15
2     Strings                                         Medium      3/5/151             9/16~9/22
3     Sorting (quick, insertion, heap)                Medium      324/215/373         9/23~9/29
4     Heap, stack, queue                              Medium      20/224/347          10/8~10/13
5     Linked lists (singly/doubly), hash tables       Medium      19/92/82            10/14~10/20
6     Two pointers, sliding window                    Medium      424/480/567         10/21~10/27
7     Trees, binary trees, tries                      Medium      220/652/919/102     10/28~11/3
8     Graph algorithms (directed/undirected)          Medium      1129/802/399/1161   11/4~11/10
9     Recursion, iteration                            Medium      698/779/794/894     11/11~11/17
10    DFS and BFS                                     Medium      113/417/542/200     11/18~11/24
11    Greedy algorithms                               Medium      621/842/881         11/25~12/1
12    Dynamic programming                             Medium      808/838/983/1039    12/2~12/8
13    Divide and conquer (binary search, merge sort)  Medium      215/240/932/973     12/9~12/15

  滑动窗口的位置                单调递减队列    最大值
---------------                                 -----
[1] 3  -1  -3  5  3  6  7      [1         ]       -
[1  3] -1  -3  5  3  6  7      [3         ]       -
[1  3  -1] -3  5  3  6  7      [3, -1     ]       3
 1 [3  -1  -3] 5  3  6  7      [3, -1, -3 ]       3
 1  3 [-1  -3  5] 3  6  7      [5,        ]       5
 1  3  -1 [-3  5  3] 6  7      [5,  3     ]       5
 1  3  -1  -3 [5  3  6] 7      [6,        ]       6
 1  3  -1  -3  5 [3  6  7]     [7         ]       7

Source: LeetCode, https://leetcode-cn.com/problems/sliding-window-maximum

i =4

k = 3
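
A minimal C sketch of this monotonically decreasing deque (LeetCode 239), run on the same input as the diagram above:

#include <stdio.h>

/* Sliding-window maximum: deque holds indices, front is the current maximum. */
static void max_sliding_window(const int *nums, int n, int k, int *out)
{
    int deque[n];
    int head = 0, tail = 0;                     /* valid range is [head, tail) */
    for (int i = 0; i < n; i++) {
        if (head < tail && deque[head] <= i - k)                  /* drop indices that left the window */
            head++;
        while (head < tail && nums[deque[tail - 1]] <= nums[i])   /* keep the stored values decreasing */
            tail--;
        deque[tail++] = i;
        if (i >= k - 1)
            out[i - k + 1] = nums[deque[head]];
    }
}

int main(void)
{
    int nums[] = {1, 3, -1, -3, 5, 3, 6, 7};
    int out[6];
    max_sliding_window(nums, 8, 3, out);
    for (int i = 0; i < 6; i++)
        printf("%d ", out[i]);                  /* prints: 3 3 5 5 6 7 */
    printf("\n");
    return 0;
}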

二叉搜索树 https://www.cnblogs.com/gaochundong/p/binary_search_tree.html

nodeCount:7
i:2 before leaf: 0x603000000220, 4
i:2 add leaf: 0x603000000220, 4
i:2 add leaf: 0x603000000220, 4
i:5 before leaf: 0x6030000003d0, 4
leaf: 0x6030000003d0, 4
i:6 before leaf: 0x603000000340, 4
leaf: 0x603000000340, 4

贪心算法 搜索 网易公开课, 麻省理工学院公开课 算法导论。

静态库和动态库

编译静态库

下载以下文件到任意目录

make

生成的lib_mylib.a是静态链接库,生成的driver是静态链接的目标文件。
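
The files themselves are only linked from the original page; as a sketch, the make step typically amounts to (assuming lib_mylib.c defines fun() and lib_mylib.h declares it):

gcc -c lib_mylib.c -o lib_mylib.o      # compile the library source to an object file
ar rcs lib_mylib.a lib_mylib.o         # pack the object file into the static archive
gcc -o driver driver.c -L. -l_mylib    # link driver against lib_mylib.a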

使用静态库的方法: 把lib_mylib.a和lib_mylib.h拷贝到任意主机,在源文件中include lib_mylib.h即可使用fun函数 如driver.c:

#include "lib_mylib.h"

void main()
{
        fun();
}

编译命令

gcc -o driver driver.c -L. -l_mylib

-L. tells the linker that lib_mylib.a is in the current directory, and -l_mylib makes it look for lib_mylib.a in the directories given with -L.

编译动态库

下载以下文件到任意目录

make
export LD_LIBRARY_PATH=$(pwd):$LD_LIBRARY_PATH
./app

生成的liblowcase.so是动态链接库。 生成的app是动态链接文件。使用ldd可以看到app有应用当前的路径。

[me@centos share_lib]$ ldd app
        linux-vdso.so.1 =>  (0x0000ffff8ef60000)
        liblowcase.so => /home/me/gsrc/share_lib/liblowcase.so (0x0000ffff8ef20000)
        libc.so.6 => /lib64/libc.so.6 (0x0000ffff8ed70000)
        /lib/ld-linux-aarch64.so.1 (0x0000ffff8ef70000)
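
The sources are again only linked from the page; a sketch of what the make step typically does (assuming the library source is lowcase.c and the program is app.c):

gcc -fPIC -c lowcase.c -o lowcase.o      # position-independent code, required for a shared object
gcc -shared lowcase.o -o liblowcase.so   # link it into the shared library
gcc app.c -L. -llowcase -o app           # link the program against liblowcase.so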

To use the shared library, put liblowcase.so in some directory, compile against it by giving the path and library name, and set LD_LIBRARY_PATH: by default the dynamic loader does not search ../code/, so the path has to be given explicitly.

gcc call_dynamic.c -L ../code/ -llowcase -o call
export LD_LIBRARY_PATH=../code/
./call

gcc 指定库文件和头文件

“-I”(大写i),“-L”(大写l),“-l”(小写l)的区别 我们用gcc编译程序时,可能会用到“-I”(大写i),“-L”(大写l),“-l”(小写l)等参数,下面做个记录:

例:

gcc -o hello hello.c -I /home/hello/include -L /home/hello/lib -lworld
-I /home/hello/include表示将/home/hello/include目录作为第一个寻找头文件的目录, 寻找的顺序是:/home/hello/include–>/usr/include–>/usr/local/include
-L

/home/hello/lib表示将/home/hello/lib目录作为第一个寻找库文件的目录,

寻找的顺序是:/home/hello/lib–>/lib–>/usr/lib–>/usr/local/lib

-lworld

表示在上面的lib的路径中寻找libworld.so动态库文件

(如果gcc编译选项中加入了“-static”表示寻找libworld.a静态库文件)

LD_PRELOAD
用法LD_PRELOAD=/usr/local/zlib/lib/libz.so.1.2.7 ./lzbench -ezlib silesia.tar 在加载系统的libz之前先加载自定义的so
LD_LIBRARY_PATH
添加程序执行时的搜索路径。 用法export LD_LIBRARY_PATH=/usr/local/zlib/lib/ 程序执行时从这里搜索动态库
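
A classic use of LD_PRELOAD is interposing on a libc function; a minimal sketch (preload.c is a hypothetical file):

/* preload.c
 * build: gcc -shared -fPIC preload.c -o libpreload.so -ldl
 * run:   LD_PRELOAD=./libpreload.so ./your_program
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <dlfcn.h>

/* Our puts() is resolved before libc's; log the call, then forward it. */
int puts(const char *s)
{
    int (*real_puts)(const char *) = (int (*)(const char *))dlsym(RTLD_NEXT, "puts");
    fprintf(stderr, "[preload] puts(\"%s\")\n", s);
    return real_puts(s);
}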

gcc -l参数和-L参数

-l参数就是用来指定程序要链接的库,-l参数紧接着就是库名,那么库名跟真正的库文件名有什么关系呢? 就拿数学库来说,他的库名是m,他的库文件名是libm.so,很容易看出,把库文件名的头lib和尾.so去掉就是库名了。

如何让gcc在生成动态链接库的时候静态链接glibc

$ gcc -fPIC -shared reload.c -o reload.so -nostdlib
$ ldd reload.so
statically linked

参考资料: https://www.bytelang.com/article/content/d3t3i7VmN2g=

编写linux设备驱动

hello

模块的作用,打印信息,分配一段内存空间并打印首地址 [1]

字符设备驱动

一个字符设备驱动,注册一个字符设备,使用用户态程序对设备进行读写 [2]

[1]内核模块 https://github.com/LyleLee/fish_kernel_module/tree/master/fishing
[2]字符设备驱动 https://github.com/LyleLee/exploringBB/tree/version2/extras/kernel/ebbchar

linux资料集合

内核数据结构

linux数据结构主要来自《linux设备驱动》第三版 第11章

https://static.lwn.net/images/pdf/LDD3/ch11.pdf

完整的书在

http://www.makelinux.net/ldd3/

中文版

https://www.w3cschool.cn/fwiris/e2dy8ozt.html

书中涉及的源码:

【基于书中原版,在新内核中可用】

【书中原版,linux老内核可用】

卡内基·梅隆大学 计算机课程

【计算机系统介绍】

  • 使用教材:Computer Systems: A Programmer’s Perspective, Second Edition
  • 关于程序链接

课程:Computer Architecture Spring 2015

圣路易斯大学 计算机课程

完整课程与教学课件:https://cs.slu.edu/~fritts/CSCI224_S15/

使用教材: Computer Systems: A Programmer’s Perspective, 3/E (CS:APP3e)

教材代码: http://csapp.cs.cmu.edu/3e/code.html

linux内核设计与实现

在X86上,struct thread_info在文件<asm/thread_info.h>中定义.

struct task_struct 在include/linux/sched.h中定义。

struct task_struct {
        volatile long state;    /* -1 unrunnable, 0 runnable, >0 stopped */
        void *stack;
        atomic_t usage;
        unsigned int flags;     /* per process flags, defined below */
        unsigned int ptrace;

        int lock_depth;         /* BKL lock depth */

        struct task_struct *real_parent; /* real parent process */
        struct task_struct *parent; /* recipient of SIGCHLD, wait4() reports */
        /*
         * children/sibling forms the list of my natural children
         */
        struct list_head children;      /* list of my children */
        struct list_head sibling;       /* linkage in my parent's children list */
        struct task_struct *group_leader;       /* threadgroup leader */

创建进程:

fork()  vfork()  __clone()
   +      +          +
   |      |          |
   +-----------------+
       clone()
         +
         + do_fork()
               +
               copy_process()
                    +
                   dup_task_struct()
                       thread_info
                       task_struct

                   copy_flags()
                   alloc_pid()

进程终结: 主要由定义在kernel/exit.c中的do_exit()函数执行

exit()

do_exit()
    PF_EXITING
    acct_update_integrals(tsk);
    exit_mm()
    exit_sem()
    exit_sem(tsk);
    exit_files(tsk);
    exit_fs(tsk);
    exit_notify(tsk, group_dead);
    schedule();

进程的优先级: linux采用了两种不同的优先级范围。一种是nice值。范围-20~+19,默认是0. nice值越小,优先级越高,也就是-20拥有最高优先级。

me@ubuntu:~/code/linux$ ps -el
F S   UID   PID  PPID  C PRI  NI ADDR SZ WCHAN  TTY          TIME CMD
4 S     0     1     0  0  80   0 - 40391 -      ?        00:01:09 systemd
1 S     0     2     0  0  80   0 -     0 -      ?        00:00:00 kthreadd
1 I     0     4     2  0  60 -20 -     0 -      ?        00:00:00 kworker/0:0H
1 I     0     7     2  0  60 -20 -     0 -      ?        00:00:00 mm_percpu_wq
1 S     0     8     2  0  80   0 -     0 -      ?        00:04:04 ksoftirqd/0
1 I     0     9     2  0  80   0 -     0 -      ?        00:07:36 rcu_sched
1 I     0    10     2  0  80   0 -     0 -      ?        00:00:00 rcu_bh
1 S     0    11     2  0 -40   - -     0 -      ?        00:00:00 migration/0
5 S     0    12     2  0 -40   - -     0 -      ?        00:00:07 watchdog/0
1 S     0    13     2  0  80   0 -     0 -      ?        00:00:00 cpuhp/0
1 S     0    14     2  0  80   0 -     0 -      ?        00:00:00 cpuhp/1
5 S     0    15     2  0 -40   - -     0 -      ?        00:00:04 watchdog/1
1 S     0    16     2  0 -40   - -     0 -      ?        00:00:00 migration/1
第二种是实时优先级。范围是0~99,其值可配。和nice值相反,值越大优先级越高。
RTPRIO是-的,表示不是实时进程。
me@ubuntu:~/code/linux$ ps -eo state,uid,pid,ppid,rtprio,time,comm
S   UID   PID  PPID RTPRIO     TIME COMMAND
S     0     1     0      - 00:01:09 systemd
S     0     2     0      - 00:00:00 kthreadd
I     0     4     2      - 00:00:00 kworker/0:0H
I     0     7     2      - 00:00:00 mm_percpu_wq
S     0     8     2      - 00:04:04 ksoftirqd/0
I     0     9     2      - 00:07:38 rcu_sched
I     0    10     2      - 00:00:00 rcu_bh
S     0    11     2     99 00:00:00 migration/0
S     0    12     2     99 00:00:07 watchdog/0
S     0    13     2      - 00:00:00 cpuhp/0
S     0    14     2      - 00:00:00 cpuhp/1
S     0    15     2     99 00:00:04 watchdog/1
S     0    16     2     99 00:00:00 migration/1
S     0    17     2      - 00:02:14 ksoftirqd/1
I     0    19     2      - 00:00:00 kworker/1:0H
S     0    20     2      - 00:00:00 cpuhp/2
S     0    21     2     99 00:00:04 watchdog/2
S     0    22     2     99 00:00:00 migration/2
S     0    23     2      - 00:02:11 ksoftirqd/2
I     0    25     2      - 00:00:00 kworker/2:0H

时间片

时间片是一个数值,它表示进程在被抢占前所能持续运行的时间。

基础的调度代码定义在 kernel/sched.c
CFS算法定义在kernel/sched_fair.c

时间,节拍,系统定时器

<asm/param.h> 定义了节拍率。

系统定时器频率(节拍率),通过静态预处理器定义的。HZ。
x86体系结构中,系统定时器默认频率值是100,时钟中断频率是100HZ。每10ms产生一次(原书) x86体系结构中,系统定时器默认频率值是1000,时钟中断频率是1000HZ。每1ms产生一次(根据下述代码)

include/asm-generic/param.h

#ifdef __KERNEL__
# define HZ             CONFIG_HZ       /* Internal kernel timer frequency */
# define USER_HZ        100             /* some user interfaces are */
# define CLOCKS_PER_SEC (USER_HZ)       /* in "ticks" like times() */
#endif
me@ubuntu:~/code/linux$ grep -rn CONFIG_HZ . | grep x86
./arch/x86/configs/i386_defconfig:340:CONFIG_HZ=1000
./arch/x86/configs/x86_64_defconfig:341:CONFIG_HZ=1000

找了一些设备进行验证

名称 架构 OS 内核版本 时钟中断频率 用户接口时钟频率 log
RH2288 V3 x86_64 RHEL7.6 3.10.0-957.el7.x86_64 1000HZ 100HZ 10ms  
Taishan2280v2 ARM RHEL7.6 4.18.0-74.el8.aarch64 100HZ 100HZ 10ms [log]
Red Hat kvm x86_64 ubuntu 4.15.0-20-generic 250HZ 100HZ 10ms [log]

实际时间

当前时间(墙上时间)定义在文件kernel/time/timekeeping.c中:

struct timespec xtime;

timespec数据结构定义在文件<linux/time.h>中:

struct timespec {
    __kernel_time_t tv_sec;
    long            tv_nsec;
};

竞争和锁

The various lock primitives differ in how they behave when the lock is already held by another thread and is therefore unavailable: some simply busy-wait, while others put the current task to sleep until the lock becomes available. Locks can only resolve race conditions because they are themselves built on atomic operations; on x86, for example, the implementation relies on compare-and-exchange style instructions.

内核提供了两组原子操作接口

一组针对整数进行操作,另一组针对单独的位进行操作.

整数原子操作数据类型定义在include/linux/types.h

typedef struct {
        volatile int counter;
} atomic_t;

整数原子操作定义在:

include/asm-generic/atomic.h
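
A minimal sketch of the integer atomic API as used in kernel code (the reference-count example is made up):

#include <linux/atomic.h>

static atomic_t refcnt = ATOMIC_INIT(1);

static void obj_get(void)
{
        atomic_inc(&refcnt);                    /* atomically ++ */
}

static int obj_put(void)
{
        return atomic_dec_and_test(&refcnt);    /* atomically --, returns true when the count hits 0 */
}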

位原子操作定义在:

include/linux/bitops.h
asm/bitops.h

自旋锁。申请锁的进程旋转等待,耗费处理器时间,持有自旋锁的时间应该小于进程两次上下文切换的时间。 信号量。申请信号量的进程会被睡眠,等待唤醒,不消耗处理器时间。 读写自旋锁。 多个线程可以同时获得读锁,读锁可以递归。写锁会保证没有人能在读或者写。

自旋锁定义在 asm/spinlock.h, 调用结构定义在linux/spinlock.h

内核数据类型

参考文档

进程调度

[Figure] CFS keeps runnable processes in a red-black tree; the leftmost node is the next process to run. Sleeping processes wait on wait queues.

The entry point of the scheduler is schedule(). pick_next_task() finds the next task to run: it walks the scheduler classes from highest to lowest priority, and a class that has already cached its next task returns it directly. CFS is the scheduler class for normal processes. Sleeping processes are handled via wait queues (wait_queue_head_t): DEFINE_WAIT() creates a wait-queue entry, add_wait_queue() adds it to the queue, prepare_to_wait() changes the task state to TASK_INTERRUPTIBLE or TASK_UNINTERRUPTIBLE, and once the condition is met finish_wait() removes the task from the wait queue.

内存泄露

app内存泄露 slab内存泄露 kmalloc

申请4k以上的内存 vmalloc

Allocating memory and not freeing it is not, by itself, a leak; memory that has been allocated but to which no reference remains is a leak.

内存泄露无法实时检测

kmemleak

asan

Memory corruption, on the other hand, can be detected at run time.

SLUB_DEBUG scans the redzone/padding around allocations and checks whether they still hold the original magic values; if not, it prints a call stack. That tells you corruption happened, but not who caused it.

ASan needs compiler support: loads and stores are instrumented (conceptually, ldr/store become sanitized ldr/store), and shadow memory is maintained for each allocation describing whether the corresponding bytes may be accessed, which is why ASan costs extra memory. On every memory access the instrumentation checks whether the address is valid, i.e. something like is_valid_addr(p).
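
A minimal sketch of using ASan from user space with gcc or clang (leak.c is a hypothetical test program):

gcc -g -fsanitize=address leak.c -o leak
./leak                                  # out-of-bounds / use-after-free is reported at the faulting access
ASAN_OPTIONS=detect_leaks=1 ./leak      # LeakSanitizer additionally reports unreachable allocations on exit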

namespace

namespace 是实现进程资源隔离的技术。 目前linux支持的namespace有cgroup_namespaces、ipc_namespaces、network_namespaces、 mount_namespaces、pid_namespaces、time_namespaces、user_namespaces、uts_namespaces。

这里记录一下mount_namespace

sudo unshare -m --propagation unchanged

上述在shell中创建并进入新的mount namespace。 这个时候执行unmount和mount操作,在其他namespace中不会察觉到。
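
A quick way to see the isolation is to mount something inside the new namespace and check from another shell (a sketch; it assumes the default private propagation of unshare -m, and /mnt is just an example mount point):

sudo unshare -m
# now inside the new mount namespace:
mount -t tmpfs tmpfs /mnt
findmnt /mnt          # the mount is visible here

# in another shell, i.e. the original mount namespace:
findmnt /mnt          # prints nothing: the mount did not propagate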

overlay、underlay、大二层网络概念

名词理解了就简单了,主要参考这篇文章 [1]

underlay:也就是承载网。 路由可以扩散, 我添加一个路由, 全网要通告。
overlay:承载网之上的网络。 客户的CE路由器通过承载网创建隧道, 实现两个客户网络路由互通, 客户网络路由不影响承载网网络的路由。通常建立在承载网之上的各种VPN叫做overlay网络。
大二层:在underlay网络之上建立更大的二层网络,实现虚机迁移需求。

Network Virtualization Overlays(NVO3)技术的代表:

overlay 厂家 技术
VxLAN VMware MAC in UDP
NvGRE Microsoft MAC in GRE
STT Nicira MAC in TCP
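
As a concrete overlay example, a point-to-point VXLAN tunnel can be sketched with iproute2 (the VNI, device name and addresses below are made up):

# on host A; host B mirrors this with the addresses swapped
sudo ip link add vxlan42 type vxlan id 42 dev eth0 remote 192.0.2.2 dstport 4789
sudo ip addr add 10.42.0.1/24 dev vxlan42
sudo ip link set vxlan42 up
ping 10.42.0.2        # reaches host B through the VXLAN overlay
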
[1]https://zhuanlan.zhihu.com/p/32486650

raid介绍:


单盘可以配置成raid0. 也可以选择JBOD(直通模式)。

reexec and namespace

在查看namespace的用法时, 发现reexec包比较难理解 [1]

代码仓库:

package main

import (
    "fmt"
    "os"
    "log"
    "os/exec"
    "syscall"

    "github.com/docker/docker/pkg/reexec"
)


func init() {
    log.SetFlags(log.LstdFlags | log.Lshortfile)
    log.Println("run func init()")
    reexec.Register("nsInitialisation", nsInitialisation)
    log.Println("finish reexec.Register()")
    if reexec.Init() {
        log.Println("reexec.init() have been init()")
        os.Exit(0)
    }
    log.Println("run func init() finish")
}

func nsInitialisation() {
    log.Println(">> namespace setup code goes here <<")
    nsRun()
}

func nsRun() {
    cmd := exec.Command("/bin/sh")

    cmd.Stdin = os.Stdin
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr

    cmd.Env = []string{"PS1=-[ns-process]- # "}

    if err := cmd.Run(); err != nil {
        fmt.Printf("Error running the /bin/sh command - %s\n", err)
        os.Exit(1)
    }
}

func main() {
    log.Println("main() begin in first line")
    cmd := reexec.Command("nsInitialisation")
    log.Println("main() construct  reexec.Command()")
    log.Println(cmd.Path)
    log.Println(cmd.Args[0])
    cmd.Stdin = os.Stdin
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr

    cmd.SysProcAttr = &syscall.SysProcAttr{
        Cloneflags: syscall.CLONE_NEWNS |
            syscall.CLONE_NEWUTS |
            syscall.CLONE_NEWIPC |
            syscall.CLONE_NEWPID |
            syscall.CLONE_NEWNET |
            syscall.CLONE_NEWUSER,
        UidMappings: []syscall.SysProcIDMap{
            {
                ContainerID: 0,
                HostID:      os.Getuid(),
                Size:        1,
            },
        },
        GidMappings: []syscall.SysProcIDMap{
            {
                ContainerID: 0,
                HostID:      os.Getgid(),
                Size:        1,
            },
        },
    }

    if err := cmd.Run(); err != nil {
        fmt.Printf("Error running the reexec.Command - %s\n", err)
        os.Exit(1)
    }
}

运行结果是:

user1@intel6248:~/go/src/github.com/Lylelee/ns-process$ ./ns-process
2020/08/28 17:14:46 ns_process.go:16: run func init()
2020/08/28 17:14:46 ns_process.go:18: finish reexec.Register()
2020/08/28 17:14:46 ns_process.go:23: run func init() finish
2020/08/28 17:14:46 ns_process.go:47: main() begin in first line
2020/08/28 17:14:46 ns_process.go:49: main() construct  reexec.Command()
2020/08/28 17:14:46 ns_process.go:50: /proc/self/exe
2020/08/28 17:14:46 ns_process.go:51: nsInitialisation
2020/08/28 17:14:46 ns_process.go:16: run func init()
2020/08/28 17:14:46 ns_process.go:18: finish reexec.Register()
2020/08/28 17:14:46 ns_process.go:27: >> namespace setup code goes here <<
-[ns-process]- # exit
2020/08/28 17:14:50 ns_process.go:20: reexec.init() have been init()

这里解析一下执行过程

./ns-process    执行可执行程序
                注意此时os.Args[0]是./ns-process,不是注册过的名字
func init()     在main参数执行前会执行软件包中的init()函数
        2020/08/28 17:14:46 ns_process.go:16: run func init()
    reexec.Register("nsInitialisation", nsInitialisation)
        2020/08/28 17:14:46 ns_process.go:18: finish reexec.Register()
    if reexec.Init() == false   尝试以os.Args[0]查找并运行注册的函数,失败,因为os.Args[0]是./ns-process,没有注册过
        2020/08/28 17:14:46 ns_process.go:23: run func init() finish
func main()     执行main函数
        2020/08/28 17:14:46 ns_process.go:47: main() begin in first line
    cmd := reexec.Command("nsInitialisation")   构造新命令,设置参数为nsInitialisation
        2020/08/28 17:14:46 ns_process.go:49: main() construct  reexec.Command()
        2020/08/28 17:14:46 ns_process.go:50: /proc/self/exe    将要执行的命令是自己,也就是当前进程 ns-process ?
        2020/08/28 17:14:46 ns_process.go:51: nsInitialisation  将要执行进程的参数
    cmd.run()
        /proc/self/exec nsInitialisation 执行命令
        func init()     在main参数执行前会执行软件包中的init()函数
                2020/08/28 17:14:46 ns_process.go:16: run func init()
            reexec.Register("nsInitialisation", nsInitialisation)   进程中也是需要register的
                2020/08/28 17:14:46 ns_process.go:18: finish reexec.Register()
            if reexec.Init() == true   尝试运行新注册的nsInitialisation,成功,因为os.Args[0]已经设置为nsInitialisation,查看上一句 cmd := reexec.Command("nsInitialisation")

                func nsInitialisation() {
                    func nsRun() {

                        2020/08/28 17:14:46 ns_process.go:27: >> namespace setup code goes here <<

                        cmd := exec.Command("/bin/sh") 进入命令行


                        -[ns-process]- # exit

                2020/08/28 17:14:50 ns_process.go:20: reexec.init() have been init() 执行注册的函数成功

It took a while to understand how the program executes itself again. The trick is that, on startup, the program checks whether os.Args[0] matches a registered name; if it does, it runs the function registered under that name. If os.Args[0] is not set to a registered name, reexec.Init() simply returns false and execution continues normally.

So on the first run from the command line, os.Args[0] is just the program path, the lookup of a registered function fails, and main() runs. main() builds a command with reexec.Command(), which sets Args[0] to the registered name, and then calls Run(); at that point the process re-executes itself via /proc/self/exe, finds the function registered under that name, and runs it, now inside the newly created namespaces.

This design does take some getting used to.

It is said that this self-re-exec trick can also be used to write the in-memory executable back to disk, which makes self-updating binaries possible.

为了简化问题重写了一份比较好理解的代码 [2]

package main

import (
    "github.com/docker/docker/pkg/reexec"
    "log"
    "os"
)

func init() {
    log.SetFlags(log.LstdFlags | log.Lshortfile)
}

func CalmDown() {
    pid := os.Getpid()
    log.Println(pid,"CalmDown() Take a deep breath...")
    // do something more
    log.Println(pid, "CalmDown() Yes, I am calmdown already!")
}

func main() {

    pid := os.Getpid()
    log.Println(pid, "os argument: ", os.Args)

    reexec.Register("func1", CalmDown)

    log.Println(pid , "register func1")

    if reexec.Init() {
        log.Println(pid, "reexec have init")
        os.Exit(0)
    }

    log.Println(pid, "test init")

    cmd := reexec.Command("func1")

    log.Println(pid,cmd.Path)
    log.Println(pid,cmd.Args)

    output, err := cmd.CombinedOutput()

    if err != nil {
        log.Println(pid, "cmd run with error: ", err.Error())
        os.Exit(10)
    }
    log.Println(pid, "cmd output: ")
    log.Println(pid, string(output))
    log.Println(pid, "rexec demo finish")
}

输出结果

2020/08/29 11:01:29 reexec_usage.go:23: 65180 os argument:  [./reexec_usage]
2020/08/29 11:01:29 reexec_usage.go:27: 65180 register func1
2020/08/29 11:01:29 reexec_usage.go:34: 65180 test init
2020/08/29 11:01:29 reexec_usage.go:38: 65180 /proc/self/exe
2020/08/29 11:01:29 reexec_usage.go:39: 65180 [func1]
2020/08/29 11:01:29 reexec_usage.go:47: 65180 cmd output:
2020/08/29 11:01:29 reexec_usage.go:48: 65180 2020/08/29 11:01:29 reexec_usage.go:23: 65185 os argument:  [func1]
2020/08/29 11:01:29 reexec_usage.go:27: 65185 register func1
2020/08/29 11:01:29 reexec_usage.go:15: 65185 CalmDown() Take a deep breath...
2020/08/29 11:01:29 reexec_usage.go:17: 65185 CalmDown() Yes, I am calmdown already!
2020/08/29 11:01:29 reexec_usage.go:30: 65185 reexec have init

2020/08/29 11:01:29 reexec_usage.go:49: 65180 rexec demo finish
[1]https://medium.com/@teddyking/namespaces-in-go-reexec-3d1295b91af8
[2]https://play.golang.org/p/ArHJfulbgrO

在x86上编译和运行arm64程序

在x86上交叉编译出arm64程序

有时候我们只有x86环境, 想要编译出arm64的目标二进制。这个时候需要交叉编译工具, 交叉编译工具的安装有很多种。这里选择

docker run --rm dockcross/linux-arm64 > ./dockcross-linux-arm64
chmod +x ./dockcross-linux-arm64
./dockcross-linux-arm64 bash -c '$CC hello_world_c.c -o hello_arm64 -static'

查看编译结果

user1@intel6248:~/hello_world_c$ file hello_arm64
hello_arm64: ELF 64-bit LSB executable, ARM aarch64, version 1 (GNU/Linux), statically linked, for GNU/Linux 4.10.8, with debug_info, not stripped

一般情况下执行会出错。 因为平台是x86的, 但是目标文件是arm64的。

./hello_arm64
-bash: ./hello_arm64: cannot execute binary file: Exec format error

arm64目标程序在x86平台上运行

docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

一段C程序

#include <stdio.h>

int main()
{
        printf("hello world c\n");
}

After building, copy the binary to an x86 machine and run it there (the hello_arm64 built above works as well). Note that it has to be built with -static, otherwise it fails because the x86 host has neither the ARM ld.so interpreter nor the ARM C library.

gcc -o hello_world_c -static hello_world_c.c
user1@intel:~$ uname -m
x86_64
user1@intel:~$ file hello_world_c
hello_world_c: ELF 64-bit LSB executable, ARM aarch64, version 1 (GNU/Linux), statically linked, for GNU/Linux 3.7.0, BuildID[sha1]=58b303f958cea549f2333edbc6e5e6ea56aa476f, not stripped
user1@intel:~$ ./hello_world_c
hello world c

一段Go程序

package main

import "fmt"

func main(){
        fmt.Println("hello world go")
}

编译后拷贝到其他设备X86上运行,

GOOS=linux GOARCH=arm64 go build -o hello_world_go .
user1@intel6248:~$ uname -m
x86_64
user1@intel6248:~$ file hello_world_go
hello_world_go: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), statically linked, not stripped
user1@intel6248:~$
user1@intel6248:~$ ./hello_world_go
hello world go
[1]https://github.com/multiarch/qemu-user-static

SHA 安全哈希算法

对输入字符串进行哈希函数计算。[1]

SHA算法家族包括:SHA-1、SHA-224、SHA-256、SHA-384、SHA-512五种算法。

输入:

  • SHA-1、SHA-224、SHA-256可适用于长度不超过2^64的二进制位的消息
  • SHA-384和SHA-512适用于长度不超过2^128二进制位的消息

输出:

  • SHA-1算法的哈希值大小为160位,为20字节。
  • SHA-224算法的哈希值大小为224位,为28字节。
  • SHA-256算法的哈希值大小为256位,为32字节。
  • SHA-384算法的哈希值大小为384位,为48字节。
  • SHA-512算法的哈希值大小为512位,为64字节。

SHA256的demo :

#include <stdio.h>
#include <string.h>
#include "openssl/sha.h"

int main()
{
    unsigned char *str = "string";
    static unsigned char buffer[65];

    SHA256(str, strlen(str), buffer);

    int i;
    for (i = 0; i < 32; i++) {
        printf("%02x", buffer[i]);
    }
    printf("\n");

}

运行结果,”string”这几个字符的哈希值为

gcc sha256.c -lcrypto -o sha256.out
banana@bfc9c8267aa8:/sha256$ ./sha256.out
473287f8298dba7163a897908958f7c0eae733e25d2e027992ea2edc9bed2fa8

可以使用在线工具进行验证。 [2]

在openssl/sha.h [3] 中声明的SHA256函数会依次调用 SHA256_Init(), SHA256_Update(), SHA256_Final(), OPENSSL_cleanse() [4]

unsigned char *SHA256(const unsigned char *d, size_t n, unsigned char *md)
{
    SHA256_CTX c;
    static unsigned char m[SHA256_DIGEST_LENGTH];

    if (md == NULL)
        md = m;
    SHA256_Init(&c);            //初始化CTX, 根据sha256的计算原理,需要把数据补全之类的操作
    SHA256_Update(&c, d, n);    //开始循环计算各个数据块的哈希值
    SHA256_Final(md, &c);       //合并哈希值,8个4字节合到一起
    OPENSSL_cleanse(&c, sizeof(c));
    return md;
}

在SHA256_Update的实际计算中,核心函数是sha256_block_data_order 在ARMv8上有三种实现

  • C语言的实现
  • ARMv7 neon
  • ARMv8 sha256

C语言的实现 [5]

static void sha256_block_data_order(SHA256_CTX *ctx, const void *in,
                                    size_t num)
{
    unsigned MD32_REG_T a, b, c, d, e, f, g, h, s0, s1, T1;
    SHA_LONG X[16];
    int i;
    const unsigned char *data = in;
    DECLARE_IS_ENDIAN;

    while (num--) {

        a = ctx->h[0];
        b = ctx->h[1];
        c = ctx->h[2];
        d = ctx->h[3];
        e = ctx->h[4];
        f = ctx->h[5];
        g = ctx->h[6];
        h = ctx->h[7];

        if (!IS_LITTLE_ENDIAN && sizeof(SHA_LONG) == 4
            && ((size_t)in % 4) == 0) {
            const SHA_LONG *W = (const SHA_LONG *)data;

            T1 = X[0] = W[0];
            ROUND_00_15(0, a, b, c, d, e, f, g, h);
            T1 = X[1] = W[1];
            ROUND_00_15(1, h, a, b, c, d, e, f, g);
            T1 = X[2] = W[2];
            ROUND_00_15(2, g, h, a, b, c, d, e, f);
            T1 = X[3] = W[3];
            ROUND_00_15(3, f, g, h, a, b, c, d, e);
            T1 = X[4] = W[4];
            ROUND_00_15(4, e, f, g, h, a, b, c, d);
            T1 = X[5] = W[5];
            ROUND_00_15(5, d, e, f, g, h, a, b, c);
            T1 = X[6] = W[6];
            ROUND_00_15(6, c, d, e, f, g, h, a, b);
            T1 = X[7] = W[7];
            ROUND_00_15(7, b, c, d, e, f, g, h, a);
            T1 = X[8] = W[8];
            ROUND_00_15(8, a, b, c, d, e, f, g, h);
            T1 = X[9] = W[9];
            ROUND_00_15(9, h, a, b, c, d, e, f, g);
            T1 = X[10] = W[10];
            ROUND_00_15(10, g, h, a, b, c, d, e, f);
            T1 = X[11] = W[11];
            ROUND_00_15(11, f, g, h, a, b, c, d, e);
            T1 = X[12] = W[12];
            ROUND_00_15(12, e, f, g, h, a, b, c, d);
            T1 = X[13] = W[13];
            ROUND_00_15(13, d, e, f, g, h, a, b, c);
            T1 = X[14] = W[14];
            ROUND_00_15(14, c, d, e, f, g, h, a, b);
            T1 = X[15] = W[15];
            ROUND_00_15(15, b, c, d, e, f, g, h, a);

            data += SHA256_CBLOCK;
        } else {
            SHA_LONG l;

            (void)HOST_c2l(data, l);
            T1 = X[0] = l;
            ROUND_00_15(0, a, b, c, d, e, f, g, h);
            (void)HOST_c2l(data, l);
            T1 = X[1] = l;
            ROUND_00_15(1, h, a, b, c, d, e, f, g);
            (void)HOST_c2l(data, l);
            T1 = X[2] = l;
            ROUND_00_15(2, g, h, a, b, c, d, e, f);
            (void)HOST_c2l(data, l);
            T1 = X[3] = l;
            ROUND_00_15(3, f, g, h, a, b, c, d, e);
            (void)HOST_c2l(data, l);
            T1 = X[4] = l;
            ROUND_00_15(4, e, f, g, h, a, b, c, d);
            (void)HOST_c2l(data, l);
            T1 = X[5] = l;
            ROUND_00_15(5, d, e, f, g, h, a, b, c);
            (void)HOST_c2l(data, l);
            T1 = X[6] = l;
            ROUND_00_15(6, c, d, e, f, g, h, a, b);
            (void)HOST_c2l(data, l);
            T1 = X[7] = l;
            ROUND_00_15(7, b, c, d, e, f, g, h, a);
            (void)HOST_c2l(data, l);
            T1 = X[8] = l;
            ROUND_00_15(8, a, b, c, d, e, f, g, h);
            (void)HOST_c2l(data, l);
            T1 = X[9] = l;
            ROUND_00_15(9, h, a, b, c, d, e, f, g);
            (void)HOST_c2l(data, l);
            T1 = X[10] = l;
            ROUND_00_15(10, g, h, a, b, c, d, e, f);
            (void)HOST_c2l(data, l);
            T1 = X[11] = l;
            ROUND_00_15(11, f, g, h, a, b, c, d, e);
            (void)HOST_c2l(data, l);
            T1 = X[12] = l;
            ROUND_00_15(12, e, f, g, h, a, b, c, d);
            (void)HOST_c2l(data, l);
            T1 = X[13] = l;
            ROUND_00_15(13, d, e, f, g, h, a, b, c);
            (void)HOST_c2l(data, l);
            T1 = X[14] = l;
            ROUND_00_15(14, c, d, e, f, g, h, a, b);
            (void)HOST_c2l(data, l);
            T1 = X[15] = l;
            ROUND_00_15(15, b, c, d, e, f, g, h, a);
        }

        for (i = 16; i < 64; i += 8) {
            ROUND_16_63(i + 0, a, b, c, d, e, f, g, h, X);
            ROUND_16_63(i + 1, h, a, b, c, d, e, f, g, X);
            ROUND_16_63(i + 2, g, h, a, b, c, d, e, f, X);
            ROUND_16_63(i + 3, f, g, h, a, b, c, d, e, X);
            ROUND_16_63(i + 4, e, f, g, h, a, b, c, d, X);
            ROUND_16_63(i + 5, d, e, f, g, h, a, b, c, X);
            ROUND_16_63(i + 6, c, d, e, f, g, h, a, b, X);
            ROUND_16_63(i + 7, b, c, d, e, f, g, h, a, X);
        }

        ctx->h[0] += a;
        ctx->h[1] += b;
        ctx->h[2] += c;
        ctx->h[3] += d;
        ctx->h[4] += e;
        ctx->h[5] += f;
        ctx->h[6] += g;
        ctx->h[7] += h;

    }
}

ARMv7 neon [6]

.globl      sha256_block_neon
#endif
.type       sha256_block_neon,%function
.align      4
sha256_block_neon:
.Lneon_entry:
    stp     x29, x30, [sp, #-16]!
    mov     x29, sp
    sub     sp,sp,#16*4
    adr     $Ktbl,.LK256
    add     $num,$inp,$num,lsl#6    // len to point at the end of inp
    ld1.8   {@X[0]},[$inp], #16
    ld1.8   {@X[1]},[$inp], #16
    ld1.8   {@X[2]},[$inp], #16
    ld1.8   {@X[3]},[$inp], #16
    ld1.32  {$T0},[$Ktbl], #16
    ld1.32  {$T1},[$Ktbl], #16
    ld1.32  {$T2},[$Ktbl], #16
    ld1.32  {$T3},[$Ktbl], #16
    rev32   @X[0],@X[0]             // yes, even on
    rev32   @X[1],@X[1]             // big-endian
    rev32   @X[2],@X[2]
    rev32   @X[3],@X[3]
    mov     $Xfer,sp
    add.32  $T0,$T0,@X[0]
    add.32  $T1,$T1,@X[1]
    add.32  $T2,$T2,@X[2]
    st1.32  {$T0-$T1},[$Xfer], #32
    add.32  $T3,$T3,@X[3]
    st1.32  {$T2-$T3},[$Xfer]
    sub     $Xfer,$Xfer,#32
    ldp     $A,$B,[$ctx]
    ldp     $C,$D,[$ctx,#8]
    ldp     $E,$F,[$ctx,#16]
    ldp     $G,$H,[$ctx,#24]
    ldr     $t1,[sp,#0]
    mov     $t2,wzr
    eor     $t3,$B,$C
    mov     $t4,wzr
    b       .L_00_48
.align      4
.L_00_48:
___
    &Xupdate(\&body_00_15);
    &Xupdate(\&body_00_15);
    &Xupdate(\&body_00_15);
    &Xupdate(\&body_00_15);
$code.=<<___;
    cmp     $t1,#0                          // check for K256 terminator
    ldr     $t1,[sp,#0]
    sub     $Xfer,$Xfer,#64
    bne     .L_00_48
    sub     $Ktbl,$Ktbl,#256                // rewind $Ktbl
    cmp     $inp,$num
    mov     $Xfer, #64
    csel    $Xfer, $Xfer, xzr, eq
    sub     $inp,$inp,$Xfer                 // avoid SEGV
    mov     $Xfer,sp
___
    &Xpreload(\&body_00_15);
    &Xpreload(\&body_00_15);
    &Xpreload(\&body_00_15);
    &Xpreload(\&body_00_15);
$code.=<<___;
    add     $A,$A,$t4                       // h+=Sigma0(a) from the past
    ldp     $t0,$t1,[$ctx,#0]
    add     $A,$A,$t2                       // h+=Maj(a,b,c) from the past
    ldp     $t2,$t3,[$ctx,#8]
    add     $A,$A,$t0                       // accumulate
    add     $B,$B,$t1
    ldp     $t0,$t1,[$ctx,#16]
    add     $C,$C,$t2
    add     $D,$D,$t3
    ldp     $t2,$t3,[$ctx,#24]
    add     $E,$E,$t0
    add     $F,$F,$t1
    ldr     $t1,[sp,#0]
    stp     $A,$B,[$ctx,#0]
    add     $G,$G,$t2
    mov     $t2,wzr
    stp     $C,$D,[$ctx,#8]
    add     $H,$H,$t3
    stp     $E,$F,[$ctx,#16]
    eor     $t3,$B,$C
    stp     $G,$H,[$ctx,#24]
    mov     $t4,wzr
    mov     $Xfer,sp
    b.ne    .L_00_48
    ldr     x29,[x29]
    add     sp,sp,#16*4+16
    ret
.size       sha256_block_neon,.-sha256_block_neon

ARMv8 sha256 [7]

.type       sha256_block_armv8,%function
.align      6
sha256_block_armv8:
.Lv8_entry:
    stp             x29,x30,[sp,#-16]!
    add             x29,sp,#0
    ld1.32          {$ABCD,$EFGH},[$ctx]
    adr             $Ktbl,.LK256
.Loop_hw:
    ld1             {@MSG[0]-@MSG[3]},[$inp],#64
    sub             $num,$num,#1
    ld1.32          {$W0},[$Ktbl],#16
    rev32           @MSG[0],@MSG[0]
    rev32           @MSG[1],@MSG[1]
    rev32           @MSG[2],@MSG[2]
    rev32           @MSG[3],@MSG[3]
    orr             $ABCD_SAVE,$ABCD,$ABCD          // offload
    orr             $EFGH_SAVE,$EFGH,$EFGH
___
for($i=0;$i<12;$i++) {
$code.=<<___;
    ld1.32          {$W1},[$Ktbl],#16
    add.i32         $W0,$W0,@MSG[0]
    sha256su0       @MSG[0],@MSG[1]
    orr             $abcd,$ABCD,$ABCD
    sha256h         $ABCD,$EFGH,$W0
    sha256h2        $EFGH,$abcd,$W0
    sha256su1       @MSG[0],@MSG[2],@MSG[3]
___
    ($W0,$W1)=($W1,$W0);    push(@MSG,shift(@MSG));
}
$code.=<<___;
    ld1.32          {$W1},[$Ktbl],#16
    add.i32         $W0,$W0,@MSG[0]
    orr             $abcd,$ABCD,$ABCD
    sha256h         $ABCD,$EFGH,$W0
    sha256h2        $EFGH,$abcd,$W0
    ld1.32          {$W0},[$Ktbl],#16
    add.i32         $W1,$W1,@MSG[1]
    orr             $abcd,$ABCD,$ABCD
    sha256h         $ABCD,$EFGH,$W1
    sha256h2        $EFGH,$abcd,$W1
    ld1.32          {$W1},[$Ktbl]
    add.i32         $W0,$W0,@MSG[2]
    sub             $Ktbl,$Ktbl,#$rounds*$SZ-16     // rewind
    orr             $abcd,$ABCD,$ABCD
    sha256h         $ABCD,$EFGH,$W0
    sha256h2        $EFGH,$abcd,$W0
    add.i32         $W1,$W1,@MSG[3]
    orr             $abcd,$ABCD,$ABCD
    sha256h         $ABCD,$EFGH,$W1
    sha256h2        $EFGH,$abcd,$W1
    add.i32         $ABCD,$ABCD,$ABCD_SAVE
    add.i32         $EFGH,$EFGH,$EFGH_SAVE
    cbnz            $num,.Loop_hw
    st1.32          {$ABCD,$EFGH},[$ctx]
    ldr             x29,[sp],#16
    ret
.size       sha256_block_armv8,.-sha256_block_armv8
[1]https://itbilu.com/tools/crypto/sha1.html
[2]https://emn178.github.io/online-tools/sha256.html
[3]https://github.com/openssl/openssl/blob/914f97eecc9166fbfdb50c2d04e2b9f9d0c52198/include/openssl/sha.h#L71
[4]https://github.com/openssl/openssl/blob/914f97eecc9166fbfdb50c2d04e2b9f9d0c52198/crypto/sha/sha256.c#L70
[5]https://github.com/openssl/openssl/blob/914f97eecc9166fbfdb50c2d04e2b9f9d0c52198/crypto/sha/sha256.c#L253
[6]https://github.com/openssl/openssl/blob/914f97eecc9166fbfdb50c2d04e2b9f9d0c52198/crypto/sha/asm/sha512-armv8.pl#L629
[7]https://github.com/openssl/openssl/blob/914f97eecc9166fbfdb50c2d04e2b9f9d0c52198/crypto/sha/asm/sha512-armv8.pl#L368

shell编程常用参考

r=$(( 40 - 5 ))

文件描述符

exec 3<> /tmp/foo # open fd 3.
echo a >&3 # write to it
exec 3>&- # close fd 3.

Taishan server

泰山服务器的一些介绍。 请查看 链接

tcp 三次握手

访问百度的情况

curl www.baidu.com
user1@intel6248:~/jail-program/pcab/decode_fast$ sudo tcpdump -i eno3 tcp and not port 22 -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eno3, link-type EN10MB (Ethernet), capture size 262144 bytes
03:37:03.032235 IP 192.168.1.203.40534 > 103.235.46.39.80: Flags [S], seq 245004369, win 64240, options [mss 1460,sackOK,TS val 2783094696 ecr 0,nop,wscale 7], length 0
03:37:03.039428 IP 103.235.46.39.80 > 192.168.1.203.40534: Flags [S.], seq 1556435673, ack 245004370, win 8192, options [mss 1436,sackOK,nop,nop,nop,nop,nop,nop,nop,nop,nop,nop,nop,wscale 5], length 0
03:37:03.039489 IP 192.168.1.203.40534 > 103.235.46.39.80: Flags [.], ack 1, win 502, length 0
03:37:03.039540 IP 192.168.1.203.40534 > 103.235.46.39.80: Flags [P.], seq 1:78, ack 1, win 502, length 77: HTTP: GET / HTTP/1.1
03:37:03.046848 IP 103.235.46.39.80 > 192.168.1.203.40534: Flags [.], ack 78, win 776, length 0
03:37:03.057934 IP 103.235.46.39.80 > 192.168.1.203.40534: Flags [P.], seq 1:1449, ack 78, win 776, length 1448: HTTP: HTTP/1.1 200 OK
03:37:03.057958 IP 192.168.1.203.40534 > 103.235.46.39.80: Flags [.], ack 1449, win 495, length 0
03:37:03.058520 IP 103.235.46.39.80 > 192.168.1.203.40534: Flags [.], seq 1449:2749, ack 78, win 776, length 1300: HTTP
03:37:03.058561 IP 192.168.1.203.40534 > 103.235.46.39.80: Flags [.], ack 2749, win 501, length 0
03:37:03.058582 IP 103.235.46.39.80 > 192.168.1.203.40534: Flags [P.], seq 2749:2782, ack 78, win 776, length 33: HTTP
03:37:03.058590 IP 192.168.1.203.40534 > 103.235.46.39.80: Flags [.], ack 2782, win 501, length 0
03:37:03.058718 IP 192.168.1.203.40534 > 103.235.46.39.80: Flags [F.], seq 78, ack 2782, win 501, length 0
03:37:03.065788 IP 103.235.46.39.80 > 192.168.1.203.40534: Flags [.], ack 79, win 776, length 0
03:37:03.065912 IP 103.235.46.39.80 > 192.168.1.203.40534: Flags [F.], seq 2782, ack 79, win 776, length 0
03:37:03.065939 IP 192.168.1.203.40534 > 103.235.46.39.80: Flags [.], ack 2783, win 501, length 0
^C
15 packets captured
15 packets received by filter
0 packets dropped by kernel

TCP connection establishment

  1. The client sends a SYN packet: [S], seq 245004369
  2. The server replies with SYN+ACK: [S.], seq 1556435673, ack 245004370
  3. The client sends the final ACK: [.], ack 1

TCP connection teardown

  1. The client sends a close request: [F.], seq 78, ack 2782
  2. The server acknowledges the close: [.], ack 79, win 776, length 0
  3. The server sends its own close request: [F.], seq 2782, ack 79
  4. The client acknowledges: [.], ack 2783
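
To watch only the packets that matter for connection setup and teardown, the capture can be restricted to SYN/FIN segments (a sketch; the interface name is the one used in the capture above):

sudo tcpdump -i eno3 'tcp[tcpflags] & (tcp-syn|tcp-fin) != 0' -n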

travis CI and arm64

Aijishu community:

In early October 2019 Travis announced support for building and testing code on machines with different CPU architectures (x86, arm64). [1]

Travis documentation:

To enable testing on multiple CPU architectures add the arch key to your .travis.yml [2]

[1]https://aijishu.com/a/1060000000019302
[2]https://docs.travis-ci.com/user/multi-cpu-architectures/
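
A minimal .travis.yml sketch using the arch key might look like the following (the values are illustrative, not taken from a real project):

# run the same job on x86_64 and arm64 runners
arch:
  - amd64
  - arm64
language: c
script:
  - make && make test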

TYPE_STEP_COUNTER

The Android step-counter sensor.

Virtual networking

There are several hosts spread across the public Internet and various LANs; they need to be joined into one network so they can reach each other easily.

Option 1: N2N

By setting up a supernode, the edge nodes can reach one another, and the hosts can then ping each other directly.

[Project page]

Reference setup guide: https://sparkydogx.github.io/2018/12/20/n2n/
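
For reference, a minimal N2N setup might look like this (host name, community name and key are placeholders, not taken from the original note):

# on the publicly reachable supernode
supernode -l 7654
# on each edge node (creates the n2n0 TAP device and assigns it a virtual IP)
edge -d n2n0 -a 10.10.10.1 -c mynet -k mykey -l supernode.example.com:7654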

Option 2: ZeroTier

wiki

2020-06-05 09:30:33

Does it have to be JSON?

Yes. Rich-text formats (Word, txt, rst) would require agreeing on a layout convention, and apart from the formats the industry already follows voluntarily, nobody really likes being forced into a format.
JSON can be filled in as conveniently as a table.

Why not Excel?

Nobody likes filling in spreadsheet fields. Version control for spreadsheets amounts to manual review, and a spreadsheet constrains how the content can be presented.

Template engine

Not decided yet.

Should it be rendered to rst first?

Without rendering to rst, Sphinx themes cannot be used. The front-end folks could be asked how to generate a Sphinx theme, with software names used for navigation.

2020-06-05 09:28:51:

Must it be JSON?

If it does not need to be machine-readable, editing a document directly is the most convenient.

For the software list, keep the number of attributes as small as possible:

Explaining what a piece of software is for has never been our job; the official sites have detailed documentation.
https://fedoraproject.org/wiki/Workstation/Third_party_software_list

Liu Qi suggested that GitBook is a good choice:

It meets the simple requirements, navigation plus static pages,
and the content can be taken elsewhere at any time.

See how Wikipedia introduces software, e.g. https://en.wikipedia.org/wiki/Apache_Hadoop

Introducing the software itself is not something we should be doing.
_images/hadoop_wiki.PNG

How to obtain the software is not our job either; the distribution vendor is the proper channel. Many engineers would like a link they can simply click, but in any case the software should be obtained from the vendor: third-party channels are not trustworthy and easily raise copyright issues.

Generating GitHub Pages with Sphinx: https://github.com/sphinx-doc/sphinx/issues/3382

Some industrial software

Software   Vendor
CADAM      Lockheed (USA)
CALMA      General Electric (USA)
CV         Boeing (USA)
I-DEAS     NASA (USA)
UG         McDonnell Douglas (USA)
CATIA      Dassault (France)
SURF       Volkswagen (Germany)
PDGS       Ford (USA)
EUCLID     Renault (France)
ANSYS      Westinghouse Electric Astronuclear Laboratory

Systems Performance (性能之巅)

Disks

The average I/O response time of a (rotational) disk is about 1 ms.

A single database query takes about 100 ms.

Performance analysis tools

  • top
  • DTrace
  • eBPF
  • dstat
  • netstat
  • iostat
  • vmstat
  • tcpdump
  • nmon

CPU benchmark tools

  • speccpu
  • LTP
  • perf
  • unixbench
  • specjvm
  • sysbench
  • smallpt

Memory benchmark tools

  • stream
  • lmbench
  • stress-ng
  • Intel® Memory Latency Checker v3.7 [1]

I/O benchmark tools

  • fio
  • vdbench
  • iozone

Solvers

Given a set of constraints, find the possible solutions, much like solving a system of equations.

An excellent article on the topic is worth reading: the Z3 solver.
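
As a tiny illustration (assuming the z3 command-line binary is installed), a couple of constraints can be fed to z3 in SMT-LIB form:

z3 -in <<'EOF'
(declare-const x Int)
(declare-const y Int)
(assert (> x 0))
(assert (= (+ x y) 10))
(check-sat)
(get-model)
EOF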

How a computer boots

When something goes wrong, it helps to know which stage of the boot process the machine is in.

BIOS stage

The BIOS performs the power-on self test; once it completes, control is handed over to the boot loader.

Control is handed over following the configured boot order; each boot entry points to an external storage device.

/------------------------------------------------------------------------------\
|                                Boot Manager                                  |
\------------------------------------------------------------------------------/

                                                         Device Path :
   Boot Manager Menu                                     HD(1,GPT,D9F797D0-9E50
                                                         -4D21-B618-CDE854597D5
   Red Hat Enterprise Linux                              F,0x800,0x64000)/\EFI\
   UEFI                                                  redhat\shimaa64.efi
   UEFI  2
   Uefi Redhat Boot
   UEFI Misc Device
   UEFI PXEv4 (MAC:001886000006)
   UEFI PXEv4 (MAC:001886010006)
   UEFI PXEv4 (MAC:001886020006)
   UEFI PXEv4 (MAC:001886030006)
   UEFI PXEv4 (MAC:001886040006)
   UEFI PXEv4 (MAC:001886050006)
   UEFI Shell

/------------------------------------------------------------------------------\
|                                                                              |
| ^v=Move Highlight       <Enter>=Select Entry      Esc=Exit                   |
\------------------------------------------------------------------------------/

Master Boot Record

Once control is transferred to a storage device, the first 512 bytes of that device, the Master Boot Record (MBR), are read.
(1) Bytes 1-446 (446 bytes): machine code that loads the operating system.
(2) Bytes 447-510 (64 bytes): the partition table.
(3) Bytes 511-512 (2 bytes): the MBR signature (0x55 and 0xAA).
 1                                                       446 447         510 511 512
+-----------------------------------------------------------+---------------+-------+
|                                                           |               |       |
|  binary code to call OS                                   | partition     | 0x55  |
|                                                           |  table        |  0xAA |
+-----------------------------------------------------------+---------------+-------+
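
The layout above can be checked on a running system (a sketch; replace /dev/sda with the actual boot disk). The last two bytes of the first sector should be 55 aa:

dd if=/dev/sda bs=512 count=1 2>/dev/null | hexdump -C | tail -n 3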

Partition table

The partition table is 64 bytes long, divided into four 16-byte entries that describe up to four first-level partitions, the "primary partitions".

The 16 bytes of each primary partition entry consist of six parts:
(1) Byte 1: 0x80 marks this primary partition as the active one, i.e. the partition control is handed to; only one of the four primary partitions may be active.
(2) Bytes 2-4: physical (CHS) location of the partition's first sector (cylinder, head, sector).
(3) Byte 5: partition type.
(4) Bytes 6-8: physical (CHS) location of the partition's last sector.
(5) Bytes 9-12: logical (LBA) address of the partition's first sector.
(6) Bytes 13-16: total number of sectors in the partition.
+--------------+--------------------+----------------+--------------------+--------------------+------------------+
| Byte 1       | Bytes 2-4          | Byte 5         | Bytes 6-8          | Bytes 9-12         | Bytes 13-16      |
| 0x80 marks   | physical location  | partition type | physical location  | logical address of | total number of  |
| the active   | of the first       |                | of the last        | the first sector   | sectors          |
| partition    | sector             |                | sector             |                    |                  |
+--------------+--------------------+----------------+--------------------+--------------------+------------------+
The last four bytes (the total sector count) determine the maximum size of a primary partition: at most 2^32 sectors.
With 512 = 2^9 bytes per sector, a single partition can hold at most 2^32 * 2^9 = 2^41 bytes = 2 TB. Since the logical sector address is also 32 bits, the usable space of the whole disk is likewise limited to 2 TB. To use a larger disk there are only two options: increase the number of bytes per sector, or increase the total number of addressable sectors.
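
The 2 TB figure can be verified with a one-line calculation (bash arithmetic, purely as a sanity check):

echo $(( (2**32 * 512) / 2**40 ))    # 2^32 sectors * 512 bytes per sector = 2 TiB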

Volume Boot Record

The computer then reads the first sector of the active primary partition, the "Volume Boot Record" (VBR). The main job of the VBR is to tell the computer where the operating system lives inside this partition; the computer then loads the operating system.

Extended partition

As disks grew larger, four partitions were no longer enough, but the partition table only has four entries. It was therefore specified that at most one entry may be defined as an "extended partition": a partition that can itself be divided into several "logical partitions". The first sector of the extended partition is the "Extended Boot Record" (EBR). The EBR also contains a 64-byte partition table, but only two of its entries are used, i.e. two logical partitions.

Full boot log

[Full device boot log]

After booting via UEFI the CPU runs at EL2; a guest OS runs at EL1 and user space at EL0.

Computer memory

Computer memory supplies the CPU with instructions and data.
Overall, a computer's storage behaves like a single large memory pool whose cost approaches that of the cheapest devices at the bottom of the hierarchy, yet which serves data to programs at rates close to those of the fastest devices at the top.
  1. Random-Access Memory (RAM)
    • Static RAM (SRAM) stores each bit in a bistable memory cell. As long as power is applied an SRAM cell keeps its value indefinitely; even when disturbed, the circuit returns to its stable value once the disturbance is removed.
    • Dynamic RAM (DRAM) stores each bit as charge on a capacitor. A DRAM cell loses its charge within roughly 10-100 ms and therefore has to be refreshed; exposing DRAM to light also changes the capacitor voltage.
  2. Nonvolatile memory. SRAM and DRAM both lose their contents when power is removed; they are volatile. Nonvolatile memories keep their contents even when powered off. Some of them can be both read and written, yet as a group they are still called Read-Only Memory (ROM):
    • ROM: read-only memory.
    • PROM (Programmable ROM): can be programmed exactly once.
    • EPROM (Erasable Programmable ROM): erased with ultraviolet light; typically survives about 1000 erase cycles.
    • EEPROM (Electrically Erasable PROM): electrically erasable, roughly 10^5 erase cycles; flash memory is based on EEPROM.
  3. Disk storage
    • HDD
    • SSD

Disk pass-through mode: configure the disks as JBOD.

solve bugs

82599 BAR space access problem

Experiment on Lab1 (192.168.1.71): on the same 1620ES machine, the 82599's BAR space cannot be accessed from CentOS but can be accessed from Ubuntu.

Not accessible on CentOS

[lixianfa@localhost ~]$ sudo lspci -s 09:00.0 -v
09:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
        Subsystem: Huawei Technologies Co., Ltd. Device d111
        Flags: bus master, fast devsel, latency 0, IRQ 23, NUMA node 0
        Memory at 80000000000 (64-bit, prefetchable) [size=4M]
        I/O ports at 1000 [disabled] [size=32]
        Memory at 80001800000 (64-bit, prefetchable) [size=16K]
        Expansion ROM at e3000000 [disabled] [size=4M]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] MSI-X: Enable+ Count=64 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [e0] Vital Product Data
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number 48-7b-6b-ff-ff-a9-26-78
        Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [160] Single Root I/O Virtualization (SR-IOV)
        Kernel driver in use: ixgbe
        Kernel modules: ixgbe

[lixianfa@localhost ~]$ devmem2 0x80000000000 w
Error at line 69, file devmem2.c (13) [Permission denied]
[lixianfa@localhost ~]$

Accessible on Ubuntu

root@ubuntu:/etc/apt/sources.list.d# devmem2 0x80000000000
/dev/mem opened.
Memory mapped at address 0xffffb2ea0000.
Value at address 0x0 (0xffffb2ea0000): 0x0
root@ubuntu:/etc/apt/sources.list.d# devmem2 0x80000000008
/dev/mem opened.
Memory mapped at address 0xffffa2f78000.
Value at address 0x8 (0xffffa2f78008): 0x80000
root@ubuntu:/etc/apt/sources.list.d# devmem2 0x80000000010
/dev/mem opened.
Memory mapped at address 0xffff97579000.
Value at address 0x10 (0xffff97579010): 0xDEADBEEF
root@ubuntu:/etc/apt/sources.list.d# devmem2 0x80000000010
/dev/mem opened.
Memory mapped at address 0xffff84f60000.
Value at address 0x10 (0xffff84f60010): 0xDEADBEEF
root@ubuntu:/etc/apt/sources.list.d#

Hardware Corrected Errors

Hardware errors were reported on the server's serial console; the root cause turned out to be a faulty DIMM. The machine could boot the OS normally but rebooted by itself after running for a while.

The reboot can be seen in /var/log/messages:

May 13 15:05:20 hisilicon11 kernel: EDAC MC0: 1 CE reserved error (16) on unknown label (node:0 rank:0 bank:0 row:0 col:0 page:0x400000 offset:0x300 grain:0 syndrome:0x0)
May 13 15:05:20 hisilicon11 kernel: EDAC MC0: 1 CE reserved error (16) on unknown label (node:0 rank:0 bank:0 row:0 col:0 page:0x400000 offset:0x300 grain:0 syndrome:0x0)
May 13 15:05:20 hisilicon11 kernel: EDAC MC0: 1 CE reserved error (16) on unknown label (node:0 rank:0 bank:0 row:0 col:0 page:0x400000 offset:0x300 grain:0 syndrome:0x0)
May 13 15:05:21 hisilicon11 kernel: EDAC MC0: 1 CE reserved error (16) on unknown label (node:0 rank:0 bank:0 row:0 col:0 page:0x400000 offset:0x300 grain:0 syndrome:0x0)
May 13 15:05:21 hisilicon11 kernel: EDAC MC0: 1 CE reserved error (16) on unknown label (node:0 rank:0 bank:0 row:0 col:0 page:0x400000 offset:0x300 grain:0 syndrome:0x0)
May 13 15:05:21 hisilicon11 kernel: EDAC MC0: 1 CE reserved error (16) on unknown label (node:0 rank:0 bank:0 row:0 col:0 page:0x400000 offset:0x300 grain:0 syndrome:0x0)
May  5 18:18:46 hisilicon11 journal: Runtime journal is using 8.0M (max allowed 4.0G, trying to leave 4.0G free of 255.5G available → current limit 4.0G).
May  5 18:18:46 hisilicon11 kernel: Booting Linux on physical CPU 0x0000080000 [0x481fd010]
May  5 18:18:46 hisilicon11 kernel: Linux version 4.19.28.3-2019-05-13 (lixianfa@ubuntu) (gcc version 5.4.0 20160609 (Ubuntu/Linaro 5.4.0-6ubuntu1~16.04.10)) #2 SMP Mon May 13 10:20:47 CST 2019
May  5 18:18:46 hisilicon11 kernel: efi: Getting EFI parameters from FDT:
May  5 18:18:46 hisilicon11 kernel: efi: EFI v2.70 by EDK II
May  5 18:18:46 hisilicon11 kernel: efi:  SMBIOS 3.0=0x3f0f0000  ACPI 2.0=0x39cb0000  MEMATTR=0x3b4bc018  ESRT=0x3f11bc98  RNG=0x3f11bd98  MEMRESERVE=0x39bb4d18
May  5 18:18:46 hisilicon11 kernel: efi: seeding entropy pool
May  5 18:18:46 hisilicon11 kernel: esrt: Reserving ESRT space from 0x000000003f11bc98 to 0x000000003f11bcd0.
May  5 18:18:46 hisilicon11 kernel: crashkernel: memory value expected
May  5 18:18:46 hisilicon

The BIOS boot log prints NOTICE messages about a correctable error:

NOTICE:  [TotemRasIntMemoryNodeFhi]:[197L]

NOTICE:  [MemoryErrorFillInHest]:[245L]ErrorType is CE, ErrorSeverity is CORRECTED. # correctable error

NOTICE:  [IsMemoryError]:[156L]Ierr = 0xf

NOTICE:  RASC socket[0]die[3]channel[3]                     # DIMM location
NOTICE:  [GetMemoryErrorDataErrorType]:[103L]Ierr = 0xf

NOTICE:  RASC H[0]L[0]
NOTICE:  PlatData R[0]B[0] R[0]C[0]
NOTICE:  [CollectArerErrorData]:[226L]SysAddr=4000000300:  # physical address
NOTICE:  [HestGhesV2ResetAck]:[84L] I[2] CeValid[0]

NOTICE:  [HestGhesV2ResetAck]:[84L] Index 2

NOTICE:  count[0] Severity[2] CeValid[0]

NOTICE:  [HestGhesV2SetGenericErrorData]:[163L] Fill in HEST TABLE ,AckRegister=44010050
NOTICE:  [HestNotifiedOS]:[37L]
NOTICE:  [TotemRasIntM = 0x0

Hardware errors printed while the system boots:

[   27.740329] {1}[Hardware Error]: Hardware error from APEI Generic Hardware Error Source: 5
[   27.753985] {1}[Hardware Error]: It has been coHz, action=0.
[   27.791954] {1}[Hardware Error]: event severity: corrected
[   27.791957] {1}[Hardware Error]:  Error 0, type: corrected
[   27.791959] {1}[Hardware Error]:   section_type: memory error
[   27.814227] {1}[Hardware Error]:   physical_address: 0x0000004000000300 # the same physical address
[   27.830193] {1}[Hardware Error]:   node: 0 rank: 0 bank: 0 row: 0 column: 0
[   27.830197] {1}[Hardw0 row:0 col:0 page:0x400000 offset:0x300 grain:0 syndrome:0x0)

Inside the OS, edac-util -v (from the edac-utils package) shows the corrected errors.

edac-util -v
mc0: 0 Uncorrected Errors with no DIMM info
mc0: 13 Corrected Errors with no DIMM info          # corrected errors
mc0: csrow0: 0 Uncorrected Errors
mc0: csrow0: mc#0memory#0: 0 Corrected Errors
mc0: csrow10: 0 Uncorrected Errors
mc0: csrow10: mc#0memory#10: 0 Corrected Errors
mc0: csrow12: 0 Uncorrected Errors
mc0: csrow12: mc#0memory#12: 0 Corrected Errors
mc0: csrow14: 0 Uncorrected Errors
mc0: csrow14: mc#0memory#14: 0 Corrected Errors
mc0: csrow16: 0 Uncorrected Errors
mc0: csrow16: mc#0memory#16: 0 Corrected Errors
mc0: csrow18: 0 Uncorrected Errors
mc0: csrow18: mc#0memory#18: 0 Corrected Errors
mc0: csrow2: 0 Uncorrected Errors
mc0: csrow2: mc#0memory#2: 0 Corrected Errors
mc0: csrow20: 0 Uncorrected Errors
mc0: csrow20: mc#0memory#20: 0 Corrected Errors
mc0: csrow22: 0 Uncorrected Errors
mc0: csrow22: mc#0memory#22: 0 Corrected Errors
mc0: csrow24: 0 Uncorrected Errors
mc0: csrow24: mc#0memory#24: 0 Corrected Errors
mc0: csrow26: 0 Uncorrected Errors
mc0: csrow26: mc#0memory#26: 0 Corrected Errors
mc0: csrow28: 0 Uncorrected Errors
mc0: csrow28: mc#0memory#28: 0 Corrected Errors
mc0: csrow30: 0 Uncorrected Errors
mc0: csrow30: mc#0memory#30: 0 Corrected Errors
mc0: csrow4: 0 Uncorrected Errors
mc0: csrow4: mc#0memory#4: 0 Corrected Errors
mc0: csrow6: 0 Uncorrected Errors
mc0: csrow6: mc#0memory#6: 0 Corrected Errors
mc0: csrow8: 0 Uncorrected Errors
mc0: csrow8: mc#0memory#8: 0 Corrected Errors

dmesg inside the OS shows the correctable error being reported repeatedly:

[ 2624.662038] {3}[Hardware Error]: Hardware error from APEI Generic Hardware Error Source: 5
[ 2624.662200] {3}[Hardware Error]: It has been corrected by h/w and requires no further action
[ 2624.662396] {3}[Hardware Error]: event severity: corrected
[ 2624.662526] {3}[Hardware Error]:  Error 0, type: corrected
[ 2624.662654] {3}[Hardware Error]:   section_type: memory error
[ 2624.662784] {3}[Hardware Error]:   physical_address: 0x0000004000000300      # the same physical address
[ 2624.662941] {3}[Hardware Error]:   node: 0 rank: 0 bank: 0 row: 0 column: 0
[ 2624.663102] {3}[Hardware Error]:   error_type: 16, unknown
[ 2624.663236] EDAC MC0: 1 CE reserved error (16) on unknown label (node:0 rank:0 bank:0 row:0 col:0 page:0x400000 offset:0x300 grain:0 syndrome:0x0)
[12083.123880] {4}[Hardware Error]: Hardware error from APEI Generic Hardware Error Source: 5
[12083.124069] {4}[Hardware Error]: It has been corrected by h/w and requires no further action
[12083.124279] {4}[Hardware Error]: event severity: corrected
[12083.124417] {4}[Hardware Error]:  Error 0, type: corrected
[12083.124557] {4}[Hardware Error]:   section_type: memory error
[12083.124702] {4}[Hardware Error]:   physical_address: 0x0000004000000300
[12083.124870] {4}[Hardware Error]:   node: 0 rank: 0 bank: 0 row: 0 column: 0
[12083.125043] {4}[Hardware Error]:   error_type: 16, unknown
[12083.125188] EDAC MC0: 1 CE reserved error (16) on unknown label (node:0 rank:0 bank:0 row:0 col:0 page:0x400000 offset:0x300 grain:0 syndrome:0x0)
[12383.322871] {5}[Hardware Error]: Hardware error from APEI Generic Hardware Error Source: 5
[12383.323060] {5}[Hardware Error]: It has been corrected by h/w and requires no further action
[12383.323269] {5}[Hardware Error]: event severity: corrected
[12383.323409] {5}[Hardware Error]:  Error 0, type: corrected
[12383.323546] {5}[Hardware Error]:   section_type: memory error
[12383.323692] {5}[Hardware Error]:   physical_address: 0x0000004000000300
[12383.323857] {5}[Hardware Error]:   node: 0 rank: 0 bank: 0 row: 0 column: 0

Solution:

Remove the DIMM indicated by the BIOS boot log and the errors disappear. Exactly which physical DIMM it is is determined from the BIOS output and the EVB (board) documentation.

NIC name not correct

82599ES

enp9s0f0 pci@0000:09:00.0 ac:f9:70:8a:fe:58
enp9s0f1 pci@0000:09:00.1 ac:f9:70:8a:fe:59

X550T

enp129s0f0 pci@0000:81:00.0 04:88:5f:ca:a6:ef
enp129s0f1 pci@0000:81:00.1 04:88:5f:ca:a6:f0

MT27710

enp132s0f0 pci@0000:84:00.0 28:41:c6:fb:9f:dd
enp132s0f1 pci@0000:84:00.1 28:41:c6:fb:9f:de

Huawei 1822 driver hinic

enp3s0 pci@0000:03:00.0 10:c1:72:8f:7b:88
enp4s0 pci@0000:04:00.0 10:c1:72:8f:7b:89
eno5   pci@0000:05:00.0 10:c1:72:8f:7b:8a
enp6s0 pci@0000:06:00.0 10:c1:72:8f:7b:8b

Huawei on-board NIC, driver hns3, 100G

eno1 pci@0000:7d:00.0 00:18:2d:00:00:31
eno2 pci@0000:7d:00.1 00:18:2d:01:00:31
eno3 pci@0000:7d:00.2 00:18:2d:02:00:31
eno4 pci@0000:7d:00.3 00:18:2d:03:00:31

enp189s0f0 pci@0000:bd:00.0 00:18:2d:04:00:31
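
The name / PCI address / MAC listings above can be reproduced from sysfs with a small loop (a sketch, not necessarily how these tables were originally generated):

for d in /sys/class/net/*/device; do
    iface=$(basename "$(dirname "$d")")       # interface name
    pci=$(basename "$(readlink -f "$d")")     # PCI bus address
    mac=$(cat "$(dirname "$d")/address")      # MAC address
    printf '%s pci@%s %s\n' "$iface" "$pci" "$mac"
done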

ceph-volume failed

[root@hadoop00 ceph]# ceph-deploy osd create --data /dev/sdg ceph-node04
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy osd create --data /dev/sdg ceph-node04
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x400020f504d0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph-node04
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x400020ee7d70>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sdg
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdg
[ceph-node04][DEBUG ] connected to host: ceph-node04
[ceph-node04][DEBUG ] detect platform information from remote host
[ceph-node04][DEBUG ] detect machine type
[ceph-node04][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.6.1810 AltArch
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node04
[ceph-node04][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node04][DEBUG ] find the location of an executable
[ceph-node04][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdg
[ceph-node04][WARNIN] -->  RuntimeError: command returned non-zero exit status: 5
[ceph-node04][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
[ceph-node04][DEBUG ] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b45fb23e-6ece-4167-b77f-ce641a09afc4
[ceph-node04][DEBUG ] Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-23bda46f-44e4-4eb5-85f0-d57d7f6ea07f /dev/sdg
[ceph-node04][DEBUG ]  stderr: Physical volume '/dev/sdg' is already in volume group 'ceph-0a29c94d-5e18-4821-969f-5094af730297'
[ceph-node04][DEBUG ]   Unable to add physical volume '/dev/sdg' to volume group 'ceph-0a29c94d-5e18-4821-969f-5094af730297'
[ceph-node04][DEBUG ]   /dev/sdg: physical volume not initialized.
[ceph-node04][DEBUG ] --> Was unable to complete a new OSD, will rollback changes
[ceph-node04][DEBUG ] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.31 --yes-i-really-mean-it
[ceph-node04][DEBUG ]  stderr: 2019-07-25 00:33:16.572 ffff6fd0c200 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-node04][DEBUG ] 2019-07-25 00:33:16.572 ffff6fd0c200 -1 AuthRegistry(0xffff68063c48) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-node04][DEBUG ]  stderr: purged osd.31
[ceph-node04][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdg
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs

解决办法 https://docs.oracle.com/cd/E52668_01/E96266/html/ceph-luminous-issues-27748402.html
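
The failure above shows that /dev/sdg still carries an old ceph volume group. A common way to clean the disk before retrying is sketched below (this wipes the disk; double-check the device name first):

# remove the stale LVM/ceph metadata, then retry the OSD creation
ceph-volume lvm zap --destroy /dev/sdg
ceph-deploy osd create --data /dev/sdg ceph-node04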

ceph erasure-code plugin check

Ask the customer whether the x86 EC setup uses the ISA-L library or the default Jerasure; using different erasure-code plugins leads to different test results.

[root@ceph-node00 ~]# ceph osd erasure-code-profile get default
k=2
m=1
plugin=jerasure
technique=reed_sol_van
[root@ceph-node00 ~]# ceph osd erasure-code-profile get testprofile
crush-device-class=
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=5
m=3
plugin=jerasure
technique=reed_sol_van
w=8
[root@ceph-node00 ~]#

Jerasure is being used.
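
If ISA-L is to be compared against Jerasure, a separate profile can be created with the isa plugin (a sketch; the k/m values are illustrative only):

ceph osd erasure-code-profile set isa-profile plugin=isa k=5 m=3 crush-failure-domain=host
ceph osd erasure-code-profile get isa-profile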

ceph 4k randwrite max latency over 1ms

randwrite-4k-iodepth=2-numjobs=1.txt

mytest: (groupid=0, jobs=1): err= 0: pid=25587: Sat Jul  6 14:16:34 2019
  write: IOPS=189, BW=756KiB/s (775kB/s)(443MiB/600014msec)
    slat (nsec): min=2651, max=79920, avg=16935.79, stdev=4834.27
    clat (usec): min=775, max=1635.7k, avg=10547.28, stdev=40638.47
     lat (usec): min=792, max=1635.7k, avg=10564.21, stdev=40638.56
    clat percentiles (usec):

randwrite-4k-iodepth=32-numjobs=1.txt

mytest: (groupid=0, jobs=1): err= 0: pid=26945: Sat Jul  6 14:27:35 2019
  write: IOPS=529, BW=2116KiB/s (2167kB/s)(1240MiB/600211msec)
    slat (usec): min=2, max=1252, avg= 6.60, stdev= 8.83
    clat (usec): min=757, max=1643.2k, avg=60380.21, stdev=109530.00
     lat (usec): min=775, max=1643.2k, avg=60386.81, stdev=109530.05
    clat percentiles (usec):
     |  1.00th=[   1123],  5.00th=[   1450], 10.00th=[   1696],
     | 20.00th=[   2073], 30.00th=[   2343], 40.00th=[   2606],
     | 50.00th=[   2835], 60.00th=[   3130], 70.00th=[  13304],
     | 80.00th=[ 147850], 90.00th=[ 258999], 95.00th=[ 278922],
     | 99.00th=[ 371196], 99.50th=[ 471860], 99.90th=[ 683672],

Added on 2019-07-27 11:50:40:

Remote randwrite test with iodepth=2: most IOs complete within a few milliseconds and the average latency is normal (around 10 ms); the maximum is about 1.4 s, and IOs taking longer than 1 s account for only about 0.01%.

[root@localhost single_rbd_json]# fio -iodepth=2 -rw=randwrite -ioengine=rbd -rbdname=test-045 -clientname=admin -pool=volumes -bs=4k -numjobs=1 -ramp_time=60 -runtime=600 -size=100% -name=mytest1
mytest1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=rbd, iodepth=2
fio-3.1
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=1120KiB/s][r=0,w=280 IOPS][eta 00m:00s]
mytest1: (groupid=0, jobs=1): err= 0: pid=1469666: Sat Jul 27 11:17:07 2019
  write: IOPS=194, BW=780KiB/s (798kB/s)(457MiB/600061msec)
    slat (nsec): min=1263, max=77223, avg=9816.68, stdev=7164.70
    clat (usec): min=642, max=1466.2k, avg=10241.64, stdev=38980.49
     lat (usec): min=652, max=1466.2k, avg=10251.45, stdev=38980.87
    clat percentiles (usec):
     |  1.00th=[   824],  5.00th=[   889], 10.00th=[   930], 20.00th=[   971],
     | 30.00th=[  1012], 40.00th=[  1045], 50.00th=[  1074], 60.00th=[  1123],
     | 70.00th=[  1172], 80.00th=[  1254], 90.00th=[  2212], 95.00th=[ 62653],
     | 99.00th=[208667], 99.50th=[240124], 99.90th=[283116], 99.95th=[295699],
     | 99.99th=[434111]
   bw (  KiB/s): min=   32, max= 2104, per=100.00%, avg=782.48, stdev=291.88, samples=1197
   iops        : min=    8, max=  526, avg=195.57, stdev=72.97, samples=1197
  lat (usec)   : 750=0.05%, 1000=26.71%
  lat (msec)   : 2=61.52%, 4=3.38%, 10=0.83%, 20=0.51%, 50=1.54%
  lat (msec)   : 100=1.27%, 250=3.79%, 500=0.39%, 750=0.01%, 2000=0.01%
  cpu          : usr=0.50%, sys=0.24%, ctx=75826, majf=0, minf=6448
  IO depths    : 1=37.4%, 2=71.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=0,116962,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=2

Run status group 0 (all jobs):
  WRITE: bw=780KiB/s (798kB/s), 780KiB/s-780KiB/s (798kB/s-798kB/s), io=457MiB (479MB),

Remote randwrite test with iodepth=32:

[root@localhost single_rbd_json]# fio -iodepth=32 -rw=randwrite -ioengine=rbd -rbdname=test-045 -clientname=admin -pool=volumes -bs=4k -numjobs=1 -ramp_time=60 -ru
ntime=600 -size=100% -name=mytest1
mytest1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=rbd, iodepth=32
fio-3.1
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=2250KiB/s][r=0,w=562 IOPS][eta 00m:00s]
mytest1: (groupid=0, jobs=1): err= 0: pid=1507429: Sat Jul 27 11:47:52 2019
  write: IOPS=570, BW=2281KiB/s (2336kB/s)(1337MiB/600267msec)
    slat (nsec): min=951, max=146901, avg=4029.70, stdev=5302.06
    clat (usec): min=672, max=1447.7k, avg=56048.07, stdev=101412.70
     lat (usec): min=674, max=1447.7k, avg=56052.10, stdev=101412.84
    clat percentiles (usec):
     |  1.00th=[    996],  5.00th=[   1188], 10.00th=[   1303],
     | 20.00th=[   1467], 30.00th=[   1598], 40.00th=[   1729],
     | 50.00th=[   1844], 60.00th=[   1991], 70.00th=[  10683],
     | 80.00th=[ 137364], 90.00th=[ 248513], 95.00th=[ 270533],
     | 99.00th=[ 333448], 99.50th=[ 421528], 99.90th=[ 530580],
     | 99.95th=[ 583009], 99.99th=[1098908]
   bw (  KiB/s): min=  256, max= 4800, per=100.00%, avg=2290.23, stdev=651.62, samples=1197
   iops        : min=   64, max= 1200, avg=572.51, stdev=162.90, samples=1197
  lat (usec)   : 750=0.01%, 1000=1.07%
  lat (msec)   : 2=59.14%, 4=8.36%, 10=1.32%, 20=1.42%, 50=3.57%
  lat (msec)   : 100=3.07%, 250=12.41%, 500=9.40%, 750=0.21%, 1000=0.01%
  lat (msec)   : 2000=0.02%
  cpu          : usr=0.39%, sys=0.05%, ctx=36070, majf=0, minf=19155
  IO depths    : 1=0.3%, 2=1.1%, 4=4.5%, 8=19.0%, 16=77.3%, 32=7.4%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=95.2%, 8=0.6%, 16=1.4%, 32=2.9%, 64=0.0%, >=64=0.0%
     issued rwt: total=0,342312,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=2281KiB/s

Check whether other RBD images show the same behaviour:

fio -iodepth=2 -rw=randwrite -ioengine=rbd -rbdname=test-090 -clientname=admin -pool=volumes -bs=4k -numjobs=1 -ramp_time=60 -runtime=600 -size=100% -name=mytest1

ceph ntp not sync

Problem description

The ceph cluster reports a clock-skew (time not synchronized) problem.

[root@ceph-node00 ~]# ceph -s
  cluster:
    id:     6534efb5-b842-40ea-b807-8e94c398c4a9
    health: HEALTH_WARN
            clock skew detected on mon.ceph-node01, mon.ceph-node06, mon.ceph-node07, mon.ceph-node02

Diagnosis

The system's NTP service turns out not to be running. On CentOS/RedHat, time synchronization is normally provided by ntpd or chrony.

[root@ceph-node00 ~]# ntp
ntpd        ntpdate     ntpdc       ntp-keygen  ntpq        ntpstat     ntptime
[root@ceph-node00 ~]# ntp
ntpd        ntpdate     ntpdc       ntp-keygen  ntpq        ntpstat     ntptime
[root@ceph-node00 ~]# ps aux | grep ntpd
root     2529479  0.0  0.0 109656  1876 pts/2    S+   11:36   0:00 grep --color=auto ntpd
[root@ceph-node00 ~]#
[root@ceph-node00 ~]# systemctl | grep ntpd
[root@ceph-node00 ~]#
[root@ceph-node00 ~]# systemctl | grep chrony
[root@ceph-node00 ~]#

By default ceph allows at most 50 ms of clock drift:

ceph --admin-daemon ./ceph-mon.ceph-node01.asok config show | grep clock
 "mon_clock_drift_allowed": "0.050000",
 "mon_clock_drift_warn_backoff": "5.000000",

Suggested actions

  1. Start ntpd (or switch to chrony; see the sketch after this list):
service ntpd start
  2. If the symptom persists, write the system time to the hardware clock:
timedatectl set-local-rtc 1
  3. If that still does not help, consider raising the maximum clock drift ceph allows:
[mon]
mon_clock_drift_allowed = 0.10
mon clock drift warn backoff = 10
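
On CentOS 7, chrony is a common alternative to ntpd; a minimal sketch of enabling it and checking synchronization:

yum install -y chrony
systemctl enable chronyd
systemctl start chronyd
chronyc sources -v      # list upstream servers and their reachability
timedatectl status      # should report the clock as NTP synchronized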

OSD down

In a ceph cluster it was suddenly found that all OSDs on one node were marked down shortly after being started. Restarting them did not help: after a while they were shown as down again.

Symptoms

[root@ceph-node00 ~]# ceph osd tree
ID  CLASS WEIGHT    TYPE NAME            STATUS REWEIGHT PRI-AFF

 -7        90.09357     host ceph-node2
 34   hdd   7.50780         osd.34           down  1.00000 1.00000
 35   hdd   7.50780         osd.35           down  1.00000 1.00000
 36   hdd   7.50780         osd.36           down  1.00000 1.00000
 37   hdd   7.50780         osd.37           down  1.00000 1.00000
 38   hdd   7.50780         osd.38           down  1.00000 1.00000
 39   hdd   7.50780         osd.39           down  1.00000 1.00000
 40   hdd   7.50780         osd.40           down  1.00000 1.00000
 41   hdd   7.50780         osd.41           down  1.00000 1.00000
 42   hdd   7.50780         osd.42           down  1.00000 1.00000
 43   hdd   7.50780         osd.43           down  1.00000 1.00000
 44   hdd   7.50780         osd.44           down  1.00000 1.00000
 45   hdd   7.50780         osd.45           down  1.00000 1.00000

[root@ceph-node00 ~]#

Analysis

Look at the OSD log on that node:

2019-07-15 09:46:40.479 ffff91adbbd0  0 log_channel(cluster) log [WRN] : Monitor daemon marked osd.44 down, but it is still running
2019-07-15 09:46:40.479 ffff91adbbd0  0 log_channel(cluster) log [DBG] : map e6540 wrongly marked me down at e6539
2019-07-15 09:46:40.479 ffff91adbbd0  0 osd.44 6540 _committed_osd_maps marked down 6 > osd_max_markdown_count 5 in last 600.000000 seconds, shutting down
2019-07-15 09:46:40.479 ffff91adbbd0  1 osd.44 6540 start_waiting_for_healthy
2019-07-15 09:46:40.489 ffff892cabd0  1 osd.44 pg_epoch: 6539 pg[5.93( empty local-lis/les=0/0 n=0 ec=3129/3129 lis/c 4077/3306 les/c/f 4078/3307/0 6524/6524/6524) [44,1] r=0 lpr=6526 pi=[3306,6524)/8 crt=0'0 mlcod 0'0 peering mbc={}] state<Started/Primary/Peering>: Peering, affected_by_map, going to Reset
2019-07-15 09:46:40.489 ffff89acbbd0  1 osd.44 pg_epoch: 6539 pg[5.15f( v 6431'70954 (6431'67954,6431'70954] lb MIN (bitwise) local-lis/les=6432/6433 n=0 ec=3129/3129 lis/c 6501/6478 les/c/f 6502/6479/0 6539/6539/5395) [17]/[17,33] r=-1 lpr=6539 pi=[3306,6539)/2 crt=6431'70954 lcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [17,44] -> [17], acting [17,33] -> [17,33], acting_primary 17 -> 17, up_primary 17 -> 17, role -1 -> -1, features acting 4611087854031667199 upacting 4611087854031667199
2019-07-15 09:46:40.489 ffff8a2ccbd0  1 osd.44 pg_epoch: 6539 pg[5.2b7( v 3248'22282 (3241'19282,3248'22282] lb 5:ed43d65e:::rbd_data.123d361bb2f645.0000000000001f5d:head (bitwise) local-lis/les=4546/4550 n=165 ec=3142/3129 lis/c 6524/6510 les/c/f 6525/6511/0 6539/6539/6179) [21]/[21,26] r=-1 lpr=6539 pi=[3306,6539)/3 luod=0'0 crt=3248'22282 active+remapped mbc={}] start_peering_interval up [44,21] -> [21], acting [21,26] -> [21,26], acting_primary 21 -> 21, up_primary 44 -> 21, role -1 -> -1, features acting 4611087854031667199 upacting 4611087854031667199
2019-07-15 09:46:40.489 ffff882c8bd0  1 osd.44 pg_epoch: 6540 pg[5.13( v 5954'28297 (3241'25297,5954'28297] lb MIN (bitwise) local-lis/les=6181/6182 n=464 ec=3129/3129 lis/c 6466/6463 les/c/f 6467/6464/0 6540/6540/5876) [0]/[0,33] r=-1 lpr=6540 pi=[5876,6540)/2 crt=5954'28297 lcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [36,0] -> [0], acting [0,33] -> [0,33], acting_primary 0 -> 0, up_primary 36 -> 0, role -1 -> -1, features acting 4611087854031667199 upacting 4611087854031667199
2019-07-15 09:46:40.489 ffff892cabd0  1 osd.44 pg_epoch: 6539 pg[5.93( empty local-lis/les=0/0 n=0 ec=3129/3129 lis/c 4077/3306 les/c/f 4078/3307/0 6539/6539/6539) [1] r=-1 lpr=6539 pi=[3306,6539)/8 crt=0'0 unknown mbc={}] start_peering_interval up [44,1] -> [1], acting [44,1] -> [1], acting_primary 44 -> 1, up_primary 44 -> 1, role 0 -> -1, features acting 4611087854031667199 upacting 4611087854031667199
2019-07-15 09:46:40.489 ffff88ac9bd0  1 osd.44 pg_epoch: 6539 pg[5.1bb( v 6161'65198 (6161'62198,6161'65198] lb MIN (bitwise) local-lis/les=6181/6182 n=0 ec=3129/3129 lis/c 6524/6510 les/c/f 6525/6511/0 6539/6539/5029) [29]/[29,8] r=-1 lpr=6539 pi=[3306,6539)/3 luod=0'0 crt=6161'65198 lcod 0'0 active+remapped mbc={}] start_peering_interval up [29,44] -> [29], acting [29,8] -> [29,8], acting_primary 29 -> 29, up_primary 29 -> 29, role -1 -> -1, features acting 4611087854031667199 upacting 4611087854031667199
2019-07-15 09:46:40.489 ffff89acbbd0  1 osd.44 pg_epoch: 6540 pg[5.15f( v 6431'70954 (6431'67954,6431'70954] lb MIN (bitwise) local-lis/les=6432/6433 n=0 ec=3129/3129 lis/c 6501/6478 les/c/f 6502/6479/0 6539/6539/5395) [17]/[17,33] r=-1 lpr=6539 pi=[3306,6539)/2 crt=6431'70954 lcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
2019-07-15 09:46:40.489 ffff91adbbd0  0 osd.44 6540 _committed_osd_maps shutdown OSD via async signal
2019-07-15 09:46:40.489 ffff882c8bd0  1 osd.44 pg_epoch: 6540 pg[5.13( v 5954'28297 (3241'25297,5954'28297] lb MIN (bitwise) local-lis/les=6181/6182 n=464 ec=3129/3129 lis/c 6466/6463 les/c/f 6467/6464/0 6540/6540/5876) [0]/[0,33] r=-1 lpr=6540 pi=[5876,6540)/2 crt=5954'28297 lcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
2019-07-15 09:46:40.489 ffff88ac9bd0  1 osd.44 pg_epoch: 6540 pg[5.1bb( v 6161'65198 (6161'62198,6161'65198] lb MIN (bitwise) local-lis/les=6181/6182 n=0 ec=3129/3129 lis/c 6524/6510 les/c/f 6525/6511/0 6539/6539/5029) [29]/[29,8] r=-1 lpr=6539 pi=[3306,6539)/3 crt=6161'65198 lcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
2019-07-15 09:46:40.489 ffff8a2ccbd0  1 osd.44 pg_epoch: 6540 pg[5.2b7( v 3248'22282 (3241'19282,3248'22282] lb 5:ed43d65e:::rbd_data.123d361bb2f645.0000000000001f5d:head (bitwise) local-lis/les=4546/4550 n=165 ec=3142/3129 lis/c 6524/6510 les/c/f 6525/6511/0 6539/6539/6179) [21]/[21,26] r=-1 lpr=6539 pi=[3306,6539)/3 crt=3248'22282 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
2019-07-15 09:46:40.489 ffffa1afbbd0 -1 received  signal: Interrupt from Kernel ( Could be generated by pthread_kill(), raise(), abort(), alarm() ) UID: 0
2019-07-15 09:46:40.489 ffff892cabd0  1 osd.44 pg_epoch: 6540 pg[5.93( empty local-lis/les=0/0 n=0 ec=3129/3129 lis/c 4077/3306 les/c/f 4078/3307/0 6539/6539/6539) [1] r=-1 lpr=6539 pi=[3306,6539)/8 crt=0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
2019-07-15 09:46:40.489 ffffa1afbbd0 -1 osd.44 6540 *** Got signal Interrupt ***
2019-07-15 09:46:40.489 ffffa1afbbd0  0 osd.44 6540 prepare_to_stop starting shutdown
2019-07-15 09:46:40.489 ffffa1afbbd0  0 osd.44 6540 shutdown
2019-07-15 09:46:40.559 ffffa1afbbd0  1 bluestore(/var/lib/ceph/osd/ceph-44) umount
2019-07-15 09:46:40.679 ffffa1afbbd0  4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/arm64/AVAILABLE_ARCH/arm64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.2.1/rpm/el7/BUILD/ceph-14.2.1/src/rocksdb/db/db_impl.cc:365] Shutdown: canceling all background work
2019-07-15 09:46:40.679 ffffa1afbbd0  4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/arm64/AVAILABLE_ARCH/arm64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.2.1/rpm/el7/BUILD/ceph-14.2.1/src/rocksdb/db/db_impl.cc:521] Shutdown complete
2019-07-15 09:46:40.679 ffffa1afbbd0  1 bluefs umount

The log shows that the monitor marked osd.44 down even though it was still running. osd.44 had been marked down 6 times within the last 600 seconds, exceeding osd_max_markdown_count (5), so after receiving the interrupt signal it shut itself down.

The OSD logs on the other, healthy nodes show that no heartbeat replies were ever received from osd.44.

In ceph-osd.23.log, osd.23 reports that it gets no heartbeat reply from osd.44:

2019-07-12 06:50:18.528 ffff89882bd0 -1 osd.23 6178 heartbeat_check: no reply from 192.168.200.3:6802 osd.44 ever on either front or back, first ping sent
2019-07-12 06:49:58.148429 (oldest deadline 2019-07-12 06:50:18.148429)

In ceph-osd.5.log, osd.5 reports that it gets no heartbeat reply from osd.44:

2019-07-12 06:50:17.874 ffffb575ebd0 -1 osd.5 6178 heartbeat_check: no reply from 192.168.200.3:6802 osd.44 ever on either front or back, first ping sent
2019-07-12 06:49:57.751149 (oldest deadline 2019-07-12 06:50:17.751149)

In ceph-osd.25.log, osd.25 reports that it gets no heartbeat reply from osd.44:

2019-07-12 06:50:18.425 ffffa43c9bd0 -1 osd.25 6178 heartbeat_check: no reply from 192.168.200.3:6802 osd.44 ever on either front or back, first ping sent
2019-07-12 06:49:57.962790 (oldest deadline 2019-07-12 06:50:17.962790)

Judging from ceph and from each node, the down OSDs were marked down legitimately. Since only one node is affected, and every OSD on that node is affected, the next step is to check whether the OS itself reports anything abnormal.

dmesg on the abnormal node shows nothing unusual; in other words, at least no software or hardware error has been logged:

[ 7 11 20:34:55 2019] hinic 0000:90:00.0 enp144s0: [NIC]Finally num_qps: 16, num_rss: 16
[ 7 11 20:34:55 2019] hinic 0000:90:00.0 enp144s0: [NIC]Netdev is up
[ 7 11 20:34:55 2019] IPv6: ADDRCONF(NETDEV_UP): enp144s0: link is not ready
[ 7 11 20:34:56 2019] TCP: enp131s0: Driver has suspect GRO implementation, TCP performance may be compromised.
[ 7 11 20:35:17 2019]  nvme1n1: p1 p2 p3 p4 p5 p6 p7 p8 p9 p10 p11 p12
[ 7 11 20:35:41 2019]  nvme1n1: p1 p2 p3 p4 p5 p6 p7 p8 p9 p10 p11 p12
[ 7 11 20:36:05 2019]  nvme1n1: p1 p2 p3 p4 p5 p6 p7 p8 p9 p10 p11 p12
[ 7 11 20:36:29 2019]  nvme1n1: p1 p2 p3 p4 p5 p6 p7 p8 p9 p10 p11 p12
[ 7 11 20:36:54 2019]  nvme1n1: p1 p2 p3 p4 p5 p6 p7 p8 p9 p10 p11 p12
[ 7 11 20:37:19 2019]  nvme1n1: p1 p2 p3 p4 p5 p6 p7 p8 p9 p10 p11 p12

Checking /var/log/messages gives similar information:

Jul 12 07:03:13 ceph-node02 systemd: Starting Ceph object storage daemon osd.44...
Jul 12 07:03:13 ceph-node02 systemd: Started Ceph object storage daemon osd.44.
Jul 12 07:03:18 ceph-node02 ceph-osd: 2019-07-12 07:03:18.659 ffffaf9f6010 -1 osd.44 6208 log_to_monitors {default=true}
Jul 12 07:05:46 ceph-node02 ceph-osd: 2019-07-12 07:05:46.969 ffffabf17bd0 -1 received  signal: Interrupt from Kernel ( Could be generated by pthread_kill(), raise(), abort(), alarm() ) UID: 0
Jul 12 07:05:46 ceph-node02 ceph-osd: 2019-07-12 07:05:46.969 ffffabf17bd0 -1 osd.44 6293 *** Got signal Interrupt ***
Jul 12 07:52:25 ceph-node02 systemd-logind: Removed session 44.

So the OS itself reports no errors. At this point the firewall became the prime suspect, and the network was re-checked as well: every node can ping every other node.

First check SELinux: it is effectively disabled (Permissive) on every node, so it should not be the cause:

[2019-07-15 18:48:52]  192.168.100.107: Permissive
[2019-07-15 18:48:52]  192.168.100.104: Permissive
[2019-07-15 18:48:52]  192.168.100.101: Permissive
[2019-07-15 18:48:52]  192.168.100.103: Permissive
[2019-07-15 18:48:52]  192.168.100.102: Permissive
[2019-07-15 18:48:52]  192.168.100.108: Permissive
[2019-07-15 18:48:52]  192.168.100.106: Permissive
[2019-07-15 18:48:52]  192.168.100.105: Permissive

Then check firewalld: one node really is running it, and that node is node2:

pdsh -w ^arm.txt -R ssh "firewall-cmd --state"

[2019-07-15 18:50:47]  192.168.100.107: not running
[2019-07-15 18:50:47]  192.168.100.105: not running
[2019-07-15 18:50:47]  192.168.100.101: not running
[2019-07-15 18:50:47]  192.168.100.108: not running
[2019-07-15 18:50:47]  192.168.100.104: not running
[2019-07-15 18:50:47]  192.168.100.102: not running
[2019-07-15 18:50:47]  192.168.100.106: not running
[2019-07-15 18:50:47]  192.168.100.103: running

Solution

Stop firewalld on that node and all OSDs come back up.

systemctl stop firewalld
-7        90.09357     host ceph-node02
 34   hdd   7.50780         osd.34           up  1.00000 1.00000
 35   hdd   7.50780         osd.35           up  1.00000 1.00000
 36   hdd   7.50780         osd.36           up  1.00000 1.00000
 37   hdd   7.50780         osd.37           up  1.00000 1.00000
 38   hdd   7.50780         osd.38           up  1.00000 1.00000
 39   hdd   7.50780         osd.39           up  1.00000 1.00000
 40   hdd   7.50780         osd.40           up  1.00000 1.00000
 41   hdd   7.50780         osd.41           up  1.00000 1.00000
 42   hdd   7.50780         osd.42           up  1.00000 1.00000
 43   hdd   7.50780         osd.43           up  1.00000 1.00000
 44   hdd   7.50780         osd.44           up  1.00000 1.00000
 45   hdd   7.50780         osd.45           up  1.00000 1.00000
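
To keep the fix across reboots, either disable firewalld on that node permanently or leave it running and allow ceph traffic instead (a sketch; the ceph/ceph-mon service definitions are assumed to be present in this firewalld version):

# option 1: disable the firewall for good
systemctl disable firewalld
systemctl stop firewalld
# option 2: keep firewalld and open the ceph services
firewall-cmd --permanent --add-service=ceph --add-service=ceph-mon
firewall-cmd --reload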

dev root not exist

Symptom:

image0

image1

Analysis:

The installation did not complete, which is why the system complains that /dev/root does not exist after reboot.

Solution:

Use the KVM (virtual media) tool to mount the ISO and reinstall the system.

ebbchar error

[Tue Aug  6 20:51:23 2019] EBBChar: Initializing the EBBChar LKM
[Tue Aug  6 20:51:23 2019] EBBChar: registered correctly with major number 240
[Tue Aug  6 20:51:23 2019] EBBChar: device class registered correctly
[Tue Aug  6 20:51:23 2019] EBBChar: device class created correctly
[Tue Aug  6 20:51:38 2019] EBBChar: Device has been opened 1 time(s)
[Tue Aug  6 20:51:51 2019] Internal error: Accessing user space memory outside uaccess.h routines: 96000004 [#3] SMP
[Tue Aug  6 20:51:51 2019] Modules linked in: ebbchar(OE) binfmt_misc nls_iso8859_1 joydev input_leds ipmi_ssif shpchp ipmi_si ipmi_devintf ipmi_msghandler sch_fq_codel ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi nfsd auth_rpcgss nfs_acl lockd grace sunrpc ppdev lp parport ip_tables x_tables autofs4 btrfs zstd_compress raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 multipath linear hid_generic ses enclosure usbhid hid marvell hibmc_drm ttm aes_ce_blk drm_kms_helper aes_ce_cipher crc32_ce syscopyarea crct10dif_ce sysfillrect ghash_ce sysimgblt sha2_ce fb_sys_fops sha256_arm64 sha1_ce drm hisi_sas_v2_hw hisi_sas_main ehci_platform libsas scsi_transport_sas hns_dsaf hns_enet_drv hns_mdio hnae aes_neon_bs aes_neon_blk
[Tue Aug  6 20:51:51 2019]  crypto_simd cryptd aes_arm64 [last unloaded: ebbchar]
[Tue Aug  6 20:51:51 2019] CPU: 51 PID: 20412 Comm: test Tainted: G      D W  OE    4.15.0-29-generic #31-Ubuntu
[Tue Aug  6 20:51:51 2019] Hardware name: Huawei TaiShan 2280 /BC11SPCD, BIOS 1.58 10/24/2018
[Tue Aug  6 20:51:51 2019] pstate: 20400005 (nzCv daif +PAN -UAO)
[Tue Aug  6 20:51:51 2019] pc : string+0x28/0xa0
[Tue Aug  6 20:51:51 2019] lr : vsnprintf+0x5d4/0x730
[Tue Aug  6 20:51:51 2019] sp : ffff00001c013c90
[Tue Aug  6 20:51:51 2019] x29: ffff00001c013c90 x28: ffff000000b650b2
[Tue Aug  6 20:51:51 2019] x27: ffff000000b650b2 x26: ffff000000b66508
[Tue Aug  6 20:51:51 2019] x25: 00000000ffffffd8 x24: 0000000000000020
[Tue Aug  6 20:51:51 2019] x23: 000000007fffffff x22: ffff0000094f8000
[Tue Aug  6 20:51:51 2019] x21: ffff000008c54b00 x20: ffff000080b66507
[Tue Aug  6 20:51:51 2019] x19: ffff000000b66508 x18: 0000ffff7f758a70
[Tue Aug  6 20:51:51 2019] x17: 0000ffff7f6c7b80 x16: ffff0000082e3a80
[Tue Aug  6 20:51:51 2019] x15: 0000000000000000 x14: 0000000000000001
[Tue Aug  6 20:51:51 2019] x13: 726576726573204d x12: 5241206e6f207473
[Tue Aug  6 20:51:51 2019] x11: ffff00001c013dd0 x10: ffff00001c013dd0
[Tue Aug  6 20:51:51 2019] x9 : 00000000ffffffd0 x8 : fffffffffffffffe
[Tue Aug  6 20:51:51 2019] x7 : ffff000000b66508 x6 : 0000ffffee48e258
[Tue Aug  6 20:51:51 2019] x5 : 0000000000000000 x4 : 0000000000000043
[Tue Aug  6 20:51:51 2019] x3 : ffff0a00ffffff04 x2 : ffff000080b66507
[Tue Aug  6 20:51:51 2019] x1 : ffff000080b66507 x0 : ffffffffffffffff
[Tue Aug  6 20:51:51 2019] Process test (pid: 20412, stack limit = 0x00000000c3b1dafa)
[Tue Aug  6 20:51:51 2019] Call trace:
[Tue Aug  6 20:51:51 2019]  string+0x28/0xa0
[Tue Aug  6 20:51:51 2019]  vsnprintf+0x5d4/0x730
[Tue Aug  6 20:51:51 2019]  sprintf+0x68/0x88
[Tue Aug  6 20:51:51 2019]  dev_write+0x3c/0xb0 [ebbchar]
[Tue Aug  6 20:51:51 2019]  __vfs_write+0x48/0x80
[Tue Aug  6 20:51:51 2019]  vfs_write+0xac/0x1b0
[Tue Aug  6 20:51:51 2019]  SyS_write+0x6c/0xd8
[Tue Aug  6 20:51:51 2019]  el0_svc_naked+0x30/0x34
[Tue Aug  6 20:51:51 2019] Code: f13ffcdf d1000408 540002c9 b4000320 (394000c5)

HDD broken (disk failure)

Symptom

The failing disk is the system disk. While copying data onto it with scp, dmesg reported print_req_error, indicating disk damage; installing gcc triggered print_req_error as well. After that the system no longer worked properly and even ls could not be executed.

[ 1522.788557] print_req_error: I/O error, dev sda, sector 7799367864 flags 80700
[ 1522.795796] sd 2:0:0:0: [sda] tag#728 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1522.804227] sd 2:0:0:0: [sda] tag#728 Sense Key : Not Ready [current]
[ 1522.810750] sd 2:0:0:0: [sda] tag#728 Add. Sense: Logical unit not ready, hard reset required
[ 1522.819263] sd 2:0:0:0: [sda] tag#728 CDB: Read(16) 88 00 00 00 00 01 d0 e0 e9 b8 00 00 01 00 00 00
[ 1522.828294] print_req_error: I/O error, dev sda, sector 7799368120 flags 80700
[ 1522.835515] ata3: EH complete
[ 1522.838506] sd 2:0:0:0: [sda] tag#28 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 1522.838511] sd 2:0:0:0: [sda] tag#29 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 1522.838520] sd 2:0:0:0: [sda] tag#30 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 1522.838524] sd 2:0:0:0: [sda] tag#30 CDB: Write(16) 8a 00 00 00 00 01 cb 80 b0 02 00 00 00 01 00 00
[ 1522.838525] print_req_error: I/O error, dev sda, sector 7709175810 flags 1001
[ 1522.838538] XFS (dm-0): metadata I/O error in "xfs_buf_iodone_callback_error" at daddr 0x2 len 1 error 5
[ 1522.847287] sd 2:0:0:0: [sda] tag#28 CDB: Synchronize Cache(10) 35 00 00 00 00 00 00 00 00 00
[ 1522.856056] sd 2:0:0:0: [sda] tag#29 CDB: Read(16) 88 00 00 00 00 01 d0 e0 e8 b8 00 00 00 80 00 00
[ 1522.859003] sd 2:0:0:0: [sda] tag#31 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 1522.859006] sd 2:0:0:0: [sda] tag#31 CDB: Write(16) 8a 00 00 00 00 01 cb 80 b0 02 00 00 00 01 00 00
[ 1522.859008] print_req_error: I/O error, dev sda, sector 7709175810 flags 1001
[ 1522.859020] sd 2:0:0:0: [sda] tag#125 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 1522.859023] sd 2:0:0:0: [sda] tag#125 CDB: Write(16) 8a 00 00 00 00 01 cb 80 b0 18 00 00 00 10 00 00
[ 1522.859024] print_req_error: I/O error, dev sda, sector 7709175832 flags 1001
[ 1522.859031] sd 2:0:0:0: [sda] tag#126 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK

Analysis

According to what can be found online, this error most likely means the disk is physically damaged:

print_req_error: I/O error, dev sda, sector 7799368120 flags 80700

Use smartmontools (smartctl) to inspect the disk:

smartctl -a /dev/sdb

For the raw output see [dev_sdb_smartctl_output.txtl]

smartctl 6.6 2017-11-05 r4594 [aarch64-linux-4.18.0-74.el8.aarch64] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Enterprise Capacity 3.5 HDD
Device Model:     ST4000NM0035-1V4107
Serial Number:    ZC14R1M8
LU WWN Device Id: 5 000c50 0a60470ef
Firmware Version: TN03
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-3 T13/2161-D revision 5
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Wed Apr 17 04:48:27 2019 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
                    was completed without error.
                    Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed
                    without error or no self-test has ever
                    been run.
Total time to complete Offline
data collection:        (  584) seconds.
Offline data collection
capabilities:            (0x7b) SMART execute Offline immediate.
                    Auto Offline data collection on/off support.
                    Suspend Offline collection upon new
                    command.
                    Offline surface scan supported.
                    Self-test supported.
                    Conveyance Self-test supported.
                    Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                    power-saving mode.
                    Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                    General Purpose Logging supported.
Short self-test routine
recommended polling time:    (   1) minutes.
Extended self-test routine
recommended polling time:    ( 425) minutes.
Conveyance self-test routine
recommended polling time:    (   2) minutes.
SCT capabilities:          (0x50bd) SCT Status supported.
                    SCT Error Recovery Control supported.
                    SCT Feature Control supported.
                    SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   054   053   044    Pre-fail  Always       -       8253459
  3 Spin_Up_Time            0x0003   093   092   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       79
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       60   # should be 0; sectors remapped by the drive itself, usually when the disk is failing
  7 Seek_Error_Rate         0x000f   087   060   045    Pre-fail  Always       -       511774162
  9 Power_On_Hours          0x0032   097   097   000    Old_age   Always       -       3334 (205 144 0)
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       63
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   096   096   000    Old_age   Always       -       4
188 Command_Timeout         0x0032   100   089   000    Old_age   Always       -       22 22 31
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   063   049   040    Old_age   Always       -       37 (Min/Max 37/39)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       718
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       23
193 Load_Cycle_Count        0x0032   097   097   000    Old_age   Always       -       7886
194 Temperature_Celsius     0x0022   037   051   000    Old_age   Always       -       37 (0 21 0 0 0)
195 Hardware_ECC_Recovered  0x001a   003   001   000    Old_age   Always       -       8253459
197 Current_Pending_Sector  0x0012   099   099   000    Old_age   Always       -       614   # should be 0; unstable sectors pending reallocation
198 Offline_Uncorrectable   0x0010   099   099   000    Old_age   Offline      -       614   # should be 0; offline uncorrectable (unusable) sectors
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       1045h+16m+00.994s
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       4583746847
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       3489786674

SMART Error Log Version: 1
ATA Error Count: 4
    CR = Command Register [HEX]
    FR = Features Register [HEX]
    SC = Sector Count Register [HEX]
    SN = Sector Number Register [HEX]
    CL = Cylinder Low Register [HEX]
    CH = Cylinder High Register [HEX]
    DH = Device/Head Register [HEX]
    DC = Device Command Register [HEX]
    ER = Error register [HEX]
    ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 4 occurred at disk power-on lifetime: 3306 hours (137 days + 18 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 53 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 00 20 80 d0 b2 40 00      00:10:59.166  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00      00:10:55.078  READ FPDMA QUEUED
  60 00 20 ff ff ff 4f 00      00:10:54.341  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00      00:10:54.334  READ FPDMA QUEUED
  60 00 80 ff ff ff 4f 00      00:10:54.333  READ FPDMA QUEUED

Error 3 occurred at disk power-on lifetime: 3293 hours (137 days + 5 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 53 00 ff ff ff 0f  Error: WP at LBA = 0x0fffffff = 268435455

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  61 00 20 ff ff ff 4f 00   1d+03:38:29.611  WRITE FPDMA QUEUED
  60 00 80 ff ff ff 4f 00   1d+03:38:29.611  READ FPDMA QUEUED
  61 00 08 ff ff ff 4f 00   1d+03:38:29.611  WRITE FPDMA QUEUED
  60 00 20 80 d0 b2 40 00   1d+03:38:29.610  READ FPDMA QUEUED
  60 00 20 ff ff ff 4f 00   1d+03:38:29.610  READ FPDMA QUEUED

Error 2 occurred at disk power-on lifetime: 3293 hours (137 days + 5 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 53 00 ff ff ff 0f  Error: WP at LBA = 0x0fffffff = 268435455

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  61 00 08 ff ff ff 4f 00   1d+03:37:57.039  WRITE FPDMA QUEUED
  60 00 20 80 d0 b2 40 00   1d+03:37:53.278  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00   1d+03:37:51.151  READ FPDMA QUEUED
  60 00 20 ff ff ff 4f 00   1d+03:37:51.146  READ FPDMA QUEUED
  60 00 20 ff ff ff 4f 00   1d+03:37:51.140  READ FPDMA QUEUED

Error 1 occurred at disk power-on lifetime: 3287 hours (136 days + 23 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 53 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 00 20 80 d0 b2 40 00      22:32:51.562  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00      22:32:45.502  READ FPDMA QUEUED
  60 00 20 ff ff ff 4f 00      22:32:45.497  READ FPDMA QUEUED
  60 00 20 ff ff ff 4f 00      22:32:45.491  READ FPDMA QUEUED
  60 00 20 ff ff ff 4f 00      22:32:45.484  READ FPDMA QUEUED

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed: read failure       90%      3232         548968
# 2  Short offline       Completed: read failure       90%      3231         548964
# 3  Short offline       Completed: read failure       90%      3231         548969
# 4  Short offline       Completed: read failure       90%      3206         548969

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
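
To re-check the disk (for example after replacing it), a new self-test can be started and its result read back (a sketch; a short test normally finishes within a few minutes):

smartctl -t short /dev/sda     # start a short offline self-test
sleep 120                      # give the test time to finish
smartctl -l selftest /dev/sda  # show the self-test log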

No disks visible when installing CentOS

The CentOS installation screen does not show any disks. image0

The likely cause is that the disks on the new machine have not been configured on the RAID controller; either build a RAID array or enable disk pass-through.

Here we enable pass-through mode, i.e. JBOD: change the RAID card's JBOD mode from Disabled to Enabled.

image1

Also switch each physical disk's mode to JBOD. image2

Now go back to the installer screen and refresh; the disks show up. image3

libvirtError: unable to find any master

When installing a virtual machine, libvirt reports:

libvirtError: operation failed: unable to find any master var store for loader: /usr/share/AAVMF/AAVMF_CODE.fd

Solution:

Change the libvirt nvram configuration to /usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd and the error goes away.

For reference, part of a virtual machine XML:

<domain type='kvm'>
  <name>CentOS7.6</name>
  <uuid>52f824a5-f3bf-4322-a81c-cd9557d0decb</uuid>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <os>
    <type arch='aarch64' machine='virt-rhel7.6.0'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/AAVMF/AAVMF_CODE.fd</loader>
    <nvram>/home/user1/.config/libvirt/qemu/nvram/CentOS7.6_VARS.fd</nvram>
    <boot dev='hd'/>
  </os>

For reference, the corresponding qemu.conf [1] setting:

nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd"]
[1]https://github.com/libvirt/libvirt/blob/8e681cdab9a0c93208bbd7f1c8e82998356f4019/src/qemu/qemu.conf
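If the qemu.conf route is used, libvirtd has to be restarted for the new nvram entry to take effect; a minimal sketch (the config path is the usual default and may differ):

vi /etc/libvirt/qemu.conf        # add the nvram = [...] line shown above
systemctl restart libvirtd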

libvirt guest display

image0 (科大讯飞3_libvirt.png)

Solution:

virt-install \
  --name CentOS7.6 \
  --os-variant "centos7.0" \
  --memory 8192 \
  --vcpus 4 \
  --disk size=20 \
  --graphics vnc,listen=0.0.0.0,keymap=en-us \
  --location /home/user1/CentOS-7-aarch64-Minimal-1810.iso \
  --extra-args console=ttyS0

or, for an existing domain, set the graphics element in its XML:

<graphics type='vnc' port='5904' autoport='no' listen='0.0.0.0'>
  <listen type='address' address='0.0.0.0'/>
</graphics>
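To locate the VNC display of a running guest afterwards, something like the following should work (the domain name CentOS7.6 is taken from the example above; the client command is just one possibility):

virsh vncdisplay CentOS7.6       # prints e.g. :4, i.e. TCP port 5904
# then connect from another machine with any VNC client, for example:
# vncviewer <host-ip>:5904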

nfs lock 1670650

On a single machine, NFS file locking fails.

Install NFS

# install the package
yum install nfs-utils
# stop the firewall
systemctl stop firewalld.service
# create the directory to export
mkdir nfs-test-dir
# configure the export
cat /etc/exports
/root/nfs-test-dir *(rw,sync,no_root_squash)
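Depending on the environment, the NFS services may also need to be running and the export refreshed before mounting; a sketch (not part of the original reproduction log):

systemctl start rpcbind nfs-server
exportfs -rav                    # re-read /etc/exports and list the active exports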

Mount

mount -t nfs -o vers=3 localhost:/root/nfs-test-dir /tmp

Run the test

[root@redhat75 ~]# ./test.sh /tmp

Test locking file: /tmp/flock-user-test.7901
flock: 10: Bad file descriptor
Lock failed: 65
flock: 10: Bad file descriptor
Unlock failed: 65

The test script:

#!/bin/bash
# Usage: $0 <directory> [<directory>...]

for directory in $@
do
    test_file="$directory/flock-user-test.$$"
    printf "\nTest locking file: %s\n" "$test_file"
    touch "$test_file"
    {
        #fuser "$test_file"
        flock -w 2 -x $test_fd || printf "Lock failed: %d\n" $?
        #ls -l /proc/$$/fd
        #fuser "$test_file"
        flock -u -x $test_fd || printf "Unlock failed: %d\n" $?
    } {test_fd}<"$test_file"
    rm -f "$test_file"
done

Note: the reproduction above was done on a single machine; otherwise the problem may not reproduce. With RHEL 7.6 as the NFS server and Ubuntu 18.04 as the client mounting NFS, the problem did not occur.

Verified results: on the ARM platform RHEL 7.4 is OK, 7.5 and 7.6 fail, and 8.0 is OK; on x86, 7.6 is OK. See the table below.

Arch     RHEL7.4  RHEL7.5  RHEL7.6  RHEL8.0
x86      -        -        ok       -
aarch64  ok       fail     fail     ok

The kernel versions of the RHEL releases involved:

RHEL7.4  4.11.0-44.el7a.aarch64
RHEL7.5  4.14.0-49.el7a.aarch64
RHEL7.6  4.14.0-115.el7a.aarch64
RHEL8.0  4.18.0-64.el8.aarch64

Tracing with strace

strace -ff -o strace-thread-output ./test.sh /tmp

The failure happens in the flock system call; the key difference is:

RHEL7.6 flock(10,LOCK_EX)   = -1 EBADF(Bad file descriptor)
RHEL7.4 flock(10,LOCK_EX)   = 0

The flock implementation

In fs/locks.c, LOCK_EX is translated into F_WRLCK:

static inline int flock_translate_cmd(int cmd) {
        if (cmd & LOCK_MAND)
                return cmd & (LOCK_MAND | LOCK_RW);
        switch (cmd) {
        case LOCK_SH:
                return F_RDLCK;
        case LOCK_EX:
                return F_WRLCK;
        case LOCK_UN:
                return F_UNLCK;
        }
        return -EINVAL;
}

fs/locks.c also defines the flock system call, which dispatches through the function pointer in the f_op structure:

SYSCALL_DEFINE2(flock, unsigned int, fd, unsigned int, cmd)
{
    /*......*/

    if (f.file->f_op->flock && is_remote_lock(f.file))
                error = f.file->f_op->flock(f.file,
                                          (can_sleep) ? F_SETLKW : F_SETLK,
                                          lock);
    /*.....*/
}

fs/nfs/file.c defines the file operations for NFS files; the flock operation is nfs_flock:

const struct file_operations nfs_file_operations = {
        .llseek         = nfs_file_llseek,
        .read_iter      = nfs_file_read,
        .write_iter     = nfs_file_write,
        .mmap           = nfs_file_mmap,
        .open           = nfs_file_open,
        .flush          = nfs_file_flush,
        .release        = nfs_file_release,
        .fsync          = nfs_file_fsync,
        .lock           = nfs_lock,
        .flock          = nfs_flock, // this one
        .splice_read    = generic_file_splice_read,
        .splice_write   = iter_file_splice_write,
        .check_flags    = nfs_check_flags,
        .setlease       = simple_nosetlease,
};
EXPORT_SYMBOL_GPL(nfs_file_operations);

However, different RHEL kernels implement nfs_flock differently.

RHEL7.4 4.11.0-44.el7a.aarch64 kernel-alt-4.11.0-44.el7a

int nfs_flock(struct file *filp, int cmd, struct file_lock *fl)
{
        struct inode *inode = filp->f_mapping->host;
        int is_local = 0;

        dprintk("NFS: flock(%pD2, t=%x, fl=%x)\n",
                        filp, fl->fl_type, fl->fl_flags);

        if (!(fl->fl_flags & FL_FLOCK))
                return -ENOLCK;

        /*
         * The NFSv4 protocol doesn't support LOCK_MAND, which is not part of
         * any standard. In principle we might be able to support LOCK_MAND
         * on NFSv2/3 since NLMv3/4 support DOS share modes, but for now the
         * NFS code is not set up for it.
         */
        if (fl->fl_type & LOCK_MAND)
                return -EINVAL;

        if (NFS_SERVER(inode)->flags & NFS_MOUNT_LOCAL_FLOCK)
                is_local = 1;

        /* We're simulating flock() locks using posix locks on the server */
        if (fl->fl_type == F_UNLCK)
                return do_unlk(filp, cmd, fl, is_local);
        return do_setlk(filp, cmd, fl, is_local);
}
EXPORT_SYMBOL_GPL(nfs_flock);

RHEL7.5 4.14.0-49.el7a.aarch64 kernel-alt-4.14.0-49.el7a

int nfs_flock(struct file *filp, int cmd, struct file_lock *fl)
{
        struct inode *inode = filp->f_mapping->host;
        int is_local = 0;

        dprintk("NFS: flock(%pD2, t=%x, fl=%x)\n",
                        filp, fl->fl_type, fl->fl_flags);

        if (!(fl->fl_flags & FL_FLOCK))
                return -ENOLCK;

        /*
         * The NFSv4 protocol doesn't support LOCK_MAND, which is not part of
         * any standard. In principle we might be able to support LOCK_MAND
         * on NFSv2/3 since NLMv3/4 support DOS share modes, but for now the
         * NFS code is not set up for it.
         */
        if (fl->fl_type & LOCK_MAND)
                return -EINVAL;

        if (NFS_SERVER(inode)->flags & NFS_MOUNT_LOCAL_FLOCK)
                is_local = 1;

        /*
         * VFS doesn't require the open mode to match a flock() lock's type.
         * NFS, however, may simulate flock() locking with posix locking which
         * requires the open mode to match the lock type.
         */
        switch (fl->fl_type) {
        case F_UNLCK:
                return do_unlk(filp, cmd, fl, is_local);
        case F_RDLCK:
                if (!(filp->f_mode & FMODE_READ))
                        return -EBADF;
                break;
        case F_WRLCK:
                if (!(filp->f_mode & FMODE_WRITE))
                        return -EBADF;
        }

        return do_setlk(filp, cmd, fl, is_local);
}
EXPORT_SYMBOL_GPL(nfs_flock);

RHEL8.0 4.18.0-64.el8.aarch64 kernel-4.18.0-64.el8

int nfs_flock(struct file *filp, int cmd, struct file_lock *fl)
{
        struct inode *inode = filp->f_mapping->host;
        int is_local = 0;

        dprintk("NFS: flock(%pD2, t=%x, fl=%x)\n",
                        filp, fl->fl_type, fl->fl_flags);

        if (!(fl->fl_flags & FL_FLOCK))
                return -ENOLCK;

        /*
         * The NFSv4 protocol doesn't support LOCK_MAND, which is not part of
         * any standard. In principle we might be able to support LOCK_MAND
         * on NFSv2/3 since NLMv3/4 support DOS share modes, but for now the
         * NFS code is not set up for it.
         */
        if (fl->fl_type & LOCK_MAND)
                return -EINVAL;

        if (NFS_SERVER(inode)->flags & NFS_MOUNT_LOCAL_FLOCK)
                is_local = 1;

        /* We're simulating flock() locks using posix locks on the server */
        if (fl->fl_type == F_UNLCK)
                return do_unlk(filp, cmd, fl, is_local);
        return do_setlk(filp, cmd, fl, is_local);
}
EXPORT_SYMBOL_GPL(nfs_flock);

Finding the change history of nfs_flock

List the changes to this file between the two kernels:

git log --oneline kernel-alt-4.11.0-44.el7a..kernel-4.18.0-64.el8 -- fs/nfs/file.c

The result:

fcfa447 NFS: Revert "NFS: Move the flock open mode check into nfs_flock()"
bf4b490 NFS: various changes relating to reporting IO errors.
e973b1a5 NFS: Sync the correct byte range during synchronous writes
779eafa NFS: flush data when locking a file to ensure cache coherence for mmap.
6ba80d4 NFS: Optimize fallocate by refreshing mapping when needed.
442ce04 NFS: invalidate file size when taking a lock.
c373fff NFSv4: Don't special case "launder"
f30cb75 NFS: Always wait for I/O completion before unlock
e129372 NFS: Move the flock open mode check into nfs_flock()

Two commits are of interest:

e129372 NFS: Move the flock open mode check into nfs_flock()
fcfa447 NFS: Revert "NFS: Move the flock open mode check into nfs_flock()"

Looking at where these two commits appear: after 7.4, commit e129372 changed the code, and before 8.0 it was reverted. This matches the test results: 7.4 and 8.0 work, while 7.5 and 7.6 fail.

RHEL8.0  4.18.0-64.el8.aarch64      kernel-4.18.0-64.el8

fcfa447 NFS: Revert "NFS: Move the flock open mode check into nfs_flock()"

RHEL7.6  4.14.0-115.el7a.aarch64    kernel-alt-4.14.0-115.el7a
RHEL7.5  4.14.0-49.el7a.aarch64     kernel-alt-4.14.0-49.el7a

e129372 NFS: Move the flock open mode check into nfs_flock()

RHEL7.4  4.11.0-44.el7a.aarch64     kernel-alt-4.11.0-44.el7a
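One way to double-check this mapping, assuming the kernel git repository with all release tags is available, is to ask git which tags already contain each commit:

git tag --contains e129372       # tags that include the problematic commit
git tag --contains fcfa447       # tags that include the revert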

Root cause: the open mode should not be required to match the lock type

The problem was introduced by commit e129372:

diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index 6682139..b7f4af3 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -820,9 +820,23 @@ int nfs_flock(struct file *filp, int cmd, struct file_lock *fl)
        if (NFS_SERVER(inode)->flags & NFS_MOUNT_LOCAL_FLOCK)
                is_local = 1;

-       /* We're simulating flock() locks using posix locks on the server */
-       if (fl->fl_type == F_UNLCK)
+       /*
+        * VFS doesn't require the open mode to match a flock() lock's type.
+        * NFS, however, may simulate flock() locking with posix locking which
+        * requires the open mode to match the lock type.
+        */
+       switch (fl->fl_type) {
+       case F_UNLCK:
                return do_unlk(filp, cmd, fl, is_local);
+       case F_RDLCK:
+               if (!(filp->f_mode & FMODE_READ))
+                       return -EBADF;
+               break;
+       case F_WRLCK:
+               if (!(filp->f_mode & FMODE_WRITE))
+                       return -EBADF;
+       }
+
        return do_setlk(filp, cmd, fl, is_local);
 }
 EXPORT_SYMBOL_GPL(nfs_flock);

When flock(10, LOCK_EX) is executed, LOCK_EX is translated into the write lock F_WRLCK, but strace shows that the file was opened read-only (O_RDONLY) and the descriptor was then duplicated to fd 10:

openat(AT_FDCWD, "/tmp//flock-user-test.4600", O_RDONLY) = 3
fcntl(3, F_DUPFD, 10)                   = 10
...
flock(10,LOCK_EX)

So the code takes this branch:

+       case F_WRLCK:
+               if (!(filp->f_mode & FMODE_WRITE))
+                       return -EBADF;
+       }

This requires that when a write lock is requested, the file's open mode f_mode must include FMODE_WRITE; here f_mode is FMODE_READ, so the call fails with -EBADF.

f_mode is described in include/linux/fs.h:

/*
 * flags in file.f_mode.  Note that FMODE_READ and FMODE_WRITE must correspond
 * to O_WRONLY and O_RDWR via the strange trick in __dentry_open()
 */
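The open-mode dependence can also be seen directly from user space by taking the same flock on a read-only versus a read-write descriptor; a sketch based on the test script above (the file name is arbitrary, and /tmp is assumed to be the NFS mount as in the reproduction):

f=/tmp/flock-mode-test
touch "$f"
exec {fd_ro}<"$f"                # open read-only
flock -x "$fd_ro" || echo "read-only fd: exclusive lock failed ($?)"
exec {fd_rw}<>"$f"               # open read-write
flock -x "$fd_rw" && echo "read-write fd: exclusive lock ok"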

The reason for the revert is explained in commit fcfa447: the NFS implementation may simulate flock() with POSIX locking, but that open-mode requirement should not be enforced for flock().

commit fcfa447062b2061e11f68b846d61cbfe60d0d604
Author: Benjamin Coddington <bcodding@redhat.com>
Date:   Fri Nov 10 06:27:49 2017 -0500

    NFS: Revert "NFS: Move the flock open mode check into nfs_flock()"

    Commit e12937279c8b "NFS: Move the flock open mode check into nfs_flock()"
    changed NFSv3 behavior for flock() such that the open mode must match the
    lock type, however that requirement shouldn't be enforced for flock().

    Signed-off-by: Benjamin Coddington <bcodding@redhat.com>
    Cc: stable@vger.kernel.org # v4.12
    Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>

Build a patched kernel to verify

In the kernel source root directory, switch to the RHEL 7.6 kernel code:

git checkout -b fix_flock kernel-alt-4.14.0-115.el7a

Cherry-pick commit fcfa447:

git cherry-pick fcfa447

Add the kernel build configuration:

cp ~/config-4.14.0-115.el7a.aarch64 .config

Run the build script:

build-kernel-natively.sh

Copy the kernel package to the target host:

scp /root/rpmbuild/RPMS/aarch64/kernel-4.14.0_alt_115_fix_nfs_flock_2019_02_01-3.aarch64.rpm me@192.168.1.126:~/

Install it and reboot:

yum install kernel-4.14.0_alt_115_fix_nfs_flock_2019_02_01-3.aarch64.rpm
reboot -f

Select the new kernel in the boot menu:

Red Hat Enterprise Linux Server (4.14.0-alt-115-fix-nfs-flock-2019-02-01>
Red Hat Enterprise Linux Server (4.14.0-115.el7a.aarch64) 7.6 (Maipo)
Red Hat Enterprise Linux Server (0-rescue-f0b91f9908f44910b44f62583e5da5>

Use the ^ and v keys to change the selection.
Press 'e' to edit the selected item, or 'c' for a command prompt.

Confirm the kernel version has changed:

[root@redhat76 ~]# uname -a
Linux redhat76 4.14.0-alt-115-fix-nfs-flock-2019-02-01 #3 SMP Fri Feb 1 15:47:15 CST 2019 aarch64 aarch64 aarch64 GNU/Linux
[root@redhat76 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.6 (Maipo)

Re-run the test; it now passes:

[root@redhat76 ~]#bash -x ./test.sh /tmp
+ for directory in '$@'
+ test_file=/tmp/flock-user-test.9916
+ printf '\nTest locking file: %s\n' /tmp/flock-user-test.9916

Test locking file: /tmp/flock-user-test.9916
+ touch /tmp/flock-user-test.9916
+ flock -w 2 -x 10
+ flock -u -x 10
+ rm -f /tmp/flock-user-test.9916

nfs not responding (the 8DB problem)

Reproduction

The conditions under which the 8DB problem reproduces:

NFS server                                    NFS client                                    Result
cpu20 RHEL7.6 kernel-alt-4.14.0-115.el7a      cpu16 Ubuntu 18.04                            reproduces
cpu20 RHEL7.6 kernel-alt-4.14.0-115.7.1.el7a  cpu16 Ubuntu 18.04                            does not reproduce
cpu16 RHEL7.6 kernel-alt-4.14.0-115.el7a      cpu20 RHEL7.6 kernel-alt-4.14.0-115.el7a      reproduces
cpu16 RHEL7.6 kernel-alt-4.14.0-115.el7a      cpu20 RHEL7.6 kernel-alt-4.14.0-115.7.1.el7a  does not reproduce

System information:

[root@readhat76 ~]#cat /etc/os-release
NAME="Red Hat Enterprise Linux Server"
VERSION="7.6 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.6"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.6 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.6:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.6
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.6"
[root@readhat76 ~]#

NFS server configuration

[root@readhat76 ~]#cat /etc/exports
/root/nfs-test-dir *(rw,sync,no_root_squash)

NFS client configuration

[root@readhat76 ~]#mount -o vers=3 192.168.1.215:/root/nfs-test-dir /root/nfs-client-dir

[root@readhat76 ~]#df
Filesystem                       1K-blocks     Used  Available Use% Mounted on
devtmpfs                         267835008        0  267835008   0% /dev
tmpfs                            267845760        0  267845760   0% /dev/shm
tmpfs                            267845760    41728  267804032   1% /run
tmpfs                            267845760        0  267845760   0% /sys/fs/cgroup
/dev/mapper/rhel_readhat76-root   52403200 12393324   40009876  24% /
/dev/sdb2                          1038336   127428     910908  13% /boot
/dev/sdb1                           204580     7944     196636   4% /boot/efi
/dev/mapper/rhel_readhat76-home 3847258716    33008 3847225708   1% /home
tmpfs                             53569216        0   53569216   0% /run/user/0
/dev/loop0                         3109414  3109414          0 100% /mnt/cd_redhat7.6
localhost:/root/nfs-test-dir      52403200 12392448   40010752  24% /root/nfs-client-dir

Build a kernel source tree on the NFS client

The source tree must be located in the NFS-mounted directory:

wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.0.3.tar.xz
xz -d linux-5.0.3.tar.xz
tar xf linux-5.0.3.tar
cd linux-5.0.3

make defconfig

make -j48

Reproduced successfully

The build hangs on the NFS client:

me@ubuntu:~/nfs-client-dir/linux-5.0.3$ sudo make -j48
  WRAP    arch/arm64/include/generated/uapi/asm/kvm_para.h
  WRAP    arch/arm64/include/generated/uapi/asm/errno.h
  WRAP    arch/arm64/include/generated/uapi/asm/ioctl.h
  WRAP    arch/arm64/include/generated/uapi/asm/ioctls.h
  WRAP    arch/arm64/include/generated/uapi/asm/ipcbuf.h
  WRAP    arch/arm64/include/generated/uapi/asm/mman.h
  WRAP    arch/arm64/include/generated/uapi/asm/msgbuf.h
  WRAP    arch/arm64/include/generated/uapi/asm/poll.h
  WRAP    arch/arm64/include/generated/uapi/asm/resource.h
  WRAP    arch/arm64/include/generated/uapi/asm/sembuf.h
  WRAP    arch/arm64/include/generated/uapi/asm/shmbuf.h
  WRAP    arch/arm64/include/generated/uapi/asm/siginfo.h
  UPD     include/config/kernel.release
  WRAP    arch/arm64/include/generated/uapi/asm/socket.h
  WRAP    arch/arm64/include/generated/uapi/asm/sockios.h
  WRAP    arch/arm64/include/generated/uapi/asm/swab.h
  WRAP    arch/arm64/include/generated/uapi/asm/termbits.h
  WRAP    arch/arm64/include/generated/uapi/asm/termios.h
  WRAP    arch/arm64/include/generated/uapi/asm/types.h
  UPD     include/generated/uapi/linux/version.h

On the NFS client:

me@ubuntu:~$ dmesg -T
[Thu Mar 21 15:17:02 2019] nfsacl: server 192.168.1.215 not responding, still trying
[Thu Mar 21 15:17:02 2019] nfsacl: server 192.168.1.215 not responding, still trying

On the NFS server:

[root@redhat76 linux-5.0.3]# dmesg -T
[Thu Mar 21 15:19:36 2019] rpc-srv/tcp: nfsd: got error -11 when sending 116 bytes - shutting down socket
[Thu Mar 21 15:21:15 2019] rpc-srv/tcp: nfsd: got error -11 when sending 116 bytes - shutting down socket

The call stack of the hung make process:

[Sat Apr 13 17:50:11 2019] [<ffff000008085e24>] __switch_to+0x8c/0xa8
[Sat Apr 13 17:50:11 2019] [<ffff000008828f18>] __schedule+0x328/0x860
[Sat Apr 13 17:50:11 2019] [<ffff000008829484>] schedule+0x34/0x8c
[Sat Apr 13 17:50:11 2019] [<ffff000000ef009c>] rpc_wait_bit_killable+0x2c/0xb8 [sunrpc]
[Sat Apr 13 17:50:11 2019] [<ffff000008829a7c>] __wait_on_bit+0xac/0xe0
[Sat Apr 13 17:50:11 2019] [<ffff000008829b58>] out_of_line_wait_on_bit+0xa8/0xcc
[Sat Apr 13 17:50:11 2019] [<ffff000000ef132c>] __rpc_execute+0x114/0x468 [sunrpc]
[Sat Apr 13 17:50:11 2019] [<ffff000000ef1a58>] rpc_execute+0x7c/0x10c [sunrpc]
[Sat Apr 13 17:50:11 2019] [<ffff000000ee1150>] rpc_run_task+0x118/0x168 [sunrpc]
[Sat Apr 13 17:50:11 2019] [<ffff000000ee3b44>] rpc_call_sync+0x6c/0xc0 [sunrpc]
[Sat Apr 13 17:50:11 2019] [<ffff000000de09dc>] nfs3_rpc_wrapper.constprop.11+0x78/0xd4 [nfsv3]
[Sat Apr 13 17:50:11 2019] [<ffff000000de1fd4>] nfs3_proc_getattr+0x70/0xec [nfsv3]
[Sat Apr 13 17:50:11 2019] [<ffff000002c7c114>] __nfs_revalidate_inode+0xf8/0x384 [nfs]
[Sat Apr 13 17:50:11 2019] [<ffff000002c755dc>] nfs_do_access+0x194/0x430 [nfs]
[Sat Apr 13 17:50:11 2019] [<ffff000002c75a48>] nfs_permission+0x15c/0x21c [nfs]
[Sat Apr 13 17:50:11 2019] [<ffff0000082adf08>] __inode_permission+0x98/0xf4
[Sat Apr 13 17:50:11 2019] [<ffff0000082adf94>] inode_permission+0x30/0x6c
[Sat Apr 13 17:50:11 2019] [<ffff0000082b10e4>] link_path_walk+0x7c/0x4ac
[Sat Apr 13 17:50:11 2019] [<ffff0000082b164c>] path_lookupat+0xac/0x230
[Sat Apr 13 17:50:11 2019] [<ffff0000082b29a4>] filename_lookup+0x90/0x158
[Sat Apr 13 17:50:11 2019] [<ffff0000082b2b9c>] user_path_at_empty+0x58/0x64
[Sat Apr 13 17:50:11 2019] [<ffff0000082a7b08>] vfs_statx+0x98/0x108
[Sat Apr 13 17:50:11 2019] [<ffff0000082a810c>] SyS_newfstatat+0x50/0x88

The call stack was obtained with:

echo "w" > /proc/sysrq-trigger
dmesg
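If nothing shows up in dmesg, the magic SysRq interface may be disabled; enabling it first is a common prerequisite:

echo 1 > /proc/sys/kernel/sysrq   # enable all SysRq functions for this boot
echo "w" > /proc/sysrq-trigger    # dump blocked (uninterruptible) tasks
dmesg | tail -n 100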

The full log is available in [8DB].

Build a kernel to verify

Build and install a new kernel as described in [redhat 编译内核].

Verify again

The kernel now builds to completion:

  LD [M]  sound/soc/meson/snd-soc-meson-axg-tdm-formatter.ko
  LD [M]  sound/soc/meson/snd-soc-meson-axg-tdm-interface.ko
  LD [M]  sound/soc/meson/snd-soc-meson-axg-tdmin.ko
  LD [M]  sound/soc/meson/snd-soc-meson-axg-tdmout.ko
  LD [M]  sound/soc/meson/snd-soc-meson-axg-toddr.ko
  LD [M]  sound/soc/rockchip/snd-soc-rk3399-gru-sound.ko
  LD [M]  sound/soc/rockchip/snd-soc-rockchip-i2s.ko
  LD [M]  sound/soc/rockchip/snd-soc-rockchip-pcm.ko
  LD [M]  sound/soc/rockchip/snd-soc-rockchip-rt5645.ko
  LD [M]  sound/soc/rockchip/snd-soc-rockchip-spdif.ko
  LD [M]  sound/soc/sh/rcar/snd-soc-rcar.ko
me@ubuntu:~/nfs-client-dir/linux-5.0.3$
me@ubuntu:~/nfs-client-dir/linux-5.0.3$
me@ubuntu:~/nfs-client-dir/linux-5.0.3$
me@ubuntu:~/nfs-client-dir/linux-5.0.3$ ls
arch   built-in.a  COPYING  crypto         drivers   fs       init  Kbuild   kernel  LICENSES     Makefile  modules.builtin  Module.symvers  README   scripts   sound       tools  virt     vmlinux.o
block  certs       CREDITS  Documentation  firmware  include  ipc   Kconfig  lib     MAINTAINERS  mm        modules.order    net             samples  security  System.map  usr    vmlinux

No "nfs server not responding" messages appear:

me@ubuntu:~/nfs-client-dir/linux-5.0.3$ dmesg -T
me@ubuntu:~/nfs-client-dir/linux-5.0.3$

Problems hit while reproducing the issue

Problem 1: flex not found

me@ubuntu:~/nfs-client-dir/linux-5.0.3$ sudo make defconfig
  LEX     scripts/kconfig/zconf.lex.c
/bin/sh: 1: flex: not found
scripts/Makefile.lib:193: recipe for target 'scripts/kconfig/zconf.lex.c' failed
make[1]: *** [scripts/kconfig/zconf.lex.c] Error 127
Makefile:538: recipe for target 'defconfig' failed
make: *** [defconfig] Error 2

Fix:

apt install flex

Problem 2: bison not found

apt install bison

Problem 3: openssl headers not found

scripts/extract-cert.c:21:25: fatal error: openssl/bio.h: No such file or directory
 #include <openssl/bio.h>
                         ^
compilation terminated.

Fix:
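On Ubuntu the missing openssl/bio.h header is normally provided by the OpenSSL development package, so the likely fix is:

apt install libssl-dev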

Running a program fails with "No child processes"

The cause: on CentOS the default limit on user processes is 4096; once a program tries to spawn more than that, it fails. Different programs report the error in different ways.

make: vfork: Resource temporarily unavailable
AS      libavfilter/aarch64/vf_nlmeans_neon.o
/bin/sh: fork: retry: No child processes
/bin/sh: fork: retry: No child processes
make: vfork: Resource temporarily unavailable
CC      libavfilter/aeval.o
/bin/sh: fork: retry: No child processes
AR      libavdevice/libavdevice.a

Solution:

[me@centos ffmpeg]$ ulimit -a

max user processes              (-u) 4096

Raise the limit with ulimit -u (e.g. ulimit -u 65535); afterwards:

max user processes              (-u) 65535

After the change the errors no longer occur.

Note that ulimit -u only affects the current shell. To make the change permanent, write it to the limits configuration:

[me@centos ffmpeg]$ cat /etc/security/limits.d/20-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

*          soft    nproc     65535
root       soft    nproc     unlimited
[me@centos ffmpeg]$

qemu max socket is 4095

Using QEMU, an error about the maximum socket number appears.

Error on Ubuntu:

Error polling connection ‘qemu:///system’: internal error: Socket 5418 can’t be handled (max socket is 4095)

image0

https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1811198

Error on CentOS:

Error records

Two occurrences have been seen so far; one of them is:

virsh nodeinfo
error: failed to get node information
error: internal error: Socket 6378 can't be handled (max socket is 4095)

rmmod: ERROR: Device or resource busy

Symptom: on CentOS 7.7, a module builds and insmods fine but cannot be rmmod-ed:

[user1@centos fishing]$ sudo insmod fishing.ko
[user1@centos fishing]$
[user1@centos fishing]$ su root
Password:
[root@centos fishing]# lsmod | grep fishing.g
[root@centos fishing]# lsmod | grep fishing
fishing               262144  0
[root@centos fishing]#
[root@centos fishing]# rmmod fishing
rmmod: ERROR: could not remove 'fishing': Device or resource busy
rmmod: ERROR: could not remove module fishing: Device or resource busy
[root@centos fishing]#

The same code works on Ubuntu 18.04:

me@ubuntu:~/fishing$ sudo insmod fishing.ko
me@ubuntu:~/fishing$ sudo rmmod fishing

The same code also works on CentOS 7.6.

The root cause turned out to be a mismatch between the gcc that built the kernel and the gcc that built the module: the kernel was built with gcc 8.3, while the local gcc is 4.8.5.

[user1@centos ~]$ dmesg | grep gcc
[    0.000000] Linux version 4.18.0-80.7.2.el7.aarch64 (mockbuild@aarch64-01.bsys.centos.org) (gcc version 8.3.1 20190311 (Red Hat 8.3.1-3) (GCC)) #1 SMP Thu Sep 12 16:13:20 UTC 2019
[user1@centos ~]$ gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/aarch64-redhat-linux/4.8.5/lto-wrapper
Target: aarch64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-linker-hash-style=gnu --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,lto --enable-plugin --enable-initfini-array --disable-libgcj --with-isl=/builddir/build/BUILD/gcc-4.8.5-20150702/obj-aarch64-redhat-linux/isl-install --with-cloog=/builddir/build/BUILD/gcc-4.8.5-20150702/obj-aarch64-redhat-linux/cloog-install --enable-gnu-indirect-function --build=aarch64-redhat-linux
Thread model: posix
gcc version 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC)
[user1@centos ~]$

Conversely, if the kernel was built with gcc 4.8.5 and the module with a newer gcc (7.3.0 in the log below), the module also cannot be rmmod-ed and, in addition, its reference count is abnormal:

[root@localhost fishing]# gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/local/gcc-7.3.0/libexec/gcc/aarch64-linux/7.3.0/lto-wrapper
Target: aarch64-linux
Configured with: ./configure --prefix=/usr/local/gcc-7.3.0 --enable-languages=c,c++,fortran --enable-shared --enable-linker-build-id --without-included-gettext --enable-threads=posix --disable-multilib --disable-nls --disable-libsanitizer --disable-browser-plugin --enable-checking=release --build=aarch64-linux --with-gmp=/usr/local/gmp-6.1.2 --with-mpfr=/usr/local/mpfr-3.1.5 --with-mpc=/usr/local/mpc-1.0.3 --with-isl=/usr/local/isl-0.18
Thread model: posix
gcc version 7.3.0 (GCC)
[root@localhost fishing]# date
2019年 12月 13日 星期五 21:52:47 EST
[root@localhost fishing]# dmesg |grep gcc
[    0.000000] Linux version 4.14.0-115.el7a.0.1.aarch64 (mockbuild@aarch64-01.bsys.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC)) #1 SMP Sun Nov 25 20:54:21 UTC 2018
[root@localhost fishing]#
[root@localhost fishing]# lsmod | grep fishing
fishing               262144  19660798
[root@localhost fishing]#

Solution: build the module with a gcc that matches the one used to build the kernel. Here devtoolset-8 provides gcc 8.3.1, the same version the kernel was built with:

[root@centos fishing]# dmesg | grep gcc
[    0.000000] Linux version 4.18.0-80.7.2.el7.aarch64 (mockbuild@aarch64-01.bsys.centos.org) (gcc version 8.3.1 20190311 (Red Hat 8.3.1-3) (GCC)) #1 SMP Thu Sep 12 16:13:20 UTC 2019
[root@centos fishing]# gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/opt/rh/devtoolset-8/root/usr/libexec/gcc/aarch64-redhat-linux/8/lto-wrapper
Target: aarch64-redhat-linux
Configured with: ../configure --enable-bootstrap --enable-languages=c,c++,fortran,lto --prefix=/opt/rh/devtoolset-8/root/usr --mandir=/opt/rh/devtoolset-8/root/usr/share/man --infodir=/opt/rh/devtoolset-8/root/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-shared --enable-threads=posix --enable-checking=release --enable-multilib --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-gcc-major-version-only --with-linker-hash-style=gnu --with-default-libstdcxx-abi=gcc4-compatible --enable-plugin --enable-initfini-array --with-isl=/builddir/build/BUILD/gcc-8.3.1-20190311/obj-aarch64-redhat-linux/isl-install --disable-libmpx --enable-gnu-indirect-function --build=aarch64-redhat-linux
Thread model: posix
gcc version 8.3.1 20190311 (Red Hat 8.3.1-3) (GCC)
[root@centos fishing]#
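On CentOS 7 the gcc 8.3.1 shown above comes from the devtoolset-8 software collection; a sketch of switching to it before rebuilding the module (assuming the SCL repository provides devtoolset-8 for this architecture):

yum install -y centos-release-scl
yum install -y devtoolset-8-gcc devtoolset-8-gcc-c++
scl enable devtoolset-8 bash     # or: source /opt/rh/devtoolset-8/enable
gcc -v                           # should now report 8.3.x
make                             # rebuild the module with the matching compiler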

check sr-iov

SR-IOV support mainly depends on the NIC, and the SR-IOV option must be enabled in the BIOS.

Support matrix:

Server model     NIC type       SR-IOV support
Taishan 2280 V1  on-board NIC   not supported
Taishan 2280 V1  other NICs     not supported
Taishan 2280 V2  on-board NIC   supported
Taishan 2280 V2  other NICs     depends on the NIC

On a Taishan 2280 V2, confirm in the BIOS that SR-IOV is enabled:

                        Huawei BIOS Setup Utility V2.0
          Advanced
/--------------------------------------------------------+---------------------\
|                     PCIe Config                        |    Help Message     |
|--------------------------------------------------------+---------------------|
| > CPU 0 PCIE Configuration                             |Press <Enter> to     |
| > CPU 1 PCIE Configuration                             |config this CPU.     |
|   Support DPC                  <Disable>               |                     |
|   SRIOV                        <Enable>                |                     |
|   Hilink5 Work Mode            <PCIe Mode>             |                     |
|   PCIe DSM5# Mode              <BIOS Reserve>          |                     |
|                                                        |                     |
|                                                        |                     |
|                                                        |                     |
|                                                        |                     |
|                                                        |                     |
|                                                        |                     |
|                                                        |                     |
|                                                        |                     |
|                                                        |                     |
|                                                        |                     |
|                                                        |                     |
|--------------------------------------------------------+---------------------|
| F1  Help     ^v  Select Item    -/+   Change Value     | F9  Setup Defaults  |
| Esc Exit     <>  Select Menu    Enter Select>Sub-Menu  | F10 Save & Exit     |
\--------------------------------------------------------+---------------------/

After enabling SR-IOV in the BIOS and booting the OS, the network devices and their sriov_numvfs entries can be inspected under /sys:
[user1@centos ~]$ ls -la /sys/class/net/
total 0
drwxr-xr-x.  2 root root 0 Dec 12 10:54 .
drwxr-xr-x. 70 root root 0 Dec 12 10:52 ..
lrwxrwxrwx.  1 root root 0 Dec 12 10:52 enp125s0f0 -> ../../devices/pci0000:7c/0000:7c:00.0/0000:7d:00.0/net/enp125s0f0
lrwxrwxrwx.  1 root root 0 Dec 12 10:52 enp125s0f1 -> ../../devices/pci0000:7c/0000:7c:00.0/0000:7d:00.1/net/enp125s0f1
lrwxrwxrwx.  1 root root 0 Dec 12 10:52 enp125s0f2 -> ../../devices/pci0000:7c/0000:7c:00.0/0000:7d:00.2/net/enp125s0f2
lrwxrwxrwx.  1 root root 0 Dec 12 10:52 enp125s0f3 -> ../../devices/pci0000:7c/0000:7c:00.0/0000:7d:00.3/net/enp125s0f3
lrwxrwxrwx.  1 root root 0 Dec 12 10:52 enp189s0f0 -> ../../devices/pci0000:bc/0000:bc:00.0/0000:bd:00.0/net/enp189s0f0
lrwxrwxrwx.  1 root root 0 Dec 12 10:52 enp189s0f1 -> ../../devices/pci0000:bc/0000:bc:00.0/0000:bd:00.1/net/enp189s0f1
lrwxrwxrwx.  1 root root 0 Dec 12 10:52 enp189s0f2 -> ../../devices/pci0000:bc/0000:bc:00.0/0000:bd:00.2/net/enp189s0f2
lrwxrwxrwx.  1 root root 0 Dec 12 10:52 enp189s0f3 -> ../../devices/pci0000:bc/0000:bc:00.0/0000:bd:00.3/net/enp189s0f3
lrwxrwxrwx.  1 root root 0 Dec 12 10:52 lo -> ../../devices/virtual/net/lo
lrwxrwxrwx.  1 root root 0 Dec 12 10:52 virbr0 -> ../../devices/virtual/net/virbr0
lrwxrwxrwx.  1 root root 0 Dec 12 10:52 virbr0-nic -> ../../devices/virtual/net/virbr0-nic
[user1@centos ~]$ find /sys -name sriov_numvfs 2>/dev/null
/sys/devices/pci0000:74/0000:74:01.0/0000:75:00.0/sriov_numvfs
/sys/devices/pci0000:b4/0000:b4:01.0/0000:b5:00.0/sriov_numvfs
/sys/devices/pci0000:7c/0000:7c:00.0/0000:7d:00.3/sriov_numvfs
/sys/devices/pci0000:7c/0000:7c:00.0/0000:7d:00.1/sriov_numvfs
/sys/devices/pci0000:7c/0000:7c:00.0/0000:7d:00.2/sriov_numvfs
/sys/devices/pci0000:7c/0000:7c:00.0/0000:7d:00.0/sriov_numvfs
/sys/devices/pci0000:bc/0000:bc:00.0/0000:bd:00.1/sriov_numvfs
/sys/devices/pci0000:bc/0000:bc:00.0/0000:bd:00.2/sriov_numvfs
/sys/devices/pci0000:bc/0000:bc:00.0/0000:bd:00.0/sriov_numvfs
/sys/devices/pci0000:bc/0000:bc:00.0/0000:bd:00.3/srio
[user1@centos ~]$ sudo echo 1 > /sys/devices/pci0000:bc/0000:bc:00.0/0000:bd:00.3/sriov_numvfs
-bash: /sys/devices/pci0000:bc/0000:bc:00.0/0000:bd:00.3/sriov_numvfs: Permission denied
[user1@centos ~]$ su root
Password:
[root@centos user1]# history^C
[root@centos user1]# sudo echo 1 > /sys/devices/pci0000:7c/0000:7c:00.0/0000:7d:00.1/sriov_numvfs
[root@centos user1]# cat /sys/devices/pci0000:7c/0000:7c:00.0/0000:7d:00.1/sriov_numvfs
1
[root@centos user1]# echo 1 > /sys/devices/pci0000:bc/0000:bc:00.0/0000:bd:00.3/sriov_numvfs
[root@centos user1]# cat /sys/devices/pci0000:bc/0000:bc:00.0/0000:bd:00.3/sriov_numvfs
1
[root@centos user1]#
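After writing to sriov_numvfs, the virtual functions should appear as additional PCI devices; a rough way to verify (the exact device names depend on the NIC driver):

lspci | grep -i "virtual function"
ip link show enp189s0f3          # the PF usually also lists its VFs as "vf 0 ..." lines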

The -static build option causes a segmentation fault

This is easy to reproduce with tars2node:

git clone https://github.com/LyleLee/tars2node.git
cd tars2node
git checkout static_segmentation_fault
cd build
cmake ..
make
[user@centos build]$ ./tars2node
Segmentation fault (core dumped)

Debugging

Research suggests that statically linking pthread is problematic because libpthread.a is not pulled into the target program as a whole; the usual online advice is to use -Wl,--whole-archive. Errors remain, however.

 # flag
-set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -g -O2 -Wall -Wno-sign-compare -Wno-unused-result -static")
-set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -g -O2 -Wall -static")
+set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -g -O2 -Wall -Wno-sign-compare -Wno-unused-result -static -Wl,--whole-archive")
+set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -g -O2 -Wall -static -Wl,--whole-archive")

Adding

-Wl,--whole-archive

makes the concrete errors visible:

/usr/lib/gcc/aarch64-redhat-linux/4.8.5/../../../../lib64/libc.a(s_signbitl.o): In function `__signbitl':
(.text+0x0): multiple definition of `__signbitl'
/usr/lib/gcc/aarch64-redhat-linux/4.8.5/../../../../lib64/libm.a(m_signbitl.o):(.text+0x0): first defined here
/usr/lib/gcc/aarch64-redhat-linux/4.8.5/../../../../lib64/libc.a(mp_clz_tab.o):(.rodata+0x0): multiple definition of `__clz_tab'
/usr/lib/gcc/aarch64-redhat-linux/4.8.5/libgcc.a(_clz.o):(.rodata+0x0): first defined here
/usr/lib/gcc/aarch64-redhat-linux/4.8.5/../../../../lib64/libc.a(rcmd.o): In function `__validuser2_sa':
(.text+0x54c): warning: Using 'getaddrinfo' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
/usr/lib/gcc/aarch64-redhat-linux/4.8.5/../../../../lib64/libc.a(dl-conflict.o): In function `_dl_resolve_conflicts':
(.text+0x28): undefined reference to `_dl_num_cache_relocations'
/usr/lib/gcc/aarch64-redhat-linux/4.8.5/../../../../lib64/libc.a(dl-conflict.o): In function `_dl_resolve_conflicts':
(.text+0x34): undefined reference to `_dl_num_cache_relocations'
/usr/lib/gcc/aarch64-redhat-linux/4.8.5/../../../../lib64/libc.a(dl-conflict.o): In function `_dl_resolve_conflicts':
(.text+0x48): undefined reference to `_dl_num_cache_relocations'
collect2: error: ld returned 1 exit status
make[2]: *** [tars2node] Error 1
make[1]: *** [CMakeFiles/tars2node.dir/all] Error 2
make: *** [all] Error 2
[me@centos build]$
[me@centos build]$

The errors above complain about symbols defined in both libm and libc, which is puzzling; web searches turn up nothing useful, and searching for _dl_num_cache_relocations gives no clues either. Suspecting weak-symbol issues led to a lot of reading but no real lead.

Taking a different angle: the goal is simply to link the program statically, so set these errors aside and ask how to link statically correctly. It turns out the options required for static linking differ between compiler versions.

Working towards a fix:

[100%] Linking CXX executable tars2node
/usr/bin/ld: cannot find -lgcc_s
/usr/bin/ld: cannot find -lgcc_s
collect2: error: ld returned 1 exit status
make[2]: *** [tars2node] Error 1
make[1]: *** [CMakeFiles/tars2node.dir/all] Error 2
make: *** [all] Error 2

The -static-libgcc -static-libstdc++ options remove this error.

Solutions:

Option 1: link dynamically by dropping the -static option.

Edit ../CMakeLists.txt and remove -static from these lines:

 # flag
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -g -O2 -Wall -Wno-sign-compare -Wno-unused-result -static")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -g -O2 -Wall -static")

Option 2: use gcc/g++ 8.0 or later.

See 【devtoolset】 for how to upgrade.

Option 3: keep the older gcc (e.g. 4.8.5)

yum install glibc-static

and use the -Wl,-Bdynamic link option:

# flag
-set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -g -O2 -Wall -Wno-sign-compare -Wno-unused-result -static")
-set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -g -O2 -Wall -static")
+set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -g -O2 -Wall -Wno-sign-compare -Wno-unused-result -Wl,-Bdynamic")
+set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -g -O2 -Wall -Wl,-Bdynamic")
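After switching back to dynamic linking, the result can be checked by rebuilding and inspecting the binary; a sketch:

cd tars2node/build
cmake .. && make
ldd tars2node                    # should list libpthread, libstdc++ and friends as shared objects
./tars2node                      # no longer segfaults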

tomcat does not start

The Tomcat source package (tomcat-src) was downloaded from the Apache website and started. Tomcat reports a successful start, but no background process is visible and the web service does not work.

image0

log/catalina.out in the Tomcat directory shows:

Could not find or load main class org.apache.catalina.startup.Bootstrap.tar

Solution: download the zip under Binary Distributions → Core, which already contains the required Bootstrap classes.

In addition, the environment variables must be set correctly:
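A minimal example of those variables, using the paths that appear in the output below (the actual paths depend on where Tomcat and the JDK are installed):

export CATALINA_HOME=/home/me/apache-tomcat-9.0.22
export JRE_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.222.b10-0.el7_6.aarch64/jre
export PATH=$CATALINA_HOME/bin:$PATH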

Procedure:

[me@hs-es1 bin]$ $CATALINA_HOME/bin/startup.sh
Using CATALINA_BASE:   /home/me/apache-tomcat-9.0.22
Using CATALINA_HOME:   /home/me/apache-tomcat-9.0.22
Using CATALINA_TMPDIR: /home/me/apache-tomcat-9.0.22/temp
Using JRE_HOME:        /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.222.b10-0.el7_6.aarch64/jre
Using CLASSPATH:       /home/me/apache-tomcat-9.0.22/bin/bootstrap.jar:/home/me/apache-tomcat-9.0.22/bin/tomcat-juli.jar
Tomcat started.
[me@hs-es1 bin]$
[me@hs-es1 bin]$
[me@hs-es1 bin]$ nestat -antup
bash: nestat: command not found
[me@hs-es1 bin]$ sudo netstat -antup
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      11618/master
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      11263/sshd
tcp        0      0 192.168.2.235:22        192.168.1.107:51438     ESTABLISHED 78473/sshd: root@pt
tcp        0      0 192.168.2.235:22        192.168.1.107:51446     ESTABLISHED 78475/sshd: root@no
tcp        0      0 192.168.2.235:22        192.168.1.107:51694     ESTABLISHED 78642/sshd: root@no
tcp        0      0 192.168.2.235:22        192.168.1.107:51690     ESTABLISHED 78640/sshd: root@pt
tcp6       0      0 ::1:25                  :::*                    LISTEN      11618/master
tcp6       0      0 127.0.0.1:8005          :::*                    LISTEN      81902/java
tcp6       0      0 :::8009                 :::*                    LISTEN      81902/java
tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd
tcp6       0      0 :::8080                 :::*                    LISTEN      81902/java
tcp6       0      0 :::22                   :::*                    LISTEN      11263/sshd
udp        0      0 0.0.0.0:68              0.0.0.0:*

Indices and tables